Tuesday, July 8, 2008

Behavioral Experiment Software Survey Results

Hi everyone,
The results of the software survey are in. We had 187 responses, but one was unanalyzable (the respondent did not specify a software package). Thanks everyone for your responses; I hope these results prove useful.
Here's a one-paragraph summary of the survey results; details below: E-Prime is the most popular package of those surveyed, but the majority of folks are using either E-Prime, DMDX, or some flavor of PsyScope. E-Prime, PsyScope, SuperLab, and ERTS are all rated as easy to build experiments with, and about equally so. DMDX and NESU are seen as slightly harder. Presentation and MatLab are notably the hardest of the commonly used packages. SuperLab was seen as easiest for novices. E-Prime and PsyScope were rated a shade harder for novices, and then from ERTS to SuperLab Pro to DMDX to NESU, novice-ease ratings dropped. Presentation and MatLab were both seen as notably difficult for novices. DMDX nets the highest satisfaction rating, with PsyScope X and E-Prime a smidge lower. MatLab, SuperLab, and Presentation rank below that, ending with NESU, PsyScope classic, and SuperLab Pro. PsyScope X has the highest "sticking with" rate, and E-Prime, SuperLab, MatLab, and DMDX all have 50%+ "sticking with" ratings. People are running away from PsyScope classic in droves, probably because the classic Mac OS is very quickly approaching end-of-life. E-Prime, both flavors of PsyScope, and DMDX are highly recommended. MatLab is also well recommended. Then it drops noticeably to Presentation, SuperLab, ERTS, and finally SuperLab Pro and NESU. A summary of two open-ended questions is also included below.
*
Details:
Of the 186 valid responses, the most commonly used software package ended up being E-Prime (57 responses, 30.6%). Here's the ranking of software packages, listing number of respondents and percentage for each:
1. E-Prime: 57, 30.6%
2. DMDX: 32, 17.2%
3. PsyScope Classic: 18, 9.7%
4. Presentation: 12, 6.5%
5. PsyScope X: 11, 5.9%
6. NESU: 8, 4.3%
7. ERTS: 6, 3.2%
8. SuperLab: 5, 2.7%
9. MatLab: 5, 2.7%
10. SuperLab Pro: 4, 2.2%
11. Linger: 4, 2.2%
12. MEL: 3, 1.6%
13. Experiment Builder: 3, 1.6%
14. EyeTrack: 2, 1.1%
The following packages had 1 response each: Authorware, C programming, Delphi Borland, DirectRT, ExBuilder, Habit, Inquisit, MacroMedia Director, PHPsurveyor, PCexpt, RSVP, WWStim, WebExp, iMovie, tscope, vision analyzer/recorder.
Bob Slevc worked to dig up links for many of these software packages. I'll put the links below the signature line.
Note that PsyScope classic and PsyScope X could be combined for a total response count of 29 (15.6%), which would keep PsyScope in third place behind E-Prime and DMDX. SuperLab and SuperLab Pro (which I assume are distinct) could similarly be combined for 9 responses (4.8%), which would put SuperLab right behind Presentation.
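The combining arithmetic above can be sketched in a few lines. This is an illustration only: the counts are copied from the ranking above, and 186 is the number of valid responses.

```python
# Response counts per package, copied from the survey ranking above.
counts = {
    "E-Prime": 57,
    "DMDX": 32,
    "PsyScope Classic": 18,
    "PsyScope X": 11,
    "SuperLab": 5,
    "SuperLab Pro": 4,
}
TOTAL = 186  # number of valid survey responses


def combined_share(*packages):
    """Combined response count and percentage of valid responses."""
    n = sum(counts[p] for p in packages)
    return n, round(100 * n / TOTAL, 1)


print(combined_share("PsyScope Classic", "PsyScope X"))  # (29, 15.6)
print(combined_share("SuperLab", "SuperLab Pro"))        # (9, 4.8)
```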
*
We asked, "How easy/hard is it for you to build an experiment with your software?" with a 7-point response scale (1 = "very easy" and 7 = "very hard"). Overall, the mean difficulty rating was 3.09 with a standard deviation of 1.43, and a median of 3. Here are the mean and median build-difficulty ratings for the packages that received at least five responses (combining the above noted PsyScope and SuperLab):
ERTS (6): 2.5, 2
E-Prime (57): 2.68, 2
PsyScope (29): 2.86, 3
SuperLab (9): 3.0, 3
NESU (8): 3.25, 3
DMDX (32): 3.38, 3.5
MatLab (5): 4.2, 4
Presentation (12): 4.54, 5
*
We also asked, "How easy/hard is it for a novice to learn how to build experiments with your software?" The mean difficulty rating was 4.12 with a standard deviation of 1.63, and a median of 4. Here are the mean and median novice-difficulty ratings for the primarily used packages (for this analysis, PsyScope classic and PsyScope X were combined because their response profiles were similar; SuperLab and SuperLab Pro differed, so they are reported separately):
SuperLab (5): 2.6, 3
PsyScope (29): 3.5, 3.5
E-Prime (57): 3.54, 3
ERTS (6): 4.17, 4.5
SuperLab Pro (4): 4.25, 4.5
DMDX (32): 4.88, 5
NESU (8): 5, 5.5
Presentation (12): 5.64, 6
MatLab (5): 6, 6
*
We then asked, "How satisfied are you with your current software?" with "1" meaning "Completely dissatisfied" and "7" meaning "Completely Satisfied." The mean satisfaction rating was 4.59 with a standard deviation of 1.43, and a median of 5. Here are the mean and median ratings by package:
DMDX (32): 5.09, 5
PsyScope X (11): 4.91, 5
E-Prime (57): 4.77, 5
MatLab (5): 4.4, 4
SuperLab (5): 4.2, 4
Presentation (12): 4.09, 4
NESU (8): 3.88, 4
PsyScope classic (18): 3.82, 4
SuperLab Pro (4): 3, 3
*
Two more quantitative questions. First, we asked, "Are you sticking with your current software for the foreseeable future, or are you looking to change setups?" 25 respondents answered "Don't know," 103 answered "Sticking with my current software for the foreseeable future," and 44 answered "Looking to change." Here's the breakdown by package, reporting the percentages of those sticking with their package and those looking to change (sorted by sticking percentage; the remainder answered "Don't know"):
PsyScope X (10): 90%, 10%
E-Prime (52): 71.1%, 9.6%
SuperLab (5): 60%, 40%
MatLab (5): 60%, 40%
DMDX (30): 53%, 20%
Presentation (11): 45.4%, 36.4%
ERTS (5): 40.0%, 60%
PsyScope classic (16): 37.5%, 62.5%
NESU (8): 12.5%, 62.5%
SuperLab Pro (4): 0%, 50%
*
Finally, we asked "Would you recommend your current software?" 138 people said "yes" and 32 said "no." Here's the breakdown of percent "yes" responses by package:
E-Prime (50): 92%
PsyScope X (10): 90%
PsyScope classic (16): 87.5%
DMDX (30): 86.7%
MatLab (5): 80%
Presentation (12): 63.6%
SuperLab (5): 60%
ERTS (5): 60%
SuperLab Pro (4): 50%
NESU (8): 50%
*
We also asked two complementary open-ended questions that aren't easy to summarize. One was, "What do you like about your current software? What are its strengths? What does it do well?" and the other was, "What do you not like about your current software? What are its weaknesses? What does it not do well (or at all)?" Considering the big hitters (E-Prime, DMDX, and PsyScope), my general impression of the flavor of the comments was:
E-Prime: Easy to learn, good support, user friendly, etc. But some consider it expensive, find it inflexible, and worry about the precision of its timing.
DMDX: It's free and powerful, with good timing, a good user-support group, and good author support. Weaknesses mostly concerned a lack of intuitiveness and a steep learning curve.
PsyScope: It's free and user-friendly, and its timing is accurate. But it can be buggy. PsyScope classic users are worried about relying on a legacy system. PsyScope X users worry about the transition to Intel-based Macs, but with some optimism.
*
The Excel file with everyone's responses is available on this page: .
Again, thanks for participating. We were thrilled to see that we actually had 187 people respond!
Best,
Vic Ferreira
Jeremy Boyd
Jeff Elman
Robert Buffington
Bob Slevc
MEL: (Note that MEL is the predecessor to E-Prime)
PsyScope Classic: http://psyscope.psy.cmu.edu/
SuperLab (Pro): http://www.superlab.com/
UPDATE: Other relevant links (Thanks to Roberto Heredia):
UPDATE 2: Alan Garnham suggested to me that SuperLab and SuperLab Pro are not distinct products (though I swear I have a memory of there being something called ‘SuperLab’ without ‘SuperLab Pro’!). If I get a chance, I’ll combine the two in the above analyses.

source:
http://lpl.ucsd.edu/LabPage/Lab_Blog/B1A6A7D2-0069-41E3-89E9-B3683FEEC758.html

Friday, July 4, 2008

To Err Is Human


By He, Jibo

Everyone knows the old saying, “To err is human.” It is unavoidable for human beings to make errors in decision making, because of our limited time and our limited physical, cognitive, emotional, and other resources. These limitations defeat all our effortful endeavors to make perfectly rational decisions and to maximize our expected value. The same limitations harass the stockbrokers of Wall Street (Blodget, 2004) and doctors (Groopman, 2007). No matter how smart they are and how many tools they are armed with, they cannot escape making bad decisions.

Everyone knows, that is, except perhaps the classical economists. Unlike ordinary humans such as us, economists are equipped with the most advanced mathematical tools and logical abilities. They assume that humans are Homo economicus, with unlimited resources of time and of physical and cognitive ability, as well as stable preferences (Cassidy, 2006). The human in the economists’ eyes makes rational decisions and never errs.

Economists are so smart and almighty that they have successfully attracted a great number of followers, among them the stockbrokers of Wall Street and the educators of doctors, all pursuing the dream of rational decisions that the economists depict for them. So the brokers developed all kinds of complicated tools and mathematical indexes, for example the NASDAQ index, K-line charts, and so on. But none of these successfully predicted the disastrous Depression of 1929. And it is worth noting that the economists, the designers of this dream of perfect decisions, have successfully predicted ten economic depressions for the only five actual depressions of the past thirty years. Under the illusion of perfect decision makers, the educators of doctors are also tempted to believe that candidate doctors are born knowing how to apply their knowledge, and that they will be perfect decision makers as long as they are taught the practical aspects of patient care. Techniques for making decisions are ignored and seldom taught in medical schools (Groopman, 2007). However, according to Croskerry, a physician at Dartmouth General Hospital, the perfect decision makers assumed by the economists actually misdiagnose fifteen percent, or even more, of their patients, and most of these misdiagnoses are the result of errors in thinking.

The consistent failures of human decision making cast doubt on the economists’ rational approach. Kahneman and Tversky developed Prospect Theory in 1979, which marked the beginning of a trend toward naturalistic approaches to decision making, and other behavioral economists and neuroeconomists have since adopted empirical research methods to investigate the human decision-making process. This well-grounded research calls attention to human limitations and casts doubt on the classical economists’ unrealistic assumption of a Homo economicus with unlimited abilities and resources. Humans, as decision makers, are constrained by limited physical and cognitive resources, by unstable preferences and emotions, and by the impossibility of making precise predictions of risk. Rather than the perfect rationality of economics, human decision making exhibits bounded rationality, the result of all these limitations. We regret losses, so we are loss averse; we seek to build our self-esteem by all means, so we show confirmation bias in our decisions (Blodget, 2004); we have limited memory, so we rely on representativeness and availability heuristics in decision making; and so on.

To err is human. The economists’ refusal to accept the fact that our resources are limited does nothing to reduce human error in decision making. We have to face up to our constraints in decision making, become aware of them, and train ourselves to avoid their consequences.

REFERENCES

Cassidy, J. (2006). Mind games: what neuroeconomics tells us about money and the brain. The New Yorker. Source:http://www.newyorker.com/archive/2006/09/18/060918fa_fact

Groopman, J. (2007). What’s the trouble? How doctors think. The New Yorker. Source: http://www.newyorker.com/reporting/2007/01/29/070129fa_fact_groopman

Blodget, H. (2004). The greatest Wall Street danger of all: you. Slate. Source: http://slate.msn.com/id/2110977/

Kahneman, D. and Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2):263-292.

Wednesday, July 2, 2008

Does Short-Term Memory Load Influence Visual Search? An Oculomotor Study

ABSTRACT SUBMISSION DETAILS
Type of Submission: Poster
Submission Date: January 31, 2008
Review Status: PENDING
Title: Does Short-Term Memory Load Influence Visual Search? An Oculomotor Study
Subject Area: Cognitive
Keyword: Attention
Presenters: Jibo He, University of Illinois, Urbana/Champaign

Jason S McCarley, University of Illinois at Urbana-Champaign

Abstract:
A dual-task experiment examined the influence of STM load on visual processing and saccade targeting in visual search. Increased load altered saccade amplitudes, but did not appear to slow visual search nor to compromise foveal stimulus analysis. Results imply independence between STM maintenance and visual search.
Supporting Summary:
INTRODUCTION: A dual-task experiment examined the influence of short-term memory (STM) load on visual processing and saccade targeting in visual search. Performance of visual search under high and low memory load was compared through analysis of oculomotor data. The search task was designed to allow insight into the quality of the participants’ foveal analysis and saccade control. METHOD: Participants performed a visual search task while concurrently maintaining either a low or high memory load. The search task required participants to locate a circle (O) among a set of 35 gapped-circle (C) distractors. The memory task required participants to hold either a single alphanumeric character or six characters in STM. Each trial began with a fixation cross, followed by a memory set of either one character (low memory load) or six characters (high memory load). The visual search task was classified as coded or uncoded. In the coded condition, the Cs were oriented to face in the direction of the target, such that the participant could use distractors to guide search toward the target’s location. In the uncoded condition, distractors were oriented randomly. Comparison of performance in the coded and uncoded conditions thus provides a measure of the participants’ ability to utilize foveal analysis of distractor orientation to facilitate saccade targeting during search (Hooge & Erkelens, 1998). Memory performance was measured with a recognition test after the visual search on each trial. RESULTS: Visual search was significantly more efficient under coded than uncoded conditions, as evident in changes in response times and saccade frequencies. Mathematical analysis (Wagenmakers et al., 2007) of saccade latency and accuracy data also revealed an increase in the information accumulation rate for saccade target selection under coded search conditions.
High memory load increased the latency of the first saccade on each trial and led to larger saccade amplitudes, but otherwise did not hinder performance. No significant differences in memory recognition or saccade-targeting accuracy were found across the four conditions. DISCUSSION: In the coded search task, the benefits of the orientation cues outweighed the cost of the additional cognitive processing needed to extract their direction, relative to the uncoded task. Higher STM load did not appear to slow visual search, nor to compromise participants’ ability to guide search on the basis of distractor analysis. Results imply independence between STM maintenance and visual search.

Tuesday, July 1, 2008

'Lazy eye' treatment shows promise in adults


New data, based on a finding first reported in 2006, suggest a simple and effective therapy for amblyopia. Clinical use will depend on optometry community

source: http://www.eurekalert.org/pub_releases/2008-03/uosc-et022708.php

New evidence from a laboratory study and a pilot clinical trial confirms the promise of a simple treatment for amblyopia, or “lazy eye,” according to researchers from the U.S. and China.

The treatment was effective on subjects around 20 years old, even though amblyopia had been considered mostly irreversible after age eight.

Many amblyopes, especially in developing countries, are diagnosed too late for conventional treatment with an eye patch. The disorder affects about nine million people in the U.S. alone.

Results from the laboratory study will be published online the week of Mar. 3 in PNAS Early Edition.

Patients seeking treatment will need to wait for eye doctors to adopt the non-surgical procedure in their clinics, said Zhong-Lin Lu, the University of Southern California neuroscientist who led the research group.

“I would be very happy to have some clinicians use the procedure to treat patients. It will take some time for them to be convinced,” Lu said.

“We also have a lot of research to do to make the procedure better.”

In a pilot clinical trial at a Beijing hospital in 2007, 28 out of 30 patients showed dramatic gains after a 10-day course of treatment, Lu said.

“After training, they start to use both eyes. Some people got to 20/20. By clinical standards, they’re completely normal. They’re not amblyopes anymore.”

The gains averaged two to three lines on a standard eye chart. Previous studies by Lu’s group found that the improvement is long-lasting, with 90 percent of vision gain retained after at least a year.

“This is a brilliant study that addresses a very important issue,” said Dennis Levi, dean of optometry at the University of California, Berkeley. Levi was not involved in the study.

“The results have important implications for the treatment of amblyopia and possibly other clinical conditions.”

The PNAS study shows that the benefit of the training protocol – which involves a very simple visual task – goes far beyond the task itself. Amblyopes trained on just one task improved their overall vision, Lu said.

The improvement was much greater for amblyopes than for normal subjects, Lu added.

“For amblyopes, the neural wiring is messed up. Any improvement you can give to the system may have much larger impacts on the system than for normals,” he said.

The Lu group’s findings also have major theoretical implications. The assumption of incurability for amblyopia rested on the notion of “critical period”: that the visual system loses its plasticity and ability to change after a certain age.

The theory of critical period arose in part from experiments on the visual system of animals by David Hubel and Torsten Wiesel of Harvard Medical School, who shared the 1981 Nobel Prize in Medicine with Roger Sperry of Caltech.

“This is a challenge to the idea of critical period,” Lu said. “The system is much more plastic than the idea of critical period implies. The fact that we can drastically change people’s vision at age 20 says something.”

A critical period still exists for certain functions, Lu added, but it might be more limited than previously thought.

“Amblyopia is a great model to re-examine the notion of critical period,” Lu said.

The first study by Lu’s group on the plasticity of amblyopic brains was published in the journal Vision Research in 2006 and attracted wide media attention.

Since then, Lu has received hundreds of emails from adult amblyopes who had assumed they were beyond help.

Berkeley’s Levi cautioned that the clinical usefulness of perceptual learning, as Lu calls his treatment, remains a “sixty-four thousand dollar question.”

“It's clear that perceptual learning in a lab setting is effective,” Levi said. “However, ultimately it needs to be adopted by clinicians and that will probably require multi-center clinical trials.”

###

Lu is collecting patients’ names for possible future clinical trials. He can be contacted at zhonglin@usc.edu.

The researchers are also working to develop a home-based treatment program.

For patients who can travel, the Chinese hospital that hosted the pilot trial may be able to provide treatment. Contact Dr. Lijuan Liu, Beijing Xiehe Hospital, at lijuan_l@yahoo.com.cn.

The other members of Lu’s group are Chang-Bing Huang and Yifeng Zhou of the Vision Research Lab at the University of Science and Technology of China, in Hefei, Anhui province (Huang is currently a postdoc in Lu’s lab at USC).

Funding for the research came from the Chinese National Natural Science Foundation and the U.S. National Eye Institute.

ABOUT AMBLYOPIA (from PNAS)

Amblyopia affects about 3 percent of the population and cannot be rectified with glasses. People with the disorder suffer a range of symptoms: poor vision in one eye, poor depth perception, difficulty seeing three-dimensional objects, and poor motion sensitivity.

Also known as lazy eye, the disorder is caused by poor transmission of images from the eye to the brain during early childhood, leading to abnormal brain development. Lazy eye is actually a misnomer because in many cases the structure of the eye is normal.

Rapid visual memory decay in mild cognitive impairment


Source: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=547895

Zhong-Lin Lu et al. report that the rapid decay of iconic (visual sensory) memory appears to be a general characteristic of mild cognitive impairment, which often precedes the onset of Alzheimer's disease. Iconic memory typically lasts only a fraction of a second before it is either lost or stored in short-term memory, and a link between iconic memory impairment and Alzheimer's disease has been suggested. The authors tested the iconic memory of 23 young adults (average age, 20 years), 11 older adults with mild cognitive impairment (average age, 85 years), and 16 older controls (average age, 82 years). Iconic memory was characterized using the partial-report paradigm with the same visual stimulus parameters in each observer group. The researchers found that mean iconic memory duration was significantly shorter in subjects with mild cognitive impairment (0.07 s) than in both older (0.30 s) and younger (0.34 s) controls. In a series of conventional neuropsychological tests used to assess cognitive function, the mild cognitive impairment group performed significantly worse than the older control group. In both of these groups, however, no significant differences were observed in visual and task abilities or in the ability to transfer items to short-term memory. The authors suggest that testing iconic memory could be used alongside other measures to aid the diagnosis of Alzheimer's disease.

“Fast decay of iconic memory in observers with mild cognitive impairments” by Zhong-Lin Lu, James Neuse, Stephen Madigan, and Barbara Anne Dosher (see pages 1797-1802)


Mental Chronometry: Subtractive and Additive Factors Method


He, Jibo

(Department of Psychology, University of Illinois, Urbana/Champaign, IL, 61801, US)

Reaction time (RT) is one of the best quantitative measures in psychology: it is objective and can be compared easily across diverse tasks. RT is not only a good measure of task performance but can also be used to probe mental processes. Mental chronometry uses RT to confirm the existence of a specific mental process and to quantify its duration. The wide use of mental chronometry has benefited from two methodological breakthroughs: the invention of the subtractive method and of the additive factors method. In this reaction paper to the two readings (Johnson & Proctor, 2004; Wickens & Hollands, 1992), I will briefly summarize the two methods and comment on their usage and relative merits.

The subtractive method measures the duration of a mental process by deleting a mental operation entirely from the RT task (Wickens & Hollands, 1992). For example, it uses the go/no-go task paradigm: the duration of a mental operation is estimated as the time difference between a go response and a no-go response. However, the subtractive method cannot be widely applied, because mental processes depend on the processes that precede them. We can use the go/no-go paradigm to measure the duration of response execution, because deleting response execution does not disturb any other mental process; it is the final stage of an RT task. We cannot delete other mental processes, such as perceptual encoding or response selection, without disturbing the processes that follow them. This difficulty in deleting mental processes is likely one reason why the subtractive method is not as widely used as the additive factors method.
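As an illustration of the subtractive logic, here is a minimal sketch using invented mean RTs (the numbers are hypothetical, not from any study). Each task is assumed to add exactly one processing stage, so subtracting mean RTs estimates the duration of the extra stage:

```python
# Hypothetical mean RTs (ms) for three Donders-style tasks.
# Each task adds one processing stage to the previous one.
mean_rt_ms = {
    "simple":   220.0,  # detect stimulus + execute response
    "go_no_go": 300.0,  # detect + discriminate + execute
    "choice":   390.0,  # detect + discriminate + select response + execute
}

# Subtracting adjacent tasks isolates the duration of the added stage.
discrimination_ms = mean_rt_ms["go_no_go"] - mean_rt_ms["simple"]
selection_ms = mean_rt_ms["choice"] - mean_rt_ms["go_no_go"]

print(discrimination_ms)  # 80.0 ms for stimulus discrimination
print(selection_ms)       # 90.0 ms for response selection
```

Note that the sketch makes exactly the assumption the essay questions: that deleting a stage leaves the remaining stages unchanged.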

The additive factors method identifies the existence of a proposed mental process by manipulating variables that influence only that specific process. The method assumes that mental processes are executed serially, without dependencies or parallel processing, so that a manipulation targeting one mental process changes that process without disturbing the others. If two manipulations operate on the same mental process, they will produce an interactive effect on total RT; if they operate on different mental processes, their effects on total RT will be additive. Therefore, whether two manipulations produce interactive or additive effects on RT tells us whether they tap one mental process or two distinct ones, and the pattern of RT differences can be used to establish the existence of a specific mental process and to estimate its duration.
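The additivity test can be sketched as a simple contrast on a hypothetical 2x2 design (all RT values below are invented for illustration):

```python
# Hypothetical mean RTs (ms) for a 2x2 design.
# rt[(a, b)] is mean RT with factor A at level a and factor B at level b.
rt = {
    (0, 0): 400.0, (0, 1): 450.0,  # factor B adds 50 ms when A = 0
    (1, 0): 460.0, (1, 1): 510.0,  # factor B adds 50 ms when A = 1 as well
}

# Interaction contrast: zero means the two factors' effects are additive,
# which under the serial-stage assumption suggests they affect
# different processing stages.
interaction = (rt[1, 1] - rt[1, 0]) - (rt[0, 1] - rt[0, 0])

print(interaction)  # 0.0 -> additive effects: different stages
```

A nonzero contrast (an interaction) would instead suggest the two manipulations act on the same stage.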

However, despite the success of the subtractive and additive factors methods, neither can estimate the duration of a mental process accurately enough, because their basic assumptions do not hold for every RT task. Both methods assume that mental processes are executed serially, which is what allows us to measure the time for a specific mental process by subtracting, adding, or changing one process. But the assumption of serial processing is not always true. Many of our mental operations are processed in parallel, because different functions are allocated to independent parts of the brain or to different sensory and motor modalities; for example, we can speak while listening, or smile while thinking. An estimate based on the serial assumption should therefore overestimate the total RT for a task, since some of its mental processes are executed in parallel. Another limitation of the subtractive and additive factors methods is that they assume the independence of different mental processes, that is, that we can influence one specific mental process without disturbing the others. In reality, this assumption does not hold either: the triggering of a mental process usually depends on the process that precedes it. For instance, we cannot carry out response selection without first completing perceptual encoding. If we manipulate the stage of perceptual encoding, we do not change the duration of perceptual encoding only; we are likely to change the subsequent response selection as well. A failure of perceptual encoding will also cause us to fail to come up with the relevant response, which will certainly lengthen the duration of response selection.

To sum up, the invention of the subtractive and additive factors methods has contributed greatly to mental chronometry, making it possible to identify a mental process and measure its duration. But we should also be aware of the methods' limitations: their assumptions of serial processing and of the independence of different mental processes.

REFERENCES

Johnson, A., & Proctor, R. W. (2004). Information processing and the study of attention (excerpt). In Attention: Theory and practice (pp. 32-37). Sage.

Wickens, C. D., & Hollands, J. G. (1992). Engineering psychology and human performance (2nd ed., pp. 335-339). Harper Collins.

Human Factors/Ergonomics: Using Psychology to Make a Better and Safer World



by Michael S. Wogalter and Wendy A. Rogers - North Carolina State University, Georgia Institute of Technology

Fields of Psychology

What is Human Factors/Ergonomics (HF/E), and why does the field have two names? The field of HF/E is the scientific discipline that attempts to find the best ways to design products, equipment, and systems so that people are maximally productive, satisfied, and safe. Historically, the term human factors has been used in the United States, and the term ergonomics has been used in Europe. Other terms used to describe the field are engineering psychology and applied experimental psychology. Whatever the name, HF/E is the science that brings together psychology and engineering design.
The field of HF/E is multidisciplinary and benefits from the input of experts from domains such as psychology, engineering, computer science, biomechanics, medicine, and others.
Frequently, the HF/E professional plays the role of mediator between divergent interests, advocating for the human point of view in the design of products, equipment, and systems by championing designs that make maximal use of the magnificent abilities people possess and that limit tasks where people are likely to make errors.
Early contributions to the establishment of HF/E included the analysis of time and motion of people doing work, and determining human capabilities and limitations in relation to job demands. Most people credit the beginning of the field with the military during World War II. Pilots were flying their airplanes into the ground, and eventually psychologists were called in to find out why. We'd call it "human error" today, and part of the reason for the aircraft crashes was the lack of standardization between different aircraft models. The growing complexity of military hardware during this time period was revealing for the first time in history that even highly selected individuals who were given extensive training could not do the tasks that they needed to do. Pilots were not able to control their aircraft under stressful emergencies. The machines outstripped people's capabilities to use them. Investigations revealed that pilots had certain expectations of how things should work (for example, the location of the landing gear control and how to activate it), and these were frequently violated by aircraft designers (who frequently knew very little about people's abilities and limitations). Before WWII, it was assumed that people could eventually learn whatever they were given if they were trained properly. Since WWII, the field has blossomed as is evident from the examples provided in the next section.

Examples of Human Factors/Ergonomics Applications

Most people, if they even know the term ergonomics, might recognize it as dealing with chairs or possibly automotive displays. While the design of chairs and automobiles is within the purview of HF/E, the field is much broader than that. In fact, many HF/E professionals believe that nearly all aspects of daily activities are within the domain of HF/E. The field deals with the interface between people and things, whether it be a dial on a car dashboard or a control on a stove top. The fundamental philosophy of HF/E is that all products, equipment, and systems are ultimately made for people and should reflect the goals of user satisfaction, safety, and usability. Table 1 lists some examples of the type of issues on which HF/E specialists focus.
Two specific examples might serve to illustrate HF/E considerations. The first is that as commonplace as automated teller machines (ATMs) have become, many older adults do not use them even though they could benefit from their convenience (see Figure 1). The goal of an HF/E specialist would be to ensure that the design of the machine was easy to use (including the design of the buttons, the wording, the spatial layout and the sequencing of the displays, etc.). Moreover, an HF/E person might suggest employing an outreach training program to assist first-time users. The ultimate HF/E solution would be, however, to make the technology so obvious that training is not necessary. Many people can't program a VCR. You might know a statistics program that could be made easier to use and understand. These are the sorts of systems that could benefit from HF/E considerations.
The second example concerns pictorial symbols. Increasingly, symbols are being used to convey concepts to people who do not understand the primary language of the locale, and this is becoming increasingly important with people and companies involved with international travel and trade. In Figure 2, the pictorial on the left is from an actual sign on automatic doors like you might see at hospitals and airports. What does it mean? The slash obscures a critical feature of the underlying symbol. The pictorial could be interpreted as "Do not stand" or the opposite, "Do not walk." The interpretation the door manufacturer wanted to convey is the first one, because the doors sometimes close unexpectedly. Can you see that the pictorial symbol could be interpreted as the opposite of its intended meaning? The alternative interpretation was apparently missed by the designer. This is called a critical confusion because the meaning can create a hazard. Fortunately, most people probably do not have the chance to misinterpret this symbol. This is because whenever a person walks up to the door, the doors slide to the side, out of the way. The problem is (a) that the sensors sometimes do not pick up people standing at the threshold, and (b) that these people haven't seen the sign. People have been knocked to the ground by automatic doors that have closed unexpectedly, and for some fragile individuals that event has produced injury. An HF/E analyst would first want to "design out" the hazard (i.e., so it can't close on anyone) using, for example, better sensors and more reliable and better designed components and systems. If you can't design out the hazard, then at least you ought to guard against the hazard contacting and injuring people. When warnings are used, they ought to be designed so target audiences grasp the intended message quickly and readily with little time and effort.

Careers in HF/E

There is a wide range of opportunities in the field of HF/E:

-- aerospace systems
-- accident analysis
-- computer software and hardware design
-- communications technology
-- educational technology
-- forensic psychology
-- government research laboratories (Air Force, Army, Navy, NASA)
-- graphics and information design
-- health and medical technology design
-- systems management
-- training development
-- university faculty
-- usability analysis
-- virtual reality
-- workplace design

HF/E is an area in which one does not necessarily need a PhD or even a master's degree to work in the field (although most human factors psychologists with bachelor's degrees have had some relevant graduate school experience). A recent salary survey of HFES members (Lovvoll, 1997)--using data from only those people reporting that their last degree was from a psychology department--shows that a decent living may be earned at all education levels, although it must be noted that the totals cut across all years of experience (see Table 2).
Another method of assessing salaries in the field is to group the job categories as shown in Table 3.


Learning More About Human Factors/Ergonomics

Many students have not heard of the field of HF/E, in part because it is often not a course in the curriculum, it is usually not covered in other courses, and many psychology professors do not know enough about it to inform their students (Martin & Wogalter, 1997). However, there are several organizations that encourage student participation and membership.
American Psychological Association (APA). Division 21 of APA is the Division of Applied Experimental and Engineering Psychology. For more information about becoming a student member of the division, contact Cathy Gaddy at cgaddy@aaas.org. You may also access information about APA's Division 21 (Applied Experimental and Engineering Psychology) on the Internet: http://www.apa.org/about/division.html#d21.
Human Factors and Ergonomics Society (HFES). HFES is the largest U.S. organization in the field with approximately 5,000 members. Nearly half of the members are psychologists, with the other members coming from fields such as engineering, computer science, system design, and others. For more information about becoming a student member of HFES, contact Diane de Mailly at hfesdm@aol.com or call (310) 394-1811. The HFES home page may be found at http://www.hfes.org. From this site you can download the complete listing of HF/E graduate programs in the U.S. and Canada. They also have a year-round job placement service.
Another way to learn more about the field of HF/E is to head for the library and browse through a textbook on the topic. You will surely be amazed by the range of topics covered. Some of the standard textbooks in the field are:

  • Proctor, R. W., & Van Zandt, T. (1994). Human factors in simple and complex systems. Boston: Allyn and Bacon.
  • Salvendy, G. (1997). Handbook of human factors and ergonomics. New York: Wiley and Sons.
  • Sanders, M. S., & McCormick, E. J. (1993). Human factors in engineering and design (7th ed.). New York: McGraw-Hill.
  • Wickens, C. D. (1992). Engineering psychology and human performance. New York: HarperCollins.

The field of HF/E is exciting, challenging, and important. Specializing in this field will enable you to get involved in the development of the future as well as to help individuals interact safely and effectively with today's technology. Although things will undoubtedly get more complex, they can potentially be made easier to use, benefiting our lives.


References

Lovvoll, D. (1997). Salary survey. HFES Bulletin, 40(5), 1-3.

Martin, D. W., & Wogalter, M. S. (1997). The exposure of undergraduate students to human factors/ergonomics instruction. Proceedings of the 41st Annual Meeting of the Human Factors and Ergonomics Society (pp. 470-473). Santa Monica, CA: HFES.


This article is part of a continuing series on the various fields of psychology and the careers available within those fields.
Correspondence concerning this article should be addressed to Michael S. Wogalter, Department of Psychology, North Carolina State University, 640 Poe Hall, Campus Box 7801, Raleigh, NC 27695-7801. Electronic mail may be sent to wogalter@ncsu.edu.


ABOUT THE AUTHORS: Michael S. Wogalter, PhD, is an associate professor of psychology at North Carolina State University, Raleigh, NC. Before coming to NCSU, he held faculty appointments at Rensselaer Polytechnic Institute and the University of Richmond. He received his PhD in human factors psychology from Rice University, an MA in human experimental psychology from the University of South Florida, and a BA in psychology from the University of Virginia.
He teaches graduate-level courses in human-computer interaction and risk communication, and undergraduate courses in ergonomics. Most of his research focuses on hazard perception, warnings, complex visual and auditory displays, and human information processing. An active member of the Human Factors and Ergonomics Society, he is currently secretary-treasurer and is a member of the Executive Council. He holds membership in several other professional organizations including APA, APS, the Ergonomics Society, and Sigma Xi. He is also on the editorial boards of the journals Human Factors, Ergonomics, Psychology & Marketing, and Occupational Ergonomics.
Wendy Rogers, PhD, is currently an associate professor in the School of Psychology at the Georgia Institute of Technology, where she is a member of the engineering psychology program. She received her BA from Southeastern Massachusetts University and her MS (1989) and PhD (1991) from Georgia Institute of Technology.
Her research interests include skill acquisition, human factors, training, attention, automaticity, individual differences, and cognitive aging. Her research is currently funded by the National Institutes of Health (National Institute on Aging) as part of the Center for Applied Cognitive Research on Aging. She serves on the editorial boards of Psychology and Aging, Experimental Aging Research, Ergonomics in Design, and Human Factors. She is immediate past president of the Division of Applied Experimental and Engineering Psychology (APA's Division 21). She is also the chair of Student Affairs and a member of the Executive Council of the Human Factors and Ergonomics Society.


____________________________________________

Fall 1998 issue of Eye on Psi Chi (Vol. 3, No. 1, pp. 23-26), published by Psi Chi, The National Honor Society in Psychology (Chattanooga, TN). Copyright, 1998, Psi Chi, The National Honor Society in Psychology. All rights reserved.