Saturday, September 27, 2008

A Brief History of Human Computer Interaction Technology


Brad A. Myers


Carnegie Mellon University School of Computer Science Technical
Report CMU-CS-96-163
and

Human Computer Interaction Institute
Technical Report CMU-HCII-96-103


December, 1996


Please cite this work as:

Brad A. Myers. "A Brief History of Human Computer Interaction
Technology."
ACM interactions. Vol. 5, no. 2, March, 1998. pp. 44-54.



Human Computer Interaction Institute


School of Computer Science

Carnegie Mellon University

Pittsburgh, PA 15213-3891


bam@a.gp.cs.cmu.edu



Abstract


This article summarizes the historical development of major advances in
human-computer interaction technology, emphasizing the pivotal role of
university research in the advancement of the field.




Copyright (c) 1996 -- Carnegie Mellon University




A short excerpt from this article appeared as part of "Strategic Directions
in
Human Computer Interaction," edited by Brad Myers, Jim Hollan, Isabel Cruz,
ACM Computing Surveys, 28(4), December 1996




This research was partially sponsored by NCCOSC under Contract No.
N66001-94-C-6037, Arpa Order No. B326 and partially by NSF under grant number
IRI-9319969. The views and conclusions contained in this document are those
of
the authors and should not be interpreted as representing the official
policies, either expressed or implied, of NCCOSC or the U.S. Government.


Keywords: Human Computer Interaction, History, User Interfaces,
Interaction Techniques.





1. Introduction

Research in Human-Computer Interaction (HCI) has been spectacularly successful,
and has fundamentally changed computing. Just one example is the ubiquitous
graphical interface used by Microsoft Windows 95, which is based on the
Macintosh, which is based on work at Xerox PARC, which in turn is based on
early research at the Stanford Research Laboratory (now SRI) and at the
Massachusetts Institute of Technology. Another example is that virtually all
software written today employs user interface toolkits and interface builders,
concepts which were developed first at universities. Even the spectacular
growth of the World-Wide Web is a direct result of HCI research: applying
hypertext technology to browsers allows one to traverse a link across the
world with a click of the mouse. Interface improvements, more than anything
else, have triggered this explosive growth. Furthermore, the research that
will lead to the user interfaces for the computers of tomorrow is happening
at universities and a few corporate research labs.


This paper tries to briefly summarize many of the important research
developments in Human-Computer Interaction (HCI) technology. By "research,"
I
mean exploratory work at universities and government and corporate research
labs (such as Xerox PARC) that is not directly related to products. By "HCI
technology," I am referring to the computer side of HCI. A companion article
on the history of the "human side," discussing the contributions from
psychology, design, human factors and ergonomics would also be appropriate.


A motivation for this article is to overcome the mistaken impression that much
of the important work in Human-Computer Interaction occurred in industry, and
that if university research in Human-Computer Interaction is not supported,
industry will just carry on anyway. This is simply not true. This paper tries
to show that many of the most famous HCI successes developed by companies are
deeply rooted in university research. In fact, virtually all of today's major
interface styles and applications have been significantly influenced by
research at universities and labs, often with government funding. To
illustrate this, this paper lists the funding sources of some of the major
advances. Without this research, many of the advances in the field of HCI
would probably not have taken place, and as a consequence, the user interfaces
of commercial products would be far more difficult to use and learn than they
are today. As described by Stu Card:


"Government funding of advanced human-computer interaction technologies built
the intellectual capital and trained the research teams for pioneer systems
that, over a period of 25 years, revolutionized how people interact with
computers. Industrial research laboratories at the corporate level in Xerox,
IBM, AT&T, and others played a strong role in developing this technology
and bringing it into a form suitable for the commercial arena." [6, p. 162]


Figure 1 shows time lines for some of the technologies discussed in this
article. Of course, a deeper analysis would reveal much interaction between
the university, corporate research and commercial activity streams. It is
important to appreciate that years of research are involved in creating and
making these technologies ready for widespread use. The same will be true
for
the HCI technologies that will provide the interfaces of tomorrow.


It is clearly impossible to list every system and source in a paper of this
scope, but I have tried to represent the earliest and most influential systems.
Although there are a number of other surveys of HCI topics (see, for example
[1] [10] [33] [38]), none cover as many aspects as this one, or try to be
as
comprehensive in finding the original influences. Another useful resource
is
the video "All The Widgets," which shows the historical progression of a
number
of user interface ideas [25].


The technologies covered in this paper include fundamental interaction styles
like direct manipulation, the mouse pointing device, and windows; several
important kinds of application areas, such as drawing, text editing and
spreadsheets; the technologies that will likely have the biggest impact on
interfaces of the future, such as gesture recognition, multimedia, and 3D;
and
the technologies used to create interfaces using the other technologies,
such as user interface management systems, toolkits, and interface builders.






Figure 1: Approximate time lines showing where work was performed
on
some major technologies discussed in this article.



2. Basic Interactions

  • Direct Manipulation of graphical objects: The now ubiquitous
    direct
    manipulation interface, where visible objects on the screen are directly
    manipulated with a pointing device, was first demonstrated by Ivan Sutherland
    in Sketchpad [44], which was his 1963 MIT PhD thesis. SketchPad supported
    the
    manipulation of objects using a light-pen, including grabbing objects, moving
    them, changing size, and using constraints. It contained the seeds of myriad
    important interface ideas. The system was built at Lincoln Labs with support
    from the Air Force and NSF. William Newman's Reaction Handler [30], created
    at
    Imperial College, London (1966-67) provided direct manipulation of graphics,
    and introduced "Light Handles," a form of graphical potentiometer, that was
    probably the first "widget." Another early system was AMBIT/G (implemented
    at
    MIT's Lincoln Labs, 1968, ARPA funded). It employed, among other interface
    techniques, iconic representations, gesture recognition, dynamic menus with
    items selected using a pointing device, selection of icons by pointing, and
    moded and mode-free styles of interaction. David Canfield Smith coined the
    term "icons" in his 1975 Stanford PhD thesis on Pygmalion [41] (funded by
    ARPA
    and NIMH) and Smith later popularized icons as one of the chief designers
    of
    the Xerox Star [42]. Many of the interaction techniques popular in direct
    manipulation interfaces, such as how objects and text are selected, opened,
    and
    manipulated, were researched at Xerox PARC in the 1970's. In particular,
    the
    idea of "WYSIWYG" (what you see is what you get) originated there with systems
    such as the Bravo text editor and the Draw drawing program [10]. The concept
    of
    direct manipulation interfaces for everyone was envisioned by Alan Kay of
    Xerox
    PARC in a 1977 article about the "Dynabook" [16]. The first commercial systems
    to make extensive use of Direct Manipulation were the Xerox Star (1981) [42],
    the Apple Lisa (1982) [51] and Macintosh (1984) [52]. Ben Shneiderman at
    the
    University of Maryland coined the term "Direct Manipulation" in 1982,
    identified its components, and gave it psychological foundations [40].


  • The Mouse: The mouse was developed at Stanford Research Laboratory
    (now SRI) in 1965 as part of the NLS project (funding from ARPA, NASA, and
    Rome
    ADC) [9] to be a cheap replacement for light-pens, which had been used at
    least
    since 1954 [10, p. 68]. Many of the current uses of the mouse were
    demonstrated by Doug Engelbart as part of NLS in a movie created in 1968
    [8].
    The mouse was then made famous as a practical input device by Xerox PARC
    in the
    1970's. It first appeared commercially as part of the Xerox Star (1981),
    the
    Three Rivers Computer Company's PERQ (1981) [23], the Apple Lisa (1982),
    and
    Apple Macintosh (1984).


  • Windows: Multiple tiled windows were demonstrated in Engelbart's
    NLS
    in 1968 [8]. Early research at Stanford on systems like COPILOT (1974) [46]
    and at MIT with the EMACS text editor (1974) [43] also demonstrated tiled
    windows. Alan Kay proposed the idea of overlapping windows in his 1969
    University of Utah PhD thesis [15] and they first appeared in 1974 in his
    Smalltalk system [11] at Xerox PARC, and soon after in the InterLisp system
    [47]. Some of the first commercial uses of windows were on Lisp Machines
    Inc.
    (LMI) and Symbolics Lisp Machines (1979), which grew out of MIT AI Lab
    projects. The Cedar Window Manager from Xerox PARC was the first major tiled
    window manager (1981) [45], followed soon by the Andrew window manager [32]
    by
    Carnegie Mellon University's Information Technology Center (1983, funded
    by
    IBM). The main commercial systems popularizing windows were the Xerox Star
    (1981), the Apple Lisa (1982), and most importantly the Apple Macintosh (1984).
    The early versions of the Star and Microsoft Windows were tiled, but eventually
    they supported overlapping windows like the Lisa and Macintosh. The X Window
    System, a current international standard, was developed at MIT in 1984 [39].
    For a survey of window managers, see [24].

3. Application Types

  • Drawing programs: Much of the current technology was
    demonstrated in
    Sutherland's 1963 Sketchpad system. The use of a mouse for graphics was
    demonstrated in NLS (1965). In 1968 Ken Pulfer and Grant Bechthold at the
    National Research Council of Canada built a mouse out of wood patterned after
    Engelbart's and used it with a key-frame animation system to draw all the
    frames of a movie. A subsequent movie, "Hunger" (1971), won a number of
    awards, and was drawn using a tablet instead of the mouse (funding by the
    National Film Board of Canada) [3]. William Newman's Markup (1975) was the
    first drawing program for Xerox PARC's Alto, followed shortly by Patrick
    Baudelaire's Draw which added handling of lines and curves [10, p. 326].
    The
    first computer painting program was probably Dick Shoup's "Superpaint" at
    PARC
    (1974-75).


  • Text Editing: In 1962 at the Stanford Research Lab, Engelbart
    proposed, and later implemented, a word processor with automatic word wrap,
    search and replace, user-definable macros, scrolling text, and commands to
    move, copy, and delete characters, words, or blocks of text. Stanford's
    TVEdit
    (1965) was one of the first CRT-based display editors that was widely used
    [48]. The Hypertext Editing System [50, p. 108] from Brown University had
    screen editing and formatting of arbitrary-sized strings with a lightpen
    in
    1967 (funding from IBM). NLS demonstrated mouse-based editing in 1968.
    TECO
    from MIT was an early screen-editor (1967) and EMACS [43] developed from
    it in
    1974. Xerox PARC's Bravo [10, p. 284] was the first WYSIWYG editor-formatter
    (1974). It was designed by Butler Lampson and Charles Simonyi who had started
    working on these concepts around 1970 while at Berkeley. The first commercial
    WYSIWYG editors were the Star, LisaWrite and then MacWrite. For a survey
    of
    text editors, see [22] [50, p. 108].


  • Spreadsheets: The initial spreadsheet was VisiCalc, which was developed
    by Frankston and Bricklin (1977-8) for the Apple II while they were students
    at
    MIT and the Harvard Business School. The solver was based on a
    dependency-directed backtracking algorithm by Sussman and Stallman at the
    MIT
    AI Lab.


  • HyperText: The idea for hypertext (where documents are linked
    to
    related documents) is credited to Vannevar Bush's famous MEMEX idea from
    1945
    [4]. Ted Nelson coined the term "hypertext" in 1965 [29]. Engelbart's NLS
    system [8] at the Stanford Research Laboratories in 1965 made extensive use
    of
    linking (funding from ARPA, NASA, and Rome ADC). The "NLS Journal" [10,
    p.
    212] was one of the first on-line journals, and it included full linking
    of
    articles (1970). The Hypertext Editing System, jointly designed by Andy
    van
    Dam, Ted Nelson, and two students at Brown University (funding from IBM)
    was
    distributed extensively [49]. The University of Vermont's PROMIS (1976)
    was
    the first Hypertext system released to the user community. It was used to
    link
    patient and patient care information at the University of Vermont's medical
    center. The ZOG project (1977) from CMU was another early hypertext system,
    and was funded by ONR and DARPA [36]. Ben Shneiderman's Hyperties was the
    first system where highlighted items in the text could be clicked on to go
    to
    other pages (1983, Univ. of Maryland) [17]. HyperCard from Apple (1988)
    significantly helped to bring the idea to a wide audience. There have been
    many other hypertext systems through the years. Tim Berners-Lee used the
    hypertext idea to create the World Wide Web in 1990 at the government-funded
    European Particle Physics Laboratory (CERN). Mosaic, the first popular
    hypertext browser for the World-Wide Web, was developed at the Univ. of
    Illinois' National Center for Supercomputer Applications (NCSA). For a more
    complete history of HyperText, see [31].


  • Computer Aided Design (CAD): The same 1963 IFIPS conference at
    which
    Sketchpad was presented also contained a number of CAD systems, including
    Doug
    Ross's Computer-Aided Design Project at MIT in the Electronic Systems Lab
    [37]
    and Coons' work at MIT with SketchPad [7]. Timothy Johnson's pioneering
    work
    on the interactive 3D CAD system Sketchpad 3 [13] was his 1963 MIT MS thesis
    (funded by the Air Force). The first CAD/CAM system in industry was probably
    General Motors' DAC-1 (about 1963).


  • Video Games: The first graphical video game was probably SpaceWar
    by
    Slug Russell of MIT in 1962 for the PDP-1 [19, p. 49], including the first
    computer joysticks. An early computer Adventure game was created by Will
    Crowther at BBN, and Don Woods developed this into a more sophisticated
    Adventure game at Stanford in 1966 [19, p. 132]. Conway's game of LIFE was
    implemented on computers at MIT and Stanford in 1970. The first popular
    commercial game was Pong (about 1976).

4. Up-and-Coming Areas

  • Gesture Recognition: The first pen-based input device,
    the RAND
    tablet, was funded by ARPA. Sketchpad used light-pen gestures (1963).
    Teitelman in 1964 developed the first trainable gesture recognizer. A very
    early demonstration of gesture recognition was Tom Ellis' GRAIL system on
    the
    RAND tablet (1964, ARPA funded). It was quite common in light-pen-based
    systems to include some gesture recognition, for example in the AMBIT/G system
    (1968 -- ARPA funded). A gesture-based text editor using proof-reading symbols
    was developed at CMU by Michael Coleman in 1969. Bill Buxton at the University
    of Toronto has been studying gesture-based interactions since 1980. Gesture
    recognition has been used in commercial CAD systems since the 1970s, and
    came
    to universal notice with the Apple Newton in 1992.


  • Multi-Media: The FRESS project at Brown used multiple windows
    and
    integrated text and graphics (1968, funding from industry). The Interactive
    Graphical Documents project at Brown was the first hypermedia (as opposed
    to
    hypertext) system, and used raster graphics and text, but not video (1979-1983,
    funded by ONR and NSF). The Diamond project at BBN (starting in 1982, DARPA
    funded) explored combining multimedia information (text, spreadsheets,
    graphics, speech). The Movie Manual at the Architecture Machine Group (MIT)
    was one of the first to demonstrate mixed video and computer graphics in
    1983
    (DARPA funded).


  • 3-D: The first 3-D system was probably Timothy Johnson's 3-D CAD
    system mentioned above (1963, funded by the Air Force). The "Lincoln Wand"
    by
    Larry Roberts was an ultrasonic 3D location sensing system, developed at
    Lincoln Labs (1966, ARPA funded). That system also had the first interactive
    3-D hidden line elimination. An early use was for molecular modelling [18].
    The late 60's and early 70's saw the flowering of 3D raster graphics research
    at the University of Utah with Dave Evans, Ivan Sutherland, Romney, Gouraud,
    Phong, and Watkins, much of it government funded. Also, the
    military-industrial flight simulation work of the 60's - 70's led the way
    to
    making 3-D real-time with commercial systems from GE, Evans&Sutherland,
    Singer/Link (funded by NASA, Navy, etc.). Another important center of current
    research in 3-D is Fred Brooks' lab at UNC (e.g. [2]).


  • Virtual Reality and "Augmented Reality": The original work on
    VR was
    performed by Ivan Sutherland when he was at Harvard (1965-1968, funding
    by Air
    Force, CIA, and Bell Labs). Very important early work was by Tom Furness
    when
    he was at Wright-Patterson AFB. Myron Krueger's early work at the University
    of Connecticut was influential. Fred Brooks' and Henry Fuchs' groups at
    UNC
    did a lot of early research, including the study of force feedback (1971,
    funding from US Atomic Energy Commission and NSF). Much of the early research
    on head-mounted displays and on the DataGlove was supported by NASA.


  • Computer Supported Cooperative Work: Doug Engelbart's 1968
    demonstration of NLS [8] included the remote participation of multiple people
    at various sites (funding from ARPA, NASA, and Rome ADC). Licklider and
    Taylor
    predicted on-line interactive communities in a 1968 article [20] and
    speculated about the problem of access being limited to the privileged.
    Electronic mail, still the most widespread multi-user software, was enabled
    by
    the ARPAnet, which became operational in 1969, and by the Ethernet from Xerox
    PARC in 1973. An early computer conferencing system was Turoff's EIES system
    at the New Jersey Institute of Technology (1975).


  • Natural language and speech: The fundamental research for speech
    and
    natural language understanding and generation has been performed at CMU,
    MIT,
    SRI, BBN, IBM, AT&T Bell Labs and BellCore, much of it government funded.
    See, for example, [34] for a survey of the early work.

5. Software Tools and Architectures

The area of user interface software tools is quite active now, and
many
companies are selling tools. Most of today's applications are implemented
using various forms of software tools. For a more complete survey and
discussion of UI tools, see [26].


  • UIMSs and Toolkits: (These are software libraries and tools that
    support creating interfaces by writing code.) The first User Interface
    Management System (UIMS) was William Newman's Reaction Handler [30] created
    at
    Imperial College, London (1966-67 with SRC funding). Most of the early work
    was done at universities (Univ. of Toronto with Canadian government funding,
    George Washington Univ. with NASA, NSF, DOE, and NBS funding, Brigham Young
    University with industrial funding, etc.). The term "UIMS" was coined by
    David
    Kasik at Boeing (1982) [14]. Early window managers such as Smalltalk (1974)
    and InterLisp, both from Xerox PARC, came with a few widgets, such as popup
    menus and scrollbars. The Xerox Star (1981) was the first commercial system
    to
    have a large collection of widgets. The Apple Macintosh (1984) was the first
    to actively promote its toolkit for use by other developers to enforce a
    consistent interface. An early C++ toolkit was InterViews [21], developed
    at
    Stanford (1988, industrial funding). Much of the modern research is being
    performed at universities, for example the Garnet (1988) [28] and Amulet
    (1994) [27] projects at CMU (ARPA funded), and subArctic at Georgia Tech
    (1996,
    funding by Intel and NSF).


  • Interface Builders: (These are interactive tools that allow interfaces
    composed of widgets such as buttons, menus and scrollbars to be placed using
    a
    mouse.) The Steamer project at BBN (1979-85; ONR funding) demonstrated many
    of
    the ideas later incorporated into interface builders and was probably the
    first
    object-oriented graphics system. Trillium [12] was developed at Xerox PARC
    in
    1981. Another early interface builder was the MenuLay system [5] developed
    by
    Bill Buxton at the University of Toronto (1983, funded by the Canadian
    Government). The Macintosh (1984) included a "Resource Editor" which allowed
    widgets to be placed and edited. Jean-Marie Hullot created "SOS Interface"
    in
    Lisp for the Macintosh while working at INRIA (1984, funded by the French
    government) which was the first modern "interface builder." Hullot built
    this
    into a commercial product in 1986 and then went to work for NeXT and created
    the NeXT Interface Builder (1988), which popularized this type of tool.
    Now
    there are literally hundreds of commercial interface builders.


  • Component Architectures: The idea of creating interfaces by connecting
    separately written components was first demonstrated in the Andrew project
    [32]
    by Carnegie Mellon University's Information Technology Center (1983, funded
    by
    IBM). It is now being widely popularized by Microsoft's OLE and Apple's
    OpenDoc architectures.

6. Discussion

It is clear that all of the most important innovations in Human-Computer
Interaction have benefited from research at both corporate research labs and
universities, much of it funded by the government. The conventional style of
graphical user interface, which uses windows, icons, menus, and a mouse, is in
a phase of standardization, where almost everyone is using the same, standard
technology and just making minute, incremental changes. Therefore, it is
important that university, corporate, and government-supported research
continue, so that we can develop the science and technology needed for the
user interfaces of the future.


Another important argument in favor of HCI research in universities is that
computer science students need to know about user interface issues. User
interfaces are likely to be one of the main value-added competitive advantages
of the future, as both hardware and basic software become commodities. If
students do not know about user interfaces, they will not serve industry
needs.
It seems that only through computer science does HCI research disseminate
into products. Furthermore, without appropriate levels of funding of academic
HCI research, there will be fewer PhD graduates in HCI to perform research
in
corporate labs, and fewer top-notch graduates in this area will be interested
in being professors, so the needed user interface courses will not be
offered.


As computers get faster, more of the processing power is being devoted to
the
user interface. The interfaces of the future will use gesture recognition,
speech recognition and generation, "intelligent agents," adaptive interfaces,
video, and many other technologies now being investigated by research groups
at
universities and corporate labs [35]. It is imperative that this research
continue and be well-supported.

ACKNOWLEDGMENTS

I must thank a large number of people who responded to posts of earlier
versions of this article on the announcements.chi mailing list for their
very
generous help, and to Jim Hollan who helped edit the short excerpt of this
article. Much of the information in this article was supplied by (in
alphabetical order): Stacey Ashlund, Meera M. Blattner, Keith Butler, Stuart
K.
Card, Bill Curtis, David E. Damouth, Dan Diaper, Dick Duda, Tim T.K. Dudley,
Steven Feiner, Harry Forsdick, Bjorn Freeman-Benson, John Gould, Wayne Gray,
Mark Green, Fred Hansen, Bill Hefley, D. Austin Henderson, Jim Hollan,
Jean-Marie Hullot, Rob Jacob, Bonnie John, Sandy Kobayashi, T.K. Landauer,
John
Leggett, Roger Lighty, Marilyn Mantei, Jim Miller, William Newman, Jakob
Nielsen, Don Norman, Dan Olsen, Ramesh Patil, Gary Perlman, Dick Pew, Ken
Pier,
Jim Rhyne, Ben Shneiderman, John Sibert, David C. Smith, Elliot Soloway,
Richard Stallman, Ivan Sutherland, Dan Swinehart, John Thomas, Alex Waibel,
Marceli Wein, Mark Weiser, Alan Wexelblat, and Terry Winograd. Editorial
comments were also provided by the above as well as Ellen Borison, Rich
McDaniel, Rob Miller, Bernita Myers, Yoshihiro Tsujino, and the reviewers.

References


1. Baecker, R., et al., "A Historical and Intellectual Perspective,"
in
Readings in Human-Computer Interaction: Toward the Year 2000, Second
Edition
, R. Baecker, et al., Editors. 1995, Morgan Kaufmann
Publishers, Inc.: San Francisco. pp. 35-47.


2. Brooks, F. "The Computer "Scientist" as Toolsmith--Studies in Interactive
Computer Graphics," in IFIP Conference Proceedings. 1977. pp.
625-634.


3. Burtnyk, N. and Wein, M., "Computer Generated Key Frame Animation."
Journal Of the Society of Motion Picture and Television Engineers,
1971.
8(3): pp. 149-153.


4. Bush, V., "As We May Think." The Atlantic Monthly, 1945.
176(July): pp. 101-108. Reprinted and discussed in
interactions,
3(2), Mar 1996, pp. 35-67.


5. Buxton, W., et al. "Towards a Comprehensive User Interface Management
System," in Proceedings SIGGRAPH'83: Computer Graphics. 1983. Detroit,
Mich. 17. pp. 35-42.


6. Card, S.K., "Pioneers and Settlers: Methods Used in Successful User
Interface Design," in Human-Computer Interface Design: Success Stories,
Emerging Methods, and Real-World Context
, M. Rudisill, et al.,
Editors. 1996, Morgan Kaufmann Publishers: San Francisco. pp. 122-169.


7. Coons, S. "An Outline of the Requirements for a Computer-Aided Design
System," in AFIPS Spring Joint Computer Conference. 1963. 23.
pp. 299-304.


8. Engelbart, D. and English, W., "A Research Center for Augmenting Human
Intellect." 1968. Reprinted in ACM SIGGRAPH Video Review, 1994. 106.


9. English, W.K., Engelbart, D.C., and Berman, M.L., "Display Selection
Techniques for Text Manipulation." IEEE Transactions on Human Factors
in Electronics
, 1967. HFE-8(1)


10. Goldberg, A., ed. A History of Personal Workstations. 1988,
Addison-Wesley Publishing Company: New York, NY. 537.


11. Goldberg, A. and Robson, D. "A Metaphor for User Interface Design," in
Proceedings of the 12th Hawaii International Conference on System
Sciences.
1979. 1. pp. 148-157.


12. Henderson Jr, D.A. "The Trillium User Interface Design Environment,"
in
Proceedings SIGCHI'86: Human Factors in Computing Systems. 1986. Boston,
MA. pp. 221-227.


13. Johnson, T. "Sketchpad III: Three Dimensional Graphical Communication
with
a Digital Computer," in AFIPS Spring Joint Computer Conference. 1963.
23. pp. 347-353.


14. Kasik, D.J. "A User Interface Management System," in Proceedings
SIGGRAPH'82: Computer Graphics.
1982. Boston, MA. 16. pp. 99-106.


15. Kay, A., The Reactive Engine. PhD Thesis, Electrical Engineering
and Computer Science, University of Utah, 1969.


16. Kay, A., "Personal Dynamic Media." IEEE Computer, 1977.
10(3): pp. 31-42.


17. Koved, L. and Shneiderman, B., "Embedded menus: Selecting items in
context." Communications of the ACM, 1986. 4(29): pp.
312-318.


18. Levinthal, C., "Molecular Model-Building by Computer." Scientific
American
, 1966. 214(6): pp. 42-52.


19. Levy, S., Hackers: Heroes of the Computer Revolution. 1984, Garden
City, NY: Anchor Press/Doubleday.


20. Licklider, J.C.R. and Taylor, R.W., "The computer as Communication
Device." Sci. Tech., 1968. April: pp. 21-31.


21. Linton, M.A., Vlissides, J.M., and Calder, P.R., "Composing user interfaces
with InterViews." IEEE Computer, 1989. 22(2): pp. 8-22.


22. Meyrowitz, N. and Van Dam, A., "Interactive Editing Systems: Part 1 and
2." ACM Computing Surveys, 1982. 14(3): pp. 321-352.


23. Myers, B.A., "The User Interface for Sapphire." IEEE Computer
Graphics and Applications
, 1984. 4(12): pp. 13-23.


24. Myers, B.A., "A Taxonomy of User Interfaces for Window Managers."
IEEE Computer Graphics and Applications, 1988. 8(5): pp. 65-84.


25. Myers, B.A., "All the Widgets." SIGGRAPH Video Review, 1990. 57.


26. Myers, B.A., "User Interface Software Tools." ACM Transactions
on
Computer Human Interaction
, 1995. 2(1): pp. 64-103.


27. Myers, B.A., et al., The Amulet V2.0 Reference Manual.
Carnegie Mellon University Computer Science Department Report, Feb,
1996. System available from http://www.cs.cmu.edu/~amulet.


28. Myers, B.A., et al., "Garnet: Comprehensive Support for Graphical,
Highly-Interactive User Interfaces." IEEE Computer, 1990.
23(11): pp. 71-85.


29. Nelson, T. "A File Structure for the Complex, the Changing, and the
Indeterminate," in Proceedings ACM National Conference. 1965. pp.
84-100.


30. Newman, W.M. "A System for Interactive Graphical Programming," in AFIPS
Spring Joint Computer Conference.
1968. 28. pp. 47-54.


31. Nielsen, J., Multimedia and Hypertext: the Internet and Beyond.
1995, Boston: Academic Press Professional.


32. Palay, A.J., et al. "The Andrew Toolkit - An Overview," in
Proceedings Winter Usenix Technical Conference. 1988. Dallas, Tex.
pp.
9-21.


33. Press, L., "Before the Altair: The History of Personal Computing."
Communications of the ACM, 1993. 36(9): pp. 27-33.


34. Reddy, D.R., "Speech Recognition by Machine: A Review," in Readings
in
Speech Recognition
, A. Waibel and K.-F. Lee, Editors. 1990, Morgan
Kaufmann: San Mateo, CA. pp. 8-38.


35. Reddy, R., "To Dream the Possible Dream (Turing Award Lecture)."
Communications of the ACM, 1996. 39(5): pp. 105-112.


36. Robertson, G., Newell, A., and Ramakrishna, K., ZOG: A Man-Machine
Communication Philosophy. Carnegie Mellon University Technical Report,
August, 1977.


37. Ross, D. and Rodriguez, J. "Theoretical Foundations for the Computer-Aided
Design System," in AFIPS Spring Joint Computer Conference. 1963.
23. pp. 305-322.


38. Rudisill, M., et al., Human-Computer Interface Design: Success
Stories, Emerging Methods, and Real-World Context.
1996, San Francisco:
Morgan Kaufmann Publishers.


39. Scheifler, R.W. and Gettys, J., "The X Window System." ACM
Transactions on Graphics
, 1986. 5(2): pp. 79-109.


40. Shneiderman, B., "Direct Manipulation: A Step Beyond Programming
Languages." IEEE Computer, 1983. 16(8): pp. 57-69.


41. Smith, D.C., Pygmalion: A Computer Program to Model and Stimulate
Creative Thought.
1977, Basel, Stuttgart: Birkhauser Verlag. PhD Thesis,
Stanford University Computer Science Department, 1975.


42. Smith, D.C., et al. "The Star User Interface: an Overview," in
Proceedings of the 1982 National Computer Conference. 1982. AFIPS.
pp.
515-528.


43. Stallman, R.M., Emacs: The Extensible, Customizable, Self-Documenting
Display Editor
. MIT Artificial Intelligence Lab Report, Number, Aug,
1979, 1979.


44. Sutherland, I.E. "SketchPad: A Man-Machine Graphical Communication System,"
in AFIPS Spring Joint Computer Conference. 1963. 23. pp.
329-346.


45. Swinehart, D., et al., "A Structural View of the Cedar Programming
Environment." ACM Transactions on Programming Languages and
Systems
, 1986. 8(4): pp. 419-490.


46. Swinehart, D.C., Copilot: A Multiple Process Approach to Interactive
Programming Systems.
PhD Thesis, Computer Science Department Stanford
University, 1974, SAIL Memo AIM-230 and CSD Report STAN-CS-74-412.


47. Teitelman, W., "A Display Oriented Programmer's Assistant."
International Journal of Man-Machine Studies, 1979. 11: pp.
157-187. Also Xerox PARC Technical Report CSL-77-3, Palo Alto, CA, March
8,
1977.


48. Tolliver, B., TVEdit . Stanford Time Sharing Memo Report, Number,
March, 1965.


49. van Dam, A., et al. "A Hypertext Editing System for the 360,"
in
Proceedings Conference in Computer Graphics. 1969. University of
Illinois.


50. van Dam, A. and Rice, D.E., "On-line Text Editing: A Survey."
Computing Surveys, 1971. 3(3): pp. 93-114.


51. Williams, G., "The Lisa Computer System." Byte Magazine,
1983. 8(2): pp. 33-50.


52. Williams, G., "The Apple Macintosh Computer." Byte, 1984.
9(2): pp. 30-54.











Friday, September 26, 2008


---------Making Gabor-----Matlab------


source:http://faculty.washington.edu/ionefine/html/MakingAGabor.html


 


-------function summary-----------


  LINSPACE Linearly spaced vector.
    LINSPACE(X1, X2) generates a row vector of 100 linearly
    equally spaced points between X1 and X2.
 
    LINSPACE(X1, X2, N) generates N points between X1 and X2.
    For N < 2, LINSPACE returns X2.


 


------------------------------------


Contents



Making a Gabor


Here we are going to use some of the things we have learned to make a Gabor filter. These are regularly used for image processing. They are also popular as stimuli among vision scientists.

sd=0.3; % standard deviation
x=linspace(-1,1,100);
y=(1/sqrt(2*pi*sd)).*exp(-.5*((x/sd).^2)); % create a normal distribution
y=y./max(y); % scale it so it goes from 0-1

So y is a 1-d Gaussian with a standard deviation of sd=0.3. It has a minimum value of 0 and a maximum value of 1. We can plot this and see what it looks like.


plotting 1D Gaussian

plot(x, y);

create 2D Gaussian


Now we use the outer product to create a two-dimensional Gaussian filter, with a maximum value of 1 and a minimum of 0.

filt=(y'*y);
disp(min(filt(:)));
disp(max(filt(:)));
  1.4962e-005

1


view 2D Gaussian


And then we can look at what that two-dimensional Gaussian looks like. We are going to allow the colormap to take 256 possible values of gray. That means we want the displayed image to vary between 1 and 256.

colormap(gray(256));
newmax=256;
newmin=1;
delta = (newmax-newmin);
scaled_filt = delta*filt + newmin;
image(scaled_filt);

Using the Gaussian filter to filter other images


We can then use this Gaussian window to filter any image we want. Let’s say we want to filter a sinusoidal grating. We actually use much the same set of tricks.


create 1-D sinusoid

sf=6; % spatial freq in cycles per image
y2=sin(x*pi*sf);
y2=scaleif(y2, 0, 1); % scaleif: the tutorial's helper that linearly rescales a vector to [0,1]
plot(x, y2);

Create 1-D sinusoidal grating


This now gives us a 1-dimensional sinusoidal grating with a frequency of 6 cycles per image. We will scale it so it has a minimum value of 0 and a maximum value of 1. If we calculate the outer product of this sinusoid with itself we get a weird checkerboard.

img=(y2'*y2);
colormap(gray(256));
scaled_img = delta*img + newmin;
image(scaled_img);

Create 2-D sinusoid


What we want to do is calculate the outer product of the one-dimensional sinusoid with a vector of ones.

y3=ones(size(y2));
img=(y3'*y2);
colormap(gray(256))
scaled_img = delta*img + newmin;
image(scaled_img)
% Once again, img has a minimum of 0 and a maximum of 1
disp(max(img(:)))
disp(min(img(:)))
     1

0


Create a Gabor


Now creating a Gabor (a sinusoid windowed by a Gaussian) becomes a piece of cake. We simply multiply the two-dimensional Gaussian window by the two-dimensional sinusoidal grating.

gabor=img.*filt;
scaled_gabor=delta.*gabor+1;
colormap(gray(256))
image(scaled_gabor);

Note that throughout this process we have worked with two versions of the Gaussian and the Gabor. The original versions (filt, img) were scaled between 0 and 1. We did this to keep the nice property that when the Gaussian filter was at its maximum, the brightness of the grating didn’t change, and when the Gaussian filter was at 0 the Gabor was also at 0. But when we used image to look at these matrices we converted them to range between 1-256 so as to match the colormap.
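Collecting the steps above, the whole construction can be written as one short script. This is a sketch, not the tutorial's exact code: the Gaussian is built so it already peaks at 1, and an explicit (sin+1)/2 rescale stands in for the tutorial's scaleif helper.

```matlab
% Build a Gabor: a 2-D sinusoidal grating windowed by a 2-D Gaussian
sd = 0.3;                      % standard deviation of the Gaussian
sf = 6;                        % spatial frequency, cycles per image
x = linspace(-1, 1, 100);
g = exp(-.5*((x/sd).^2));      % 1-D Gaussian, peak value 1 at x = 0
filt = g'*g;                   % 2-D Gaussian window, range (0,1]
y2 = (sin(x*pi*sf)+1)/2;       % 1-D sinusoid rescaled to [0,1]
img = ones(size(y2))'*y2;      % 2-D vertical grating via outer product
gabor = img.*filt;             % window the grating
colormap(gray(256));
image(255*gabor + 1);          % map [0,1] onto colormap indices 1-256
```

The same two-version discipline applies here: gabor stays in [0,1] for the arithmetic, and only the argument to image is scaled into 1-256.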






 

source:http://psychtoolbox.org/wikka.php?wakka=PsychtoolboxTutorial

Psychtoolbox-3 Tutorial



It would be great if someone could extend this tutorial for new features and standard operating procedures for version 3.

Until then, the following downloadable PDF file may give you a coarse overview over PTB-3's new features:
Talk slides of Psychtoolbox presentation, given at ECVP 2007 Arezzo


Strengths of Matlab & Psychtoolbox



Interpretive general language & a good interface to hardware

Unlike most software packages for experimental psychology and visual psychophysics, the Psychtoolbox is very general. It doesn't contain specific support for generating gratings, running trials, etc. Rather, it provides the user of a high-level interpreted language (Matlab) with a well-defined interface to the graphics hardware (frame buffer and lookup table) that drives the display. In this sense, it is a very generic tool. The power comes from the fact that once you can write arbitrary matrices into the frame buffer and lookup table as fast as the machine can go, everything else is easy to program in Matlab.

Matlab is a high level interpreted language with extensive support for numerical calculation (The MathWorks, 1993). The Psychophysics Toolbox provides a Matlab interface to the computer’s hardware. The core Psychtoolbox routines provide access to the display frame buffer and color lookup table, allow synchronization with the vertical blanking, support millisecond timing, and facilitate the collection of observer responses.

The Psychtoolbox doesn't limit the user—if the experiment can be run on the hardware, it can be run with the Psychtoolbox. In comparison, other environments for creating perception experiments provide very evolved support for specific experiments. Our experience with software packages not based on a general-purpose programming language has been that the very first thing we wanted to do turned out to be impossible.

We think the Matlab-Psychtoolbox combination has four winning features that we’d recommend for any experiment-design environment:



  • A general purpose language (Matlab) allows you to do new things.
  • For programs that use hardware intensely (e.g. display, keyboard), an interpreted environment (e.g. Matlab) speeds up software development greatly because simple tests can be performed immediately.
  • The key Psychtoolbox routines are C code, callable as functions from Matlab, that encapsulate the hardware, presenting a simple software interface to the user that provides full control. (In particular, the Psychtoolbox Screen.mex function provides a consistent high-performance user interface to the display, overcoming differences in synchronization behavior among graphics drivers from many manufacturers, within and between Mac and Win platforms.)
  • The Psychtoolbox Rush function allows you to run an arbitrary bit of code with little or no interruption. We call this "hogging the machine", blocking interrupts for the few seconds of a critical stimulus presentation.


The Psychtoolbox also provides interfaces for timing, sound, keyboard, and the serial port. And it includes many useful Matlab routines, such as color space transformations (Brainard, 1995; Brainard, Pelli, and Robson, 2002) and the QUEST (Watson and Pelli, 1983; Pelli and Farell, 1995) threshold seeking algorithm.



2. Transforming numbers into movies

Using a high-level platform-independent language like Matlab, it's easy to produce a matrix of numbers specifying the desired luminances of all the pixels in the displayed image. Today's off-the-shelf personal computers can copy those numbers from memory to video memory quickly enough to show a new image on every frame of a CRT monitor. However, high-level languages generally provide only rudimentary control of the vital transformations from number to color, and of the rate at which successive images are displayed.

That is where the Psychtoolbox comes in, providing simple but powerful functions to control the pixel transformation and timing synchronization of the computer-display interface.

Here’s a quick sketch of how computers display images. (See Brainard, Pelli, and Robson, 2002, for a fuller treatment.) Once the matrix of numbers has been loaded into frame buffer memory, the subsequent transformation from number to luminance (or color) is complicated, but usefully simplified to three steps. First, at video rates (e.g. 100 million pixels per second), each number passes through a lookup table, typically one 8-bit number in and three 8-to-10-bit numbers out, each driving an 8-to-10-bit digital-to-analog converter. Second, the three analog video signals drive the three guns of a color CRT. The luminance of light emitted by each monitor phosphor is proportional to the corresponding gun's beam current, which is an accelerating function of drive voltage; this is called the monitor's "gamma" function. Third, the luminous image is blurred by the point spread function of the beam.

Most graphics cards have adjustable pixel size, typically 8, 16, or 32 bits per pixel. Furthermore, while most have 8-bit Digital to Analog Converters (DACs), a few have 9- or 10-bit DACs. Many users write to ask what these numbers of bits mean. In the pixmap, each pixel is assigned a certain number of bits, 8, 16 or 32. The number of bits per pixel determines how many different colors you can have in one frame: 256, thousands, or millions. When you actually display an image, the pixel value is used as in index into a lookup table on your graphics card. The values in the lookup table are typically 8 bits per channel, but some cards have 9 or 10 bits per channel. Those values, output from the lookup table then drive digital to analog converters (DACs) with a corresponding precision, 8 to 10 bits. In 8-bit mode you can select any 256 colors. Within the lookup table, each color is specified by three 8-10 bit numbers. If instead you use the 32-bit mode (millions of colors) then the pixel is considered to be made up of three 8 bit values, one per channel (plus 8 bits of padding), each of which goes through a one-channel lookup table, again with 8-10 bit outputs. 16-bit mode is rarely useful. In that mode 5 bits are assigned to each channel (plus 1 bit of padding), allowing only 32 values per gun. Again, we have a longer treatment of this issue in our Display Characterization chapter (Brainard, Pelli, and Robson, 2002).
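As a concrete example of this number-to-luminance path, gamma correction is usually done by loading an inverse-gamma table into the card's lookup table. A minimal sketch, assuming an already-open Psychtoolbox window and a purely illustrative gamma of 2.2:

```matlab
gamma = 2.2;                                 % illustrative monitor gamma
ramp = (linspace(0, 1, 256)').^(1/gamma);    % inverse-gamma ramp, 256 levels
clut = repmat(ramp, 1, 3);                   % identical tables for R, G, B
Screen('LoadNormalizedGammaTable', window, clut);
```

Pixel values then pass through this table before reaching the DACs, so equal steps in pixel value produce roughly equal steps in luminance.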

3. Toolbox overview

The basic idea is that you use Matlab to compute images or movies, and use new Matlab functions provided by the Psychtoolbox for accurate display. The Psychtoolbox routines treat the computer (Mac or Windows) as a display device: a frame buffer with a color lookup table. (To read about how to use frame buffers for visual psychophysics, see our psychophysics bibliography.)

The software has three layers. First, there is Matlab code that you write and some Matlab utilities that we supply, e.g. to compute color lookup tables and implement the QUEST staircase procedure. Second, there are a set of Matlab extensions (MEX or DLL files) that are written in C but callable from within Matlab. Third, the extension files, in turn, use OpenGL for graphics output and operating system facilities for other input and output.

The Screen mex file is the heart of the Psychophysics Toolbox, providing many subfunctions (selected by a text argument) that control the display screen(s). Experiments typically begin with a call to Screen('OpenWindow') and end with a call to Screen('CloseAll'). Anywhere in between, you may copy an image from a Matlab matrix onto the screen using Screen('PutImage') and change the lookup table using Screen('LoadClut') or (even better) Screen('LoadNormalizedGammaTable'). Typically you'll create a window on each screen that you're using in your experiment. Copying within or between windows is very fast. And you can create an unlimited number of offscreen windows (in memory, not visible) that can then be shown, one after another, as a movie, by copying to an onscreen window. Other Screen functions display text and dialogs and provide frame-accurate timing.

You can use the Screen function to write Matlab scripts that intermix graphics operations, calculations, and wait for observer responses. If you run the routines interactively from the command window, there will be a certain level of chaos as Matlab's windows overwrite parts of the experimental window. Still, this mode can be useful for debugging, especially if you restrict the window sizes to avoid overlap, or you have a second monitor.

Operations such as synching to vertical blanking and writing color lookup tables depend on the kind of video card(s) you have, and their video drivers. New versions of the computer operating system often include new video drivers. The Psychtoolbox provides a uniform interface, but you should check the timing on your computer, by running ScreenTest.m.

Note that Matlab has a number of built-in graphics commands, like BAR, that can draw into Matlab "figure" windows. Those commands won't draw into a Screen window. Use Matlab commands to draw into Matlab figures; use Screen to draw into Screen windows. For example, if you have an open Screen window, you can draw a black filled rectangle (10x25) in it by saying: Screen(window,'FillRect',BlackIndex(window),[0,0,25,10]). You can erase the whole window by overwriting with white: Screen(window,'FillRect').

Priority

A major challenge in doing psychophysics on modern personal computers is that operating systems are becoming more and more aggressive about stealing time away from your display code to do other things. Priority is a function that allows you to protect your Matlab code (to some degree) from interruption. This allows you to keep your computer running more or less normally, with lots of background processes, yet grab complete control for the periods of time that it takes to present your stimuli.
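A typical use brackets the critical presentation code. A sketch, assuming an open window; MaxPriority picks the highest priority level that is still safe for that window:

```matlab
priorityLevel = MaxPriority(window);  % highest safe level for this window
Priority(priorityLevel);              % grab the machine
% ... present the time-critical stimulus here ...
Priority(0);                          % restore normal scheduling
```

Keeping the high-priority section short lets the rest of the system (network, disk, etc.) run normally between trials.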

Other Psychophysics routines

In addition to Screen and Priority, there are routines to satisfy all the needs of psychophysical experiments: Unbuffered keyboard i/o via KbCheck, KbWait, KbStrokeWait, KbName etc., mouse i/o via SetMouse, GetMouse and GetClicks, serial i/o via IOPort, timing via GetSecs and WaitSecs, sounds via PsychPortAudio and Snd, and threshold-estimation via the Quest staircase procedure. Other routines interface to the PhotoResearch PR-650 color meter, save images as EPS files, interface with Eyetrackers or EEG systems. An overview of the basic routines can be found by typing help PsychBasic. A high level overview over all categories of functions can be found by typing help PsychToolbox.
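For example, the timing and keyboard routines combine naturally to measure a simple reaction time (a sketch):

```matlab
t0 = GetSecs;                 % time of stimulus onset
[secs, keyCode] = KbWait;     % block until the observer presses any key
rt = secs - t0;               % reaction time in seconds
disp(KbName(keyCode));        % name of the key that was pressed
```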

4. Displaying a grating with Screen.mex

Let's write a program to display a grating. We’ll open up a window on the screen, write a Matlab matrix into our window (i.e. into the frame buffer), and then close the window. These functions, Screen('OpenWindow'), Screen('PutImage'), and Screen('Close'), along with functions to load the lookup table and sync to the vertical blanking, are the heart of the Psychtoolbox.

First we open a full-screen window on screen 0 (the main monitor):


whichScreen = 0;
window = Screen(whichScreen, 'OpenWindow');



This full-screen window contains the entire screen, every pixel. There is no menu bar, title bar or border. You can ask Screen to open as many windows as you like, on as many monitors as you have, but usually you’ll want just a full-screen window on one monitor.

Next we figure out what numbers will produce white, gray, and black, and fill the whole window with gray.


white = WhiteIndex(window); % pixel value for white
black = BlackIndex(window); % pixel value for black
gray = (white+black)/2;
inc = white-gray;
Screen(window, 'FillRect', gray);



Now we use Matlab functions to compute a gabor patch (a grating vignetted by a gaussian envelope), and put that image into our window.


[x,y] = meshgrid(-100:100, -100:100);
m = exp(-((x/50).^2)-((y/50).^2)) .* sin(0.03*2*pi*x);
Screen(window, 'PutImage', gray+inc*m);



Now we ask the system to make our stimulus image visible at the beginning of the next video refresh interval, by "Flipping" it onto the visible display:


Screen(window, 'Flip');



Now we pause, displaying the grating until the observer presses any key, and finally close the window.


KbWait;
Screen('CloseAll');



That’s a complete program that will run on any computer with Matlab and the Psychtoolbox. It’s a slightly abbreviated version of GratingDemo.m, which is included in the Psychtoolbox.

In the above, the call to Screen('PutImage') slowly translates the Matlab double precision matrix into the pixmap format of the frame buffer. You won't want to do that while showing a movie. In that case you’d create a texture, which is allocated in your computer’s memory, and store the Matlab image matrix into it:


w = Screen(window, 'MakeTexture', gray+inc*m);



Now, whenever you like, you can blit (i.e. copy) very quickly (up to 80 GB/s depending on your graphics hardware—run ScreenTest to time your hardware) from the texture to onscreen graphics memory:


Screen('DrawTexture', window, w);



If you create multiple textures in advance, then you can show them, one after another, one per frame, to create a movie. The 'Flip' command will automatically synchronize to the vertical blanking of your display device:


for i = 1:100
    Screen('DrawTexture', window, w(i)); % w is a vector of texture handles, one per frame
    Screen(window,'Flip');
end



The Psychtoolbox program MovieDemo.m illustrates this.

Incidentally, if you display stimuli on the main screen, as we often do, then the Screen window will hide the main menu bar and obscure Matlab’s command window. That can be a problem if your program stops (perhaps due to an error) before closing the window. The keyboard will seem to be dead because its output is directed to the frontmost window, which belongs to Screen, not Matlab, so Matlab won’t be aware of your typing. It’s ok. Remain calm. Typing Ctrl-C will stop your program if it hasn't stopped already. Typing command-zero (on the Mac) or Alt-Tab (on Windows) will bring Matlab’s command window forward. That will restore keyboard input. The screen might still be hard to make out, if you’ve been playing with the lookup table. Typing


clear Screen



will cause Matlab to flush Screen.mex. Screen.mex, as part of its exit procedure, cleans up everything it did, closing all its windows and restoring the lookup table of all its displays. And everything will be hunky dory again. Remember the magic incantations: command-zero (Mac) or ALT-tab (Win) to bring the command window forward, and "clear screen" to restore the displays to normal.

5. Examples

You got yourself a computer, bought Matlab, and installed the Psychtoolbox. Now you want to get your experiment running. Where to begin?

George Sperling pointed out to us recently that writing software from scratch is hard. It's much easier to edit an already working program that does something similar. The PsychDemos folder includes a variety of short programs that show how to do various specific things, including synthesizing and displaying a movie. Type


help PsychDemos


at the Matlab prompt for a list.

The Psychtoolbox website includes a Library page with links to programs written by other users. We invite everyone to send software to the Psychtoolbox forum, which automatically archives your message and enclosure. (Please include the keyword DONATE in the subject, so we can all search the forum for software.) We add links on the Library page to programs in the forum that appear to have enduring value.

By the way, don’t shortchange yourself. Buy enough memory ($0.4/MB) and disk space ($3/GB) as well as a recent graphics card to work comfortably.

6. Online help

The Psychtoolbox has no manual. Matlab has manuals, but we hardly ever use them. Instead we use the HELP command. Typing


help


will list a variety of broad topics on which Matlab offers help. You can ask for help on any function, including Matlab’s functions and any function in the Psychtoolbox. For example,


help meshgrid



will explain how the Matlab function meshgrid works. Similarly,


help Screen



will give a brief synopsis of Screen. If you type


help Psychtoolbox



you will get an overview of the hierarchical organization of the Psychtoolbox. For any of the subdirectories listed, you can get a synopsis of the functions in that subdirectory. So


help PsychBasic



will give you a synopsis of the core toolbox routines. The HELP facility is a fast way to explore Matlab and the Psychtoolbox, and we use it all the time.

Some of the Psychtoolbox functions, like Screen, have a large number of subfunctions, making it impractical to include all the information in the HELP display. Simply typing


Screen



will give you a synopsis of all the Screen subfunctions. For more detail on a specific subfunction, call Screen itself, adding a question mark to the subfunction name.


Screen('CopyWindow?')



will type out helpful text for 'CopyWindow'. You can omit the parentheses and quote marks, because Matlab considers this


Screen CopyWindow?


equivalent to the above.

In our help text for specific functions, we've mostly followed Mathworks's help-text conventions. But note that we designate optional arguments to function calls by embracing them with square brackets. You're not meant to include these brackets when you actually call the function. For example, "help Snd" will tell you this: err = Snd(command,[sig],[rate]). What this means is that the "command" argument is required and the "sig" and "rate" arguments are optional. Thus, a typical call to Snd looks like this, and has no brackets: Snd('Play','Quack'). If you would like to force an optional argument explicitly to its default, you can typically pass the empty matrix. This is useful for functions with more than one optional argument where you'd like to (e.g.) accept the default on the first but explicitly pass the second.

Matlab usually ignores case (at least on Mac and Win platforms), except in variable names and built-in functions. The Psychtoolbox, by default, ignores case, but this is a user-settable preference. Although lazy typists can type everything in lower case, keep in mind that this practice may lead to portability problems somewhere down the line.


Another helpful tool is lookfor. Suppose you want to convert a variable of cell type to something else, such as a matrix, but you have no idea what the function might be called. Typing


lookfor cell


generates a list of all the functions with cell in their help text. cell2mat is an obvious choice, and inspecting the list quickly teaches you about various cell-related functions while you are working with the cell type.

7. Parallelize

Matlab is quick. Running on a 250 MHz PowerBook G3, the loop overhead is only 1 microsecond per iteration (after the first). Because it’s an interpreted language, it takes time (7 microseconds) to process each statement. However, one statement can perform a large number of elementary operations; e.g. adding two 100-element matrices requires 100 adds. Matlab does the elementary operations very efficiently. The large 76:1 ratio of the 7-microsecond statement overhead to the 0.09 microseconds per elementary operation (~ + - * / == & | sin sign) is a defining characteristic of the language. You can run the Psychtoolbox program SpeedTest to assess these parameters for your own computer.

The implication, worth remembering, is that the run time of statements that operate on fewer than 76 elements is mostly spent processing the statement, not the elements. An example may help.


x = ones(10);
y = ones(10);



This creates two 100-element arrays. (They’re 10x10 square matrices.) In languages like C or BASIC, which lack matrix operations, one would add x and y by writing a loop.


for i = 1:100
    z(i) = x(i) + y(i);
end



That works in Matlab too, but it runs very slowly, taking 800 microseconds (i.e. 8 microseconds per iteration, for 100 iterations). The right way to do this in Matlab is to operate on the whole matrix at once,


z = x + y;



which takes just 16 microseconds. That’s 50 times faster!

Again, the thing to remember is that the run time of statements that operate on fewer than 76 elements is mostly spent processing the statement, not the elements. An important part of learning Matlab is learning how to operate on lots of elements at once, as in the above example.
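Logical indexing is another common way to touch many elements in a single statement; a small sketch:

```matlab
m = rand(200);        % 40,000 random values in [0,1]
m(m > 0.5) = 0.5;     % clip everything above 0.5 in one statement
disp(max(m(:)));      % now at most 0.5
```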

All of the timing above is for compiled code. Matlab compiles functions and loops before executing them, so you’ll usually benefit from the compilation without having to think about it. The one case you should avoid is calling a script (i.e. not a function) repeatedly; you should convert that script into a function.

8. Use the debugger

Matlab has a great built-in debugger, allowing you to step through your program, examine and modify variables, and set breakpoints. However, in the Mac version, the way you start it up is confusing, at least the first time you do it, which discourages many people enough that they never discover how useful the debugger is.

Be warned that, on the Mac, the debugger has slight difficulties with files that are in the Matlab toolbox folder (which includes the Psychtoolbox) and that the debugger may give a spurious error beep if you choose to debug a file whose name has any uppercase characters. For best results, debug a file outside the toolbox folder with a filename that’s entirely lowercase. Later on, once you've got the hang of using the debugger, you can ignore this restriction, but, as a beginner, it'll be less confusing to respect it.

Suppose you just wrote a function called foo.m, and you’ve got the file open, in a window called "foo.m". Click on the debugger icon (a green bug) in the window’s title bar. This will open the debugger window, which (confusingly) is also called "foo.m". Note the debugger's flow-control icons at the left end of the title bar. Now set a breakpoint somewhere in your program by clicking one of the dashes that appear in the left margin of the window, next to each statement. Clicking the dash turns it into a red dot: a breakpoint. (You can set multiple breakpoints, if you like.)

Sometimes when you try to set a breakpoint, you'll get a beep and no red dot. This usually means that Matlab is having trouble finding your file. (Which is sad, considering that it's got the file open.) Setting a breakpoint seems to be implemented, in effect, by issuing a command like "dbstop foo 17". This will fail if foo is neither in Matlab's path (a list of folders of likely places to find stuff) nor the current directory. You fix this by using the Matlab CD command to set the current directory to be the folder that contains the file you're debugging, foo.m. If that succeeds, you should be able to open foo by typing "edit foo" in the command window. Now you should be able to set breakpoints without difficulty.

Now you’re ready to run your program. You’d have thought that you could just click something to say "Go!". No such luck. You must now go back to the Matlab command window. Using the keyboard shortcut, type command-zero (on a Mac) or xxx (on Win) to bring the command window forward. Now run foo, by typing its name:


foo



The program will begin execution and halt when it gets to your breakpoint. The command window will display a special prompt


K>>



indicating that you’re in the debugger. You can issue any Matlab command you like. Mostly you’ll simply type variable names to see what values they have. You can resume execution by typing


dbcont


but, instead, you’ll probably find it more convenient to go back to the debugger window by typing the shortcut command-4 (Mac) or xxx (Win), and use the flow-control icons in the title bar. You can single step, descend into a subroutine, ascend to the calling program, continue, or stop. You can also add or remove breakpoints. When you’re done, you should remove all the breakpoints.
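All of the flow-control operations are also available as typed commands, which some people find less confusing than the icons. A sketch (foo.m, line 17, and the folder name are hypothetical):

```matlab
cd('HD:Experiments')    % make the folder containing foo.m the current directory
dbstop in foo at 17     % set a breakpoint at line 17 of foo.m
foo                     % run; execution halts at the breakpoint (K>> prompt)
dbstep                  % advance one line
dbcont                  % resume execution
dbclear all             % remove all breakpoints
```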

9. Measuring threshold

You can measure whatever you like, but it is often useful to measure the stimulus intensity that yields a criterion level of observer performance (Pelli and Farell, 1995). The Psychtoolbox includes Matlab code implementing the QUEST procedure for estimating threshold.

Experiments are usually organized as a run of trials (e.g. 40 to 100). Each trial presents stimuli to the observer and waits for a response. Each trial takes several seconds. To measure threshold you’ll write a loop, with one iteration per trial.

Before starting the loop, you’ll initialize QUEST, giving it a rough guess for the value of threshold. You may also want to ask for the observer’s name and so on.

Within that loop are the guts of your experiment. Typically you might call QUEST to ask it to suggest a good contrast to test at, based on the initial guess and all the observer’s responses so far. Then you’d compute an appropriate stimulus and display it briefly in a window. If you’re using a two-interval forced choice paradigm you’ll have two intervals, announced by beeps, and display the signal in only one of them. Then you’ll wait for the observer’s response, typically a keypress or mouse click. Finally, tell QUEST what contrast you actually tested at and whether the observer’s response was right or wrong. The Psychtoolbox demo program ContrastThreshDemo illustrates how QUEST is used in the toolbox environment. We recommend discarding the observer’s first response, just in case he or she wasn’t quite ready.

Finally, after the last trial, you’ll report QUEST’s threshold estimate and confidence interval.
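The loop described above can be sketched with the toolbox's Quest routines. The parameter values are illustrative, and DoTrial is a hypothetical stand-in for your own stimulus-and-response code:

```matlab
tGuess = -1; tGuessSd = 2;        % rough guess for log10 threshold, and its sd
pThreshold = 0.82;                % target proportion correct
q = QuestCreate(tGuess, tGuessSd, pThreshold, 3.5, 0.01, 0.5);
for trial = 1:40
    tTest = QuestQuantile(q);             % intensity QUEST suggests testing next
    response = DoTrial(tTest);            % hypothetical: show stimulus, return 1/0
    q = QuestUpdate(q, tTest, response);  % tell QUEST what happened
end
fprintf('Threshold estimate %.2f +/- %.2f\n', QuestMean(q), QuestSd(q));
```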

Judging from email queries we’ve received from users, the most common beginner’s mistake is to forget to leave things in the same state at the end of the trial as they were at the beginning. If you open a window at the beginning of the trial (on- or off-screen) then close it at the end. Otherwise you’ll eat up memory fast, adding yet another window on each trial. The symptom of this programming error is that the experiment works perfectly for a few trials but eventually fails, when it runs out of memory.

We suggest that you avoid opening and closing windows (whether on- or off-screen) within a trial because it’s slow. It’s better to open all the windows you’ll need ahead of time and then just use them on each trial. Finally, after the last trial, you should close them all.

10. Calibration

Everyone says that you should calibrate your monitor so that you’ll know what you’re displaying, but rarely is software and a photometric instrument provided to help you do it. The Psychtoolbox, being free software, doesn’t include the instrument, but it does include software, in PsychCal, which should help, though it still isn’t as well documented as we’d like. Our measure page has some suggestions on what to buy. You may wish to read our chapter on display calibration (Brainard, Pelli, and Robson, 2002).

11. use psychotoolbox and fMRI

There are many ways you can interface matlab with your EEG or MRI system. Here is an example on how to make it work in fMRI.

A - How the MRI is set

The MRI trigger is converted via a ForbInterface unit (Current Designs) into a TTL send to the mouse port.
The subject response is received via the same Forbinterface and plugged into your computer. Responses are perceived as keyboard keys.

B - Basis of the program

Make sure responses, MRI trigger (mousse) and timing are correct using the priorityLevel function
e.g. priorityLevel=MaxPriority(['GetSecs'],['KbCheck'],['KbWait'],['GetClicks']);

Get the starting point of the MRI with GetSecs, load your stimuli after each MRI pulse (mouse click) and record the timing
e.g.


MRIstart = GetSecs;
WaitTTL = GetClicks;
if WaitTTL == 1
    t = GetSecs; t = t-MRIstart;
    % .. do something here ..
end



in the core of the experiment one can collect responses with KbCheck
e.g.


start = GetSecs;
timeSecs = KbWait;
[keyDown, secs, keyCode] = KbCheck;
stop = GetSecs;
rt_catch(nbtrial_catch) = stop - start;
        
success = 0;
while success == 0
    pressed = 0;
    while pressed == 0
        [pressed, secs, kbData] = KbCheck;
    end
    for i = 1:length(keysWanted)
        if kbData(keysWanted(i)) == 1
            success = 1;
            keyPressed = keysWanted(i);
            break;
        end
    end
end



Finally, depending on your TR, the PPT will return an error message related to timing issues (too long delays)
One can use WaitSecs between events (and after you collect the subject response) to make sure everything is ok
e.g. WaitSecs(TR-TA); % given that you set the acquisition time (time to acquire a volume - TA) and repetition time (time between volumes - TR)

ATTENTION: for an unknown reason FlushEvents overloads so it is not used here.

12. Good luck!

References

Brainard, D. H. (1995) Colorimetry. In Handbook of Optics: Volume 1. Fundamentals, Techniques, and Design. M. Bass (ed.). McGraw-Hill, New York, 26.1-54.

Brainard, D. H. (1997) The Psychophysics Toolbox. Spatial Vision 10:433-436 (PDF)

Brainard, D. H., Pelli, D.G., and Robson, T. (2002). Display characterization. In the Encyclopedia of Imaging Science and Technology. J. Hornak (ed.), Wiley. 172-188. (PDF)

The MathWorks (1993) Matlab User's Guide. The MathWorks, Inc., Natick, MA.

Pelli, D.G. (1997) The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision 10:437-442. (HTML)

Pelli, D. G. and Farell, B. (1995) Psychophysical methods. In Handbook of Optics: Volume 1. Fundamentals, Techniques, and Design. M. Bass (ed.). McGraw-Hill, New York, 29.1-13.

Pelli, D. G. and Zhang, L. (1991) Accurate control of contrast on microcomputer displays. Vision Research 31, 1337-1350. [pdf]

Watson, A. B. and Pelli, D. G. (1983) QUEST: a Bayesian adaptive psychometric method. Perception and Psychophysics 33, 113-120

To read PDF files you may want to download the free Acrobat reader from Adobe.

Thanks to George Sperling for suggesting that a tutorial and examples would be useful.


David Brainard, Denis Pelli & Allen Ingling.
psychtoolbox@yahoogroups.com

19 September 2000


Page was generated in 1.4863 seconds




 

source:http://psychtoolbox.org/wikka.php?wakka=PsychtoolboxTutorial

Psychtoolbox-3 Tutorial



It would be great if someone could extend this tutorial for new features and standard operating procedures for version 3.

Until then, the following downloadable PDF file may give you a coarse overview of PTB-3's new features:
Talk slides of Psychtoolbox presentation, given at ECVP 2007 Arezzo


1. Strengths of Matlab & Psychtoolbox



Interpretive general language & a good interface to hardware

Unlike most software packages for experimental psychology and visual psychophysics, the Psychtoolbox is very general. It doesn't contain specific support for generating gratings, running trials, etc. Rather, it provides the user of a high-level interpreted language (Matlab) with a well-defined interface to the graphics hardware (frame buffer and lookup table) that drives the display. In this sense, it is a very generic tool. The power comes from the fact that once you can write arbitrary matrices into the frame buffer and lookup table as fast as the machine can go, everything else is easy to program in Matlab.

Matlab is a high level interpreted language with extensive support for numerical calculation (The MathWorks, 1993). The Psychophysics Toolbox provides a Matlab interface to the computer’s hardware. The core Psychtoolbox routines provide access to the display frame buffer and color lookup table, allow synchronization with the vertical blanking, support millisecond timing, and facilitate the collection of observer responses.

The Psychtoolbox doesn't limit the user—if the experiment can be run on the hardware, it can be run with the Psychtoolbox. In comparison, other environments for creating perception experiments provide very evolved support for specific experiments. Our experience with software packages not based on a general-purpose programming language has been that the very first thing we wanted to do turned out to be impossible.

We think the Matlab-Psychtoolbox combination has four winning features that we’d recommend for any experiment-design environment:



  • A general purpose language (Matlab) allows you to do new things.
  • For programs that use hardware intensely (e.g. display, keyboard), an interpreted environment (e.g. Matlab) speeds up software development greatly because simple tests can be performed immediately.
  • The key Psychtoolbox routines are C code, callable as functions from Matlab, that encapsulate the hardware, presenting a simple software interface to the user that provides full control. (In particular, the Psychtoolbox Screen.mex function provides a consistent high-performance user interface to the display, overcoming differences in synchronization behavior among graphics drivers from many manufacturers, within and between Mac and Win platforms.)
  • The Psychtoolbox Rush function allows you to run an arbitrary bit of code with little or no interruption. We call this "hogging the machine", blocking interrupts for the few seconds of a critical stimulus presentation.


The Psychtoolbox also provides interfaces for timing, sound, keyboard, and the serial port. And it includes many useful Matlab routines, such as color space transformations (Brainard, 1995; Brainard, Pelli, and Robson, 2002) and the QUEST (Watson and Pelli, 1983; Pelli and Farell, 1995) threshold seeking algorithm.



2. Transforming numbers into movies

Using a high-level platform-independent language like Matlab, it's easy to produce a matrix of numbers specifying the desired luminances of all the pixels in the displayed image. Today's off-the-shelf personal computers can copy those numbers from memory to video memory quickly enough to show a new image on every frame of a CRT monitor. However, high-level languages generally provide only rudimentary control of the vital transformations from number to color, and of the rate at which successive images are displayed.

That is where the Psychtoolbox comes in, providing simple but powerful functions to control the pixel transformation and timing synchronization of the computer-display interface.

Here’s a quick sketch of how computers display images. (See Brainard, Pelli, and Robson, 2002, for a fuller treatment.) Once the matrix of numbers has been loaded into frame buffer memory, the subsequent transformation from number to luminance (or color) is complicated, but usefully simplified to three steps. First, at video rates (e.g. 100 million pixels per second), each number passes through a lookup table, typically one 8-bit number in and three 8-to-10-bit numbers out, each driving an 8-to-10-bit digital-to-analog converter. Second, the three analog video signals drive the three guns of a color CRT. The luminance of light emitted by each monitor phosphor is proportional to the corresponding gun's beam current, which is an accelerating function of drive voltage; this is called the monitor's "gamma" function. Third, the luminous image is blurred by the point spread function of the beam.

Most graphics cards have adjustable pixel size, typically 8, 16, or 32 bits per pixel. Furthermore, while most have 8-bit digital-to-analog converters (DACs), a few have 9- or 10-bit DACs. Many users write to ask what these numbers of bits mean. In the pixmap, each pixel is assigned a certain number of bits: 8, 16, or 32. The number of bits per pixel determines how many different colors you can have in one frame: 256, thousands, or millions. When you actually display an image, the pixel value is used as an index into a lookup table on your graphics card. The values in the lookup table are typically 8 bits per channel, but some cards have 9 or 10 bits per channel. Those values, output from the lookup table, then drive digital-to-analog converters (DACs) with a corresponding precision, 8 to 10 bits.

In 8-bit mode you can select any 256 colors; within the lookup table, each color is specified by three 8- to 10-bit numbers. If instead you use the 32-bit mode (millions of colors), then the pixel is considered to be made up of three 8-bit values, one per channel (plus 8 bits of padding), each of which goes through a one-channel lookup table, again with 8- to 10-bit outputs. 16-bit mode is rarely useful: in that mode 5 bits are assigned to each channel (plus 1 bit of padding), allowing only 32 values per gun. Again, we have a longer treatment of this issue in our Display Characterization chapter (Brainard, Pelli, and Robson, 2002).
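To make the lookup-table step concrete, here is a minimal sketch of building and loading a gamma-correcting table with Screen('LoadNormalizedGammaTable'). The exponent 2.2 is only an assumed placeholder; measure your own monitor's gamma (see the Calibration section) before relying on this.

```matlab
% Build a 256x3 table that linearizes an assumed power-law gamma of 2.2.
gamma = 2.2;                              % placeholder; measure your display
values = linspace(0, 1, 256)';            % desired normalized luminances
table = repmat(values.^(1/gamma), 1, 3);  % same correction for R, G, and B
% With an open onscreen window, load the table into the graphics card:
% Screen('LoadNormalizedGammaTable', window, table);
```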
3. Toolbox overview

The basic idea is that you use Matlab to compute images or movies, and use new Matlab functions provided by the Psychtoolbox for accurate display. The Psychtoolbox routines treat the computer (Mac or Windows) as a display device: a frame buffer with a color lookup table. (To read about how to use frame buffers for visual psychophysics, see our psychophysics bibliography.)

The software has three layers. First, there is Matlab code that you write and some Matlab utilities that we supply, e.g. to compute color lookup tables and implement the QUEST staircase procedure. Second, there is a set of Matlab extensions (MEX or DLL files), written in C but callable from within Matlab. Third, the extension files, in turn, use OpenGL for graphics output and operating system facilities for other input and output.

The Screen mex file is the heart of the Psychophysics Toolbox, providing many subfunctions (selected by a text argument) that control the display screen(s). Experiments typically begin with a call to Screen('OpenWindow') and end with a call to Screen('CloseAll'). Anywhere in between, you may copy an image from a Matlab matrix onto the screen using Screen('PutImage') and change the lookup table using Screen('LoadClut') or (even better) Screen('LoadNormalizedGammaTable'). Typically you'll create a window on each screen that you're using in your experiment. Copying within or between windows is very fast. And you can create an unlimited number of offscreen windows (in memory, not visible) that can then be shown, one after another, as a movie, by copying to an onscreen window. Other Screen functions display text and dialogs and provide frame-accurate timing.

You can use the Screen function to write Matlab scripts that intermix graphics operations, calculations, and wait for observer responses. If you run the routines interactively from the command window, there will be a certain level of chaos as Matlab's windows overwrite parts of the experimental window. Still, this mode can be useful for debugging, especially if you restrict the window sizes to avoid overlap, or you have a second monitor.

Operations such as synching to vertical blanking and writing color lookup tables depend on the kind of video card(s) you have, and their video drivers. New versions of the computer operating system often include new video drivers. The Psychtoolbox provides a uniform interface, but you should check the timing on your computer, by running ScreenTest.m.

Note that Matlab has a number of built-in graphics commands, like BAR, that can draw into Matlab "figure" windows. Those commands won't draw into a Screen window. Use Matlab commands to draw into Matlab figures; use Screen to draw into Screen windows. For example, if you have an open Screen window, you can draw a black filled rectangle, 25 pixels wide and 10 high, in it by saying: Screen(window,'FillRect',BlackIndex(window),[0,0,25,10]). You can erase the whole window by overwriting with white: Screen(window,'FillRect').

Priority

A major challenge in doing psychophysics on modern personal computers is that operating systems are becoming more and more aggressive about stealing time away from your display code to do other things. Priority is a function that allows you to protect your Matlab code (to some degree) from interruption. This lets you keep your computer running more or less normally, with lots of background processes, yet grab complete control for the periods of time it takes to present your stimuli.
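A minimal sketch of the pattern, assuming an onscreen window has already been opened and stored in the variable window:

```matlab
% Raise priority only around the critical stimulus presentation.
priorityLevel = MaxPriority(window);  % highest level this system supports
Priority(priorityLevel);              % "hog the machine"...
% ... draw and flip your stimulus frames here ...
Priority(0);                          % ...then return to normal priority
```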

Other Psychophysics routines

In addition to Screen and Priority, there are routines to satisfy all the needs of psychophysical experiments: Unbuffered keyboard i/o via KbCheck, KbWait, KbStrokeWait, KbName etc., mouse i/o via SetMouse, GetMouse and GetClicks, serial i/o via IOPort, timing via GetSecs and WaitSecs, sounds via PsychPortAudio and Snd, and threshold-estimation via the Quest staircase procedure. Other routines interface to the PhotoResearch PR-650 color meter, save images as EPS files, interface with Eyetrackers or EEG systems. An overview of the basic routines can be found by typing help PsychBasic. A high level overview over all categories of functions can be found by typing help PsychToolbox.
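For example, a trial's response collection might poll KbCheck until a key goes down and then translate the key code into a name with KbName (a minimal sketch; the variable names are illustrative):

```matlab
% Block until some key goes down, then report which key and when.
keyIsDown = 0;
while ~keyIsDown
    [keyIsDown, secs, keyCode] = KbCheck;
end
fprintf('%s pressed at %.3f s\n', KbName(keyCode), secs);
```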

4. Displaying a grating with Screen.mex

Let's write a program to display a grating. We’ll open up a window on the screen, write a Matlab matrix into our window (i.e. into the frame buffer), and then close the window. These functions, Screen('OpenWindow'), Screen('PutImage'), and Screen('Close'), along with functions to load the lookup table and sync to the vertical blanking, are the heart of the Psychtoolbox.

First we open a full-screen window on screen 0 (the main monitor):


whichScreen = 0;
window = Screen(whichScreen, 'OpenWindow');



This full-screen window contains the entire screen, every pixel. There is no menu bar, title bar or border. You can ask Screen to open as many windows as you like, on as many monitors as you have, but usually you’ll want just a full-screen window on one monitor.

Next we figure out what numbers will produce white, gray, and black, and fill the whole window with gray.


white = WhiteIndex(window); % pixel value for white
black = BlackIndex(window); % pixel value for black
gray = (white+black)/2;
inc = white-gray;
Screen(window, 'FillRect', gray);



Now we use Matlab functions to compute a Gabor patch (a grating vignetted by a Gaussian envelope), and put that image into our window.


[x,y] = meshgrid(-100:100, -100:100);
m = exp(-((x/50).^2)-((y/50).^2)) .* sin(0.03*2*pi*x);
Screen(window, 'PutImage', gray+inc*m);



Now we ask the system to make our stimulus image visible at the beginning of the next video refresh interval, by "Flipping" it onto the visible display:


Screen(window, 'Flip');



Now we pause, displaying the grating until the observer presses any key, and finally close the window.


KbWait;
Screen('CloseAll');



That’s a complete program that will run on any computer with Matlab and the Psychtoolbox. It’s a slightly abbreviated version of GratingDemo.m, which is included in the Psychtoolbox.

In the above, the call to Screen('PutImage') slowly translates the Matlab double precision matrix into the pixmap format of the frame buffer. You won't want to do that while showing a movie. In that case you’d create a texture, which is allocated in your computer’s memory, and store the Matlab image matrix into it:


w = Screen(window, 'MakeTexture', gray+inc*m);



Now, whenever you like, you can blit (i.e. copy) very quickly (up to 80 GB/s depending on your graphics hardware—run ScreenTest to time your hardware) from the texture to onscreen graphics memory:


Screen('DrawTexture', window, w);



If you create multiple textures in advance, then you can show them, one after another, one per frame, to create a movie. The 'Flip' command will automatically synchronize to the vertical blanking of your display device:


for i = 1:100
    Screen('DrawTexture', window, w(i));
    Screen(window,'Flip');
end



The Psychtoolbox program MovieDemo.m illustrates this.

Incidentally, if you display stimuli on the main screen, as we often do, then the Screen window will hide the main menu bar and obscure Matlab’s command window. That can be a problem if your program stops (perhaps due to an error) before closing the window. The keyboard will seem to be dead because its output is directed to the frontmost window, which belongs to Screen not Matlab, so Matlab won’t be aware of your typing. It’s ok. Remain calm. Typing Ctrl-C will stop your program if it hasn't stopped already. Typing command-zero (on the Mac) or Alt-Tab (on Windows) will bring Matlab’s command window forward. That will restore keyboard input. The screen might still be hard to make out, if you’ve been playing with the lookup table. Typing


clear Screen



will cause Matlab to flush Screen.mex. Screen.mex, as part of its exit procedure, cleans up everything it did, closing all its windows and restoring the lookup table of all its displays. And everything will be hunky dory again. Remember the magic incantations: command-zero (Mac) or Alt-Tab (Win) to bring the command window forward, and "clear Screen" to restore the displays to normal.

5. Examples

You got yourself a computer, bought Matlab, and installed the Psychtoolbox. Now you want to get your experiment running. Where to begin?

George Sperling pointed out to us recently that writing software from scratch is hard. It's much easier to edit an already working program that does something similar. The PsychDemos folder includes a variety of short programs that show how to do various specific things, including synthesizing and displaying a movie. Type


help PsychDemos


at the Matlab prompt for a list.

The Psychtoolbox website includes a Library page with links to programs written by other users. We invite everyone to send software to the Psychtoolbox forum, which automatically archives your message and enclosure. (Please include the keyword DONATE in the subject, so we can all search the forum for software.) We add links on the Library page to programs in the forum that appear to have enduring value.

By the way, don’t shortchange yourself. Buy enough memory ($0.4/MB) and disk space ($3/GB) as well as a recent graphics card to work comfortably.

6. Online help

The Psychtoolbox has no manual. Matlab has manuals, but we hardly ever use them. Instead we use the HELP command. Typing


help


will list a variety of broad topics on which Matlab offers help. You can ask for help on any function, including Matlab’s functions and any function in the Psychtoolbox. For example,


help meshgrid



will explain how the Matlab function meshgrid works. Similarly,


help Screen



will give a brief synopsis of Screen. If you type


help Psychtoolbox



you will get an overview of the hierarchical organization of the Psychtoolbox. For any of the subdirectories listed, you can get a synopsis of the functions in that subdirectory. So


help PsychBasic



will give you a synopsis of the core toolbox routines. The HELP facility is a fast way to explore Matlab and the Psychtoolbox, and we use it all the time.

Some of the Psychtoolbox functions, like Screen, have a large number of subfunctions, making it impractical to include all the information in the HELP display. Simply typing


Screen



will give you a synopsis of all the Screen subfunctions. For more detail on a specific subfunction, call Screen itself, adding a question mark to the subfunction name.


Screen('CopyWindow?')



will type out helpful text for 'CopyWindow'. You can omit the parentheses and quote marks, because Matlab considers this


Screen CopyWindow?


equivalent to the above.

In our help text for specific functions, we've mostly followed MathWorks's help-text conventions. But note that we designate optional arguments to function calls by enclosing them in square brackets. You're not meant to include these brackets when you actually call the function. For example, "help Snd" will tell you this: err = Snd(command,[sig],[rate]). What this means is that the "command" argument is required and the "sig" and "rate" arguments are optional. Thus, a typical call to Snd looks like this, and has no brackets: Snd('Play','Quack'). If you would like to force an optional argument explicitly to its default, you can typically pass the empty matrix. This is useful for functions with more than one optional argument where you'd like to (e.g.) accept the default on the first but explicitly pass the second.
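For instance (the tone here is just an illustrative signal):

```matlab
% err = Snd(command, [sig], [rate]): only "command" is required.
sig = sin(2*pi*440*(0:8191)/8192);  % an illustrative 440 Hz tone
err = Snd('Play', sig);             % accept the default sample rate
err = Snd('Play', sig, []);         % same thing: [] requests the default
```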

Matlab usually ignores case (at least on Mac and Win platforms), except in variable names and built-in functions. The Psychtoolbox, by default, ignores case, but this is a user-settable preference. Although lazy typists can type everything in lower case, keep in mind that this practice may lead to portability problems somewhere down the line.


Another helpful tool is lookfor. Suppose you want to convert a variable of cell type to something else, such as a matrix, but you have no idea what the function might be called. Typing


lookfor cell


generates a list of all the functions with "cell" in the name. cell2mat is an obvious choice, and inspecting the list quickly teaches you about various cell-related functions while you are working with the cell type.

7. Parallelize

Matlab is quick. Running on a 250 MHz PowerBook G3, the loop overhead is only 1 microsecond per iteration (after the first). Because it’s an interpreted language, it takes time (7 microseconds) to process each statement. However, one statement can perform a large number of elementary operations, e.g. adding two 100 element matrices requires 100 adds. Matlab does the elementary operations very efficiently. The large 76:1 ratio of the 7 microsecond statement overhead to the 0.09 microseconds per elementary operation (~ + - * / == & | sin sign) is a defining characteristic of the language. You can run the Psychtoolbox program SpeedTest to assess these parameters for your own computer.

The implication, worth remembering, is that the run time of statements that operate on fewer than 76 elements is mostly spent processing the statement, not the elements. An example may help.


x = ones(10);
y = ones(10);



This creates two 100 element arrays. (They’re 10x10 square matrices.) In languages, like C or BASIC, that lack matrix operations, one would add x and y by writing a loop.


for i = 1:100
    z(i) = x(i) + y(i);
end



That works in Matlab too, but it runs very slowly, taking 800 microseconds (i.e. 8 microseconds per iteration, for 100 iterations). The right way to do this in Matlab is to operate on the whole matrix at once,


z = x + y;



which takes just 16 microseconds. That’s 50 times faster!

Again, the thing to remember is that the run time of statements that operate on fewer than 76 elements is mostly spent processing the statement, not the elements. An important part of learning Matlab is learning how to operate on lots of elements at once, as in the above example.

All of the timing above is for compiled code. Matlab compiles functions and loops before executing them, so you’ll usually benefit from the compilation without having to think about it. The one case you should avoid is calling a script (i.e. not a function) repeatedly; you should convert that script into a function.

8. Use the debugger

Matlab has a great built-in debugger, allowing you to step through your program, examine and modify variables, and set breakpoints. However, in the Mac version, the way you start it up is confusing, at least the first time you do it, which discourages many people enough that they never discover how useful the debugger is.

Be warned that, on the Mac, the debugger has slight difficulties with files that are in the Matlab toolbox folder (which includes the Psychtoolbox) and that the debugger may give a spurious error beep if you choose to debug a file whose name has any uppercase characters. For best results, debug a file outside the toolbox folder with a filename that’s entirely lowercase. Later on, once you've got the hang of using the debugger, you can ignore this restriction, but, as a beginner, it'll be less confusing to respect it.

Suppose you just wrote a function called foo.m, and you’ve got the file open, in a window called "foo.m". Click on the debugger icon (a green bug) in the window’s title bar. This will open the debugger window, which (confusingly) is also called "foo.m". Note the debugger's flow control icons at the left end of the title bar. Now set a breakpoint somewhere in your program by clicking one of the dashes "—" that appear in the left margin of the window, next to each statement. Clicking the dash turns it into a red dot, a breakpoint. (You can set multiple breakpoints, if you like.)

Sometimes when you try to set a breakpoint, you'll get a beep and no red dot. This usually means that Matlab is having trouble finding your file. (Which is sad, considering that it's got the file open.) Setting a breakpoint seems to be implemented in effect by issuing a command like "dbstop foo 17". This will fail if foo is neither in Matlab's path (a list of folders of likely places to find stuff) nor the current directory. You fix this by using the Matlab CD command to set the current directory to be the folder that contains the file you're debugging, foo.m. If that succeeded, you should be able to open foo by typing "edit foo" in the command window. Now you should be able to set breakpoints without difficulty.

Now you’re ready to run your program. You’d have thought that you could just click something to say "Go!". No such luck. You must now go back to the Matlab command window. Using the keyboard shortcut, type command-zero (on a Mac) or Alt-Tab (on Windows) to bring the command window forward. Now run foo, by typing its name:


foo



The program will begin execution and halt when it gets to your breakpoint. The command window will display a special prompt


K>>



indicating that you’re in the debugger. You can issue any Matlab command you like. Mostly you’ll simply type variable names to see what values they have. You can resume execution by typing


dbcont


but, instead, you’ll probably find it more convenient to go back to the debugger window by typing the shortcut command-4 (Mac) or xxx (Win), and use the flow-control icons in the title bar. You can single step, descend into a subroutine, ascend to the calling program, continue, or stop. You can also add or remove breakpoints. When you’re done, you should remove all the breakpoints.

9. Measuring threshold

You can measure whatever you like, but it is often useful to measure the stimulus intensity that yields a criterion level of observer performance (Pelli and Farell, 1995). The Psychtoolbox includes Matlab code implementing the QUEST procedure for estimating threshold.

Experiments are usually organized as a run of trials (e.g. 40 to 100). Each trial presents stimuli to the observer and waits for a response. Each trial takes several seconds. To measure threshold you’ll write a loop, with one iteration per trial.

Before starting the loop, you’ll initialize QUEST, giving it a rough guess for the value of threshold. You may also want to ask for the observer’s name and so on.

Within that loop are the guts of your experiment. Typically you might call QUEST to ask it to suggest a good contrast to test at, based on the initial guess and all the observer’s responses so far. Then you’d compute an appropriate stimulus and display it briefly in a window. If you’re using a two-interval forced choice paradigm you’ll have two intervals, announced by beeps, and display the signal in only one of them. Then you’ll wait for the observer’s response, typically a keypress or mouse click. Finally, tell QUEST what contrast you actually tested at and whether the observer’s response was right or wrong. The Psychtoolbox demo program ContrastThreshDemo illustrates how QUEST is used in the toolbox environment. We recommend discarding the observer’s first response, just in case he or she wasn’t quite ready.

Finally, after the last trial, you’ll report QUEST’s threshold estimate and confidence interval.
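The loop described above can be sketched with the toolbox's Quest functions (QuestCreate, QuestQuantile, QuestUpdate, QuestMean, QuestSd). The prior and psychometric-function parameters below are illustrative placeholders, and QuestSimulate stands in for a real stimulus display and observer:

```matlab
% Sketch of a QUEST run, with a simulated observer in place of a real trial.
tGuess = -1; tGuessSd = 2;                       % prior: log10 contrast guess and sd
pThreshold = 0.82; beta = 3.5; delta = 0.01; gamma = 0.5;
q = QuestCreate(tGuess, tGuessSd, pThreshold, beta, delta, gamma);
for trial = 1:40
    tTest = QuestQuantile(q);                    % intensity QUEST suggests testing at
    % ... display the stimulus at 10^tTest contrast and get a response ...
    response = QuestSimulate(q, tTest, tGuess);  % stand-in for the real observer
    q = QuestUpdate(q, tTest, response);         % tell QUEST what happened
end
fprintf('Threshold estimate: %.2f +/- %.2f\n', QuestMean(q), QuestSd(q));
```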

Judging from email queries we’ve received from users, the most common beginner’s mistake is to forget to leave things in the same state at the end of the trial as they were at the beginning. If you open a window at the beginning of the trial (on- or off-screen) then close it at the end. Otherwise you’ll eat up memory fast, adding yet another window on each trial. The symptom of this programming error is that the experiment works perfectly for a few trials but eventually fails, when it runs out of memory.

We suggest that you avoid opening and closing windows (whether on- or off-screen) within a trial because it’s slow. It’s better to open all the windows you’ll need ahead of time and then just use them on each trial. Finally, after the last trial, you should close them all.

10. Calibration

Everyone says that you should calibrate your monitor so that you’ll know what you’re displaying, but rarely are software and a photometric instrument provided to help you do it. The Psychtoolbox, being free software, doesn’t include the instrument, but it does include software, in PsychCal, which should help, though it still isn’t as well documented as we’d like. Our measure page has some suggestions on what to buy. You may wish to read our chapter on display calibration (Brainard, Pelli, and Robson, 2002).

11. Using the Psychtoolbox with fMRI

There are many ways to interface Matlab with your EEG or MRI system. Here is one example of how to make it work with fMRI.

A - How the MRI is set up

The MRI trigger is converted by a ForbInterface unit (Current Designs) into a TTL pulse sent to the mouse port.
Subject responses are received via the same ForbInterface unit, plugged into your computer; they arrive as keyboard keypresses.

B - Basis of the program

Make sure responses, the MRI trigger (mouse), and timing are reliable by computing and setting the maximum priority level,
e.g. priorityLevel = MaxPriority('GetSecs', 'KbCheck', 'KbWait', 'GetClicks'); Priority(priorityLevel);

Get the starting time of the MRI with GetSecs, load your stimuli after each MRI pulse (mouse click), and record the timing
e.g.


MRIstart = GetSecs;              % time origin of the scan
WaitTTL = GetClicks;             % wait for the next TTL pulse (a mouse click)
if WaitTTL == 1
    t = GetSecs - MRIstart;      % pulse time relative to the scan start
    % .. present the stimulus for this volume here ..
end



In the core of the experiment one can collect responses with KbCheck,
e.g.


% Reaction time for one (catch) trial: KbWait blocks until a key goes down.
start = GetSecs;
timeSecs = KbWait;
rt_catch(nbtrial_catch) = timeSecs - start;   % nbtrial_catch: current trial index

% Wait until one of the keys we care about is pressed.
success = 0;
while success == 0
    pressed = 0;
    while pressed == 0
        [pressed, secs, kbData] = KbCheck;
    end
    for i = 1:length(keysWanted)
        if kbData(keysWanted(i)) == 1
            success = 1;
            keyPressed = keysWanted(i);
            break;
        end
    end
end



Finally, depending on your TR, the PTB may report timing errors (delays that are too long).
You can use WaitSecs between events (and after you collect the subject's response) to keep everything synchronized,
e.g. WaitSecs(TR-TA); % given that you have set the acquisition time (TA, time to acquire a volume) and the repetition time (TR, time between volumes)

ATTENTION: for reasons unknown, FlushEvents causes overload here, so it is not used.

12. Good luck!

References

Brainard, D. H. (1995) Colorimetry. In Handbook of Optics: Volume 1. Fundamentals, Techniques, and Design. M. Bass (ed.). McGraw-Hill, New York, 26.1-54.

Brainard, D. H. (1997) The Psychophysics Toolbox. Spatial Vision 10:433-436.

Brainard, D. H., Pelli, D. G., and Robson, T. (2002) Display characterization. In Encyclopedia of Imaging Science and Technology. J. Hornak (ed.). Wiley, 172-188.

The MathWorks (1993) Matlab User's Guide. The MathWorks, Inc., Natick, MA.

Pelli, D. G. (1997) The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision 10:437-442.

Pelli, D. G. and Farell, B. (1995) Psychophysical methods. In Handbook of Optics: Volume 1. Fundamentals, Techniques, and Design. M. Bass (ed.). McGraw-Hill, New York, 29.1-13.

Pelli, D. G. and Zhang, L. (1991) Accurate control of contrast on microcomputer displays. Vision Research 31, 1337-1350.

Watson, A. B. and Pelli, D. G. (1983) QUEST: a Bayesian adaptive psychometric method. Perception and Psychophysics 33, 113-120.


Thanks to George Sperling for suggesting that a tutorial and examples would be useful.


David Brainard, Denis Pelli & Allen Ingling.
psychtoolbox@yahoogroups.com

19 September 2000

