A Brief History of
Human Computer Interaction Technology
Brad A. Myers
Carnegie Mellon University School of Computer Science Technical
Report CMU-CS-96-163
and
Human Computer Interaction Institute
Technical Report CMU-HCII-96-103
December, 1996
Please cite this work as:
Brad A. Myers. "A Brief History of Human Computer Interaction
Technology."
ACM interactions. Vol. 5, no. 2, March, 1998. pp. 44-54.
Human Computer Interaction Institute
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213-3891
bam@a.gp.cs.cmu.edu
Abstract
This article summarizes the historical development of major advances in
human-computer interaction technology, emphasizing the pivotal role of
university research in the advancement of the field.
Copyright (c) 1996 -- Carnegie Mellon University
A short excerpt from this article appeared as part of "Strategic Directions
in
Human Computer Interaction," edited by Brad Myers, Jim Hollan, Isabel Cruz,
ACM Computing Surveys, 28(4), December 1996
This research was partially sponsored by NCCOSC under Contract No.
N66001-94-C-6037, Arpa Order No. B326 and partially by NSF under grant number
IRI-9319969. The views and conclusions contained in this document are those
of
the authors and should not be interpreted as representing the official
policies, either expressed or implied, of NCCOSC or the U.S. Government.
Keywords: Human Computer Interaction, History, User Interfaces,
Interaction Techniques.
1. Introduction
Research in Human-Computer Interaction (HCI) has been spectacularly
successful,
and has fundamentally changed computing. Just one example is the ubiquitous
graphical interface used by Microsoft Windows 95, which is based on the
Macintosh, which is based on work at Xerox PARC, which in turn is based on
early research at the Stanford Research Laboratory (now SRI) and at the
Massachusetts Institute of Technology. Another example is that virtually
all
software written today employs user interface toolkits and interface builders,
concepts which were developed first at universities. Even the spectacular
growth of the World-Wide Web is a direct result of HCI research: applying
hypertext technology to browsers allows one to traverse a link across the
world
with a click of the mouse. Interface improvements, more than anything else,
have triggered this explosive growth. Furthermore, the research that will lead
to
the user interfaces for the computers of tomorrow is happening at universities
and a few corporate research labs.
This paper tries to briefly summarize many of the important research
developments in Human-Computer Interaction (HCI) technology. By "research,"
I
mean exploratory work at universities and government and corporate research
labs (such as Xerox PARC) that is not directly related to products. By "HCI
technology," I am referring to the computer side of HCI. A companion article
on the history of the "human side," discussing the contributions from
psychology, design, human factors, and ergonomics, would also be appropriate.
A motivation for this article is to overcome the mistaken impression that much
of the important work in Human-Computer Interaction occurred in industry, and
that if university research in Human-Computer Interaction is not supported,
industry will just carry on anyway. This is simply not true. This paper
tries
to show that many of the most famous HCI successes developed by companies
are
deeply rooted in university research. In fact, virtually all of today's
major
interface styles and applications have been significantly influenced by research
at universities and labs, often with government funding. To illustrate this,
this paper lists the funding sources of some of the major advances. Without
this research, many of the advances in the field of HCI would probably not
have
taken place, and as a consequence, the user interfaces of commercial products
would be far more difficult to use and learn than they are today. As described
by Stu Card:
"Government funding of advanced human-computer interaction technologies built
the intellectual capital and trained the research teams for pioneer systems
that, over a period of 25 years, revolutionized how people interact with
computers. Industrial research laboratories at the corporate level in Xerox,
IBM, AT&T, and others played a strong role in developing this technology
and bringing it into a form suitable for the commercial arena." [6, p. 162]
Figure 1 shows time lines for some of the technologies discussed in this
article. Of course, a deeper analysis would reveal much interaction between
the university, corporate research and commercial activity streams. It is
important to appreciate that years of research are involved in creating and
making these technologies ready for widespread use. The same will be true
for
the HCI technologies that will provide the interfaces of tomorrow.
It is clearly impossible to list every system and source in a paper of this
scope, but I have tried to represent the earliest and most influential systems.
Although there are a number of other surveys of HCI topics (see, for example
[1] [10] [33] [38]), none cover as many aspects as this one, or try to be
as
comprehensive in finding the original influences. Another useful resource
is
the video "All The Widgets," which shows the historical progression of a
number
of user interface ideas [25].
The technologies covered in this paper include fundamental interaction styles
like direct manipulation, the mouse pointing device, and windows; several
important kinds of application areas, such as drawing, text editing and
spreadsheets; the technologies that will likely have the biggest impact on
interfaces of the future, such as gesture recognition, multimedia, and 3D;
and
the technologies used to create interfaces using the other technologies,
such as user interface management systems, toolkits, and interface builders.
Figure 1: Approximate time lines showing where work was performed on some
major technologies discussed in this article.
2. Basic Interactions
- Direct Manipulation of graphical objects: The now ubiquitous
direct
manipulation interface, where visible objects on the screen are directly
manipulated with a pointing device, was first demonstrated by Ivan Sutherland
in Sketchpad [44], which was his 1963 MIT PhD thesis. SketchPad supported
the
manipulation of objects using a light-pen, including grabbing objects, moving
them, changing size, and using constraints. It contained the seeds of myriad
important interface ideas. The system was built at Lincoln Labs with support
from the Air Force and NSF. William Newman's Reaction Handler [30], created
at
Imperial College, London (1966-67) provided direct manipulation of graphics,
and introduced "Light Handles," a form of graphical potentiometer, that was
probably the first "widget." Another early system was AMBIT/G (implemented
at
MIT's Lincoln Labs, 1968, ARPA funded). It employed, among other interface
techniques, iconic representations, gesture recognition, dynamic menus with
items selected using a pointing device, selection of icons by pointing, and
moded and mode-free styles of interaction. David Canfield Smith coined the
term "icons" in his 1975 Stanford PhD thesis on Pygmalion [41] (funded by
ARPA
and NIMH), and he later popularized icons as one of the chief designers of
the Xerox Star [42]. Many of the interaction techniques popular in direct
manipulation interfaces, such as how objects and text are selected, opened,
and
manipulated, were researched at Xerox PARC in the 1970's. In particular,
the
idea of "WYSIWYG" (what you see is what you get) originated there with systems
such as the Bravo text editor and the Draw drawing program [10]. The concept
of
direct manipulation interfaces for everyone was envisioned by Alan Kay of
Xerox
PARC in a 1977 article about the "Dynabook" [16]. The first commercial systems
to make extensive use of Direct Manipulation were the Xerox Star (1981) [42],
the Apple Lisa (1982) [51] and Macintosh (1984) [52]. Ben Shneiderman at
the
University of Maryland coined the term "Direct Manipulation" in 1982,
identified its components, and gave it psychological foundations [40].
- The Mouse: The mouse was developed at the Stanford Research Laboratory
(now SRI) in 1965 as part of the NLS project (funding from ARPA, NASA, and
Rome
ADC) [9] to be a cheap replacement for light-pens, which had been used at
least
since 1954 [10, p. 68]. Many of the current uses of the mouse were
demonstrated by Doug Engelbart as part of NLS in a movie created in 1968
[8].
The mouse was then made famous as a practical input device by Xerox PARC
in the
1970's. It first appeared commercially as part of the Xerox Star (1981),
the
Three Rivers Computer Company's PERQ (1981) [23], the Apple Lisa (1982),
and
Apple Macintosh (1984).
- Windows: Multiple tiled windows were demonstrated in Engelbart's
NLS
in 1968 [8]. Early research at Stanford on systems like COPILOT (1974) [46]
and at MIT with the EMACS text editor (1974) [43] also demonstrated tiled
windows. Alan Kay proposed the idea of overlapping windows in his 1969
University of Utah PhD thesis [15] and they first appeared in 1974 in his
Smalltalk system [11] at Xerox PARC, and soon after in the InterLisp system
[47]. Some of the first commercial uses of windows were on Lisp Machines
Inc.
(LMI) and Symbolics Lisp Machines (1979), which grew out of MIT AI Lab
projects. The Cedar Window Manager from Xerox PARC was the first major tiled
window manager (1981) [45], followed soon by the Andrew window manager [32]
by
Carnegie Mellon University's Information Technology Center (1983, funded
by
IBM). The main commercial systems popularizing windows were the Xerox Star
(1981), the Apple Lisa (1982), and most importantly the Apple Macintosh (1984).
The early versions of the Star and Microsoft Windows were tiled, but eventually
they supported overlapping windows like the Lisa and Macintosh. The X Window
System, a current international standard, was developed at MIT in 1984 [39].
For a survey of window managers, see [24].
3. Application Types
- Drawing programs: Much of the current technology was
demonstrated in
Sutherland's 1963 Sketchpad system. The use of a mouse for graphics was
demonstrated in NLS (1965). In 1968 Ken Pulfer and Grant Bechthold at the
National Research Council of Canada built a mouse out of wood patterned after
Engelbart's and used it with a key-frame animation system to draw all the
frames of a movie. A subsequent movie, "Hunger" (1971), won a number of
awards, and was drawn using a tablet instead of the mouse (funding by the
National Film Board of Canada) [3]. William Newman's Markup (1975) was the
first drawing program for Xerox PARC's Alto, followed shortly by Patrick
Baudelaire's Draw which added handling of lines and curves [10, p. 326].
The
first computer painting program was probably Dick Shoup's "Superpaint" at
PARC
(1974-75).
- Text Editing: In 1962 at the Stanford Research Lab, Engelbart
proposed, and later implemented, a word processor with automatic word wrap,
search and replace, user-definable macros, scrolling text, and commands to
move, copy, and delete characters, words, or blocks of text. Stanford's
TVEdit
(1965) was one of the first CRT-based display editors that was widely used
[48]. The Hypertext Editing System [50, p. 108] from Brown University had
screen editing and formatting of arbitrary-sized strings with a lightpen
in
1967 (funding from IBM). NLS demonstrated mouse-based editing in 1968.
TECO
from MIT was an early screen-editor (1967) and EMACS [43] developed from
it in
1974. Xerox PARC's Bravo [10, p. 284] was the first WYSIWYG editor-formatter
(1974). It was designed by Butler Lampson and Charles Simonyi who had started
working on these concepts around 1970 while at Berkeley. The first commercial
WYSIWYG editors were the Star, LisaWrite and then MacWrite. For a survey
of
text editors, see [22] [50, p. 108].
- Spreadsheets: The initial spreadsheet was VisiCalc, which was developed
by Frankston and Bricklin (1977-8) for the Apple II while they were students
at
MIT and the Harvard Business School. The solver was based on a
dependency-directed backtracking algorithm by Sussman and Stallman at the
MIT
AI Lab.
- HyperText: The idea for hypertext (where documents are linked
to
related documents) is credited to Vannevar Bush's famous MEMEX idea from
1945
[4]. Ted Nelson coined the term "hypertext" in 1965 [29]. Engelbart's NLS
system [8] at the Stanford Research Laboratories in 1965 made extensive use
of
linking (funding from ARPA, NASA, and Rome ADC). The "NLS Journal" [10,
p.
212] was one of the first on-line journals, and it included full linking
of
articles (1970). The Hypertext Editing System, jointly designed by Andy
van
Dam, Ted Nelson, and two students at Brown University (funding from IBM)
was
distributed extensively [49]. The University of Vermont's PROMIS (1976)
was
the first Hypertext system released to the user community. It was used to
link
patient and patient care information at the University of Vermont's medical
center. The ZOG project (1977) from CMU was another early hypertext system,
and was funded by ONR and DARPA [36]. Ben Shneiderman's Hyperties was the
first system where highlighted items in the text could be clicked on to go
to
other pages (1983, Univ. of Maryland) [17]. HyperCard from Apple (1988)
significantly helped to bring the idea to a wide audience. There have been
many other hypertext systems through the years. Tim Berners-Lee used the
hypertext idea to create the World Wide Web in 1990 at the government-funded
European Particle Physics Laboratory (CERN). Mosaic, the first popular
hypertext browser for the World-Wide Web, was developed at the Univ. of
Illinois' National Center for Supercomputing Applications (NCSA). For a more
complete history of HyperText, see [31].
- Computer Aided Design (CAD): The same 1963 AFIPS conference at
which
Sketchpad was presented also contained a number of CAD systems, including
Doug
Ross's Computer-Aided Design Project at MIT in the Electronic Systems Lab
[37]
and Coons' work at MIT with SketchPad [7]. Timothy Johnson's pioneering
work
on the interactive 3D CAD system Sketchpad 3 [13] was his 1963 MIT MS thesis
(funded by the Air Force). The first CAD/CAM system in industry was probably
General Motors' DAC-1 (about 1963).
- Video Games: The first graphical video game was probably SpaceWar
by
Slug Russell of MIT in 1962 for the PDP-1 [19, p. 49], including the first
computer joysticks. The early computer Adventure game was created by Will
Crowther at BBN, and Don Woods developed this into a more sophisticated
Adventure game at Stanford in 1966 [19, p. 132]. Conway's game of LIFE was
implemented on computers at MIT and Stanford in 1970. The first popular
commercial game was Pong (about 1976).
4. Up-and-Coming Areas
- Gesture Recognition: The first pen-based input device,
the RAND
tablet, was funded by ARPA. Sketchpad used light-pen gestures (1963).
Teitelman in 1964 developed the first trainable gesture recognizer. A very
early demonstration of gesture recognition was Tom Ellis' GRAIL system on
the
RAND tablet (1964, ARPA funded). It was quite common in light-pen-based
systems to include some gesture recognition, for example in the AMBIT/G system
(1968 -- ARPA funded). A gesture-based text editor using proof-reading symbols
was developed at CMU by Michael Coleman in 1969. Bill Buxton at the University
of Toronto has been studying gesture-based interactions since 1980. Gesture
recognition has been used in commercial CAD systems since the 1970s, and
came
to universal notice with the Apple Newton in 1992.
- Multi-Media: The FRESS project at Brown used multiple windows
and
integrated text and graphics (1968, funding from industry). The Interactive
Graphical Documents project at Brown was the first hypermedia (as opposed
to
hypertext) system, and used raster graphics and text, but not video (1979-1983,
funded by ONR and NSF). The Diamond project at BBN (starting in 1982, DARPA
funded) explored combining multimedia information (text, spreadsheets,
graphics, speech). The Movie Manual at the Architecture Machine Group (MIT)
was one of the first to demonstrate mixed video and computer graphics in
1983
(DARPA funded).
- 3-D: The first 3-D system was probably Timothy Johnson's 3-D CAD
system mentioned above (1963, funded by the Air Force). The "Lincoln Wand"
by
Larry Roberts was an ultrasonic 3D location sensing system, developed at
Lincoln Labs (1966, ARPA funded). That system also had the first interactive
3-D hidden line elimination. An early use was for molecular modelling [18].
The late 60's and early 70's saw the flowering of 3D raster graphics research
at the University of Utah with Dave Evans, Ivan Sutherland, Romney, Gouraud,
Phong, and Watkins, much of it government funded. Also, the
military-industrial flight simulation work of the 60's - 70's led the way
to
making 3-D real-time with commercial systems from GE, Evans & Sutherland,
Singer/Link (funded by NASA, Navy, etc.). Another important center of current
research in 3-D is Fred Brooks' lab at UNC (e.g. [2]).
- Virtual Reality and "Augmented Reality": The original work on
VR was
performed by Ivan Sutherland when he was at Harvard (1965-1968, funding
by Air
Force, CIA, and Bell Labs). Very important early work was by Tom Furness
when
he was at Wright-Patterson AFB. Myron Krueger's early work at the University
of Connecticut was influential. Fred Brooks' and Henry Fuchs' groups at
UNC
did a lot of early research, including the study of force feedback (1971,
funding from US Atomic Energy Commission and NSF). Much of the early research
on head-mounted displays and on the DataGlove was supported by NASA.
- Computer Supported Cooperative Work: Doug Engelbart's 1968
demonstration of NLS [8] included the remote participation of multiple people
at various sites (funding from ARPA, NASA, and Rome ADC). Licklider and
Taylor
predicted on-line interactive communities in a 1968 article [20] and
speculated about the problem of access being limited to the privileged.
Electronic mail, still the most widespread multi-user software, was enabled
by
the ARPAnet, which became operational in 1969, and by the Ethernet from Xerox
PARC in 1973. An early computer conferencing system was Turoff's EIES system
at the New Jersey Institute of Technology (1975).
- Natural language and speech: The fundamental research for speech
and
natural language understanding and generation has been performed at CMU,
MIT,
SRI, BBN, IBM, AT&T Bell Labs and BellCore, much of it government funded.
See, for example, [34] for a survey of the early work.
5. Software Tools and Architectures
The area of user interface software tools is quite active now, and
many
companies are selling tools. Most of today's applications are implemented
using various forms of software tools. For a more complete survey and
discussion of UI tools, see [26].
- UIMSs and Toolkits: (These are software libraries and tools
that
support creating interfaces by writing code.) The first User Interface
Management System (UIMS) was William Newman's Reaction Handler [30] created
at
Imperial College, London (1966-67 with SRC funding). Most of the early work
was done at universities (Univ. of Toronto with Canadian government funding,
George Washington Univ. with NASA, NSF, DOE, and NBS funding, Brigham Young
University with industrial funding, etc.). The term "UIMS" was coined by
David
Kasik at Boeing (1982) [14]. Early window managers such as Smalltalk (1974)
and InterLisp, both from Xerox PARC, came with a few widgets, such as popup
menus and scrollbars. The Xerox Star (1981) was the first commercial system
to
have a large collection of widgets. The Apple Macintosh (1984) was the first
to actively promote its toolkit for use by other developers to enforce a
consistent interface. An early C++ toolkit was InterViews [21], developed
at
Stanford (1988, industrial funding). Much of the modern research is being
performed at universities, for example the Garnet (1988) [28] and Amulet
(1994) [27] projects at CMU (ARPA funded), and subArctic at Georgia Tech
(1996,
funding by Intel and NSF).
- Interface Builders: (These are interactive tools that allow interfaces
composed of widgets such as buttons, menus and scrollbars to be placed using
a
mouse.) The Steamer project at BBN (1979-85; ONR funding) demonstrated many
of
the ideas later incorporated into interface builders and was probably the
first
object-oriented graphics system. Trillium [12] was developed at Xerox PARC
in
1981. Another early interface builder was the MenuLay system [5] developed
by
Bill Buxton at the University of Toronto (1983, funded by the Canadian
Government). The Macintosh (1984) included a "Resource Editor" which allowed
widgets to be placed and edited. Jean-Marie Hullot created "SOS Interface"
in
Lisp for the Macintosh while working at INRIA (1984, funded by the French
government) which was the first modern "interface builder." Hullot built
this
into a commercial product in 1986 and then went to work for NeXT and created
the NeXT Interface Builder (1988), which popularized this type of tool.
Now
there are literally hundreds of commercial interface builders.
- Component Architectures: The idea of creating interfaces by connecting
separately written components was first demonstrated in the Andrew project
[32]
by Carnegie Mellon University's Information Technology Center (1983, funded
by
IBM). It is now being widely popularized by Microsoft's OLE and Apple's
OpenDoc architectures.
6. Discussion
It is clear that all of the most important innovations in Human-Computer
Interaction have benefited from research at both corporate research labs
and
universities, much of it funded by the government. The conventional style of
graphical user interfaces that use windows, icons, menus, and a mouse is in
a phase of standardization, where almost everyone is using the same standard
technology and just making minute, incremental changes. Therefore, it is
important that university, corporate, and government-supported research
continue, so that we can develop the science and technology needed for the
user
interfaces of the future.
Another important argument in favor of HCI research in universities is that
computer science students need to know about user interface issues. User
interfaces are likely to be one of the main value-added competitive advantages
of the future, as both hardware and basic software become commodities. If
students do not know about user interfaces, they will not serve industry
needs.
It seems that only through computer science does HCI research disseminate
into products. Furthermore, without appropriate levels of funding of academic
HCI research, there will be fewer PhD graduates in HCI to perform research
in
corporate labs, and fewer top-notch graduates in this area will be interested
in being professors, so the needed user interface courses will not be
offered.
As computers get faster, more of the processing power is being devoted to
the
user interface. The interfaces of the future will use gesture recognition,
speech recognition and generation, "intelligent agents," adaptive interfaces,
video, and many other technologies now being investigated by research groups
at
universities and corporate labs [35]. It is imperative that this research
continue and be well-supported.
ACKNOWLEDGMENTS
I must thank the large number of people who responded to posts of earlier
versions of this article on the announcements.chi mailing list for their very
generous help, and Jim Hollan, who helped edit the short excerpt of this
article. Much of the information in this article was supplied by (in
alphabetical order): Stacey Ashlund, Meera M. Blattner, Keith Butler, Stuart
K.
Card, Bill Curtis, David E. Damouth, Dan Diaper, Dick Duda, Tim T.K. Dudley,
Steven Feiner, Harry Forsdick, Bjorn Freeman-Benson, John Gould, Wayne Gray,
Mark Green, Fred Hansen, Bill Hefley, D. Austin Henderson, Jim Hollan,
Jean-Marie Hullot, Rob Jacob, Bonnie John, Sandy Kobayashi, T.K. Landauer,
John
Leggett, Roger Lighty, Marilyn Mantei, Jim Miller, William Newman, Jakob
Nielsen, Don Norman, Dan Olsen, Ramesh Patil, Gary Perlman, Dick Pew, Ken
Pier,
Jim Rhyne, Ben Shneiderman, John Sibert, David C. Smith, Elliot Soloway,
Richard Stallman, Ivan Sutherland, Dan Swinehart, John Thomas, Alex Waibel,
Marceli Wein, Mark Weiser, Alan Wexelblat, and Terry Winograd. Editorial
comments were also provided by the above as well as Ellen Borison, Rich
McDaniel, Rob Miller, Bernita Myers, Yoshihiro Tsujino, and the reviewers.
References
1. Baecker, R., et al., "A Historical and Intellectual Perspective,"
in
Readings in Human-Computer Interaction: Toward the Year 2000, Second
Edition, R. Baecker, et al., Editors. 1995, Morgan Kaufmann
Publishers, Inc.: San Francisco. pp. 35-47.
2. Brooks, F. "The Computer "Scientist" as Toolsmith--Studies in Interactive
Computer Graphics," in IFIP Conference Proceedings. 1977. pp.
625-634.
3. Burtnyk, N. and Wein, M., "Computer Generated Key Frame Animation."
Journal of the Society of Motion Picture and Television Engineers,
1971.
8(3): pp. 149-153.
4. Bush, V., "As We May Think." The Atlantic Monthly, 1945.
176(July): pp. 101-108. Reprinted and discussed in
interactions,
3(2), Mar 1996, pp. 35-67.
5. Buxton, W., et al. "Towards a Comprehensive User Interface Management
System," in Proceedings SIGGRAPH'83: Computer Graphics. 1983. Detroit,
Mich. 17. pp. 35-42.
6. Card, S.K., "Pioneers and Settlers: Methods Used in Successful User
Interface Design," in Human-Computer Interface Design: Success Stories,
Emerging Methods, and Real-World Context, M. Rudisill, et al.,
Editors. 1996, Morgan Kaufmann Publishers: San Francisco. pp. 122-169.
7. Coons, S. "An Outline of the Requirements for a Computer-Aided Design
System," in AFIPS Spring Joint Computer Conference. 1963. 23.
pp. 299-304.
8. Engelbart, D. and English, W., "A Research Center for Augmenting Human
Intellect." 1968. Reprinted in ACM SIGGRAPH Video Review, 1994. 106.
9. English, W.K., Engelbart, D.C., and Berman, M.L., "Display Selection
Techniques for Text Manipulation." IEEE Transactions on Human Factors
in Electronics, 1967. HFE-8(1).
10. Goldberg, A., ed. A History of Personal Workstations. 1988,
Addison-Wesley Publishing Company: New York, NY. 537.
11. Goldberg, A. and Robson, D. "A Metaphor for User Interface Design," in
Proceedings of the 12th Hawaii International Conference on System
Sciences. 1979. 1. pp. 148-157.
12. Henderson Jr, D.A. "The Trillium User Interface Design Environment,"
in
Proceedings SIGCHI'86: Human Factors in Computing Systems. 1986. Boston,
MA. pp. 221-227.
13. Johnson, T. "Sketchpad III: Three Dimensional Graphical Communication
with
a Digital Computer," in AFIPS Spring Joint Computer Conference. 1963.
23. pp. 347-353.
14. Kasik, D.J. "A User Interface Management System," in Proceedings
SIGGRAPH'82: Computer Graphics. 1982. Boston, MA. 16. pp. 99-106.
15. Kay, A., The Reactive Engine. PhD Thesis, Electrical Engineering
and
Computer Science, University of Utah, 1969.
16. Kay, A., "Personal Dynamic Media." IEEE Computer, 1977.
10(3): pp. 31-42.
17. Koved, L. and Shneiderman, B., "Embedded menus: Selecting items in
context." Communications of the ACM, 1986. 4(29): pp.
312-318.
18. Levinthal, C., "Molecular Model-Building by Computer." Scientific
American, 1966. 214(6): pp. 42-52.
19. Levy, S., Hackers: Heroes of the Computer Revolution. 1984, Garden
City, NY: Anchor Press/Doubleday.
20. Licklider, J.C.R. and Taylor, R.W., "The Computer as a Communication
Device." Sci. Tech., 1968. April: pp. 21-31.
21. Linton, M.A., Vlissides, J.M., and Calder, P.R., "Composing user interfaces
with InterViews." IEEE Computer, 1989. 22(2): pp. 8-22.
22. Meyrowitz, N. and Van Dam, A., "Interactive Editing Systems: Part 1 and
2." ACM Computing Surveys, 1982. 14(3): pp. 321-352.
23. Myers, B.A., "The User Interface for Sapphire." IEEE Computer
Graphics and Applications, 1984. 4(12): pp. 13-23.
24. Myers, B.A., "A Taxonomy of User Interfaces for Window Managers."
IEEE Computer Graphics and Applications, 1988. 8(5): pp. 65-84.
25. Myers, B.A., "All the Widgets." SIGGRAPH Video Review, 1990. 57.
26. Myers, B.A., "User Interface Software Tools." ACM Transactions
on
Computer Human Interaction, 1995. 2(1): pp. 64-103.
27. Myers, B.A., et al., The Amulet V2.0 Reference Manual. Carnegie Mellon
University Computer Science Department Report, Feb. 1996. System available
from http://www.cs.cmu.edu/~amulet.
28. Myers, B.A., et al., "Garnet: Comprehensive Support for Graphical,
Highly-Interactive User Interfaces." IEEE Computer, 1990.
23(11): pp. 71-85.
29. Nelson, T. "A File Structure for the Complex, the Changing, and the
Indeterminate," in Proceedings ACM National Conference. 1965. pp.
84-100.
30. Newman, W.M. "A System for Interactive Graphical Programming," in AFIPS
Spring Joint Computer Conference. 1968. 28. pp. 47-54.
31. Nielsen, J., Multimedia and Hypertext: the Internet and Beyond.
1995, Boston: Academic Press Professional.
32. Palay, A.J., et al. "The Andrew Toolkit - An Overview," in
Proceedings Winter Usenix Technical Conference. 1988. Dallas, Tex.
pp.
9-21.
33. Press, L., "Before the Altair: The History of Personal Computing."
Communications of the ACM, 1993. 36(9): pp. 27-33.
34. Reddy, D.R., "Speech Recognition by Machine: A Review," in Readings
in
Speech Recognition, A. Waibel and K.-F. Lee, Editors. 1990, Morgan
Kaufmann: San Mateo, CA. pp. 8-38.
35. Reddy, R., "To Dream the Possible Dream (Turing Award Lecture)."
Communications of the ACM, 1996. 39(5): pp. 105-112.
36. Robertson, G., Newell, A., and Ramakrishna, K., ZOG: A Man-Machine
Communication Philosophy. Carnegie Mellon University Technical Report,
August, 1977.
37. Ross, D. and Rodriguez, J. "Theoretical Foundations for the Computer-Aided
Design System," in AFIPS Spring Joint Computer Conference. 1963.
23. pp. 305-322.
38. Rudisill, M., et al., Human-Computer Interface Design: Success
Stories, Emerging Methods, and Real-World Context. 1996, San Francisco:
Morgan Kaufmann Publishers.
39. Scheifler, R.W. and Gettys, J., "The X Window System." ACM
Transactions on Graphics, 1986. 5(2): pp. 79-109.
40. Shneiderman, B., "Direct Manipulation: A Step Beyond Programming
Languages." IEEE Computer, 1983. 16(8): pp. 57-69.
41. Smith, D.C., Pygmalion: A Computer Program to Model and Stimulate
Creative Thought. 1977, Basel, Stuttgart: Birkhauser Verlag. PhD Thesis,
Stanford University Computer Science Department, 1975.
42. Smith, D.C., et al. "The Star User Interface: an Overview," in
Proceedings of the 1982 National Computer Conference. 1982. AFIPS.
pp.
515-528.
43. Stallman, R.M., Emacs: The Extensible, Customizable, Self-Documenting
Display Editor. MIT Artificial Intelligence Lab Report, Aug. 1979.
44. Sutherland, I.E. "SketchPad: A Man-Machine Graphical Communication System,"
in AFIPS Spring Joint Computer Conference. 1963. 23. pp.
329-346.
45. Swinehart, D., et al., "A Structural View of the Cedar Programming
Environment." ACM Transactions on Programming Languages and
Systems, 1986. 8(4): pp. 419-490.
46. Swinehart, D.C., Copilot: A Multiple Process Approach to Interactive
Programming Systems. PhD Thesis, Computer Science Department Stanford
University, 1974, SAIL Memo AIM-230 and CSD Report STAN-CS-74-412.
47. Teitelman, W., "A Display Oriented Programmer's Assistant."
International Journal of Man-Machine Studies, 1979. 11: pp.
157-187. Also Xerox PARC Technical Report CSL-77-3, Palo Alto, CA, March
8,
1977.
48. Tolliver, B., TVEdit. Stanford Time Sharing Memo Report, March, 1965.
49. van Dam, A., et al. "A Hypertext Editing System for the 360,"
in
Proceedings Conference in Computer Graphics. 1969. University of
Illinois.
50. van Dam, A. and Rice, D.E., "On-line Text Editing: A Survey."
Computing Surveys, 1971. 3(3): pp. 93-114.
51. Williams, G., "The Lisa Computer System." Byte Magazine,
1983. 8(2): pp. 33-50.
52. Williams, G., "The Apple Macintosh Computer." Byte, 1984.
9(2): pp. 30-54.