Thursday, November 20, 2008
Eprime Learning materials
Monday, November 10, 2008
SurfLogger: A Logging Browser and Data Processing Method
SurfLogger
SurfLogger, developed by Jibo He at the University of Illinois, is an automated data logging tool for web-based studies. It is written in Python, free, open-source, cross-platform, and easy to modify. This page is devoted to information and resources about SurfLogger.

Features: SurfLogger is a useful tool for collecting data in web-based research. Because it logs data automatically, is free, open-source, and cross-platform, and does not depend on other browsers, SurfLogger can spare many researchers the financial and time costs of data collection. SurfLogger is expected to contribute to the growing interest in web-based research.

Technical specifics: SurfLogger is written in Python, a scripting language, and its GUI (Graphical User Interface) is created with wxPython, a Python binding of wxWidgets. SurfLogger can record a variety of user actions on web pages and in the browser. It produces two files, logfile.txt and urlfile.txt. Logfile.txt stores action IDs (natural numbers assigned to each action, used to match a record to the corresponding action), the time of each action, interactions with the browser (such as clicking the Back, Forward, or Home buttons), and the mouse coordinates of each click. The time records can be used to compute the completion time for each task, and the number of browser button presses can serve as a measure of the effort needed to carry out a task. SurfLogger also captures an image of the screen each time the web page refreshes; marking the mouse coordinates on these screen captures shows which links the users clicked. Urlfile.txt stores action IDs and URLs (Uniform Resource Locators). Action IDs are used to synchronize the records in logfile.txt and urlfile.txt. The URL records are stored in a separate file because of the abundant information they provide. I give an example of how to extract information from urlfile.txt in the case study section of this paper. SurfLogger can also call external software to record the whole course of user actions. Currently I use Michael Urman's screen recorder, Cankiri, as the external recording software, because it is also written in Python and shares the same open-source spirit. With a video record, researchers can learn more about users' actions. If recording quality is a priority, SurfLogger can easily be switched to call other recording software; only one line of code has to change to point to the path of the external program.

Case study: To demonstrate how SurfLogger can benefit web-based research, I will briefly describe the usability analysis of IGroup as a case study (Wang, Jing, He, and Yang, 2007). IGroup is an image search engine that presents its results in semantic clusters. To test whether IGroup increases search efficiency compared to MSN, we developed the predecessor of SurfLogger, which worked much like SurfLogger but was less flexible. We developed a measure of Search Effort to compare IGroup and MSN objectively. Search Effort was defined as the number of queries entered plus the number of links and cluster names clicked by the users. Queries, links, and cluster names clicked were extracted from the URLs recorded by our automated logging tool. A sample URL recorded in this study is listed below:

Wednesday, August 30, 2006 3:06:54 PM http://msra-vss50-b/igroup2/search.aspx?q=Disney#g,14,1,-1

The characters in bold, "Disney", "14", and "1", are the input query, the ID of the cluster name, and the result page. The information can be extracted from the URL by simple text processing (a sketch follows below).
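A minimal sketch (in MATLAB, for illustration only; the actual data-reduction scripts are the ones linked from the project page below) of how the query, cluster ID, and result page might be pulled out of such a URL:

url = 'http://msra-vss50-b/igroup2/search.aspx?q=Disney#g,14,1,-1';
% grab the text after q= up to #, then the two numbers after #g,
tok = regexp(url, 'q=([^#]*)#g,(-?\d+),(-?\d+)', 'tokens', 'once');
query      = tok{1};              % 'Disney'
clusterID  = str2double(tok{2});  % 14
resultPage = str2double(tok{3});  % 1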
For the data-reduction and URL-extraction code, and for the source code of SurfLogger, please refer to my SurfLogger project page.

Resources:
1. Paper: Potential users can find more information and a fuller description of SurfLogger in my draft paper [click to download]. This paper will be presented at SCiP (Society for Computers in Psychology) 2008 in Chicago.
2. Code:
- Source code: SurfLogger.pyw and run.py
- Dependencies (or download the dependency bundle here)
- Executable program
3. Data processing methods: to be added soon. :-P

Rights: You are free to use this tool for non-commercial purposes. Use by commercial companies or organizations requires permission from the author, Jibo He.

Download:
- Source code: SurfLogger.pyw and run.py
- Dependencies
- Executable program: SurfLogger.exe

Contact: Feel free to contact me for help and information. If you would like to help with the development, please do let me know.
Department of Psychology, Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, 603 East Daniel St., Champaign, IL 61820
Tel: 217-244-4461 (office), 217-244-6763 (lab)
Email: hejibo@gmail.com MSN: hejibopku@hotmail.com
Thursday, October 30, 2008
####Matlab Programming Notes####
An Introduction to Matlab
Help for Psychtoolbox
- For beginners: Psychtoolbox Tutorial and Ione Fine's Psychtoolbox Tutorial.
- The Psychtoolbox includes built-in help available from the MATLAB command line.
- Enter "help Psychtoolbox" at the MATLAB prompt for a list of Psychtoolbox function categories.
- To display a list of Psychtoolbox functions within a category ask for help with the category, e.g. "help PsychBasic."
- To get help for a Psychtoolbox function ask for help on the function, e.g. "help GetChar."
- Psychtoolbox functions such as Screen, which accept a subcommand as an argument, include built-in help for their subcommands.
- For a list of all subcommands issue the function call with no arguments, e.g. enter "Screen" at the command line.
- To display documentation for a subcommand invoke the subcommand with a trailing question mark, e.g. Screen('OpenWindow?')
- Read our answers to frequently asked questions.
- Your friends and colleagues might help. Check out the Psychtoolbox forum.
Help for MATLAB
- Typing "doc" in MATLAB will activate their browser-based help system, which is quite handy.
- You can search the Mathworks web site.
- There's an active MATLAB newsgroup (mostly Windows and unix users).
- Typing "help" and "help help" at the MATLAB command line will list help topics and explain MATLAB help.
3. CRT Monitors and VGA:
–High-level language: slow for the machine to interpret, but easy for humans to understand
MATLAB is optimized for matrix-based calculations
•lookfor string %searches the first help line of every function for the keyword 'string'
Second dimension = number of columns
>> e=['world','hello']
e =
worldhello
>> e=['world';'hello']
e =
world
hello
u(n) = u(n-1) x (n+1)
with u(1) = 1
---solution 1: u.m----
function u = u(n)
u(1)=1;
for i=2:n
    u(i)=u(i-1)*(i+1);
end
---solution 2: uofi.m----
u=zeros(10,1);
u(1)=1;
for n=2:10
    u(n,1)=u(n-1,1).*(n+1);
end
u
It would be even nicer if we asked the user for the first value of the series.
–Use 'input'
–Syntax:
VAR = input('your text here'); (EVALUATED INPUT)
–For entering strings of characters:
VAR = input('Your text here','s'); (NOT EVALUATED INPUT)
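For instance (a minimal sketch; the prompts and variable names are just illustrative):

n    = input('How many terms of the series? ');   %evaluated: the user types a number
u1   = input('First value of the series? ');      %evaluated input
name = input('Your initials? ','s');              %'s': returned as a string, not evaluated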
SWITCH:
syntax
•More generally, if testing the value inside variable 'n'
•switch n
• case n1
• …
• case n2
• …
• otherwise
• …
•end
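A concrete sketch (assuming n holds a small integer):

switch n
    case 1
        disp('one');
    case {2,3}
        disp('two or three');
    otherwise
        disp('something else');
end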
%Write a program that asks the user for a PIN until the user gets it right,
%types 'q' to quit, or fails three times.
correctPin = 1234;
counter = 0;
while true
    guess = input('Please input your PIN, type q to quit:\n','s'); %read as a string
    counter = counter + 1;
    if strcmp(guess,'q')
        fprintf('You quit successfully.\n')
        break;
    elseif str2double(guess) == correctPin
        fprintf('You input the correct PIN.\n')
        break;
    else
        fprintf('You input the wrong PIN.\n')
    end
    if counter == 3
        fprintf('You failed three times!\n')
        break
    end
end
find
–FINDs the elements of an array (or matrix) that satisfy an expression and returns their indices.
–SYNTAX:
indices = find(expression);
–returns the indices for which the expression is true.
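For example (a minimal sketch):

x = [3 7 1 9 4];
idx = find(x > 4)    %returns [2 4], the positions of 7 and 9
x(idx)               %returns [7 9]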
Functions
•- can take input (passing variables)
•- can return outputs.
-variables are internal.
-NAME OF M-FILE AND FUNCTION HAVE TO BE THE SAME.
------example of function:-----------
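A minimal sketch of a function m-file (the name addup is illustrative; the file must be saved as addup.m):

function total = addup(v)
%ADDUP  Sum the elements of a vector v.
total = 0;                 %internal variable
for k = 1:length(v)
    total = total + v(k);
end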
Persistent Variables
•If you want a function to keep accessing a variable, every time you run it, declare the variable as persistent.
GLOBAL variables:
• -> can be used ANYWHERE…
• modified anywhere too!
•
•PERSISTENT variables:
• -> allow you to have some values persist in memory from one function use to the next.
• -> UNTOUCHABLES!! outside of function.
•
• -> beware of what's lurking under the surface.
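A minimal sketch of a persistent counter (the function name callCount is illustrative):

function n = callCount()
%CALLCOUNT  Return how many times this function has been called in this session.
persistent count        %keeps its value from one call to the next
if isempty(count)
    count = 0;          %first call: initialize
end
count = count + 1;
n = count;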
For further reading…
•> anonymous functions (do not require an m-file)
• > f = @(arglist)expression
• > sqr = @(x) x.^2;
•> subfunctions are created within a function
•> private functions: only visible to their parent directory scripts.
•> nested functions: share variables.
SAVING AND READING MATRICES
•save NAME
–saves ALL the variables in your workspace, regardless of differences in format, in the file NAME.mat
–Type:
–> clear all %CHECK WORKSPACE
– %CHECK CURRENT DIR.
–> load Worlds
save NAME var1 var2
–specifies which variables to save in file NAME.mat
•if you want to ADD stuff to a current mat file:
–save NAME var1 -append
wildcard *
•save avariables a* %saves all variables starting with a in file avariables.mat
reading text is different than reading numbers, but you can transform one into the other:
–num2str: transforms numbers into strings
–int2str: transforms integers into strings
–mat2str: transforms 2D matrices into strings
–char: convert to character array.
–format into a string: sprintf
–print to screen or file: fprintf
–read from string: sscanf
–read from file: fscanf
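For example (a minimal sketch):

rt  = 541.237;
s   = num2str(rt)                     %the number converted to a string
msg = sprintf('RT = %6.1f ms', rt)    %formatted into a string
v   = sscanf('3.14 2.72', '%f')       %reads the numbers back out of a string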
---------file operation---------
fid = fopen(filename)
–fclose(fid) %0 means success.
firstline = fgetl(fid)
•%don’t suppress output.
–nextline = fgets(fid)
% fgetl reads a line but does NOT copy the end of line character to the string. fgets does.
feof(fid) : 0 while not end of file , 1 once it is found.
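Putting these together, a minimal sketch that prints every line of a text file (goldi.txt is a hypothetical file name):

fid = fopen('goldi.txt','r');      %open the file for reading
while ~feof(fid)                   %loop until the end of the file
    thisline = fgetl(fid);         %read one line (without the end-of-line character)
    if ~ischar(thisline), break, end   %fgetl returns -1 past the end of the file
    disp(thisline);
end
fclose(fid);                       %returns 0 on success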
fprintf(fileID, PRINTING FORMAT, variable).
•Most common conversion characters.
–%c single character
–%d decimal notation
–%s string of characters
–INSIDE PRINTING area:
•\n new line
Rather than printing the text to the screen, let’s transfer it to another file.
–newfid = fopen('newgoldi.txt','w');
–fprintf(newfid, '%s \n', newline);
–'w' means write to this new file.
–'r' means open to read.
–'rt' reads as text.
–'a' means append (add at the end).
FIND SPACE CHARACTERS IN FIRST LINE.
–> spaces = find(firstline == ' ')
strcmp: compare whether two strings are the same
Intermixing text and variables
•fprintf('This is trial %2d.\n', trial);
•for count=1:10
–fprintf('This is trial %2d, and condition %d.\n', trial(count), condition(count));
–end;
Exercise --result display:
•Create a three column matrix with:
•first column: numbers from 1-10.
•second column: alternating 0-1.
•third column: random number between 150 and 1000.
•WRITE TO screen:
–think trial number, condition, RT.
Answer--
data = zeros(10,3);
data(:,1)=1:10;
data(:,2)=mod(data(:,1),2);
data(:,3)=rand(1,10)*850 +150;
%writes data column-wise.
%Treats matrix as comma-delimited list.
%CONTINUES EXECUTION until all the specified variables HAVE BEEN PRINTED.
%what we want is:
%data': the transposition of data
fprintf('%2d %d %3.1f\n',data');
Last issue.
•How do you print a ' or % or \ with fprintf?
ex: it's a beautiful day!
ex: I'm 100% certain 2\4=2.
•
•Answer: you double the escape character to make it printable (page17)
•> fprintf('I''m 100%% certain 2\\4=2.')
###########IMAGE###########
IMAGES (1) p. 22
•Let's play with Matlab's demo:
•Type:
•> clear all
•> load durer %check workspace.
•> image(X) %what happened?
•> colormap(map) %ruminate on this…
•> axis equal
•> axis image %same as equal, but image fits tightly
•> axis off %turns off tick marks
Other types of images
•You can load TIFF, JPEG, BMP… with
•imread
•[X,map] = imread(filename,ext);
Write images to files
•Let's make and save a random b/w mask image.
• imwrite(matrix,'nameoffile','extension')
%imwrite(matrix,colormap,'nof','ext') for indexed images
mask = rand(400,400);
imwrite(mask,'mask','bmp');
clear all
[x,map]=imread('mask','bmp');
image(x);
colormap(map);
•Let's make and save a random color-noise mask image -unindexed
•> mask2 = rand(400,400,3); %why 3?
•> imwrite(mask2,'mask2','jpg');
•> clear all;
•> input('click key when ready');
•> X = imread('mask2','jpg');
•> image(X);
•> %mask2 is a truecolor (RGB) image, so no colormap is needed (and map was cleared above)
•> input('click key when ready');
•> colormap(hot); %changing the colormap has no visible effect on a truecolor image
Matlab homework5
1. durernoise.m
Create a program that creates 5 different versions of the durer image with increasing levels of noise, using the same grayscale(256) CLUT, all in one single image (BMP).
Level of noise is measured as the deviation of the noise from the actual value of the pixel.
use the following as an example:
>> z = mod(X + (rand(size(X)).*32 - 16),128);
>> image(z)
%Goal:
%Create a program that creates 5 different versions of the durer image with
%increasing levels of noise, using the same grayscale(256) CLUT, all in one
%single image (BMP).
load durer;
colormap(gray(256));
% create the image with noise
NoiseLevel1=mod(X+(rand(size(X)).*4-2),128);
NoiseLevel2=mod(X+(rand(size(X)).*8-4),128);
NoiseLevel3=mod(X+(rand(size(X)).*16-8),128);
NoiseLevel4=mod(X+(rand(size(X)).*32-16),128);
NoiseLevel5=mod(X+(rand(size(X)).*64-32),128);
combine=[NoiseLevel1'; NoiseLevel2';NoiseLevel3';NoiseLevel4';NoiseLevel5'];
image(combine');
% to polish the graph
axis off;
axis image;
xlabel 'Figure 1. Durer image with increasing levels of noise from left to right';
% write image
imwrite(combine',gray(256),'Durer.bmp','bmp');
Indexed image:
load durer; %gives me X and map
%I am going to create five matrices Xnoi which will be copies of X with
%increasing noise
Xnoi = X + rand(648,509).*60 -30; %noise here is created by randomly
%varying the luminance of a pixel
%the total range of luminance is
%124, so 30 is about 1/4 of that.
%PROBLEM: some indices will be lower than 1 or larger than 128...
% so we correct for that. However you would like to do it!
below = find(Xnoi < 1); %find values of Xnoi that go below the colormap %index of 1
Xnoi(below) = 1; %for those values, we reassign a low luminance %value.
% the same in all cases!
above=find(Xnoi>128); % we do the same for indices larger than 128.
Xnoi(above)= 128;
2. rgbnoise.m
Create a program that takes the visionlab jpg logo and presents it at two different levels of black-and-white noise and two different levels of color noise, all in one single image that also includes the untouched original.
3. Submit both images and the corresponding script files.
www.psych.uiuc.edu/~alleras/courseImages.htm
The textread function
•
•SYNTAX:
•A = textread('filename') transforms data in filename into Matrix A.
•ONLY WORKS WITH HOMOGENEOUS Matrices.
•SYNTAX:
•[A,B,C] = textread('filename','%s%d%f')
•reads each column into a variable, of specified type.
strings are saved in "cell" arrays (multidimensional arrays whose elements are copies of other arrays, here a table of strings of different sizes).
names(1) is the cell itself
•so trash = names(1) makes trash a cell
•names{1} refers to the value in the cell
•so trash = names{1} makes trash a character array
•names{1}(j) is the jth element of the character array stored in cell 1.
USE strcmp(string1,string2)
which is true if string1==string2.
name2f = input('what student?','s');
numstu = size(names,1); %number of rows
for findex=1:numstu
if (strcmp(name2f,names{findex}))
whichisit =findex;
end;
end;
Matlab homework6
for vowel = ['a' 'e' 'i' 'o' 'u']
    fname = ['let' vowel '.gif'];   %concatenate the file name string, e.g. 'leta.gif'
    [letter,map] = imread(fname);   %no eval needed: imread takes the string directly
    image(letter);
    colormap(map);
    axis off;
    axis equal;
    input('Ready for next? \n');
end
Tip: avoid explicit loops (like for n=1:10 ... end) when a vectorized expression will do.
Psychtoolbox Win 2.54 (20 February 2004) requires Matlab 6.5 (Student or regular) or better.
Download zip archive (3.1 MB)
IMPORTANT UPDATES TO Win 2.54:
After you download Win Psychtoolbox 2.54, replace bug-ridden files with improved versions listed below:
WaitSecs.dll - Get it here.
CopyText.dll - Get it here.
1. Structures
Multidimensional array elements accessed by textual designators. Each field can contain any type of Matlab data (numbers, strings, cells, etc).
Type:
Data.trial = 1;
Data.setsize = 3;
Data.tgtword = 'doctor';
Data.rt = 541;
Data.resp =1;
Data
Using the "struct" function:
Data = struct('label1', dummy1, 'label2',dummy2, etc);
Creates the structure:
data.label1 = dummy1
data.label2 = dummy2
Data(64) = struct('label1', dummy1, 'label2', dummy2, etc);
Access each element like a vector:
Data(34).label1 = 234;
data(n) =struct('field1',value1,'field2',value2)
initializes only the nth value. Others are set to empty matrices.
data = repmat(struct('trial',1,'rt',-1),1,64);
This way ALL values are initialized with values specified in the struct function.
Or: Use values saved in a cell array.
> a = cell(3,1);
> a{1} = 'bob';
> a{2} = 'where are you?';
> a{3} = 'I am here';
> data = struct('line',a);
What's data(2).line?
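To check (given the cell array a above, data is a 3-by-1 struct array):

data(2).line    %returns 'where are you?'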
Planning an experiment
section 1:SETUP VARIABLES
-Initialize CONSTANTS (refresh rate)
-Initialize Variables (with comments so you know what each variable does)
-Load big files (images, sounds, mex…)
Section 2:BALANCE CONDITIONS
BALANCE your design:
-define conditions
-how many trials for each condition
Section 3: TRIAL LOOP
3.1 DRAW constant images
(fixation, blank screens)
3.2 Start Trial Loop
3.2.1 Draw trial specific stimuli (if any)
3.2.2 Present stimuli
3.2.3 Get response
3.2.4 Classify response (error?)
3.2.5 ERASE ANY TRIAL SPECIFIC STIMULI
Section 4:Save DATA
Open a subject file and write data to it. (personal preference).
save data after each block/trial to avoid data loss or software crash.
Section 5:CLEAN UP
Clear all the variables you used and
the images you created, and close
any opened files.
AND DON'T FORGET TO COMMENT AS MUCH AS YOU CAN!!!
AND DON'T FORGET SECTION 0: Comments at beginning of file for
"help".
The Psychophysics Toolbox
A set of functions to:
-Interact with Monitor (pg. 27-32)
-Interact with Keyboard and mouse
-Interact with your OS (Here, Windows XP)
Screen function:
-Function that helps us interact with our monitor.
- Many "sub-functions".
First thing:
OpenWindow:
[windowPtr,rect]=Screen(0,'OpenWindow',[color],[rectangle],[pixelSize]);
windowPtr: a pointer to the space in memory we are allocating to work on this window (much like fid=fopen(...) keeps track of a file); the variable can be given any name, e.g. "fixationDisplay", "practice", etc.
rect: (if specified, and I suggest you do) gives you the coordinates of the window you'll be using, in pixels, in the format [Xtop-left, Ytop-left, Xbottom-right, Ybottom-right], e.g. [0 0 1280 1024].
0: refers to the main monitor (where you'll be presenting stimuli).
color: the color you want the window to be: a single number is an index into the CLUT (between 0-255); otherwise use an RGB triplet [r g b]. Later we'll talk about
changing the CLUT.
rectangle: ignored in windows
pixelsize: you can set the pixelsize for your screens (8 bit -> 256 colors, 24 bit…). Default is unchanged.
Close Window
Two ways:
To close all windows:
Screen('CloseAll');
To close a specific window:
Screen(windowPtr,'Close');
VERY IMPORTANT!!!
"Hello World " to psychtoolbox
warning off MATLAB:DeprecatedLogicalAPI
[windowPtr,rect]=Screen(0,'OpenWindow',255); % CLUT 255 for white background, 0 for black
Screen(windowPtr,'DrawText','Hello World',500,350,100);
KbWait; %wait for the user to push a key
Screen('CloseAll');
DETERMINE HOW LONG IT TOOK MATLAB
TO OPEN THE WINDOW AND WRITE TEXT
TO IT.
Use: GetSecs (returns seconds since computer was turned on);
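A minimal sketch of one way to do it:

t0 = GetSecs;
[windowPtr,rect] = Screen(0,'OpenWindow',255);
Screen(windowPtr,'DrawText','Hello World',500,350,100);
elapsed = GetSecs - t0     %seconds spent opening the window and writing the text
KbWait;
Screen('CloseAll');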
OffScreenWindows
So, we can work "offline", prepare our screens and then quickly copy
them to the screen.
image = Screen(windowPtr,'OpenoffScreenWindow',color,smallerrect)
image is pointer to refer to this offscreen window
windowPtr is pointer to the monitor window (to which all windows are related)
smallerrect is size of offscreen window (can be smaller!)
So, Once we have our offscreen window, we THEN copy it to our main window.
Screen('CopyWindow', srcWinPtr,DestWindoPtr, [srcRect],[dstRect]);
srcWinPtr: is the windowPointer (name) of the window you want to copy (source)
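For example (a sketch in the same style; the pointers and rectangles are illustrative):

[windowPtr,rect] = Screen(0,'OpenWindow',255);
offPtr = Screen(windowPtr,'OpenOffscreenWindow',255,[0 0 200 200]); %prepare this "offline"
Screen(offPtr,'DrawText','ready',20,100,0);
Screen('CopyWindow',offPtr,windowPtr,[0 0 200 200],[100 100 300 300]); %then copy it to the screen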
Bugs: The color triplet in the OpenWindow subcommand is interpreted as BGR instead of RGB, so the following commands create a blue background:
[window,rect]=Screen(0,'OpenWindow',[255 0 0]);
image = Screen(window,'OpenOffscreenWindow',[255 0 0],[0 0 100 100]);
HideCursor;
beep=MakeBeep(100,5);
Snd('Open');
PutImage
Screen(windowPtr,'PutImage', imagearray,[rect],[copymode]);
imagearray is a matrix like the ones we created a few weeks ago:MxN or MxNx3 (if 16 or 32 bits graphic card)
Help on Screen functions?
Type:
Screen('NameOfFunction?')
Try:
Screen('PutImage?')
Beeps and Wav sounds
Matlab can play wav sounds and beeps WHILE doing something else
(nice for experiments).
Just like we 'Open' a window, we need to 'Open' a sound channel:
Snd('Open');
and 'close' it when you are done:
Snd('Close');
Beeps
You can create beeps with MakeBeep, which creates a vector that will be interpreted by your sound card (just like a matrix of numbers is interpreted by your graphics card as an image).
beep = MakeBeep(frequency,duration,samplingrate);
Then you can play that beep:
Snd('Play',beep,samplingrate);
Wav files
Windows Audio files:
-> wavplay(y,FS) plays the sound recorded in vector y at the sampling frequency specified in FS (the same sound will be different if played faster (larger FS) or slower (smaller FS)). For stereo playback, y would be an N-by-2 matrix (left and right channels).
->important:
wavplay(y,FS,'async') allows you to play that sound while continuing to do stuff in Matlab (non-blocking call).
Synchronizing windows and monitor
To do so, we use the 'WaitBlanking' command in Screen.
Calling:
Screen(windowptr,'WaitBlanking')
will wait until your gun moves to the top of the monitor. (only true for CRT monitors and analog LCDs)
counting time with refresh rates:
->IF you want to present a stimulus for a specific amount of time, count time in refreshes.
TIME
counting time with GetSecs:
Cool and accurate. You can do something like:
t1=GetSecs;
t2=t1;
Screen('CopyWindow'…); %or whatever…
while ((t2-t1)< presentationTime)
t2=GetSecs;
end;
%This syntax is equivalent to WaitSecs.
tic (start counting time)
toc (count the time. )
###Get the refresh rate of monitor#######
warning off MATLAB:DeprecatedLogicalAPI
[windowptr,rectangle]=Screen(0,'OpenWindow',255);
Screen(windowptr,'WaitBlanking')
t1=GetSecs;
t2=t1;
counter=0;
while ((t2-t1)< 1)
Screen(windowptr,'WaitBlanking')
counter=counter+1;
t2=GetSecs;
end;
counter
Screen('CloseAll');
frames=FrameRate(window); %refresh rate in frames per second
%to present a stimulus for 100 ms accurately, wait a whole number of refreshes:
Screen(window,'WaitBlanking',floor(frames./10))
Keyboard Management
List of useful functions:
KbCheck: status of keyboard
KbWait: waits for Keypress (returns GetSecs)
GetChar: waits for character
CharAvail: checks event queue for characters
FlushEvents: help manage event queue
EventAvail: checks for events
KbCheck
IS USEFUL TO CLEAN THE KEYBOARD
BUFFER!!
while KbCheck end;
will "clean" your buffer
(Technically, wait until it is clean)
[keyisDown,secs,keyCode]=KbCheck;
find(keyCode) which key in ASCII
or you can ask directly if a given key was hit:
if keyCode('Z') (would be 1 if the z key was hit)
%note capitalization.
KbName: toolbox function that allows us to name the different keys on the keyboard (primary label) (Note: '5' vs. '5%')
Usage: KbName(arg) if arg is a string ('z'): returns the keyCode for that key if arg is the array keyCode, KbName returns the label of the key.
KbName deals with KEYS not Characters!
leftTarget = KbName('left');
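A minimal sketch that waits until the subject presses the left arrow key:

leftTarget = KbName('left');            %key code for the left arrow key
while KbCheck end;                      %make sure no key is already down
keyCode = zeros(1,256);
while ~keyCode(leftTarget)              %loop until the left arrow is pressed
    [keyIsDown,secs,keyCode] = KbCheck;
end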
MOUSE Management
List of useful functions:
HideCursor
ShowCursor
GetClicks
GetMouse
SetMouse
[x,y,buttons]=GetMouse(windowPtr);
PsychDemos
If you are wondering what kind of thing you can do with PTB and how some of your
ideas can be coded up, look at the demos that ship with PTB. Try typing:
>>help PsychDemos
This will give you a list of available demos and a short description of what they do. If you are curious what a certain
demo does you can inquire further. For example, type:
>>help MandelbrotDemo
This will tell you what this script does. If you are curious how this is implemented, type:
>>edit MandelbrotDemo
This will open the file MandelbrotDemo.m in an editor window. Don’t edit this file! You might cause some damage.
Instead, save the file under a new name. For example, ‘myMandelbrotDemo.m’. Now you can twiddle things in the file
and try to see what effect these changes have on the execution of the program. But before you start doing that, let’s get
acquainted with the single most important function in Psychtoolbox.
questdlg
ButtonName=questdlg(Question,Title,Btn1,Btn2,DEFAULT);
-up to three buttons.
-Default is optional.
ButtonName=questdlg('What is your wish?', ...
'Genie Question', ...
'Food','Clothing','Money','Money');
INPUTDLG
ANSWER = INPUTDLG(PROMPT) creates an input dialog box where
users can enter text, saved in the cell array ANSWER.
PROMPT is a cell array containing the PROMPT strings.
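For example (a minimal sketch; the prompts are illustrative):

answer = inputdlg({'Subject number:','Age:'},'Session info');
subNum = str2double(answer{1});   %answers come back as strings in a cell array
subAge = str2double(answer{2});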
Saturday, September 27, 2008
A Brief History of
Human Computer Interaction Technology
Human Computer Interaction Technology
Brad A. Myers
Carnegie Mellon University School of Computer Science Technical
Report CMU-CS-96-163
and
Human Computer Interaction Institute
Technical Report CMU-HCII-96-103
December, 1996
Please cite this work as:
Brad A. Myers. "A Brief History of Human Computer Interaction
Technology."
ACM interactions. Vol. 5, no. 2, March, 1998. pp. 44-54.
Human Computer Interaction Institute
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213-3891
bam@a.gp.cs.cmu.edu
Abstract
This article summarizes the historical development of major advances in
human-computer interaction technology, emphasizing the pivotal role of
university research in the advancement of the field.
Copyright (c) 1996 -- Carnegie Mellon University
A short excerpt from this article appeared as part of "Strategic Directions
in
Human Computer Interaction," edited by Brad Myers, Jim Hollan, Isabel Cruz,
ACM Computing Surveys, 28(4), December 1996
This research was partially sponsored by NCCOSC under Contract No.
N66001-94-C-6037, Arpa Order No. B326 and partially by NSF under grant number
IRI-9319969. The views and conclusions contained in this document are those
of
the authors and should not be interpreted as representing the official
policies, either expressed or implied, of NCCOSC or the U.S. Government.
Keywords: Human Computer Interaction, History, User Interfaces,
Interaction Techniques.
1.
Introduction
Research in Human-Computer Interaction (HCI) has been spectacularly
successful,
and has fundamentally changed computing. Just one example is the ubiquitous
graphical interface used by Microsoft Windows 95, which is based on the
Macintosh, which is based on work at Xerox PARC, which in turn is based on
early research at the Stanford Research Laboratory (now SRI) and at the
Massachusetts Institute of Technology. Another example is that virtually
all
software written today employs user interface toolkits and interface builders,
concepts which were developed first at universities. Even the spectacular
growth of the World-Wide Web is a direct result of HCI research: applying
hypertext technology to browsers allows one to traverse a link across the
world
with a click of the mouse. Interface improvements more than anything else
have
triggered this explosive growth. Furthermore, the research that will lead
to
the user interfaces for the computers of tomorrow is happening at universities
and a few corporate research labs.
This paper tries to briefly summarize many of the important research
developments in Human-Computer Interaction (HCI) technology. By "research,"
I
mean exploratory work at universities and government and corporate research
labs (such as Xerox PARC) that is not directly related to products. By "HCI
technology," I am referring to the computer side of HCI. A companion article
on the history of the "human side," discussing the contributions from
psychology, design, human factors and ergonomics would also be appropriate.
A motivation for this article is to overcome the mistaken impression that
much
of the important work in Human-Computer Interaction occurred in industry,
and
if university research in Human-Computer Interaction is not supported, then
industry will just carry on anyway. This is simply not true. This paper
tries
to show that many of the most famous HCI successes developed by companies
are
deeply rooted in university research. In fact, virtually all of today's
major
interface styles and applications have had significant influence from research
at universities and labs, often with government funding. To illustrate this,
this paper lists the funding sources of some of the major advances. Without
this research, many of the advances in the field of HCI would probably not
have
taken place, and as a consequence, the user interfaces of commercial products
would be far more difficult to use and learn than they are today. As described
by Stu Card:
"Government funding of advanced human-computer interaction technologies built
the intellectual capital and trained the research teams for pioneer systems
that, over a period of 25 years, revolutionized how people interact with
computers. Industrial research laboratories at the corporate level in Xerox,
IBM, AT&T, and others played a strong role in developing this technology
and bringing it into a form suitable for the commercial arena." [6, p.
162].
Figure 1 shows time lines for some of the technologies discussed in this
article. Of course, a deeper analysis would reveal much interaction between
the university, corporate research and commercial activity streams. It is
important to appreciate that years of research are involved in creating and
making these technologies ready for widespread use. The same will be true
for
the HCI technologies that will provide the interfaces of tomorrow.
It is clearly impossible to list every system and source in a paper of this
scope, but I have tried to represent the earliest and most influential systems.
Although there are a number of other surveys of HCI topics (see, for example
[1] [10] [33] [38]), none cover as many aspects as this one, or try to be
as
comprehensive in finding the original influences. Another useful resource
is
the video "All The Widgets," which shows the historical progression of a
number
of user interface ideas [25].
The technologies covered in this paper include fundamental interaction styles
like direct manipulation, the mouse pointing device, and windows; several
important kinds of application areas, such as drawing, text editing and
spreadsheets; the technologies that will likely have the biggest impact on
interfaces of the future, such as gesture recognition, multimedia, and 3D;
and
the technologies used to create interfaces using the other technologies,
such as user interface management systems, toolkits, and interface builders.
Figure 1: Approximate time lines showing where work was performed
on
some major technologies discussed in this article.
2.
Basic Interactions
- Direct Manipulation of graphical objects: The now ubiquitous
direct
manipulation interface, where visible objects on the screen are directly
manipulated with a pointing device, was first demonstrated by Ivan Sutherland
in Sketchpad [44], which was his 1963 MIT PhD thesis. SketchPad supported
the
manipulation of objects using a light-pen, including grabbing objects, moving
them, changing size, and using constraints. It contained the seeds of myriad
important interface ideas. The system was built at Lincoln Labs with support
from the Air Force and NSF. William Newman's Reaction Handler [30], created
at
Imperial College, London (1966-67) provided direct manipulation of graphics,
and introduced "Light Handles," a form of graphical potentiometer, that was
probably the first "widget." Another early system was AMBIT/G (implemented
at
MIT's Lincoln Labs, 1968, ARPA funded). It employed, among other interface
techniques, iconic representations, gesture recognition, dynamic menus with
items selected using a pointing device, selection of icons by pointing, and
moded and mode-free styles of interaction. David Canfield Smith coined the
term "icons" in his 1975 Stanford PhD thesis on Pygmalion [41] (funded by
ARPA
and NIMH) and Smith later popularized icons as one of the chief designers
of
the Xerox Star [42]. Many of the interaction techniques popular in direct
manipulation interfaces, such as how objects and text are selected, opened,
and
manipulated, were researched at Xerox PARC in the 1970's. In particular,
the
idea of "WYSIWYG" (what you see is what you get) originated there with systems
such as the Bravo text editor and the Draw drawing program [10]. The concept
of
direct manipulation interfaces for everyone was envisioned by Alan Kay of
Xerox
PARC in a 1977 article about the "Dynabook" [16]. The first commercial systems
to make extensive use of Direct Manipulation were the Xerox Star (1981) [42],
the Apple Lisa (1982) [51] and Macintosh (1984) [52]. Ben Shneiderman at
the
University of Maryland coined the term "Direct Manipulation" in 1982 and
identified the components and gave psychological foundations [40].
- The Mouse: The mouse was developed at Stanford Research Laboratory
(now SRI) in 1965 as part of the NLS project (funding from ARPA, NASA, and
Rome
ADC) [9] to be a cheap replacement for light-pens, which had been used at
least
since 1954 [10, p. 68]. Many of the current uses of the mouse were
demonstrated by Doug Engelbart as part of NLS in a movie created in 1968
[8].
The mouse was then made famous as a practical input device by Xerox PARC
in the
1970's. It first appeared commercially as part of the Xerox Star (1981),
the
Three Rivers Computer Company's PERQ (1981) [23], the Apple Lisa (1982),
and
Apple Macintosh (1984).
- Windows: Multiple tiled windows were demonstrated in Engelbart's
NLS
in 1968 [8]. Early research at Stanford on systems like COPILOT (1974) [46]
and at MIT with the EMACS text editor (1974) [43] also demonstrated tiled
windows. Alan Kay proposed the idea of overlapping windows in his 1969
University of Utah PhD thesis [15] and they first appeared in 1974 in his
Smalltalk system [11] at Xerox PARC, and soon after in the InterLisp system
[47]. Some of the first commercial uses of windows were on Lisp Machines
Inc.
(LMI) and Symbolics Lisp Machines (1979), which grew out of MIT AI Lab
projects. The Cedar Window Manager from Xerox PARC was the first major tiled
window manager (1981) [45], followed soon by the Andrew window manager [32]
by
Carnegie Mellon University's Information Technology Center (1983, funded
by
IBM). The main commercial systems popularizing windows were the Xerox Star
(1981), the Apple Lisa (1982), and most importantly the Apple Macintosh (1984).
The early versions of the Star and Microsoft Windows were tiled, but eventually
they supported overlapping windows like the Lisa and Macintosh. The X Window
System, a current international standard, was developed at MIT in 1984 [39].
For a survey of window managers, see [24].
3.
Application Types
- Drawing programs: Much of the current technology was
demonstrated in
Sutherland's 1963 Sketchpad system. The use of a mouse for graphics was
demonstrated in NLS (1965). In 1968 Ken Pulfer and Grant Bechthold at the
National Research Council of Canada built a mouse out of wood patterned after
Engelbart's and used it with a key-frame animation system to draw all the
frames of a movie. A subsequent movie, "Hunger" in 1971 won a number of
awards, and was drawn using a tablet instead of the mouse (funding by the
National Film Board of Canada) [3]. William Newman's Markup (1975) was the
first drawing program for Xerox PARC's Alto, followed shortly by Patrick
Baudelaire's Draw which added handling of lines and curves [10, p. 326].
The
first computer painting program was probably Dick Shoup's "Superpaint" at
PARC
(1974-75).
- Text Editing: In 1962 at the Stanford Research Lab, Engelbart
proposed, and later implemented, a word processor with automatic word wrap,
search and replace, user-definable macros, scrolling text, and commands to
move, copy, and delete characters, words, or blocks of text. Stanford's
TVEdit
(1965) was one of the first CRT-based display editors that was widely used
[48]. The Hypertext Editing System [50, p. 108] from Brown University had
screen editing and formatting of arbitrary-sized strings with a lightpen
in
1967 (funding from IBM). NLS demonstrated mouse-based editing in 1968.
TECO
from MIT was an early screen-editor (1967) and EMACS [43] developed from
it in
1974. Xerox PARC's Bravo [10, p. 284] was the first WYSIWYG editor-formatter
(1974). It was designed by Butler Lampson and Charles Simonyi who had started
working on these concepts around 1970 while at Berkeley. The first commercial
WYSIWYG editors were the Star, LisaWrite and then MacWrite. For a survey
of
text editors, see [22] [50, p. 108].
- Spreadsheets: The initial spreadsheet was VisiCalc which was developed
by Frankston and Bricklin (1977-8) for the Apple II while they were students
at
MIT and the Harvard Business School. The solver was based on a
dependency-directed backtracking algorithm by Sussman and Stallman at the
MIT
AI Lab.
- HyperText: The idea for hypertext (where documents are linked
to
related documents) is credited to Vannevar Bush's famous MEMEX idea from
1945
[4]. Ted Nelson coined the term "hypertext" in 1965 [29]. Engelbart's NLS
system [8] at the Stanford Research Laboratories in 1965 made extensive use
of
linking (funding from ARPA, NASA, and Rome ADC). The "NLS Journal" [10,
p.
212] was one of the first on-line journals, and it included full linking
of
articles (1970). The Hypertext Editing System, jointly designed by Andy
van
Dam, Ted Nelson, and two students at Brown University (funding from IBM)
was
distributed extensively [49]. The University of Vermont's PROMIS (1976)
was
the first Hypertext system released to the user community. It was used to
link
patient and patient care information at the University of Vermont's medical
center. The ZOG project (1977) from CMU was another early hypertext system,
and was funded by ONR and DARPA [36]. Ben Shneiderman's Hyperties was the
first system where highlighted items in the text could be clicked on to go
to
other pages (1983, Univ. of Maryland) [17]. HyperCard from Apple (1988)
significantly helped to bring the idea to a wide audience. There have been
many other hypertext systems through the years. Tim Berners-Lee used the
hypertext idea to create the World Wide Web in 1990 at the government-funded
European Particle Physics Laboratory (CERN). Mosaic, the first popular
hypertext browser for the World-Wide Web was developed at the Univ. of
Illinois' National Center for Supercomputer Applications (NCSA). For a more
complete history of HyperText, see [31].
- Computer Aided Design (CAD): The same 1963 IFIPS conference at
which
Sketchpad was presented also contained a number of CAD systems, including
Doug
Ross's Computer-Aided Design Project at MIT in the Electronic Systems Lab
[37]
and Coons' work at MIT with SketchPad [7]. Timothy Johnson's pioneering
work
on the interactive 3D CAD system Sketchpad 3 [13] was his 1963 MIT MS thesis
(funded by the Air Force). The first CAD/CAM system in industry was probably
General Motor's DAC-1 (about 1963).
- Video Games: The first graphical video game was probably SpaceWar
by
Slug Russel of MIT in 1962 for the PDP-1 [19, p. 49] including the first
computer joysticks. The early computer Adventure game was created by Will
Crowther at BBN, and Don Woods developed this into a more sophisticated
Adventure game at Stanford in 1966 [19, p. 132]. Conway's game of LIFE was
implemented on computers at MIT and Stanford in 1970. The first popular
commercial game was Pong (about 1976).
4.
Up-and-Coming Areas
- Gesture Recognition: The first pen-based input device,
the RAND
tablet, was funded by ARPA. Sketchpad used light-pen gestures (1963).
Teitelman in 1964 developed the first trainable gesture recognizer. A very
early demonstration of gesture recognition was Tom Ellis' GRAIL system on
the
RAND tablet (1964, ARPA funded). It was quite common in light-pen-based
systems to include some gesture recognition, for example in the AMBIT/G system
(1968 -- ARPA funded). A gesture-based text editor using proof-reading symbols
was developed at CMU by Michael Coleman in 1969. Bill Buxton at the University
of Toronto has been studying gesture-based interactions since 1980. Gesture
recognition has been used in commercial CAD systems since the 1970s, and
came
to universal notice with the Apple Newton in 1992.
- Multi-Media: The FRESS project at Brown used multiple windows
and
integrated text and graphics (1968, funding from industry). The Interactive
Graphical Documents project at Brown was the first hypermedia (as opposed
to
hypertext) system, and used raster graphics and text, but not video (1979-1983,
funded by ONR and NSF). The Diamond project at BBN (starting in 1982, DARPA
funded) explored combining multimedia information (text, spreadsheets,
graphics, speech). The Movie Manual at the Architecture Machine Group (MIT)
was one of the first to demonstrate mixed video and computer graphics in
1983
(DARPA funded).
- 3-D: The first 3-D system was probably Timothy Johnson's 3-D CAD
system mentioned above (1963, funded by the Air Force). The "Lincoln Wand"
by
Larry Roberts was an ultrasonic 3D location sensing system, developed at
Lincoln Labs (1966, ARPA funded). That system also had the first interactive
3-D hidden line elimination. An early use was for molecular modelling [18].
The late 60's and early 70's saw the flowering of 3D raster graphics research
at the University of Utah with Dave Evans, Ivan Sutherland, Romney, Gouraud,
Phong, and Watkins, much of it government funded. Also, the
military-industrial flight simulation work of the 60's - 70's led the way
to
making 3-D real-time with commercial systems from GE, Evans&Sutherland,
Singer/Link (funded by NASA, Navy, etc.). Another important center of current
research in 3-D is Fred Brooks' lab at UNC (e.g. [2]).
- Virtual Reality and "Augmented Reality": The original work on
VR was
performed by Ivan Sutherland when he was at Harvard (1965-1968, funding
by Air
Force, CIA, and Bell Labs). Very important early work was by Tom Furness
when
he was at Wright-Patterson AFB. Myron Krueger's early work at the University
of Connecticut was influential. Fred Brooks' and Henry Fuch's groups at
UNC
did a lot of early research, including the study of force feedback (1971,
funding from US Atomic Energy Commission and NSF). Much of the early research
on head-mounted displays and on the DataGlove was supported by NASA.
- Computer Supported Cooperative Work: Doug Engelbart's 1968
demonstration of NLS [8] included the remote participation of multiple people
at various sites (funding from ARPA, NASA, and Rome ADC). Licklider and
Taylor
predicted on-line interactive communities in an 1968 article [20] and
speculated about the problem of access being limited to the privileged.
Electronic mail, still the most widespread multi-user software, was enabled
by
the ARPAnet, which became operational in 1969, and by the Ethernet from Xerox
PARC in 1973. An early computer conferencing system was Turoff's EIES system
at the New Jersey Institute of Technology (1975).
- Natural language and speech: The fundamental research for speech
and
natural language understanding and generation has been performed at CMU,
MIT,
SRI, BBN, IBM, AT&T Bell Labs and BellCore, much of it government funded.
See, for example, [34] for a survey of the early work.
5.
Software Tools and Architectures
The area of user interface software tools is quite active now, and
many
companies are selling tools. Most of today's applications are implemented
using various forms of software tools. For a more complete survey and
discussion of UI tools, see [26].
- UIMSs and Toolkits: (There are software libraries and tools
that
support creating interfaces by writing code.) The first User Interface
Management System (UIMS) was William Newman's Reaction Handler [30] created
at
Imperial College, London (1966-67 with SRC funding). Most of the early work
was done at universities (Univ. of Toronto with Canadian government funding,
George Washington Univ. with NASA, NSF, DOE, and NBS funding, Brigham Young
University with industrial funding, etc.). The term "UIMS" was coined by
David
Kasik at Boeing (1982) [14]. Early window managers such as Smalltalk (1974)
and InterLisp, both from Xerox PARC, came with a few widgets, such as popup
menus and scrollbars. The Xerox Star (1981) was the first commercial system
to
have a large collection of widgets. The Apple Macintosh (1984) was the first
to actively promote its toolkit for use by other developers to enforce a
consistent interface. An early C++ toolkit was InterViews [21], developed
at
Stanford (1988, industrial funding). Much of the modern research is being
performed at universities, for example the Garnet (1988) [28] and Amulet
(1994) [27] projects at CMU (ARPA funded), and subArctic at Georgia Tech
(1996,
funding by Intel and NSF).
- Interface Builders: (These are interactive tools that allow interfaces
composed of widgets such as buttons, menus and scrollbars to be placed using
a
mouse.) The Steamer project at BBN (1979-85; ONR funding) demonstrated many
of
the ideas later incorporated into interface builders and was probably the
first
object-oriented graphics system. Trillium [12] was developed at Xerox PARC
in
1981. Another early interface builder was the MenuLay system [5] developed
by
Bill Buxton at the University of Toronto (1983, funded by the Canadian
Government). The Macintosh (1984) included a "Resource Editor" which allowed
widgets to be placed and edited. Jean-Marie Hullot created "SOS Interface"
in
Lisp for the Macintosh while working at INRIA (1984, funded by the French
government) which was the first modern "interface builder." Hullot built
this
into a commercial product in 1986 and then went to work for NeXT and created
the NeXT Interface Builder (1988), which popularized this type of tool.
Now
there are literally hundreds of commercial interface builders.
- Component Architectures: The idea of creating interfaces by connecting
separately written components was first demonstrated in the Andrew project
[32]
by Carnegie Mellon University's Information Technology Center (1983, funded
by
IBM). It is now being widely popularized by Microsoft's OLE and Apple's
OpenDoc architectures.
6.
Discussion
It is clear that all of the most important innovations in Human-Computer
Interaction have benefited from research at both corporate research labs
and
universities, much of it funded by the government. The conventional style
of
graphical user interfaces that use windows, icons, menus and a mouse is in
a phase of standardization, where almost everyone is using the same, standard
technology and just making minute, incremental changes. Therefore, it is
important that university, corporate, and government-supported research
continue, so that we can develop the science and technology needed for the
user
interfaces of the future.
Another important argument in favor of HCI research in universities is that
computer science students need to know about user interface issues. User
interfaces are likely to be one of the main value-added competitive advantages
of the future, as both hardware and basic software become commodities. If
students do not know about user interfaces, they will not serve industry
needs.
It seems that only through computer science does HCI research disseminate
out
into products. Furthermore, without appropriate levels of funding of academic
HCI research, there will be fewer PhD graduates in HCI to perform research
in
corporate labs, and fewer top-notch graduates in this area will be interested
in being professors, so the needed user interface courses will not be
offered.
As computers get faster, more of the processing power is being devoted to
the
user interface. The interfaces of the future will use gesture recognition,
speech recognition and generation, "intelligent agents," adaptive interfaces,
video, and many other technologies now being investigated by research groups
at
universities and corporate labs [35]. It is imperative that this research
continue and be well-supported.
ACKNOWLEDGMENTS
I must thank a large number of people who responded to posts of earlier
versions of this article on the announcements.chi mailing list for their
very
generous help, and to Jim Hollan who helped edit the short excerpt of this
article. Much of the information in this article was supplied by (in
alphabetical order): Stacey Ashlund, Meera M. Blattner, Keith Butler, Stuart
K.
Card, Bill Curtis, David E. Damouth, Dan Diaper, Dick Duda, Tim T.K. Dudley,
Steven Feiner, Harry Forsdick, Bjorn Freeman-Benson, John Gould, Wayne Gray,
Mark Green, Fred Hansen, Bill Hefley, D. Austin Henderson, Jim Hollan,
Jean-Marie Hullot, Rob Jacob, Bonnie John, Sandy Kobayashi, T.K. Landauer,
John
Leggett, Roger Lighty, Marilyn Mantei, Jim Miller, William Newman, Jakob
Nielsen, Don Norman, Dan Olsen, Ramesh Patil, Gary Perlman, Dick Pew, Ken
Pier,
Jim Rhyne, Ben Shneiderman, John Sibert, David C. Smith, Elliot Soloway,
Richard Stallman, Ivan Sutherland, Dan Swinehart, John Thomas, Alex Waibel,
Marceli Wein, Mark Weiser, Alan Wexelblat, and Terry Winograd. Editorial
comments were also provided by the above as well as Ellen Borison, Rich
McDaniel, Rob Miller, Bernita Myers, Yoshihiro Tsujino, and the reviewers.
References
1. Baecker, R., et al., "A Historical and Intellectual Perspective,"
in
Readings in Human-Computer Interaction: Toward the Year 2000, Second
Edition, R. Baecker, et al., Editors. 1995, Morgan Kaufmann
Publishers, Inc.: San Francisco. pp. 35-47.
2. Brooks, F. "The Computer "Scientist" as Toolsmith--Studies in Interactive
Computer Graphics," in IFIP Conference Proceedings. 1977. pp.
625-634.
3. Burtnyk, N. and Wein, M., "Computer Generated Key Frame Animation."
Journal Of the Society of Motion Picture and Television Engineers,
1971.
8(3): pp. 149-153.
4. Bush, V., "As We May Think." The Atlantic Monthly, 1945.
176(July): pp. 101-108. Reprinted and discussed in
interactions,
3(2), Mar 1996, pp. 35-67.
5. Buxton, W., et al. "Towards a Comprehensive User Interface Management
System," in Proceedings SIGGRAPH'83: Computer Graphics. 1983. Detroit,
Mich. 17. pp. 35-42.
6. Card, S.K., "Pioneers and Settlers: Methods Used in Successful User
Interface Design," in Human-Computer Interface Design: Success Stories,
Emerging Methods, and Real-World Context, M. Rudisill, et al.,
Editors. 1996, Morgan Kaufmann Publishers: San Francisco. pp. 122-169.
7. Coons, S. "An Outline of the Requirements for a Computer-Aided Design
System," in AFIPS Spring Joint Computer Conference. 1963. 23.
pp. 299-304.
8. Engelbart, D. and English, W., "A Research Center for Augmenting Human
Intellect." 1968. Reprinted in ACM SIGGRAPH Video Review, 1994. 106.
9. English, W.K., Engelbart, D.C., and Berman, M.L., "Display Selection
Techniques for Text Manipulation." IEEE Transactions on Human Factors
in Electronics, 1967. HFE-8(1)
10. Goldberg, A., ed. A History of Personal Workstations. 1988,
Addison-Wesley Publishing Company: New York, NY. 537.
11. Goldberg, A. and Robson, D. "A Metaphor for User Interface Design," in
Proceedings of the 12th Hawaii International Conference on System
Sciences. 1979. 1. pp. 148-157.
12. Henderson Jr, D.A. "The Trillium User Interface Design Environment,"
in
Proceedings SIGCHI'86: Human Factors in Computing Systems. 1986. Boston,
MA. pp. 221-227.
13. Johnson, T. "Sketchpad III: Three Dimensional Graphical Communication
with
a Digital Computer," in AFIPS Spring Joint Computer Conference. 1963.
23. pp. 347-353.
14. Kasik, D.J. "A User Interface Management System," in Proceedings
SIGGRAPH'82: Computer Graphics. 1982. Boston, MA. 16. pp. 99-106.
15. Kay, A., The Reactive Engine. PhD Thesis, Electrical Engineering
and
Computer Science University of Utah, 1969,
16. Kay, A., "Personal Dynamic Media." IEEE Computer, 1977.
10(3): pp. 31-42.
17. Koved, L. and Shneiderman, B., "Embedded menus: Selecting items in
context." Communications of the ACM, 1986. 4(29): pp.
312-318.
18. Levinthal, C., "Molecular Model-Building by Computer." Scientific
American, 1966. 214(6): pp. 42-52.
19. Levy, S., Hackers: Heroes of the Computer Revolution. 1984, Garden
City, NY: Anchor Press/Doubleday.
20. Licklider, J.C.R. and Taylor, R.W., "The computer as Communication
Device." Sci. Tech., 1968. April: pp. 21-31.
21. Linton, M.A., Vlissides, J.M., and Calder, P.R., "Composing user interfaces
with InterViews." IEEE Computer, 1989. 22(2): pp. 8-22.
22. Meyrowitz, N. and Van Dam, A., "Interactive Editing Systems: Part 1 and
2." ACM Computing Surveys, 1982. 14(3): pp. 321-352.
23. Myers, B.A., "The User Interface for Sapphire." IEEE Computer
Graphics and Applications, 1984. 4(12): pp. 13-23.
24. Myers, B.A., "A Taxonomy of User Interfaces for Window Managers."
IEEE Computer Graphics and Applications, 1988. 8(5): pp. 65-84.
25. Myers, B.A., "All the Widgets." SIGGRAPH Video Review,
1990.
57
26. Myers, B.A., "User Interface Software Tools." ACM Transactions
on
Computer Human Interaction, 1995. 2(1): pp. 64-103.
27. Myers, B.A., et al., The Amulet V2.0 Reference Manual. Carnegie Mellon
University Computer Science Department Report, Feb. 1996. System available
from http://www.cs.cmu.edu/~amulet.
28. Myers, B.A., et al., "Garnet: Comprehensive Support for Graphical,
Highly-Interactive User Interfaces." IEEE Computer, 1990.
23(11): pp. 71-85.
29. Nelson, T. "A File Structure for the Complex, the Changing, and the
Indeterminate," in Proceedings ACM National Conference. 1965. pp.
84-100.
30. Newman, W.M. "A System for Interactive Graphical Programming," in AFIPS
Spring Joint Computer Conference. 1968. 28. pp. 47-54.
31. Nielsen, J., Multimedia and Hypertext: the Internet and Beyond.
1995, Boston: Academic Press Professional.
32. Palay, A.J., et al. "The Andrew Toolkit - An Overview," in
Proceedings Winter Usenix Technical Conference. 1988. Dallas, Tex.
pp.
9-21.
33. Press, L., "Before the Altair: The History of Personal Computing."
Communications of the ACM, 1993. 36(9): pp. 27-33.
34. Reddy, D.R., "Speech Recognition by Machine: A Review," in Readings
in
Speech Recognition, A. Waibel and K.-F. Lee, Editors. 1990, Morgan
Kaufmann: San Mateo, CA. pp. 8-38.
35. Reddy, R., "To Dream the Possible Dream (Turing Award Lecture)."
Communications of the ACM, 1996. 39(5): pp. 105-112.
36. Robertson, G., Newell, A., and Ramakrishna, K., ZOG: A Man-Machine
Communication Philosophy. Carnegie Mellon University Technical Report,
August, 1977.
37. Ross, D. and Rodriguez, J. "Theoretical Foundations for the Computer-Aided
Design System," in AFIPS Spring Joint Computer Conference. 1963.
23. pp. 305-322.
38. Rudisill, M., et al., Human-Computer Interface Design: Success
Stories, Emerging Methods, and Real-World Context. 1996, San Francisco:
Morgan Kaufmann Publishers.
39. Scheifler, R.W. and Gettys, J., "The X Window System." ACM
Transactions on Graphics, 1986. 5(2): pp. 79-109.
40. Shneiderman, B., "Direct Manipulation: A Step Beyond Programming
Languages." IEEE Computer, 1983. 16(8): pp. 57-69.
41. Smith, D.C., Pygmalion: A Computer Program to Model and Stimulate
Creative Thought. 1977, Basel, Stuttgart: Birkhauser Verlag. PhD Thesis,
Stanford University Computer Science Department, 1975.
42. Smith, D.C., et al. "The Star User Interface: an Overview," in
Proceedings of the 1982 National Computer Conference. 1982. AFIPS.
pp.
515-528.
43. Stallman, R.M., Emacs: The Extensible, Customizable, Self-Documenting
Display Editor. MIT Artificial Intelligence Lab Report, Aug. 1979.
44. Sutherland, I.E. "SketchPad: A Man-Machine Graphical Communication System,"
in AFIPS Spring Joint Computer Conference. 1963. 23. pp.
329-346.
45. Swinehart, D., et al., "A Structural View of the Cedar Programming
Environment." ACM Transactions on Programming Languages and
Systems, 1986. 8(4): pp. 419-490.
46. Swinehart, D.C., Copilot: A Multiple Process Approach to Interactive
Programming Systems. PhD Thesis, Computer Science Department Stanford
University, 1974, SAIL Memo AIM-230 and CSD Report STAN-CS-74-412.
47. Teitelman, W., "A Display Oriented Programmer's Assistant."
International Journal of Man-Machine Studies, 1979. 11: pp.
157-187. Also Xerox PARC Technical Report CSL-77-3, Palo Alto, CA, March
8,
1977.
48. Tolliver, B., TVEdit. Stanford Time Sharing Memo Report, March, 1965.
49. van Dam, A., et al. "A Hypertext Editing System for the 360,"
in
Proceedings Conference in Computer Graphics. 1969. University of
Illinois.
50. van Dam, A. and Rice, D.E., "On-line Text Editing: A Survey."
Computing Surveys, 1971. 3(3): pp. 93-114.
51. Williams, G., "The Lisa Computer System." Byte Magazine,
1983. 8(2): pp. 33-50.
52. Williams, G., "The Apple Macintosh Computer." Byte, 1984.
9(2): pp. 30-54.