By
Jean Vanderdonckt, Institut d'Informatique, Facultés Universitaires Notre-Dame de la Paix, Rue Grandgagnage 21, B-5000 Namur, Belgium,
Missiri Ouedraogo, Ministère de la Fonction Publique et de la Modernisation de l'Administration, 03 BP 7006, Ouagadougou 03, Burkina Faso,
Banta Ygueitengar, c/o D. Ngarboldoum, Meridian Biad, B.P. 87 N'Djamena, Republic of Chad
The development of graphical user interfaces for interactive applications is subject to a series of well-known problems, several of which belong to the domain of visual design. These typically include the problem of aesthetically placing interaction objects (IOs) according to principles applied in placement strategies. This paper first reviews the problem of IO placement and presents the rationale behind the most significant placement strategies found today. It then compares six such strategies along several dimensions and mathematical relationships with respect to three points of view: the designer's, the human factors expert's, and the user's.
The problem of placement concerns the spatial position of interaction objects (IOs) such as edit boxes, radio buttons, list boxes, … with respect to visual elements in a screen layout. Effective placement can be achieved through a particular placement strategy involving a number of visual techniques such as proximity, alignment, separate reference, centering, and conformity. For instance, Galitz suggests a placement strategy where IOs should be placed according to their nature:
“All elements on a screen should be located in a unique and consistent position. These elements are: title, screen identifier, screen body (including caption, data, section headings, completion aids, prompting), status or instructional messages, error messages, command field or area.” (Galitz, 1992)
In this paper, placement is defined as the description of a composite IO (e.g. a form container widget) that displays the IOs to be placed relative to one another. Composite IOs generally allow their child IOs to resize themselves as the composite resizes.
By
Ben Shneiderman, Department of Computer Science and Institute for Systems Research, University of Maryland, College Park, MD 20742, USA,
Catherine Plaisant, Human-Computer Interaction Laboratory, Center for Automation Research, University of Maryland, College Park, MD 20742, USA
Personal computer users typically manage hundreds of directories and thousands of files with hierarchically structured file managers, plus archaic cluttered-desktop window managers and iconic representations of applications. These users must deal with the annoying overhead of window housekeeping and the greater burden of mapping their organizational roles onto an unnecessarily rigid hierarchy. An alternative approach, the Personal Role Manager (PRM), is presented to structure the screen layout and the interface tools to better match the multiple roles that individuals have in an organization. Each role has a vision statement, schedule, hierarchy of tasks, set of people, and collection of documents.
Keywords: personal role manager, desktop metaphor, graphic user interface, coordination, computer-supported cooperative work (CSCW).
Introduction
The transition from the first generation command line interfaces (such as DOS 3 or UNIX) to second generation point-and-click graphical user interfaces (GUIs) was accompanied by an important metaphorical shift. The older systems required users to understand computer-domain concepts such as executable binary software (the .EXE or .COM files), file naming rules, and hierarchical directories. The designers of second generation GUIs presented users with more meaningful metaphors and supported direct manipulation interactions (Shneiderman, 1982). The graphical user interface offered a desktop with applications represented as icons, documents organized into folders, and even a trashcan as an affordance for the delete action. This visual representation of the world of action made objects and actions visible, permitted rapid, incremental and reversible actions, and emphasized pointing and clicking instead of keyboarding.
The current third generation approach emphasizes a “docu-centric” design (Microsoft's Object Linking and Embedding or Apple's OpenDoc Architecture), unified suites of software, and “information at your fingertips” through hypertext linking.
By
François Bodart, Institut d'Informatique, Facultés Universitaires Notre-Dame de la Paix, Rue Grandgagnage 21, B-5000 Namur, Belgium,
Jean Vanderdonckt, Institut d'Informatique, Facultés Universitaires Notre-Dame de la Paix, Rue Grandgagnage 21, B-5000 Namur, Belgium
This volume contains the full papers and invited papers from the 1994 conference of the British HCI Group (a specialist group of the British Computer Society). It is a partial record of a more extensive conference that includes tutorials, panels, short papers, posters and demonstrations.
Human-Computer Interaction has been consolidating over the last few years. As a result, many recent conferences have tended to be bland, with neither real high points nor real low points. The invited papers are forward-looking and avoid blandness in that way. There are, however, many forward-looking full papers as well. Moreover, the 1994 conference will be remembered as one that moved HCI forward in some areas, rather than merely documenting its current state within the established areas.
Proceedings Structure
The proceedings begin with the invited papers. This apart, the usual compromises of programme structure have not been carried forward into these proceedings. The full papers in this volume have been reorganised into broad topics. As ever in HCI, these topics are not disjoint, and some papers could easily have gone in one section as well as another. Readers interested in specific topics are thus advised to read through the full contents to avoid missing papers that may be of interest to them.
The second group of papers covers all stages of interactive systems development (properly iterated of course) from requirements capture and specification, through to evaluation. There are also papers on specific design issues, including application areas such as computer-assisted learning.
The third group of papers is the largest, and covers design knowledge at all levels of abstraction in interactive systems, from low level presentation to high level metaphors.
By
Michel Beaudouin-Lafon, Laboratoire de Recherche en Informatique, CNRS URA 410, Bâtiment 490, Université de Paris-Sud, 91405 Orsay Cedex, France
The embodiment of computers in desktop workstations has had a tremendous impact on the field of HCI. Now that mice and graphics displays are everywhere, the workstation defines the frontier between the computer world and the real world. We spend a lot of time and energy transferring information between those two worlds. This could be reduced by better integrating the real world with the computer world. This article describes two approaches to this integration: Mediaspaces, which allow people to communicate through an audio, video and computer environment, and Augmented Reality, which adds computational power to real world objects. The argument is made that the success of these approaches lies in their ability to build on fundamental human skills, namely the ability to communicate with other people and the ability to interact with objects in the real world.
Over the last decade, computers have evolved from mainframes to networks of personal computers and workstations. The range of users and uses of computers has expanded dramatically. Today, a computer is perceived more as an appliance than as a ‘machine’. A key aspect of this evolution has been, and still is, the development of the field of Human-Computer Interaction. HCI has complemented, and sometimes driven, the evolution of the technology to make computer systems easier to use by a wider variety of users in a larger number of contexts.
As most researchers in HCI know, this picture is a bit idyllic and much still needs to be done to improve the scope and usability of computers.
By
Jonathan Hassell, Human-Computer Interaction Group, Department of Computer Science, University of York, Heslington, York YO1 5DD, UK,
Michael Harrison, Human-Computer Interaction Group, Department of Computer Science, University of York, Heslington, York YO1 5DD, UK
Automated macro systems which apply re-use to a user's input are a possible solution to the problems of customising an interactive system to the needs of the user. More useful than simple re-use would be a system that identifies general patterns in users' behaviour and encapsulates this knowledge for application in similar, yet unfamiliar, circumstances. This process we term generalisation. This paper outlines some issues involved in controlling generalisation and in the presentation of, and interaction with, these macros, and specifies applicable heuristics. Finally, the architecture for building an adaptive agent to perform the whole process is presented, with an example prototype operating on UNIX command-line interaction.
One example of demonstrational interfaces (Myers, 1991) — automated macro creation — has previously been shown by Greenberg (1990) and Crow & Smith (1992) to be a promising area for adaptive system research. Crow & Smith extended the simple re-use of previous command entries (the history/tool-based systems of Greenberg) from a single line to an inferred macro. Macros are a concept with which users are already familiar as a means of automation.
Re-use, however, is limited to situations corresponding exactly to those which have occurred before. Whilst it has been shown that these situations happen reasonably frequently (Greenberg & Witten, 1993a; Greenberg & Witten, 1993b) for single line re-use (a result which has not been investigated for multi-line re-use), both single-line and multi-line macro re-use break down in situations which differ slightly from the original. In both cases the re-use system is of no help.
By
Francesmary Modugno, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA 15213, USA,
T R G Green, MRC Applied Psychology Unit, 15 Chaucer Road, Cambridge CB2 2EF, UK,
Brad A Myers, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA 15213, USA
We present a new visual programming language and environment that serves as a form of feedback and representation in a Programming by Demonstration system. The language differs from existing visual languages because it explicitly represents data objects and implicitly represents operations by changes in data objects. The system was designed to provide non-programmers with programming support for common, repetitive tasks and incorporates some principles of cognition to assist these users in learning to use it. With this in mind, we analyzed the language and its editor along cognitive dimensions. The assessment provided insight into both strengths and weaknesses of the system, prompting a number of design changes. This demonstrates how useful such an analysis can be.
A visual shell (or desktop) is a direct manipulation interface to a file system. Examples include the Apple Macintosh desktop and the Xerox Star. Although such systems are easy to use, most do not support end-user programming. Pursuit is a visual shell aimed at providing programming capabilities in a way that is consistent with the direct manipulation paradigm.
To enable users to construct programs, Pursuit contains a Programming by Demonstration (PBD) system (Cypher, 1993). In a PBD system, users execute actions on real data and the underlying system attempts to construct a program (Myers, 1991). Such systems have limitations: feedback is often difficult to understand, disruptive or non-existent; and programs often have no representation for users to examine or edit. Pursuit addresses these problems by presenting the evolving program in a visual language while it is being constructed.
By
Ben Anderson, LUTCHI Research Centre, Department of Computer Studies, Loughborough University of Technology, Loughborough, Leicestershire LE11 3TU, UK,
Michael Smyth, LUTCHI Research Centre, Department of Computer Studies, Loughborough University of Technology, Loughborough, Leicestershire LE11 3TU, UK,
Roger P Knott, LUTCHI Research Centre, Department of Computer Studies, Loughborough University of Technology, Loughborough, Leicestershire LE11 3TU, UK,
Marius Bergan, LUTCHI Research Centre, Department of Computer Studies, Loughborough University of Technology, Loughborough, Leicestershire LE11 3TU, UK,
Julie Bergan, LUTCHI Research Centre, Department of Computer Studies, Loughborough University of Technology, Loughborough, Leicestershire LE11 3TU, UK,
James L Alty, LUTCHI Research Centre, Department of Computer Studies, Loughborough University of Technology, Loughborough, Leicestershire LE11 3TU, UK
By
Steve Benford, Department of Computer Science, The University of Nottingham, Nottingham NG7 2RD, UK,
Lennart E Fahlén, Swedish Institute of Computer Science (SICS), Box 1263, S-16428, Kista, Stockholm, Sweden
Synchronisation is a key issue for collaborative user interfaces. An examination of current approaches, in particular the concept of WYSIWIS (What You See Is What I See) and the use of video as a communication medium, highlights a number of issues in this area, including the lack of a common spatial frame of reference, the lack of appropriate embodiment of users, and inflexible and rigid communication channels between users. The paper then proposes a new framework for designing collaborative user interfaces which addresses these issues. This framework is based on the notion of a common spatial frame within which embodied users are free to move autonomously, being casually aware of each other's activities. Embodiment is considered in terms of both individual viewpoints and actionpoints (e.g. telepointers) within the display space. We propose that, in many cases, synchronisation of the spatial frame is necessary, but synchronisation of viewpoints and actionpoints may actually inhibit collaboration. We finish by describing some prototype systems which provide one (of possibly many) examples of how our framework might be employed; in this case to create shared cooperative virtual environments.
Collaborative user interfaces, particularly shared workspaces, have been the focus of considerable research effort in recent years. Resulting systems include multi-user editors and drawing tools (Ellis, Gibbs & Rein, 1991; Foster & Stefik, 1986; Greenberg & Bohnet, 1991), shared screen systems and more specialised design surfaces (Ishii & Kobayashi, 1992). There has also been a growth in the use of multi-media technology to support communication and awareness between the users of such systems including conferencing systems (Sarin & Greif, 1985) and media-spaces (Gaver et al., 1992; Root, 1988).
By
Kee Yong Lim, School of Mechanical and Production Engineering, Nanyang Technological University, Nanyang Avenue, Singapore 2263,
John Long, Ergonomics and HCI Unit, University College London, 26 Bedford Way, London WC1H 0AP, UK
The paper illustrates the use of structured notations to support the specification of various aspects of a system design, such as organisational hierarchies, conceptual-level tasks, domain semantics, human-computer interactions, etc. In contrast with formal or algebraic notations, graphical structured notations are communicated to users more easily. Thus, user feedback elicitation and design validation would be better supported throughout system development. It is expected that the structured notations illustrated in the paper could be used more widely, for two reasons: they support more specific task specifications, and they have now been incorporated into a structured human factors method. In addition, off-the-shelf computer-based support for the notations is emerging, e.g. PDF™.
Keywords: graphical structured notations, human factors specifications, structured human factors method.
General Requirements of a Notation for Human Factors Specification
Generally, an appropriate human factors notation should fulfil two prerequisites: it should rectify the inadequacies of existing human factors notations, and it should accommodate additional specification demands arising from wider human factors involvement in system development. In particular, a notation should satisfy the following requirements:
a. Specificity. Current human factors specifications have been criticised for being insufficiently specific. This situation is aggravated further by the increasingly complex and sophisticated systems being designed. In response to these demands, human factors methods should be enhanced to include more powerful notations to support tighter design specification. For instance, in safety critical system development, task specifications should be detailed enough to support design simulation, workload assessment and probabilistic human reliability assessment. Thus, Brooks' (1991) emphasis on task specifications that reveal the hierarchical structure and operational control of the user's task is especially pertinent. Hence, notational constructs should satisfy the demands of such design specifications;
By
John Dowell, Ergonomics and HCI Unit, University College London, 26 Bedford Way, London WC1H 0AP, UK,
Ian Salter, Ergonomics and HCI Unit, University College London, 26 Bedford Way, London WC1H 0AP, UK,
Solaleh Zekrullahi, Ergonomics and HCI Unit, University College London, 26 Bedford Way, London WC1H 0AP, UK
The demand for a more effective Air Traffic Management system, and the central role of the controller in that system, has focused attention on the design of the controller's interface. This paper presents an analysis of the task domain of Air Traffic Management. It demonstrates with a simulated system how the domain analysis can be used to model the controller's performance in the traffic management task. The use of this model in rationalising interface design issues is then illustrated. The analysis supports the general case for explicitly capturing the task domain in interface design.
The Need for Analysis of the Air Traffic Management Task Domain
The Operational Issue in Air Traffic Management
Increases in the volume of air traffic have consistently exceeded all predictions and now demand a more effective Air Traffic Management (ATM) system. Although the amount of air traffic over the UK has increased threefold in the last three decades, the public evidence points only to its increasing safety (NATS, 1988). Rather, the most pressing concern of the Civil Aviation Authority (CAA) is now the forecast 70% growth in demand on UK airspace over the next decade. This forecast increase is extremely problematic since the UK system is already considered to be operating near capacity, and bottlenecks are publicly visible (Jackson, 1993; John & Macalister, 1991). The same problem faces the US authorities where, even in the 1980s, delays and congestion were estimated to cost between 1 and 1.5 billion dollars per year (Kanafani, 1986).
If safety must not be compromised by further increases in air traffic volume, neither must ‘expedition’.
Many safety-critical applications rely upon complex interaction between computer systems and their users. When accidents occur, regulatory bodies are called upon to investigate the causes of user ‘error’ and system ‘failure’. Reports are drawn up so that the designers and operators of future systems will not repeat previous ‘mistakes’. These documents present the work of specialists who are drawn from many different technical disciplines: human factors; forensic investigation; engineering reconstruction; computer simulation; etc. The findings of these different experts are often separated into different sections. This creates a number of problems. Important evidence can be hidden within numerous appendices. The interaction between systems and users can be obscured by tortuous cross-referencing schemes. There are occasional temporal ambiguities and inconsistencies between the different analyses. This paper presents ways in which formal methods can be exploited to address these problems. Mathematical notations provide means of representing and reasoning about the circumstances that lead to accidents in human-machine systems. Executable logics can also be used to simulate event sequences. These simulations might be shown to other analysts. They can be used to encourage agreement on the course of events prior to more detailed investigations.
Accident reports are intended to ensure that the faults of previous systems are not propagated into future applications. For example, the Presidential investigation into the Three Mile Island accident led the United States' Nuclear Regulatory Commission (NRC) to adopt a policy of minimal intervention (Pew, Miller & Feehrer, 1981). Whenever possible operators should not be required to intervene in order to preserve the safety of their system.
By
Darryn Lavery, Department of Computing Science, University of Glasgow, 17 Lilybank Gardens, Hillhead, Glasgow G12 8QQ, UK,
Alistair Kilgour, Department of Computing and Electrical Engineering, Heriot-Watt University, Riccarton, Edinburgh EH14 4AS, UK,
Pete Sykes, Axon Networks Inc., Scottish Software Partners Centre, South Queensferry, Edinburgh EH30 9TG, UK
This paper describes a case study in the design and prototyping of a system to support shared use of application programs in an X Windows environment. The primary aim was to satisfy the requirements for remote observation at Royal Observatory Edinburgh. The starting point for the software development was an existing tool ‘Shared-X’, developed to support window-sharing in X Windows. The paper describes the analysis of requirements for safe and efficient shared control in the remote observing situation. Previous work in groupware and application sharing is reviewed, and the architecture for the target system is related to existing taxonomies. The modifications that were necessary to the Shared-X tool are described, in particular an improved and extended mechanism for floor control, which was found to be an important factor in the acceptability and usability of the system in the target domain. However, limitations in the underlying X Windows architecture, together with lack of access to the Shared-X source code, prevented full implementation of the specification for shared telepointers. In conclusion, the work highlights key issues in collaborative system design, including the importance of flexible and transparent mechanisms for floor control, the effective representation of status and control information in the user interface, the need for appropriate support mechanisms in the underlying window system (e.g. for multiple telepointers), and the increased complexity of evaluation with collaborative as opposed to single-user systems.
By
A Dutt, Department of Computer Science, Queen Mary and Westfield College, University of London, Mile End Road, London E1 4NS, UK,
H Johnson, Department of Computer Science, Queen Mary and Westfield College, University of London, Mile End Road, London E1 4NS, UK,
P Johnson, Department of Computer Science, Queen Mary and Westfield College, University of London, Mile End Road, London E1 4NS, UK
In HCI the aim of evaluation is to gather information about the usability or potential usability of a system. This paper is principally concerned with evaluating the effectiveness of two discount user inspection evaluation methods in identifying usability problems in a commercial recruitment database system with a complex interface and system functionality. The two specific inspection methods investigated are heuristic evaluation and cognitive walkthrough. Several comparisons are made between the number, nature and severity of usability problems highlighted, the time needed to employ the methods and the ability to generate requirements for re-design. The results indicate that the methods are best considered as complementary and both should be employed in, but perhaps at different stages of, the design process.
The development of a successful interactive system depends on a formula of iterative design and early, continuous evaluation. However, industry's response to conducting evaluations has been patchy (Johnson & Johnson, 1989; Rosson, Maass & Kellogg, 1988). Many industrialists remark that the reasons for this are the cost of employing evaluation methods and the expertise they require. Another reason is the cumbersome and complex nature of evaluation approaches, especially task analytic approaches such as TAG (Payne & Green, 1986), TAL (Reisner, 1981) and GOMS (Card, Moran & Newell, 1983). Additionally, evaluations are seen as providing information about what is unsatisfactory, but are less useful in generating information that facilitates more usable designs and fewer re-designs. Researchers, therefore, must assess the effect of using current evaluation methods within the industrial development process, and develop future methodologies and tools that require a limited training period and can be far more easily accommodated within the development process.
Analogy is an important factor in learning unfamiliar computer systems and problem solving when using those systems. Designers of computer systems can aid novice users by exploiting analogies and explicitly representing a model world with which the users are familiar as part of the user interface. Objects in the model world, and some operations that may be performed on them, are often analogous to those in the real world. We consider the qualitative reasoning approach to modelling people's knowledge of the real world and attempt to build qualitative models of objects and operations in the model world of a user interface. These models reveal features of existing systems that cannot be explained in terms of users' knowledge of the real world and suggest limits to direct engagement with on-screen objects.
Keywords: analogy, qualitative reasoning, direct engagement.
Introduction
Two principal paradigms have been employed in designing user interfaces to interactive computing systems: the conversation metaphor and the model world metaphor. In the conversation metaphor, users and systems engage in a dialogue, using languages of various complexities, about some unseen, but assumed, task domain. In the model world metaphor, the task domain is explicitly represented on-screen. Even with these direct manipulation interfaces, when users encounter them for the first time, as Carroll & Thomas (1982) suggest, by definition they do not have the knowledge required to use the system successfully. Instead, related knowledge is employed as a metaphor for the material being acquired.
By
Simon Buckingham Shum, Human-Computer Interaction Group, Department of Psychology, University of York, Heslington, York YO1 5DD, UK,
Nick Hammond, Human-Computer Interaction Group, Department of Psychology, University of York, Heslington, York YO1 5DD, UK
The human-computer interaction (HCI) community is generating a large number of analytic approaches such as models of user cognition and user-centred design representations. However, their successful uptake by practitioners depends on how easily they can be understood, and how usable and useful they are. We present a framework which identifies four different ‘gulfs’ between HCI modelling and design techniques and their intended users. These gulfs are potential opportunities to support designers if techniques can be encapsulated in appropriate forms. Use of the gulfs framework is illustrated in relation to three very different strands of work:
i. representing HCI design spaces and design rationale;
ii. modelling user cognition; and
iii. modelling interactive system behaviour.
We summarise what is currently known about these gulfs, report empirical investigations showing how these gulfs can be ‘bridged’, and describe plans for further investigations. We conclude that it is desirable for practitioners' requirements to shape analytic approaches much earlier in their development than has been the case to date. The work reported in this paper illustrates some of the techniques which can be recruited to this end.
The human-computer interaction (HCI) community is generating a large number of analytic, usability-oriented approaches such as cognitive modelling and user-centred design representations. Three critical factors which will determine whether any of these approaches makes any impact on design practice are their intelligibility to practitioners, and their utility and usability.