Many safety-critical applications rely upon complex interaction between computer systems and their users. When accidents occur, regulatory bodies are called upon to investigate the causes of user ‘error’ and system ‘failure’. Reports are drawn up so that the designers and operators of future systems will not repeat previous ‘mistakes’. These documents present the work of specialists who are drawn from many different technical disciplines: human factors; forensic investigation; engineering reconstruction; computer simulation; etc. The findings of these different experts are often separated into different sections. This creates a number of problems. Important evidence can be hidden within numerous appendices. The interaction between systems and users can be obscured by tortuous cross-referencing schemes. There are occasional temporal ambiguities and inconsistencies between the different analyses. This paper presents ways in which formal methods can be exploited to address these problems. Mathematical notations provide a means of representing and reasoning about the circumstances that lead to accidents in human-machine systems. Executable logics can also be used to simulate event sequences. These simulations might be shown to other analysts. They can be used to encourage agreement on the course of events prior to more detailed investigations.
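As an illustration (ours, not the paper's), an executable representation of an accident timeline can mechanically surface the temporal inconsistencies the abstract describes; the events, times, and ordering constraints below are invented:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    """A timestamped event reconstructed from an accident report."""
    time: float          # seconds from an agreed reference point
    actor: str           # e.g. 'operator', 'system'
    description: str

def check_ordering(events, constraints):
    """Check 'a must precede b' constraints against a reconstructed
    event sequence; returns the list of violated (a, b) pairs."""
    when = {e.description: e.time for e in events}
    return [(a, b) for a, b in constraints
            if a in when and b in when and when[a] >= when[b]]

timeline = [
    Event(0.0, "system", "alarm sounds"),
    Event(4.5, "operator", "silences alarm"),
    Event(2.0, "operator", "closes relief valve"),
]
violations = check_ordering(
    timeline, [("alarm sounds", "silences alarm"),
               ("silences alarm", "closes relief valve")])
# One analyst's account has the valve closed before the alarm was
# silenced, contradicting the assumed ordering.
```

Running the check flags the second constraint, the kind of inconsistency that would otherwise stay hidden across separate report sections.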
Accident reports are intended to ensure that the faults of previous systems are not propagated into future applications. For example, the Presidential investigation into the Three Mile Island accident led the United States' Nuclear Regulatory Commission (NRC) to adopt a policy of minimal intervention (Pew, Miller & Feehrer, 1981). Whenever possible operators should not be required to intervene in order to preserve the safety of their system.
By
Darryn Lavery, Department of Computing Science, University of Glasgow, 17 Lilybank Gardens, Hillhead, Glasgow G12 8QQ, UK,
Alistair Kilgour, Department of Computing and Electrical Engineering, Heriot-Watt University, Riccarton, Edinburgh EH14 4AS, UK,
Pete Sykes, Axon Networks Inc., Scottish Software Partners Centre, South Queensferry, Edinburgh EH30 9TG, UK
This paper describes a case study in the design and prototyping of a system to support shared use of application programs in an X Windows environment. The primary aim was to satisfy the requirements for remote observation at Royal Observatory Edinburgh. The starting point for the software development was an existing tool, ‘Shared-X’, developed to support window-sharing in X Windows. The paper describes the analysis of requirements for safe and efficient shared control in the remote observing situation. Previous work in groupware and application sharing is reviewed, and the architecture for the target system is related to existing taxonomies. The modifications that were necessary to the Shared-X tool are described, in particular an improved and extended mechanism for floor control, which was found to be an important factor in the acceptability and usability of the system in the target domain. However, limitations in the underlying X Windows architecture, together with the lack of access to the Shared-X source code, prevented full implementation of the specification for shared telepointers. In conclusion, the work highlights key issues in collaborative system design, including the importance of flexible and transparent mechanisms for floor control, the effective representation of status and control information in the user interface, the need for appropriate support mechanisms in the underlying window system (e.g. for multiple telepointers), and the increased complexity of evaluation with collaborative as opposed to single-user systems.
By
A Dutt, Department of Computer Science, Queen Mary and Westfield College, University of London, Mile End Road, London E1 4NS, UK,
H Johnson, Department of Computer Science, Queen Mary and Westfield College, University of London, Mile End Road, London E1 4NS, UK,
P Johnson, Department of Computer Science, Queen Mary and Westfield College, University of London, Mile End Road, London E1 4NS, UK
In HCI the aim of evaluation is to gather information about the usability or potential usability of a system. This paper is principally concerned with evaluating the effectiveness of two discount user inspection evaluation methods in identifying usability problems in a commercial recruitment database system with complex interface and system functionality. The two specific inspection methods investigated are heuristic evaluation and cognitive walkthrough. Several comparisons are made between the number, nature and severity of usability problems highlighted, the time needed to employ the methods and the ability to generate requirements for re-design. The results indicate that the methods are best considered as complementary and both should be employed in, but perhaps at different stages of, the design process.
The development of a successful interactive system depends on a formula of iterative design and early and continuous evaluation. However, industry's response to conducting evaluations has been patchy (Johnson & Johnson, 1989; Rosson, Maass & Kellogg, 1988). Many industrialists remark that the reasons for this are the cost of employing evaluation methods and the expertise necessary. Another reason is the cumbersome and complex nature of evaluation approaches, especially task analytic approaches such as TAG (Payne & Green, 1986), TAL (Reisner, 1981) and GOMS (Card, Moran & Newell, 1983). Additionally, evaluations are seen as providing information about what is unsatisfactory, but are less useful in generating information that can be used to produce more usable designs and fewer re-designs. Researchers, therefore, must assess the effect of using current evaluation methods within the industrial development process, and develop future methodologies and tools that require a limited training period and can be far more easily accommodated within the development process.
Analogy is an important factor in learning unfamiliar computer systems and problem solving when using those systems. Designers of computer systems can aid novice users by exploiting analogies and explicitly representing a model world with which the users are familiar as part of the user interface. Objects in the model world, and some operations that may be performed on them, are often analogous to those in the real world. We consider the qualitative reasoning approach to modelling people's knowledge of the real world and attempt to build qualitative models of objects and operations in the model world of a user interface. These models reveal features of existing systems that cannot be explained in terms of users' knowledge of the real world and suggest limits to direct engagement with on-screen objects.
Keywords: analogy, qualitative reasoning, direct engagement.
Introduction
Two principal paradigms have been employed in designing user interfaces to interactive computing systems: the conversation metaphor and the model world metaphor. In the conversation metaphor, users and systems engage in a dialogue, using languages of various complexities, about some unseen, but assumed, task domain. In the model world metaphor, the task domain is explicitly represented on-screen. Even with these direct manipulation interfaces, when users encounter them for the first time, as Carroll & Thomas (1982) suggest, by definition they do not have the knowledge required to successfully use the system. Instead, related knowledge is employed and is used as a metaphor for the material being acquired.
By
Simon Buckingham Shum, Human-Computer Interaction Group, Department of Psychology, University of York, Heslington, York YO1 5DD, UK,
Nick Hammond, Human-Computer Interaction Group, Department of Psychology, University of York, Heslington, York YO1 5DD, UK
The human-computer interaction (HCI) community is generating a large number of analytic approaches such as models of user cognition and user-centred design representations. However, their successful uptake by practitioners depends on how easily they can be understood, and how usable and useful they are. We present a framework which identifies four different ‘gulfs’ between HCI modelling and design techniques and their intended users. These gulfs are potential opportunities to support designers if techniques can be encapsulated in appropriate forms. Use of the gulfs framework is illustrated in relation to three very different strands of work:
i. representing HCI design spaces and design rationale;
ii. modelling user cognition; and
iii. modelling interactive system behaviour.
We summarise what is currently known about these gulfs, report empirical investigations showing how these gulfs can be ‘bridged’, and describe plans for further investigations. We conclude that it is desirable for practitioners' requirements to shape analytic approaches much earlier in their development than has been the case to date. The work reported in this paper illustrates some of the techniques which can be recruited to this end.
The human-computer interaction (HCI) community is generating a large number of analytic, usability-oriented approaches such as cognitive modelling and user-centred design representations. Three critical factors which will determine whether any of these approaches makes any impact on design practice are their intelligibility to practitioners, and their utility and usability.
By
Phil Gray, GIST (Glasgow Interactive Systems cenTre), Department of Computing Science, University of Glasgow, 17 Lilybank Gardens, Hillhead, Glasgow G12 8QQ, UK,
David England, GIST (Glasgow Interactive Systems cenTre), Department of Computing Science, University of Glasgow, 17 Lilybank Gardens, Hillhead, Glasgow G12 8QQ, UK,
Steve McGowan, GIST (Glasgow Interactive Systems cenTre), Department of Computing Science, University of Glasgow, 17 Lilybank Gardens, Hillhead, Glasgow G12 8QQ, UK
Time is one of the most vital properties of an interface from a user's point of view, and the TAU project aims to explore how temporal properties of user interfaces affect their usability. This paper describes the XUAN notation for the specification of temporal behaviour. This notation also provides the basis for a software tool allowing not only specification but also rapid instantiation and modification of (small) user interfaces with defined temporal behaviour. This in turn will support rapid experimentation with users in which temporal aspects of interfaces are varied. In this paper we describe the features we have added to the UAN in creating XUAN in order to express temporal properties of tasks.
Keywords: task description language, response time, specification.
Introduction
Time is one of the most vital properties of an interface from a user's point of view but an aspect of interaction that is neglected by HCI theorists and practitioners. Work by Teal & Rudnicky (1992) has shown that users change their interaction strategies in response to varying response delays. This change in strategy is not accounted for in Norman's theory of action (Norman, 1986) or GOMS (Card, Moran & Newell, 1983). The use of multimedia systems and CSCW systems will mean that people will be faced increasingly with time-varying interactions. Our work in the TAU project provides an experimental basis for exploring issues of time in complex interactions.
Informally we know that if mouse tracking is too slow, using the mouse becomes almost impossible.
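A crude stand-in (ours, not XUAN's syntax) for the kind of temporal annotation the abstract describes: each task action carries a permitted response-time interval, and observed latencies can be checked against it. Action names and bounds are invented:

```python
def within_bounds(observed_ms, spec):
    """Check observed system response times (milliseconds) against a
    temporal specification of the form {action: (min_ms, max_ms)}.
    Returns {action: True/False} for each specified action."""
    return {action: lo <= observed_ms[action] <= hi
            for action, (lo, hi) in spec.items()}

# Hypothetical bounds: menu feedback should be near-instant, a file
# save may legitimately take longer.
spec = {"menu-open": (0, 100), "file-save": (0, 2000)}
result = within_bounds({"menu-open": 250, "file-save": 400}, spec)
# A 250 ms menu delay violates its bound; the save is acceptable.
```

Such checks would let an experimenter systematically vary delays and verify which conditions a prototype actually delivered.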
By
Alan Conway, Hitachi Dublin Laboratory, O'Reilly Institute, Trinity College, Dublin 2, Ireland,
Tony Veale, Hitachi Dublin Laboratory, O'Reilly Institute, Trinity College, Dublin 2, Ireland
This paper describes a linguistically motivated approach to synthesising animated sign language. Our approach emphasises the importance of the internal, phonological structure of signs. Representing this level of structure results in greatly reduced lexicon size and more realistic signed output, a claim which is justified by reference to sign linguistics and by examples of sign language structure. We outline a representation scheme for phonological structure and a synthesis system which uses it to address these concerns.
Keywords: deaf sign language, phonological structure, human animation.
Introduction
The sign languages used by the deaf are a striking example of the diversity of human communication. On the surface, visual-gestural languages appear entirely dissimilar to verbal languages. It is a common misconception that signs are a form of pantomime and that they cannot convey the same range of abstract meanings as words. However, research has shown that this is entirely untrue (Klima & Bellugi, 1979). Sign languages are languages in the full sense of the word with all the expressive power of verbal languages.
In this paper we present an approach to the synthesis of animated sign language which focuses on the internal structure of signs. Several authors have discussed the translation of verbal language into sign language and the visual presentation of sign language via 3D graphics (Holden & Roy, 1992; Lee & Kunii, 1992; Patten & Hartigan, 1993). However, these authors seem to regard the sign as a unit which requires no further analysis. Sign linguists tell us that signs have internal structure and are built from more fundamental units. We argue that representing this level of structure in a synthesis system is essential for the synthesis of native sign languages.
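To make the idea of sub-sign structure concrete (a sketch of ours, not the authors' representation scheme), a sign can be factored into phonological parameters in the Stokoe tradition of handshape, location and movement; the lexicon then shares primitives rather than storing one hand-animated clip per sign. The glosses and parameter values below are illustrative:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SignPhonology:
    """Phonological parameters of a sign: the hand configuration,
    where it is articulated, and how it moves."""
    handshape: str   # e.g. 'flat-B'
    location: str    # e.g. 'chin', 'chest', 'neutral-space'
    movement: str    # e.g. 'arc-forward', 'circular'

# A lexicon built from shared primitives: N signs need only the union
# of their parameter values, not N separate animation sequences.
LEXICON = {
    "THANK-YOU": SignPhonology("flat-B", "chin", "arc-forward"),
    "PLEASE":    SignPhonology("flat-B", "chest", "circular"),
}

def animation_plan(gloss):
    """Return the parameter triple an animation back-end would realise."""
    s = LEXICON[gloss]
    return (s.handshape, s.location, s.movement)
```

Because both entries reuse the same handshape primitive, the animation back-end needs only one 'flat-B' hand model, which is the lexicon-size saving the abstract claims.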
By
Angel R Puerta, Medical Computer Science Group, Knowledge Systems Laboratory, Departments of Medicine and Computer Science, Stanford University, Stanford, CA 94305-5479, USA,
Henrik Eriksson, Medical Computer Science Group, Knowledge Systems Laboratory, Departments of Medicine and Computer Science, Stanford University, Stanford, CA 94305-5479, USA.,
John H Gennari, Medical Computer Science Group, Knowledge Systems Laboratory, Departments of Medicine and Computer Science, Stanford University, Stanford, CA 94305-5479, USA.,
Mark A Musen, Medical Computer Science Group, Knowledge Systems Laboratory, Departments of Medicine and Computer Science, Stanford University, Stanford, CA 94305-5479, USA.
Researchers in the area of automated design of user interfaces have shown that the layout of an interface can, in many cases, be generated from the application's data model using an intelligent program that applies design rules. The specification of interface behavior, however, has not been automated in the same manner, and is mostly a programmatic task. Mecano is a model-based user-interface development environment that extends the notion of automating interface design from data models. Mecano uses a domain model — a high-level knowledge representation that augments significantly the expressiveness of a data model — to generate automatically both the static layout and the dynamic behavior of an interface. Mecano has been applied successfully to completely generate the layout and the dynamic behavior of relatively large and complex, domain-specific, form- and graph-based interfaces for medical applications and several other domains.
One of the areas that is receiving increased interest from researchers is that of model-based user interface development. This emerging technology is centered around the premise that a declarative interface model can be used as a basis for building interface development environments. The model-based approach facilitates the automation of the design and implementation of user interfaces.
In addition, researchers have shown that an application's data model can be used effectively to generate the static layout of an application's interface (de Baar, Foley & Mullet, 1992; Janssen, Weisbecker & Ziegler, 1993). However, data models have not been applied to the generation of interface behavior specifications.
In this paper, we present Mecano, a model-based interface development environment that extends the concept of generating interface specifications from data models.
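A minimal sketch (ours, not Mecano's actual machinery) of the underlying idea of layout generation by design rules: a declarative model of typed fields is mapped to widgets by a rule table. The model, field names, and widget vocabulary are invented:

```python
# Hypothetical domain model: field name -> (type, constraints).
domain_model = {
    "patient_name": ("string",  {}),
    "age":          ("integer", {"min": 0, "max": 120}),
    "diabetic":     ("boolean", {}),
}

def generate_layout(model):
    """Apply simple design rules mapping data types to widgets,
    in the spirit of layout generation from a declarative model.
    Returns an ordered list of (field, widget) pairs."""
    rules = {"string": "text-field",
             "integer": "spin-box",
             "boolean": "check-box"}
    return [(name, rules[ftype]) for name, (ftype, _) in model.items()]
```

A richer domain model, as the abstract argues, could carry enough semantics (dependencies between fields, legal value ranges) to drive dynamic behavior as well, not just this static mapping.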
By
Richard M Young, MRC Applied Psychology Unit, 15 Chaucer Road, Cambridge CB2 2EF, UK,
Gregory D Abowd, College of Computing, Georgia Institute of Technology, 801 Atlantic Drive, Atlanta, GA 30332-0280, USA
Successful interface design respects constraints stemming from a number of diverse domains analysed by different disciplines. Modelling techniques exist within the individual disciplines, but there is a need for ways to weave together different techniques to provide an integrated analysis of interface design issues from multiple perspectives. We illustrate the relations and interplay between six different modelling techniques — two for system modelling, two for user modelling, one for interaction modelling, and one for design modelling — applied to a shared design scenario concerning the provision of an Undo facility for a collaborative editor. The resulting multi-perspective analysis provides a depth of understanding and a breadth of scope beyond what can be achieved by any one technique alone.
Keywords: user modelling, system modelling, design rationale, interaction analysis, multi-disciplinary analysis, scenario analysis, undo, multi-user, editing.
Introduction
Successful interface design requires the satisfaction of a diverse set of constraints stemming from different domains. One of the factors making interface design so challenging is the number and diversity of those domains, and the different disciplines that study each. Relevant domains include that of the computer, within which are the disciplines of computer science and software engineering; of the user, studied by disciplines such as psychology; of work, the topic of sociology and anthropology and other disciplines; and of design itself.
Modelling techniques that can contribute to interface design exist in each of these domains. However, any one of these approaches tells only part of the story, and covers only some of the issues. There is a pressing need to combine modelling techniques derived from different disciplines and reflecting different perspectives in order to provide analyses with the scope and the depth adequate for guiding design.
Systems analysts have a number of techniques at their disposal when capturing or generating the requirements for a system. One of the most commonly used is the interview. Interviewing users and other members of the client organisation is often fraught with difficulty: social and communicational barriers may prove difficult to overcome, especially if the level of contact between developers and users is kept to a minimum. Poor interview technique, ignorance of incorrect implicit/unspoken assumptions and the misinterpretation of interview data can lead to incorrect requirements or incomplete specifications. This paper describes a technique for developing a collaborative visual representation of information gathered during the interview process which enhances understanding between participants and enriches the information gathered. The method combines the manipulation of graphical objects and informal discussions which are collected via cassette or video recording. Graphical representation objects — representing the groups, procedures, tools and products that exist in the interviewee's experience — provide a standard, structured means of visual expression. Recording of walkthroughs and discussions of the results keeps note-making to a minimum and helps to reduce the social distance between the participants. A description of the four main stages of the technique is presented, along with supporting material outlining reasons why the technique was developed and describing how it has been used on organisational case studies. The paper concludes with an assessment of the effectiveness of the technique and suggests how it could be tailored to support requirements capture for system design.
Keywords: problems in communication, interviews, visual thinking, visual description, system design.
Introduction
This paper describes an interview technique that was developed in an attempt to solve the problems generated by the possibility of communication barriers between analysts and domain experts.
By
Howell O Istance, Imaging and Displays Research Group, Department of Computing Science, De Montfort University, Leicester, UK,
Peter A Howarth, Vision and Lighting Research Group, Department of Human Sciences, Loughborough University of Technology, Loughborough, Leicestershire LE11 3TU, UK
This paper examines the issues surrounding the use of an eyetracker, providing eye-movement data, as a general purpose input device for graphical user interfaces. Interacting with computers via eye-movements is not in itself new; however, previous work in the area has been directed towards interaction with purpose-built software which can take into account device limitations such as accuracy. This work investigates how one can interact with unmodified graphical interface software which normally requires mouse and/or keyboard input. The results of three experiments are discussed which have compared performance between the eyetracker and the mouse, and between different ways of emulating mouse button presses using the eyetracker data. The experiments as a whole consider a range of tasks from simple button presses to the more complex and demanding operations of selecting text, and they indicate the feasibility of using the eyes to control computers.
Benefits of Controlling Graphical User Interfaces by Eye
Overview
The use of the eyes as a primary means of controlling input is appealing for a number of reasons.
First, it can be considered as a ‘natural’ mode of input and by-passes the need for learned hand-eye co-ordination to effect operations such as object selection. The user simply looks at a screen object they wish to select rather than using a hand-held pointing device, such as a mouse, to position a screen cursor over the object.
Second, one can expect performance benefits. If a user need only look at an object to acquire it, rather than having additionally to control and position a cursor by hand, speed of selection will be increased.
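One common way of emulating a mouse button press from gaze data is dwell time: a fixation held on an object beyond a threshold is treated as a click. The sketch below is ours (the paper compares several emulation schemes, whose details are not reproduced here), with invented object names and thresholds:

```python
def dwell_click(fixations, dwell_threshold=0.5):
    """Emulate mouse presses from gaze data: a 'click' is generated on
    any object fixated continuously for at least dwell_threshold
    seconds. fixations: list of (object_id, duration_seconds); an
    object_id of None means the gaze fell on no selectable object."""
    return [obj for obj, duration in fixations
            if obj is not None and duration >= dwell_threshold]

# Only the long fixation on the OK button counts as a selection.
clicks = dwell_click([("OK-button", 0.7), (None, 0.2), ("Cancel", 0.3)])
```

The threshold embodies the classic trade-off: too short and every glance selects (the 'Midas touch' problem), too long and the speed advantage over the mouse disappears.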
By
Andrew F Monk, Department of Psychology, University of York, Heslington, York YO1 5DD, UK,
Martin B Curry, Sowerby Research Centre, British Aerospace plc, FPC 267, Filton, Bristol BS12 7QW, UK
A description of the high level structure of a user interface is an important part of any system specification. Currently the most common way of thinking about and recording this part of the design is through story boards and verbal descriptions; these may be imprecise and are difficult to evaluate. Action Simulator allows a designer to build simple models of the high level behaviour of the user interface. The models are easy to read and can be executed to give a dynamic view of the design. This makes it possible to ‘run through’ the actions needed to complete the users' work. A procedure for characterising the users' work that is suitable for this purpose is also sketched out in the paper. Action Simulator consists of an Excel spreadsheet and associated macros and is publicly available.
Keywords: dialogue model, task model, work objective, decomposition, scenario, system behaviour, specification, spreadsheet.
The Need for Abstract Dialogue Models
The design of software, like any other undertaking in engineering, involves the construction of a specification that includes models of various kinds. The reason engineers construct a blueprint or specification before building the artefact itself is that the latter is difficult to change, and so between gathering requirements and implementation a specification is built that is easy to change. Analysis and evaluation of the specification enables improvements to be made before implementation begins. Also like other engineering projects, software is extremely complex and so difficult to reason about. For this reason engineers build models that concentrate on some aspect of the design and abstract across others.
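An abstract dialogue model of the kind described can be sketched as an executable state-transition table (our toy example, not Action Simulator's spreadsheet format; the cash-machine states and actions are invented):

```python
# A toy dialogue model as a state-transition table:
# (current state, user action) -> next state.
TRANSITIONS = {
    ("idle", "insert card"):       "awaiting PIN",
    ("awaiting PIN", "enter PIN"): "menu",
    ("menu", "withdraw cash"):     "idle",
}

def run_through(actions, state="idle"):
    """Execute a sequence of user actions against the dialogue model,
    returning the list of states visited; raises KeyError if an
    action is illegal in the current state."""
    trace = [state]
    for action in actions:
        state = TRANSITIONS[(state, action)]
        trace.append(state)
    return trace
```

Executing the model against a scenario's action sequence gives exactly the ‘run through’ of the users' work that the abstract describes, while the table itself stays easy to read and change.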
Assessment contributes to the educational process of students but only a small fraction of the full potential is typically realized. The primary impediment to realizing greater benefit is the infeasibility of implementing more effective alternatives in the resource-limited settings typical of modern educational environments. We are developing a system architecture that exploits hypermedia technology to overcome serious limitations of traditional assessment methods.
The architecture addresses the design of cost-effective confidence-measuring and performance-testing assessment vehicles using hypermedia-based student-system interaction. In this paper we describe the conceptual foundation, its embodiment in prototypes, and preliminary results from classroom tests.
The educational experience can be enhanced by using assessment methods as techniques for evaluation and as guides for instructors and administrators in curriculum design and teaching methods (Airasian, 1991). Unfortunately, standardized assessment methods do not discriminate between finer-grained states of knowledge nor do they adequately reflect the ability of students to apply what they've learned. In addition, since the assessment instrument significantly influences instruction, alternative assessment methods are needed to better address fundamental educational goals. Past attempts to address these problems and goals on a large scale using traditional technology have proven infeasible primarily due to the high costs of providing adequate, standardized materials and controlled, responsive environments. In this paper we present alternatives that exploit the characteristics of modern hypermedia-capable computer systems to achieve the desired goals in a cost-effective way.
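To illustrate what a confidence-measuring assessment can capture that a right/wrong score cannot (our illustrative scoring rule, not the authors'), each answer carries a stated confidence and confident errors are penalised more heavily than hesitant ones:

```python
def confidence_score(correct, confidence):
    """Score one answer under a confidence-weighted rule: a correct
    answer earns its stated confidence, an incorrect one loses twice
    its stated confidence. confidence is in [0, 1]. The weights here
    are illustrative, not a calibrated scoring scheme."""
    return confidence if correct else -2 * confidence

# Three answers: confidently right, confidently wrong, hesitantly right.
answers = [(True, 0.9), (False, 0.8), (True, 0.5)]
total = sum(confidence_score(c, p) for c, p in answers)
```

Two students with identical right/wrong tallies can thus receive different scores, revealing the finer-grained states of knowledge (e.g. confidently held misconceptions) that standardized methods miss.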
By
Mark Addison, Department of Psychology, University of Stirling, Stirling FK9 4LA, UK,
Harold Thimbleby, Department of Psychology, University of Stirling, Stirling FK9 4LA, UK
A user manual provides instructions that, if the user follows them, achieve objectives determined by the manual designers. A manual may therefore be viewed rather like a computer program, as pre-planned instructions. Accordingly, software engineering and its methods may be applied mutatis mutandis to the manual and its design process.
We consider structured programming methods, and show that some difficulties with user interfaces may be attributed to manuals being ‘unstructured’. Since there are many programming metrics, and very many styles of manuals for user interfaces, this paper is concerned with justifying the approach and showing how insightful it is.
Keywords: manuals, hypertext, multimedia, finite state machines, flowgraphs.
Introduction
There is much evidence that improved manuals improve user acceptance (Carroll, 1990). There is also the argument that improving manuals by changing the system documented by them leads to improved systems (Thimbleby, 1990). Thus manuals are an essential part of the system life cycle: from requirements and design, through usability, to acceptance.
The importance of manuals certainly extends beyond their use in training and reference. In some sense (whether this is explicit or implicit) a user must ‘know’ what they are doing to use a system, and the manual is a representation of what they could know. It is unlikely that a user could in practice verbalise their knowledge as a system manual — it may not even be necessary to be able to do so if the system feedback is sufficient, cf. (Payne, 1991); however it is certain that, for many users, the manual is the prime input to their initial system knowledge.
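The manual-as-program view can be made executable (a sketch of ours, using an invented device and manual): model the device as a finite state machine and run a manual's instruction sequence against it, checking that the pre-planned steps actually reach the promised goal.

```python
# A device modelled as a finite state machine:
# (current state, user action) -> next state.
DEVICE = {
    ("off", "press power"):     "standby",
    ("standby", "press play"):  "playing",
    ("playing", "press stop"):  "standby",
}

def follow_manual(instructions, state="off"):
    """Execute a manual's instructions like a program against the
    device model. Returns the final state, or None if some instruction
    cannot be carried out in the state the user has reached."""
    for step in instructions:
        key = (state, step)
        if key not in DEVICE:
            return None     # the manual is wrong, or 'unstructured'
        state = DEVICE[key]
    return state

MANUAL = {"play a tape": ["press power", "press play"]}
```

A manual whose procedures all terminate in their advertised states is, in this view, a correct program; one that fails mid-sequence exhibits exactly the kind of defect that structured-programming analysis of manuals is intended to expose.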