By Russell Beale, School of Computer Science, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK, and Andrew Wood, School of Computer Science, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK
Agents are becoming widespread in a variety of computer systems and domains, but often appear to have little in common with each other. In this paper we look at different agent systems and identify what a generic agent should be composed of. We also identify the characteristics of a task that make it worthy of an agent-based approach. We then discuss the implications of using agents for the interaction, that is, the notion of a balanced interface, and briefly look at how an agent-based approach assists in two very different application domains.
Keywords: agents, intelligent interfaces, groupwork, computer-supported cooperative work (CSCW).
Introduction
The term agent has become increasingly widespread over the past few years. Unfortunately, it has no consistent definition and means many different things to different people. This paper considers the multifarious uses that these disparate agents are put to and tries to identify what, if anything, they have in common. With this commonality in mind, we identify the general properties of an agent and consider what makes a task ‘agent worthy’.
Agent-based interaction has consequences, leading to the notion of a balanced interface. This leads on to a description of the design of a generic agent and a look at a couple of applications that are benefiting from being treated in an agent-based manner.
Classifying Agents
We can identify a number of categories that agents can be classified under, based on the functionality offered by the different types of agent.
By Maria da Graça Campos Pimentel, Computing Laboratory, University of Kent at Canterbury, Canterbury, Kent CT2 7NF, UK, and Department of Computer Science, ICMSC, Universidade de São Paulo, CP 668, São Carlos – SP, 13560–970, Brazil
The aim of the Previewing Information Operation (PIO) approach is to tackle some overhead factors imposed on the user-hypertext interaction. The purpose is to diminish cognitive overhead and disorientation problems by reducing some of their causes.
This paper describes an experiment carried out to evaluate the usability of the operations based on the PIO approach. Results from between-groups studies show that subjects' evaluation of the ease of use of the system and feeling of general orientation were affected by the presence of PIO operations. A further study has revealed that the PIO operations were predicted by standard navigational operations.
Keywords: hypertext, link selection, previewing information, evaluation.
Introduction
When referring to a user's interaction with a hypertext system, the metaphor generally used in the literature is that the user navigates or browses through the information by selecting those links which are interesting.
In such a scenario, an interactive session could be described as a sequence of link selections along with other navigational operations, as for instance backtracking and string searching. Each of the link selection operations performed is a very important unit of the navigation sequence the user goes through: without the link options and the navigation taking place by the user freely choosing among links, there is no hypertext.
Accordingly, the secondary navigational modes such as bookmarks, history lists, backtracking and search operations (Bernstein & Joyce, 1992), are probably as important as the link selection alternatives. Firstly, they promote the understanding of the embedded hypertext structure and the building of a cognitive map. Secondly, they help users orientate themselves when they are lost.
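The interaction model described above, a sequence of link selections interleaved with secondary operations such as backtracking and bookmarking, can be sketched as a small data structure. The class, method, and page names below are invented for illustration and are not taken from the PIO system:

```python
from dataclasses import dataclass, field

@dataclass
class HypertextSession:
    """Minimal model of a navigation session: link selections plus
    the secondary operations (backtracking, history, bookmarks)."""
    current: str
    history: list = field(default_factory=list)   # previously visited nodes
    bookmarks: set = field(default_factory=set)

    def follow_link(self, target: str) -> None:
        # A link selection pushes the current node onto the history stack.
        self.history.append(self.current)
        self.current = target

    def backtrack(self) -> str:
        # Backtracking pops the most recently visited node.
        if self.history:
            self.current = self.history.pop()
        return self.current

    def bookmark(self) -> None:
        self.bookmarks.add(self.current)

session = HypertextSession(current="home")
session.follow_link("chapter-1")
session.follow_link("chapter-2")
session.bookmark()
session.backtrack()
print(session.current)     # chapter-1
print(session.bookmarks)   # {'chapter-2'}
```

The history list is exactly the navigation sequence the text describes, which is why operations over it (backtracking, history display) come almost for free once link selection is modelled as a stack push.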
By Richard J King, Computer Power Group, 493 St Kilda Road, Melbourne, Victoria 3004, Australia, and Ying K Leung, Centre for Systems Methodologies, Swinburne University of Technology, PO Box 218, Hawthorn, Victoria 3122, Australia
Software engineering requires the production of hierarchically organised text-based documents, such as specifications, source code, and manuals. Folding editors are effective tools designed to support such activities for documents of any complexity. In a collaborative work environment, especially in a large software development team, folding editors offer many advantages over conventional editing systems. Surprisingly, their use has often been overlooked. This can be attributed partly to the generally inadequate design of their interfaces and partly to the lack of effective facilities to support collaborative work.
This paper identifies some of the typical difficulties associated with sharing files in a software engineering teamwork environment and discusses how folding editors can overcome these problems. It then describes the rationale for the design of a user interface for folding editors to support collaborative work.
Keywords: folding editors, computer-supported cooperative work (CSCW), groupware, user interface design, software engineering, CASE tools, collaborative writing.
Introduction
Software development is an area of human activity requiring that people work together to solve a problem. Typically, teams of people are involved in the specification, design, implementation and testing of the many different modules that comprise a system under construction. Even though most tasks are assigned to one individual at any one time, many documents and program modules are written and maintained by multiple authors over the lifetime of the software project. Software engineering requires the production of hierarchically organised text-based documents, examples of which are: specifications, schedules, designs, programs, test plans, on-line help and manuals.
The use of formal system models in interface development is often criticised for failing to recognise the ‘human’ nature of interaction. This paper describes an abstract system model of interaction, termed the template model, which is designed to address this criticism (Roast, 1993). The template model relies upon the identification of template abstractions: system components that have particular significance for the user. By employing template abstractions, system properties can be linked closely to user requirements. Using this framework we describe two categories of system constraint that express general usability requirements: output correctness and structural consistency.
The appropriateness of the perspective offered by the template model and the requirements expressed using it are illustrated in the analysis of an electronic mail system.
Keywords: formal methods, system modelling, interaction models, email.
Introduction
To ensure that interactive systems are used effectively, it is accepted that intended users should have a central role in system development. From the point of view of system specification and development, this concern is normally met by requiring that a system and its operating environment be modelled. In the case of HCI, users of a system are the principal environment with which the system interacts. Hence, ideally, developers should model users in conjunction with the system being designed (see, for example, Barnard & Harrison, 1989; Barnard & Harrison, 1992). In practice, efforts to model system users have not provided sufficient accuracy and generality to reliably support software development. Commonly, user models are used solely as evaluation mechanisms and are not in a form that relates to a design representation.
By Conn V Copas, LUTCHI Research Centre, Department of Computer Studies, Loughborough University of Technology, Loughborough, Leicestershire LE11 3TU, UK, and Ernest A Edmonds, LUTCHI Research Centre, Department of Computer Studies, Loughborough University of Technology, Loughborough, Leicestershire LE11 3TU, UK
Executable task models are considered to hold promise for increasing the impact of task analysis upon software engineering. However, this approach generates a number of integration issues when user interface management systems (UIMS) are employed at run-time. A number of models of integration are proposed and critiqued, employing a task scenario which involves the customisation of a geographic information system. Some current UIMS are shown to militate against this integration on account of the dominance which is accorded to the application. Some current task representations are shown to pose integration problems on account of both their procedural nature, and neglect of the application. One organising theme of the paper is that of executable task models being considered as particular cases of knowledge-based systems.
Keywords: task analysis, user interface management systems, knowledge-based systems, geographic information systems, rapid prototyping, knowledge representation, formal methods, user modelling.
Introduction
A truism of software engineering for interactive systems is that development should ideally proceed from a requirements analysis, and should incorporate user requirements. Task analysis (TA) is frequently regarded as the most formal of available methods for addressing user requirements. However, it is also generally acknowledged that the lack of integration between TA and software engineering techniques has reduced the design impact of TA.
Recently, one promising development is executable task models, which take some task specification and generate a series of interaction screens (de Haan & van der Veer, 1992; Wilson et al., 1993). This approach has the advantage of demonstrating user requirements to the software developer in a concrete and unambiguous fashion. It may be distinguished from conventional rapid prototyping by the fact that the interaction is generated in a more principled fashion.
The problem of abduction can be characterized as finding the best explanation of a set of data. In this chapter we focus on one type of abduction in which the best explanation is the most plausible combination of hypotheses that explains all the data. We then present several computational complexity results demonstrating that this type of abduction is intractable (NP-hard) in general. In particular, choosing between incompatible hypotheses, reasoning about cancellation effects among hypotheses, and satisfying the maximum plausibility requirement are major factors leading to intractability. We also identify a tractable, but restricted, class of abduction problems.
Introduction
What kinds of abduction problems can be solved efficiently? To answer this question, we must formalize the problem and then consider its computational complexity. However, it is not possible to prescribe a specific complexity threshold for all abduction problems. If the problem is “small,” then exponential time might be fast enough. If the problem is sufficiently large, then even O(n²) might be too slow. However, for the purposes of analysis, the traditional threshold of intractability, NP-hard, provides a rough measure of what problems are impractical (Garey & Johnson, 1979). Clearly, NP-hard problems will not scale up to larger, more complex domains.
Our approach is the following. First, we formally characterize abduction as a problem of finding the most plausible composite hypothesis that explains all the data. Then we consider several classes of problems of this type, the classes being differentiated by additional constraints on how hypotheses interact.
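The characterization above can be made concrete with a toy sketch. All hypotheses, plausibility scores, and incompatibilities below are invented, and taking the minimum of member plausibilities is just one possible cautious combination rule. The point is the shape of the search: an exhaustive solver must consider up to 2^n subsets of hypotheses, which is the intuition behind the NP-hardness results:

```python
from itertools import combinations

# Toy abduction instance (invented): each hypothesis explains a set of
# findings and has a plausibility; some pairs of hypotheses are incompatible.
explains = {"h1": {"d1", "d2"}, "h2": {"d2", "d3"}, "h3": {"d1", "d3"}, "h4": {"d3"}}
plausibility = {"h1": 0.9, "h2": 0.6, "h3": 0.7, "h4": 0.8}
incompatible = {frozenset({"h2", "h3"})}
data = {"d1", "d2", "d3"}

def best_composite(explains, plausibility, incompatible, data):
    """Exhaustive search over every subset of hypotheses: 2^n candidates,
    hence exponential time in the number of hypotheses."""
    best, best_score = None, -1.0
    hyps = list(explains)
    for r in range(1, len(hyps) + 1):
        for combo in combinations(hyps, r):
            if any(frozenset(p) in incompatible for p in combinations(combo, 2)):
                continue  # reject composites containing incompatible members
            covered = set().union(*(explains[h] for h in combo))
            if covered != data:
                continue  # a composite must explain all the data
            score = min(plausibility[h] for h in combo)  # cautious combination
            if score > best_score:
                best, best_score = set(combo), score
    return best, best_score

print(best_composite(explains, plausibility, incompatible, data))
# ({'h1', 'h4'}, 0.8)
```

Incompatibility handling, the coverage requirement, and the maximum-plausibility requirement each appear here as a separate filter, mirroring the three sources of intractability the text identifies.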
TIPS (Task Integrated Problem Solver) and PATHEX/LIVER were built using the PEIRCE tool. Both are examples of third-generation abduction machines. PEIRCE is not specialized for diagnosis and might be used as a shell for any abductive-assembly system. TIPS and PATHEX/LIVER, however, are diagnostic systems. They are complex systems that are similar in organization and capabilities. Despite their similarities, in the following descriptions we emphasize TIPS's ability to dynamically integrate multiple problem-solving methods and PATHEX/LIVER's proposed ability to combine structure-function models, used for causal reasoning, with compiled diagnostic knowledge. First we describe TIPS, and then PATHEX/LIVER.
TIPS
TIPS is a preliminary framework that implements the idea (described in chapter 4) of making alternative problem-solving methods available for a task. Method invocation depends on the problem state and the capabilities of the method, not on a preset sequence of invocations. TIPS presents a general mechanism for the dynamic integration of multiple methods in diagnosis.
One can describe diagnosis not only in terms of the overall goal (say, explaining symptoms in terms of malfunctions), but also in terms of the rich structure of subgoals that arise as part of diagnostic reasoning and in terms of the methods used to achieve those goals. We call such a description a task-structure analysis. A diagnostic system explicitly realized in these terms has a number of advantages:
a. Such a system has multiple approaches available for solving a problem. Thus the failure of one method does not mean failure for the whole problem solver.
b. Such a system can potentially use more kinds of knowledge.
c. Such a system can potentially solve a broader range of diagnostic problems.
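A minimal sketch of this kind of state-driven method selection follows. The methods, applicability tests, and symptom names are hypothetical illustrations, not TIPS's actual code; the essential point is that the controller consults the problem state on every cycle rather than following a preset sequence of invocations:

```python
# Each method declares when it is applicable; the controller picks a method
# from the current problem state, so failure of one method (no progress)
# simply routes control to another.

def compiled_knowledge(state):
    # Cheap method: only explains findings it has compiled answers for.
    state["explained"].update(state["symptoms"] & {"fever"})
    return state

def causal_model(state):
    # Expensive fallback: reasons out an explanation for any finding.
    state["explained"].update(state["symptoms"])
    return state

METHODS = [
    # (name, applicability test, method)
    ("compiled", lambda s: not s["hard_case"], compiled_knowledge),
    ("causal",   lambda s: True,               causal_model),
]

def diagnose(state):
    while state["symptoms"] - state["explained"]:
        method = next(m for name, test, m in METHODS if test(state))
        before = set(state["explained"])
        state = method(state)
        if state["explained"] == before:   # no progress: escalate
            state["hard_case"] = True
    return state

result = diagnose({"symptoms": {"fever", "rash"}, "explained": set(),
                   "hard_case": False})
print(result["explained"])   # all symptoms eventually explained
```

Here the compiled method handles the easy finding, fails to make progress on the remainder, and the controller dynamically switches to the causal method, which is advantage (a) above in miniature.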
Abduction can be described as “inference to the best explanation,” which includes the generation, criticism, and possible acceptance of explanatory hypotheses. What makes one explanatory hypothesis better than another are such considerations as explanatory power, plausibility, parsimony, and internal consistency. In general a hypothesis should be accepted only if it surpasses other explanations for the same data by a distinct margin and only if a thorough search was conducted for other plausible explanations.
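The "distinct margin" acceptance criterion can be sketched as a simple decision rule. The scores, hypothesis names, and the 0.2 threshold below are illustrative assumptions, not values from the text:

```python
def accept_best(scored_explanations, margin=0.2):
    """Accept the top explanation only if it beats the runner-up by a
    distinct margin; otherwise withhold acceptance (return None).
    The margin value is an illustrative assumption."""
    ranked = sorted(scored_explanations.items(),
                    key=lambda kv: kv[1], reverse=True)
    if len(ranked) == 1:
        return ranked[0][0]
    (best, s1), (_, s2) = ranked[0], ranked[1]
    return best if s1 - s2 >= margin else None

print(accept_best({"flu": 0.8, "cold": 0.5}))   # flu
print(accept_best({"flu": 0.8, "cold": 0.7}))   # None: too close to call
```

The second call illustrates the point in the text: being best is not enough; an explanation must clearly surpass its rivals before it is accepted.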
Abduction seems to be an especially appropriate and insightful way to describe the evidence-combining characteristics of a variety of cognitive and perceptual processes, such as diagnosis, scientific theory formation, comprehension of written and spoken language, visual object recognition, and inferring intentions from behavior. Thus abductive inference appears to be ubiquitous in cognition. Moreover, humans can often interpret images, understand sentences, form causal theories of everyday events, and so on, apparently making complex abductive inferences in fractions of a second.
Yet the abstract task of inferring the best explanation for a given set of data, as the task was characterized in chapter 7, has been proved to be computationally intractable under ordinary circumstances. Clearly there is a basic tension among the intractability of the abduction task, the ubiquity of abductive processes, and the rapidity with which humans seem to make abductive inferences. An adequate model of abduction must explain how cognitive agents can make complex abductive inferences routinely and rapidly.
In chapters 1 and 2, we describe abduction, design science, and the generic-task approach to building knowledge-based systems. In this chapter we examine the first two of our abductive systems, which we call here RED-1 and RED-2. RED-2 extended RED-1 in several dimensions, the most important being a more sophisticated strategy for assembling composite hypotheses. RED-2 was widely demonstrated and served as a paradigm for our subsequent work on abduction. The RED systems show that abduction can be described precisely enough to be programmed on a digital computer. Moreover, the RED systems do not use methods that are explicitly or recognizably deductive or probabilistic, and thus they demonstrate evidence-combining inference that apparently goes beyond those classical frameworks.
The red-cell antibody identification task
The RED systems are medical test-interpretation systems that operate in the knowledge domain of hospital blood banks. Our domain experts for these two RED systems were Patricia L. Strohm, MT (ASCP) SBB, and John Svirbely, MD. The blood bank is a medical laboratory responsible for providing safe blood and blood products for transfusion. The major activities required are A-B-O and Rh blood typing, red-cell antibody screening, red-cell antibody identification, and compatibility testing. The RED systems provide decision support for red-cell antibody identification.
Blood cells have chemical structures on their surfaces called red-cell antigens. When a donor's cells are transfused into a patient, these antigens can be recognized as foreign by the patient's immune system.
The field of artificial intelligence (AI) seems scattered and disunited with several competing paradigms. One major controversy is between proponents of symbolic AI (which represents information as discrete codes) and proponents of connectionism (which represents information as weighted connections between simple processing units in a network). Even within each of these approaches there is no clear orthodoxy. Another concern is whether AI is an engineering discipline or a science. This expresses an uncertainty about the basic nature of AI as well as an uncertainty about methodology. If AI is a science like physics, then an AI program is an experiment. As experiments, perhaps AI programs should be judged by the standards of experiments. They should be clearly helpful in confirming and falsifying theories, in determining specific constants, or in uncovering new facts. However, if AI is fundamentally engineering, AI programs are artifacts, technologies to be used. In this case, there is no such reason for programs to have clear confirming or falsifying relationships to theories. A result in AI would then be something practical, a technique that could be exported to a real-world domain and used. Thus, there is confusion about how results in AI should be judged, what the role of a program is, and what counts as progress in AI.
It has often been said that the plurality of approaches and standards in AI is the result of the extreme youth of AI as an intellectual discipline.
In chapter 7 abduction stumbled. Our powerful all-purpose inference pattern, maybe the basis for all knowledge from experience, was mathematically proved to be impossible (or anyway deeply impractical under ordinary circumstances). How can this be? Apparently we do make abductions all the time in ordinary life and science. Successfully. Explanation-seeking processes not only finish in reasonable time, they get right answers. Correct diagnosis is possible, even practical. (Or maybe skepticism is right after all, knowledge is impossible, correct diagnosis is an illusion.)
Maybe there is no deep question raised by those mathematical results. Perhaps all they are telling us is that we do not always get the right answer. Sometimes our best explanation is not the “true cause” (ways this can occur are systematically described in chapter 1). Sometimes we cannot find a best explanation in reasonable time, or we find one but do not have enough time to determine whether it is unique. Maybe knowledge is possible after all, but it is a kind of hit or miss affair. Yet if knowledge is possible, how can we succeed in making abductions without being defeated by incompatible hypotheses, cancellation effects, and too-close confidence values?
Whether or not knowledge is possible, we can build diagnostic systems able to achieve good performance in complex domains. This chapter presents two such systems and also includes a special section on how a kind of learning can be fruitfully treated as abduction. A fuller response to the complexity results is given in chapter 9.
This chapter develops the hypothesis that perception is abduction in layers and that understanding spoken language is a special case. These rather grand hypotheses are rich with implications: philosophical, technological, and physiological.
We present here a layered-abduction computational model of perception that unifies bottom-up and top-down processing in a single logical and information-processing framework. In this model the processes of interpretation are broken down into discrete layers where at each layer a best-explanation composite hypothesis is formed of the data presented by the layer or layers below, with the help of information from above. The formation of such a hypothesis is an abductive inference process, similar to diagnosis and scientific theory formation. The model treats perception as a kind of frozen or “compiled” deliberation. It applies in particular to speech recognition and understanding, and is a model both of how people process spoken language input, and of how a machine can be organized to do it.
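A highly simplified, bottom-up-only sketch of the layered idea follows. The speech features, phonemes, and mappings are invented, and the "information from above" that the model uses for top-down guidance is omitted entirely; each layer here forms a composite from the layer below by a trivial lookup standing in for a full abductive assembly:

```python
def explain_layer(evidence, knowledge):
    """Form a composite hypothesis for one layer: the set of higher-level
    hypotheses accounting for each item of evidence from the layer below
    (a trivial stand-in for best-explanation assembly at that layer)."""
    return {knowledge[e] for e in evidence if e in knowledge}

# Invented two-layer speech example: acoustic features -> phonemes -> word.
feature_to_phoneme = {"burst": "/k/", "low-F1": "/ae/", "closure": "/t/"}
phonemes_to_word = {frozenset({"/k/", "/ae/", "/t/"}): "cat"}

features = {"burst", "low-F1", "closure"}
phonemes = explain_layer(features, feature_to_phoneme)   # layer 1 composite
word = phonemes_to_word.get(frozenset(phonemes), "<unrecognized>")  # layer 2
print(word)   # cat
```

Even this toy shows the layering: the word hypothesis never sees the acoustic features directly; it explains the phoneme-level composite, which in turn explains the features.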
Perception is abduction in layers
There is a long tradition of belief in philosophy and psychology that perception relies on some form of inference (Kant, 1787; Helmholtz; Bruner, 1957; Rock, 1983; Gregory, 1987; Fodor, 1983). But this form of inference has typically been thought of as some form of deduction, simple recognition, or feature-based classification, not as abduction. In recent times researchers have occasionally proposed that perception, or at least language understanding, involves some form of abduction or explanation-based inference (Charniak & McDermott, 1985, p. 557; Charniak, 1986; Dasigi, 1988; Josephson, 1982, pp. 87-94; Fodor, 1983, pp. 88, 104; Hobbs, Stickel, Appelt and Martin, 1993).
In this book, we described six generations of abduction machines. Each generation's story was told by describing an abstract machine and experiments with realizations of the machine as actual computer programs. Each realization was approximate, partial, something less than a full realization of the abstract machine. Each realization was also more than the abstract machine: an actual chunk of software, a knowledge-based expert system constructed to do a job, with an abundance of insights, domain-specific solutions, and engineering shortcuts to get the job done. The abstract machines are simplified idealizations of actual software.
An abstract abduction machine is a design for a programming language for building knowledge systems. It is also a design for a tool for constructing these systems (a partial design, since a tool also has a programming environment).
Each of the six machines has a strategy for finding and accepting best explanations. Machine 6 inherits all the abilities of the earlier machines. Suppose that we connect it to abstract machines for the subtasks of hypothesis matching, hierarchical classification, and knowledge-directed data retrieval (see chapter 2). Then we conjoin abstract machines able to derive knowledge for various subtasks from certain forms of causal and structural knowledge (see chapters 5 and 8). Then we implement the whole abductive device as a program written for a problem-solving architecture, which is an abstract device of a different sort that provides control for generalized, flexible, goal-pursuing behavior (see chapter 4).
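One of the subtasks mentioned above, hierarchical classification, can be sketched as establish-refine: a category's subcategories are considered only if the category itself is established against the findings. The hierarchy, category names, and tests below are invented illustrations, not taken from the systems described in the book:

```python
# Establish-refine over an invented diagnostic hierarchy: refinement is
# pruned wherever a parent category fails to establish.

HIERARCHY = {
    "disease": ["infection", "toxin"],
    "infection": ["viral", "bacterial"],
}

def establish(category, findings):
    # Per-category plausibility tests (hypothetical, boolean for simplicity).
    tests = {
        "disease":   lambda f: bool(f),
        "infection": lambda f: "fever" in f,
        "toxin":     lambda f: "exposure" in f,
        "viral":     lambda f: "rash" in f,
        "bacterial": lambda f: "high-wbc" in f,
    }
    return tests[category](findings)

def classify(category, findings, results=None):
    results = [] if results is None else results
    if establish(category, findings):
        children = HIERARCHY.get(category, [])
        if not children:
            results.append(category)   # established leaf category
        for child in children:
            classify(child, findings, results)
    return results

print(classify("disease", {"fever", "rash"}))   # ['viral']
```

The established leaf categories are exactly the hypotheses that would then be handed to the abductive-assembly machinery, which is how this subtask plugs into the larger abduction machine.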