By
X. Zhang, Knowledge Engineering Research Group, School of Computing & Mathematical Sciences, Oxford Polytechnic,
J. L. Nealon, Knowledge Engineering Research Group, School of Computing & Mathematical Sciences, Oxford Polytechnic,
R. Lindsay, Knowledge Engineering Research Group, School of Computing & Mathematical Sciences, Oxford Polytechnic
Abstract: Current intelligent user interfaces have two limitations: (i) they are domain specific and mainly built for existing database management systems; (ii) they are specific to the target systems for which they are constructed. However, user goals, which motivate interactions with a computer, are likely to be complicated and to require the use of multiple target systems in various domains. In this paper, we discuss the development of intelligent user interfaces which are not subject to the limitations identified. An architecture is proposed, the major function of which is the dynamic integration and intelligent use of multiple target systems relevant to a user's goals. Other important features of the proposed system include its theoretical orientation around relevance relationships, mental models and speech acts, and the introduction of “system experts” and a “goal manager”. A prototype Intelligent Multifunctional User Interface (IMUI) is briefly described, which indicates that the proposed architecture is viable, the methodology is promising, and the theoretical ideas introduced are worthy of further investigation.
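As a rough illustration of the kind of architecture described above, the following sketch (hypothetical, with invented class and task names, not taken from the paper) shows a goal manager decomposing a user goal into sub-tasks and dispatching each to whichever system expert declares itself relevant.

```python
# Hypothetical sketch (not the paper's implementation): a "goal manager"
# decomposes a user goal into sub-tasks and routes each to the "system
# expert" that declares itself relevant to that kind of task.

class SystemExpert:
    """Wraps one target system and states which task types it can handle."""
    def __init__(self, name, handles):
        self.name = name
        self.handles = set(handles)

    def relevant(self, task):
        return task["type"] in self.handles

    def execute(self, task):
        # In a real interface this would drive the underlying target system.
        return f"{self.name} handled {task['type']}({task['detail']})"

class GoalManager:
    """Splits a goal into sub-tasks and integrates the experts' results."""
    def __init__(self, experts):
        self.experts = experts

    def pursue(self, goal):
        results = []
        for task in goal["subtasks"]:
            expert = next((e for e in self.experts if e.relevant(task)), None)
            if expert is None:
                results.append(f"no expert for {task['type']}")
            else:
                results.append(expert.execute(task))
        return results

# Example: a goal that spans a database system and a statistics package.
experts = [SystemExpert("db_expert", ["query"]),
           SystemExpert("stats_expert", ["summarise"])]
goal = {"subtasks": [{"type": "query", "detail": "sales 1992"},
                     {"type": "summarise", "detail": "by region"}]}
print(GoalManager(experts).pursue(goal))
```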
INTRODUCTION
Computer-based systems are coming to play an ever more important part in our society, and as they do so, they become increasingly complicated and difficult to use effectively. As a consequence, the need to develop flexible and versatile intelligent interfaces has become more crucial than ever.
What would an ideal interface look like, and how can such a system be designed and implemented? Most investigators would agree that it should behave like an intelligent human assistant who has expert knowledge both of user characteristics and requirements, and of target system(s).
By
P. Maher, Department of Mathematics and Computer Science, University of Missouri - St. Louis, St. Louis, MO 63121 USA.,
O. Traynor, FB 3 Informatik und Mathematik, Universität Bremen, Bremen 33, Germany.
This paper describes and illustrates the use of a methodology suitable for the formal development of expert systems. It addresses the problems of verification and validation of expert systems in a realistic way, though the methods are not advocated as a general tool for expert system development. The framework described allows for both the specification of Knowledge and the specification of the Inference methods which provide the basis for deduction. A flexible and extensible environment for the development and testing of specific types of expert system is presented. Various tools and results are shown to be useful in determining properties of both the knowledge base and the inference system when these are developed within the proposed framework.
The framework is based on exploitation of the transformational model of software development in combination with techniques from algebraic specification.
INTRODUCTION
The development of expert systems within a formal development framework (see [Krieg-Brückner and Hoffmann 91]) can be seen as a significant advance in expert system technology. The benefits accrued from such an approach are substantial. In particular, the following are notable: a formal foundation is provided for reasoning about properties of the knowledge base and inference system; inductive and deductive methods are available to help both in the construction of the expert system and as tools for analysis of the knowledge bases; and a well-defined language, with well-defined semantics, provides the basis for specifying both the expert system and the associated knowledge bases.
One of the principal difficulties in developing a distributed problem solver is how to distribute the reasoning task between the agents cooperating to find a solution.
We will propose the distributed logic programming language DLP as a vehicle for the design and implementation of distributed knowledge based systems. The language DLP combines logic programming with active objects.
We will show how object oriented modeling may be applied for the specification and implementation of a distributed diagnostic (medical) expert system. The example illustrates how the diagnostic process is distributed over the agents participating in the diagnosis according to the structure of the knowledge of that particular domain.
Logic programming offers a declarative way to solve problems in Artificial Intelligence. However, when implementing large (possibly distributed) systems, traditional software engineering problems such as modularization and the distribution of data and control recur. Cf. [Subrahmanyam, 1985].
Due to its declarative nature, logic programming has become popular for implementing knowledge-based systems. However, lacking adequate modularization facilities, logic programming languages such as Prolog fall short in providing the mechanisms necessary to specify the distribution of data and control.
Object oriented modeling
To tackle these problems, we suggest in this paper embedding the logic programming paradigm in an object oriented approach.
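The following is a minimal sketch, in Python rather than DLP, of this object oriented distribution of diagnostic knowledge; the agent names and rules are invented for illustration only.

```python
# Illustrative sketch only (DLP itself combines Prolog-style clauses with
# active objects; this Python analogue just shows the idea of distributing
# the diagnostic knowledge over specialist agent objects).

class SpecialistAgent:
    """An agent owning the rules for one sub-domain of the diagnosis."""
    def __init__(self, name, rules):
        self.name = name
        self.rules = rules          # {frozenset(symptoms): diagnosis}

    def diagnose(self, symptoms):
        for required, diagnosis in self.rules.items():
            if required <= symptoms:
                return f"{self.name}: {diagnosis}"
        return None

class Coordinator:
    """Routes a case to every specialist and collects their conclusions."""
    def __init__(self, agents):
        self.agents = agents

    def diagnose(self, symptoms):
        symptoms = set(symptoms)
        findings = [a.diagnose(symptoms) for a in self.agents]
        return [f for f in findings if f is not None]

cardio = SpecialistAgent("cardiology", {frozenset({"chest pain", "dyspnea"}): "angina?"})
pulmo = SpecialistAgent("pulmonology", {frozenset({"cough", "fever"}): "pneumonia?"})
print(Coordinator([cardio, pulmo]).diagnose({"chest pain", "dyspnea", "cough"}))
```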
By
J. W. Brahan, Institute for Information Technology, National Research Council, Ottawa, Canada K1A 0R6,
B. Farley, Institute for Information Technology, National Research Council, Ottawa, Canada K1A 0R6,
R. A. Orchard, Institute for Information Technology, National Research Council, Ottawa, Canada K1A 0R6,
A. Parent, Institute for Information Technology, National Research Council, Ottawa, Canada K1A 0R6,
C. S. Phan, Institute for Information Technology, National Research Council, Ottawa, Canada K1A 0R6
Most expert systems perform a task on behalf of the user. The task usually involves gathering and analyzing data, and recommending or initiating the appropriate action. However, expert systems can also play an important role in showing the user how to perform a task. In this role, the expert system provides support, becoming progressively less important as its knowledge base is transferred to the user. This category includes Help Systems, Coaching Systems, and Tutorial Systems. In this paper, we discuss the development of an Intelligent Advisor combining the three functions in a system to assist the user in acquiring and refining the knowledge required to carry out a design task. The combined system provides a means of introducing a training facility as an integral part of the work environment. The primary goal of our project is the creation of a system in which the generic advisor components are identified along with the methodology required to adapt them to specific applications. The conceptual modelling phase of database design was chosen as the application domain to develop the system and to demonstrate feasibility. An initial prototype has been implemented, which illustrates the operation of the system in each of the three modes as applied to database modelling. The technology is currently being extended to a second application domain.
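Purely to illustrate the idea of combining the three functions, the sketch below (hypothetical, not the Intelligent Advisor's design) dispatches on an interaction mode chosen from how the advice was triggered.

```python
# Hypothetical sketch (not ERMA's actual design): one way to combine help,
# coaching and tutoring behind a single advisor interface, selecting the
# mode from how the interaction was initiated.

class Advisor:
    def advise(self, mode, context):
        if mode == "help":      # user asked an explicit question
            return f"help: explanation of '{context}'"
        if mode == "coach":     # advisor noticed a questionable design step
            return f"coach: consider revising '{context}'"
        if mode == "tutor":     # advisor drives a structured exercise
            return f"tutor: next exercise after '{context}'"
        raise ValueError(f"unknown mode: {mode}")

advisor = Advisor()
print(advisor.advise("coach", "entity ORDER has no key"))
```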
Introduction
ERMA (Entity-Relationship Modelling Advisor) is a knowledge-based system that serves as a consultant to the user of a computer-based design tool, providing advice as required.
By
S. Craw, Department of Computing Science, University of Aberdeen, Aberdeen AB9 2UE,
D. Sleeman, Department of Computing Science, University of Aberdeen, Aberdeen AB9 2UE,
N. Graner, Department of Computing Science, University of Aberdeen, Aberdeen AB9 2UE,
M. Rissakis, Department of Computing Science, University of Aberdeen, Aberdeen AB9 2UE,
S. Sharma, Department of Computing Science, University of Aberdeen, Aberdeen AB9 2UE
The Machine Learning Toolbox (MLT), an Esprit project (P2154), provides an integrated toolbox of ten Machine Learning (ML) algorithms. One distinct component of the toolbox is Consultant, an advice-giving expert system, which assists a domain expert to choose and use a suitable algorithm for his learning problem. The University of Aberdeen has been responsible for the design and implementation of Consultant.
Consultant's knowledge and domain are unusual in several respects. Its knowledge represents the integrated expertise of ten algorithm developers, whose algorithms offer a range of ML techniques, although some use fairly similar approaches. The lack of an agreed ML terminology was the initial impetus for an extensive, associated help system. From an MLT user's point of view, an ML beginner requires significant assistance with terminology and techniques, and can benefit from having access to previous, successful applications of ML to similar problems; a more experienced user of ML, in contrast, does not wish constant supervision. This paper describes Consultant, discusses the methods used to achieve the required flexibility of use, and compares Consultant with more standard expert system applications, noting both similarities and distinguishing features.
INTRODUCTION
The Machine Learning Toolbox (MLT), an Esprit project (P2154), provides an integrated toolbox of ten Machine Learning (ML) algorithms. One distinct component of the toolbox is Consultant, an advice-giving expert system. It provides domain experts with assistance and guidance on the selection and use of tools from the toolbox, but it is specifically aimed at experts who are not familiar with ML and its design has focused on their needs.
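As a rough illustration of advice-giving of this kind, the toy sketch below maps a few problem characteristics to candidate algorithms; the tool names and selection criteria are invented and do not reflect Consultant's actual knowledge base.

```python
# Illustrative sketch only: Consultant's real knowledge base integrates the
# expertise of ten algorithm developers; here, a toy rule table simply maps
# a few problem characteristics to hypothetical candidate algorithms.

def recommend(problem):
    """Return candidate ML tools ranked by how many preferences they match."""
    # (tool, preferred problem characteristics) -- names are invented.
    catalogue = [
        ("rule_inducer",    {"data": "examples", "output": "rules"}),
        ("decision_tree",   {"data": "examples", "output": "classifier"}),
        ("concept_cluster", {"data": "unlabelled", "output": "taxonomy"}),
    ]
    scored = []
    for tool, prefs in catalogue:
        score = sum(1 for k, v in prefs.items() if problem.get(k) == v)
        scored.append((score, tool))
    scored.sort(reverse=True)
    return [tool for score, tool in scored if score > 0]

print(recommend({"data": "examples", "output": "rules"}))
```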
By
B. R. Gaines, Knowledge Science Institute, University of Calgary Calgary, Alberta, Canada T2N 1N4.,
M. L. G. Shaw, Knowledge Science Institute, University of Calgary Calgary, Alberta, Canada T2N 1N4.
An intelligent learning data base (ILDB) system is an integrated learning system which implements automatic knowledge acquisition from data bases by providing formalisms for 1) translating standard data base information into a form suitable for use by its induction engines, 2) using induction techniques to produce knowledge from data bases, and 3) interpreting the knowledge produced efficiently to solve users' problems. Although a lot of work on knowledge acquisition from data bases has been done, existing systems are still far from meeting the requirements for building practical learning systems that learn from conventional data bases. A crucial requirement is more efficient learning algorithms, as realistic data bases are usually fairly large. Based on KEshell, dBASE3 and the low-order polynomial induction algorithm HCV, this paper presents a knowledge engineering shell, KEshell2, which implements the three phases of automatic knowledge acquisition from data bases in an integral way.
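A minimal sketch of the three-phase idea follows; it assumes nothing about KEshell2's internals, and the toy inducer stands in for a real induction engine such as HCV.

```python
# A minimal sketch, assuming nothing about KEshell2's internals: the three
# ILDB phases as a pipeline -- translate database tuples into examples,
# induce rules from the examples, then apply the rules to a user's query.

def translate(rows, target):
    """Phase 1: turn raw tuples into (attributes, class) training examples."""
    return [({k: v for k, v in row.items() if k != target}, row[target])
            for row in rows]

def induce(examples):
    """Phase 2: a toy inducer -- record which attribute values imply which class."""
    rules = {}
    for attrs, cls in examples:
        for item in attrs.items():
            rules.setdefault(item, set()).add(cls)
    return {item: classes.pop() for item, classes in rules.items()
            if len(classes) == 1}               # keep only unambiguous rules

def interpret(rules, case):
    """Phase 3: use the induced rules to answer a query about a new case."""
    votes = [cls for item, cls in rules.items() if item in case.items()]
    return max(set(votes), key=votes.count) if votes else "unknown"

rows = [{"outlook": "sunny", "windy": "no", "play": "yes"},
        {"outlook": "rain",  "windy": "yes", "play": "no"}]
rules = induce(translate(rows, target="play"))
print(interpret(rules, {"outlook": "sunny", "windy": "no"}))
```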
INTRODUCTION
Over the past twenty years data base research has evolved technologies that are now widely used in almost every computing and scientific field. However, many new advanced applications including computer-aided design (CAD), computer-aided manufacturing (CAM), computer-aided software engineering (CASE), image processing, and office automation (OA) have revealed that traditional data base management systems (DBMSs) are inadequate, especially in the following cases [Wu 90b]:
Conventional data base technology has laid particular stress on dealing with large amounts of persistent and highly structured data efficiently and using transactions for concurrency control and recovery.
By
D. R. Pugh, Department of Computer Science, University of Wales Aberystwyth, Dyfed SY23 3DB, United Kingdom,
C. J. Price, Department of Computer Science, University of Wales Aberystwyth, Dyfed SY23 3DB, United Kingdom
This paper describes work on the construction of a configurable knowledge acquisition tool, Jigsaw. Unlike automated knowledge acquisition programs such as MORE [Kahn, 1988], MOLE [Eshelman, 1988], and OPAL [Musen, 1989], each of which automates elicitation for just one problem solving method, Jigsaw's knowledge acquisition strategy can be altered to match different problem solving methods.
The work is based upon eliciting knowledge for problem solvers made up from different combinations of generic tasks (as defined in [Chandrasekaran, 1986] and [Chandrasekaran, 1988]). Each combination of generic tasks defines the functionality of a different problem solving method. However, the eventual aim of this work is that it will be possible to adapt it to a range of different KADS [Schreiber et al., 1987] interpretation models, and thus it will become part of a complete knowledge acquisition methodology.
The paper outlines the requirements for such a knowledge acquisition tool and details the distributed architecture which allows the tool, Jigsaw, to achieve the required flexibility to elicit knowledge for such problem solvers. An important part of this flexibility is the way in which Jigsaw can be configured to match different types of problem solver. This is described in some detail.
Jigsaw has been used to reproduce the MDX2 [Sticklen, 1987] knowledge base, which was initially constructed by using manual knowledge acquisition techniques. The paper gives a description of how Jigsaw elicited this knowledge.
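The sketch below illustrates, with invented module names rather than Jigsaw's, how an elicitation strategy might be assembled from the generic-task combination of the target problem solver.

```python
# A sketch under stated assumptions (module names are invented, not Jigsaw's):
# a configurable elicitation tool keeps one elicitation module per generic
# task and is configured with the task combination of the target problem
# solver, so changing the configuration changes the acquisition strategy.

def elicit_hierarchy():
    return {"questions": ["What are the main fault classes?",
                          "How does each class specialise into sub-classes?"]}

def elicit_matching_knowledge():
    return {"questions": ["Which observations confirm each hypothesis?"]}

ELICITATION_MODULES = {
    "hierarchical_classification": elicit_hierarchy,
    "hypothesis_matching": elicit_matching_knowledge,
}

def configure_tool(generic_tasks):
    """Build an acquisition strategy from the problem solver's task combination."""
    return [ELICITATION_MODULES[t] for t in generic_tasks]

def run(strategy):
    return [module() for module in strategy]

# A diagnostic problem solver combining classification and matching tasks.
strategy = configure_tool(["hierarchical_classification", "hypothesis_matching"])
for step in run(strategy):
    print(step["questions"])
```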
The Lacedæmonians [advanced] slowly and to the music of
many flute-players […], meant to make them advance evenly,
stepping in time, without breaking their order, as large
armies are apt to do in the moment of engaging.
—Thucydides
The routing algorithms in Chapter 5 can be converted into efficient asynchronous algorithms by replacing the global clock with a synchronization scheme based on message passing. We also demonstrate that asynchronous FSRA has low sensitivity to variations in link and processor speeds.
Introduction
The assumption of synchronism often greatly simplifies the design of algorithms, be they sequential or parallel. Many computation models — for example, the RAM (“Random Access Machine”) [5] in the sequential setting and the PRAM in the parallel setting — assume the existence of a global clock. But this assumption becomes less desirable as the number of processors increases. For one thing, a global clock introduces a single point of failure. A global clock also restrains each processor's degree of autonomy and renders the machine unable to exploit differences in running speed [42, 192], limiting the overall speed to, so to speak, that of the “slowest” component instead of the “average” one, thus wasting cycles. Tight synchronization also limits the size of the parallel computer, since it takes time to distribute the clock signal to the whole system [316].
Proceeding in epochs, our routing schemes in Chapter 5 assume synchronism. In fact, the very definition of ECS assumes a global clock to synchronize epochs. We show in this chapter that with synchronization done via message passing, ECSs can be made asynchronous without loss of efficiency and without global control. Much work has been done in this area; see, for example, [31, 32, 33].
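A minimal sketch of such message-passing synchronization follows (it is not the book's construction): each processor announces the completion of epoch t to its neighbours and enters epoch t+1 only after hearing the same announcement from every neighbour.

```python
# A minimal sketch, not the book's construction: the global clock is replaced
# by message passing -- a processor finishes its work for epoch t, tells its
# neighbours, and only enters epoch t+1 once it has heard "done(t)" from every
# neighbour. Threads and queues stand in for processors and links.

import threading, queue

class Processor(threading.Thread):
    def __init__(self, pid, epochs):
        super().__init__()
        self.pid, self.epochs = pid, epochs
        self.inbox = queue.Queue()
        self.neighbours = []          # filled in after construction
        self.log = []

    def run(self):
        pending = {}                  # epoch -> number of done-messages heard early
        for t in range(self.epochs):
            self.log.append(f"p{self.pid} works in epoch {t}")
            for n in self.neighbours:             # announce completion of t
                n.inbox.put(t)
            heard = pending.pop(t, 0)
            while heard < len(self.neighbours):   # wait for neighbours' t
                epoch = self.inbox.get()
                if epoch == t:
                    heard += 1
                else:                             # a faster neighbour is ahead
                    pending[epoch] = pending.get(epoch, 0) + 1

# A 3-processor ring; no processor can drift more than one epoch ahead of a neighbour.
procs = [Processor(i, epochs=3) for i in range(3)]
for i, p in enumerate(procs):
    p.neighbours = [procs[(i - 1) % 3], procs[(i + 1) % 3]]
for p in procs:
    p.start()
for p in procs:
    p.join()
print("\n".join(procs[0].log))
```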
It has long been recognized that computer design utilizing more than one processor is one promising approach — some say the only approach — toward more powerful computing machines. Once one adopts this view, several issues immediately emerge: how to connect processors and memories, how to make processors communicate efficiently, how to tolerate faults, how to exploit the redundancy inherent in multiprocessors to perform on-line maintenance and repair, and so forth.
This book confronts the above-mentioned issues with two key insights. The first is that there exist error-correcting codes that generate redundancy which is efficient in terms of the number of bits; such redundancy is used to correct errors and erasures caused by component failures and resource limitations (such as limited buffer size). This insight comes from Michael Rabin. The second insight, due to Leslie Valiant, is the criticality of randomization in achieving communication efficiency.
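The dispersal idea can be illustrated with a toy Reed-Solomon-style scheme (not Rabin's exact construction): data is encoded into n pieces, each 1/m the size of the original, such that any m pieces suffice for reconstruction.

```python
# A toy illustration of the information dispersal idea (not Rabin's exact
# construction): a byte string is split into n pieces, each 1/m the size of
# the original, such that any m pieces suffice to reconstruct it. Arithmetic
# is over GF(257), so encoded symbols are ints in 0..256 rather than bytes.

P = 257  # prime modulus

def eval_lagrange(points, x):
    """Evaluate the unique degree-<m polynomial through `points` at x (mod P)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

def disperse(data, n, m):
    """Return n pieces; piece i holds one symbol per group of m data bytes."""
    assert len(data) % m == 0, "pad the data to a multiple of m first"
    pieces = [[] for _ in range(n)]
    for g in range(0, len(data), m):
        group = [(x, data[g + x]) for x in range(m)]   # poly values at 0..m-1
        for i in range(n):
            pieces[i].append(eval_lagrange(group, m + i))
    return pieces

def reconstruct(available, m, length):
    """Rebuild the data from any m pieces, given as {piece_index: symbols}."""
    assert len(available) >= m
    chosen = sorted(available.items())[:m]
    data = bytearray()
    for g in range(length // m):
        points = [(m + i, symbols[g]) for i, symbols in chosen]
        data.extend(eval_lagrange(points, x) for x in range(m))
    return bytes(data)

msg = b"FAULT TOLERANT ROUTING ON HYPERCUBES"  # length 36, divisible by m=4
pieces = disperse(msg, n=7, m=4)               # any 4 of the 7 pieces suffice
subset = {0: pieces[0], 2: pieces[2], 5: pieces[5], 6: pieces[6]}
print(reconstruct(subset, m=4, length=len(msg)))
```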
We intend to make this book an up-to-date account of the information dispersal approach as it is applied to parallel computation. We also discuss related work in the general area of parallel communication and computation and provide an extensive bibliography in the hope that either might be helpful for researchers and students who want to explore any particular topic. Although the material in this book extends across several disciplines (algebra, coding theory, number theory, arithmetic, algorithms, graph theory, combinatorics, and probability), it is, the author believes, a self-contained book; adequate introduction is given and every proof is complete.
In this chapter, I consider the potential and actual reuse opportunities within UNIX. First, several methods are suggested that could increase the likelihood that the next submission matches an item in a small set of predictions offered to the user for review and reuse. All methods are applied to the UNIX traces, and the predictive “quality” of each method is measured and contrasted against the others. In the second part of the chapter, I investigate how well the reuse facilities supplied by the UNIX shell are used in practice.
Conditioning the distribution
In the last chapter, particular attention was paid to the recurrence of command lines during csh use, and to the probability distribution of the next line given a sequential history list of previous ones. We saw that the most striking feature of the collected statistics is the tremendous potential for a historical reuse facility: the recurrence rate is high and the last few submissions are the likeliest to be repeated.
One may predict what the user will do next by looking at those recent submissions. But there is still room for improvement, because a significant portion of recurrences are not recent submissions. Can better predictions of the user's next step be offered? This section proposes and evaluates alternative models of arranging a user's command line history that will condition the distribution in different ways.
The recurrence distributions of Section 5.4.2 were derived by considering all input for a user as one long sequential stream, with no barriers placed between sessions.
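The following small sketch illustrates the kind of measurement involved (the trace data and the precise models are the chapter's own and are not reproduced here): it estimates how often the next command line recurs among the k most recent distinct submissions.

```python
# A small sketch of the kind of measurement discussed here (the trace data and
# exact models are the chapter's, not reproduced): how often the next command
# line appears among the k most recent distinct submissions.

def recency_hit_rate(stream, k=5):
    """Fraction of submissions that recur within the last k distinct lines."""
    history, hits, total = [], 0, 0
    for line in stream:
        # the k most recent *distinct* submissions, newest first
        recent = list(dict.fromkeys(reversed(history)))[:k]
        if line in recent:
            hits += 1
        total += 1
        history.append(line)
    return hits / total if total else 0.0

trace = ["ls", "cd src", "ls", "make", "ls", "make", "vi main.c", "make"]
print(f"hit rate with k=5: {recency_hit_rate(trace):.2f}")
```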
The simulation of the ideal parallel computation model, PRAM, on the hypercube network is considered in this chapter. It is shown that a class of PRAM programs can be simulated with a slowdown of O(log N) with almost certainty and without using hashing, where N denotes the number of processors. Also shown is that general PRAM programs can be simulated with a slowdown of O(log N) with the help of hashing. Both schemes are IDA-based and fault-tolerant.
Introduction
Parallel algorithms are notoriously hard to write and debug. Hence, it is only natural to turn to ideal models that provide good abstraction. As these models do not assume any particular hardware configuration, they should have the additional benefit that programs written for them can be executed on any hardware that supports the model, similar to the situation in the sequential case where the existence of a, say, C compiler on a particular platform implies standard C programs can be compiled and executed there [370]. The PRAM (“Parallel Random Access Machine”) is one such model. It completely abstracts out the cost issue in communication and allows us to focus on the computational aspect. However, such convenience and generality is not without its price: the PRAM model, unlike the von Neumann machine model, is not physically feasible to build [369]. The simulation of PRAMs by feasible computers is therefore important and forms the major theme of this chapter.
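As a minimal sketch of the hashing idea alone (the chapter's schemes are IDA-based and fault-tolerant, which this does not show), a shared PRAM address can be hashed to a hypercube node and a local offset so that simultaneous accesses tend to spread evenly over the nodes.

```python
# A minimal sketch of the hashing idea only (the chapter's actual scheme is
# IDA-based and fault-tolerant): a shared PRAM address is mapped by a hash
# function to a hypercube node and a local offset, so that simultaneous
# accesses tend to spread evenly over the N = 2^d nodes.

import hashlib

D = 4                    # hypercube dimension
N = 2 ** D               # number of processors / memory modules

def owner(address):
    """Map a shared-memory address to (node, local offset) via hashing."""
    h = int.from_bytes(hashlib.blake2b(address.to_bytes(8, "big")).digest()[:8], "big")
    return h % N, h // N

def pram_write(memory, address, value):
    node, offset = owner(address)
    memory[node][offset] = value          # in reality: route a message to `node`

def pram_read(memory, address):
    node, offset = owner(address)
    return memory[node].get(offset)

memory = [dict() for _ in range(N)]       # one local memory per node
for a in range(100):
    pram_write(memory, a, a * a)
print(pram_read(memory, 42), "stored on node", owner(42)[0])
```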
After briefly examining the challenges facing designing ever more powerful computers, we discuss the important issues in parallel processing and outline solutions. An overview of the book is also given.
The von Neumann Machine Paradigm
The past five decades have witnessed the birth of the first electronic computer [257] and the rapid growth of the computing industry to exceed $1,000 billion (annual revenue) in the U.S. alone [162]. The demand for high-performance machines is further powered by the advent of many crucial problems whose solutions require enormous computing power: environmental issues, search for cures for diseases, accurate and timely weather forecasting, to mention just a few [271]. Moreover, although unrelenting decrease in the feature size continues to improve the computing capability per chip, turning that into a corresponding increase in computing performance is a major challenge [152, 316]. All these factors point toward the necessity of sustained innovation in the design of computers.
It is not hard to see that the von Neumann machine paradigm [37], the conceptual framework for most computers, will, when considered at the system level, impede further performance gains. Input and output excluded, the von Neumann machine conceptually consists of a processing unit, a memory storing both programs and data, and a wire that connects the two.
In this final chapter, we briefly review techniques and concepts in fault-tolerant computing. Then we sketch the design of a fault-tolerant parallel computer, the HPC (“hypercube parallel computer”), based on the results and ideas from previous chapters.
Introduction
A fault-free computer, or any human artifact, has never been built, and never will be. No matter how reliable each component is, there is always a possibility, however small, that it will go wrong. Statistical principles dictate that, other things being equal, this possibility increases as the number of components increases. Such an event, if not anticipated and safeguarded against, will eventually make the computer malfunction and lead to anything from minor annoyance and inconvenience to disaster.
Recently, the same enormous decrease in hardware cost which makes parallel computers economically feasible has also made fault tolerance more affordable [297]. In other words, the low cost of hardware makes possible both a high degree of fault tolerance using redundancy and high performance. Indeed, most fault-tolerant computers today employ multiple processors; see [241, 254, 317] for good surveys.
It is against this background that we take the extra step of designing a hypercube parallel computer (HPC for short). In the HPC, processors are grouped into logical clusters consisting of physically close processors, and each program execution is replicated at all members of a cluster. Clusters overlap, however. The concept of a cluster — logical or physical — introduces a two-level, instead of flat, organization and can be found in, for example, the Cm [344], Cedar [187], and FTPP (“Fault Tolerant Parallel Processor”) [148] computers.
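As an illustration of replicated execution in general (the HPC's own combining rule is not described in this excerpt), the sketch below runs a task at every member of a cluster and masks a faulty member's answer by majority vote.

```python
# An illustrative sketch only: program execution is replicated at all members
# of a (logical) cluster, and -- as in many fault-tolerant designs, though the
# HPC's own combining rule is not described in this excerpt -- a majority vote
# masks a faulty member's answer.

import random
from collections import Counter

def replicated_run(task, cluster, fault_rate=0.1):
    """Run `task` on every cluster member; a faulty member returns a wrong answer."""
    answers = []
    for member in cluster:
        correct = task()
        answers.append(correct if random.random() > fault_rate else correct + 1)
    value, votes = Counter(answers).most_common(1)[0]
    if votes <= len(cluster) // 2:
        raise RuntimeError("no majority -- too many faulty members")
    return value

cluster = ["p0", "p1", "p2", "p3", "p4"]        # five physically close processors
print(replicated_run(lambda: 6 * 7, cluster))   # prints 42 with high probability
```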
Humans are the most versatile of creatures, and computers are the most versatile of their creations. Human–Computer Interaction (HCI) is the study of what they do together; in particular, HCI aims to make interaction better suit the humans. Computers contribute to art, science, engineering, … all areas of human endeavor. It is no surprise, then, that there is heated debate about what the essence of HCI is and what it should be. What is good HCI? The answer to this question will be elusive, given that there is good engineering that is not art, good art that is not science, and good science that is not engineering.
It's easier to see what form of answer there can be by taking a quick excursion into another field. Imagine the discovery of a dye, such as W. H. Perkin's breakthrough discovery of mauve. Is it science? Yes: certain chemicals must react to produce the dyestuff, and the principles of chemistry suggest other possibilities. Is it art? Yes: it makes an attractive color. Is it engineering? Yes: its quantity production, fastness in materials, and so forth, are engineering. Perkin's work made the once royal purple accessible to all. Fortunately there is no subject “Human Chemical Interaction” to slide us into thinking that there is, or should be, one right view of the work of making or using, designing, standardizing, or evaluating a dye. Nevertheless, we appreciate a readily available, stunning color, used by an able artist, and one that lasts without deteriorating.