Those who ignore history are destined to retype it
— Ben Shneiderman
It is evident that users often repeat activities they have previously submitted to the computer. These activities include not only the commands they choose from the many available in command-driven systems (Chapter 3), but also the complete command line entry. Similarly, people repeat the ways they traverse paths within menu hierarchies, select icons within graphical interfaces, and choose documents within hypertext systems. Often, recalling the original activity is difficult or tedious. For example, problem-solving processes must be recreated for complex activities; command syntax or search paths in hierarchies must be remembered; input lines retyped; icons found; and so on. Given these difficulties, potential exists for a well-designed “reuse facility” to reduce the problems of activity reformulation.
But most system interfaces offer little support for reviewing and reusing previous activities. Typically they must be completely retyped, or perhaps reselected through menu navigation. Those systems that do provide assistance offer ad hoc “history” mechanisms that employ a variety of recall strategies, most based on the simple premise that the last n recent user inputs are a reasonable working set of candidates for reselection. But is this premise correct? Might other strategies work better? Indeed, is the dialog sufficiently repetitive to warrant some type of activity reuse facility in the first place? As existing reuse facilities were designed by intuition rather than from empirical knowledge of user interactions, it is difficult to judge how effective they really are or what scope there is for improvement.
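The premise behind "last n inputs" history mechanisms can be made concrete. The following sketch (an illustration only — the measure, trace encoding, and function name are mine, not drawn from any particular system) estimates how often the next command line would be found among the n most recently used distinct entries:

```python
def recall_hit_rate(lines, n):
    """Fraction of entries (after the first) that appear among the n most
    recently used distinct entries at the moment they are typed."""
    recent = []                # most-recent-first list of distinct entries
    hits = trials = 0
    for line in lines:
        if recent:
            trials += 1
            if line in recent[:n]:
                hits += 1
        if line in recent:     # move a reused line back to the front
            recent.remove(line)
        recent.insert(0, line)
    return hits / trials if trials else 0.0

history = ["ls", "make", "ls", "vi f", "make", "ls"]
print(recall_hit_rate(history, 2))  # 1 hit in 5 trials -> 0.2
```

Running such a measure over real traces, with different values of n and different orderings of the candidate set, is one way to test whether recency is in fact a reasonable prediction strategy.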
In every trade a specific way of organizing tools and objects for the craftsman has been established. Every workshop is equipped with appropriate tools and organized with respect to the specific working situation. In this way strategies for the solution of typical problems are at hand for the workers.
— Hoffman and Voider, 1986
This book opened by advocating the common metaphor of tool use for thinking about command-based systems, where command lines are the tools people employ to manipulate the materials in their computer environment. The four preceding chapters pursued the notion that recently used lines, like tools, should be available for reuse. But reuse is not the only strategy for supporting user activities. It is evident that people impose some organization on their computer tools and materials, just as craftsmen do with their physical counterparts. Real workshops support these organizations through toolboxes for arranging and locating tools, workbenches for performing specific tasks, shelving and drawers for keeping relevant tools and materials readily available, and so on. Computing environments, on the other hand, do little to promote personal organization. A command-based interface is comparable to an unhelpful clerk who waits for you to name the tool you want, retrieves the tool (if available) from a separate room, and demands that you return it immediately after use. At the other extreme, arranging facilities into fixed taxonomic menus is reminiscent of a totalitarian chaining of tools to a single location.
There is nothing quite so frustrating for the avid do-it-yourselfer as to begin a project, suddenly need a particular tool, but have no idea where in the house to look for it.
General-purpose computer environments that furnish a large set of diverse tools are often hard to use. Although some difficulty is associated with using any particular tool, this book is concerned with the problems that a person faces when selecting a tool from the many available, reusing that tool while performing a task, and organizing the chosen tools in a way that makes them ready to hand later on. Surprisingly, methods and habits for using physical tools that have evolved over millions of years have not been transferred effectively to the computer domain.
The goal of the research discussed in this book is to identify properties of a human–computer interface that supports how people select, reuse, and organize the tools available in general-purpose computing environments. These properties come from empirical analyses of user behavior. This introduction sets the scene first by reviewing physical tools, from their very natural use by animals to ultra-sophisticated machinery that taxes human capabilities beyond acceptable performance limits. Section 1.2 moves to the focus of this book – general-purpose computing environments that make diverse collections of on-line tools available. It identifies two problem areas: the dearth of knowledge about people's use of on-line tools, and the poor existing user support for everyday interactions with them.
The following pages list a few basic statistics observed for the subjects involved in the study. Each subject is identified by the name of his group and a number. For example, “Novice–1” is the first subject of the Novice Programmer group. These names match the file names found in the publicly available trace data (Greenberg, 1988b).
The statistics include each user's number of login sessions, the command lines entered, the different commands used, the csh errors noted, the times history was used, and the different directories accessed. For example, Novice–1 entered 2,457 command lines over fifty-five login sessions. Of those lines, 213 produced csh errors. History was invoked thirty-seven times, eighteen different directories were visited, and sixty-seven different commands were used.
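To make the tallies above concrete, here is a small sketch of how such per-subject summaries might be computed. The trace record format and field names are hypothetical illustrations, not Greenberg's actual trace file layout:

```python
def summarize(trace):
    """trace: list of (session_id, directory, command_line, is_error,
    used_history) records for one subject -- a hypothetical encoding."""
    sessions, directories, commands = set(), set(), set()
    lines = errors = history_uses = 0
    for session, directory, line, is_error, used_history in trace:
        sessions.add(session)
        directories.add(directory)
        commands.add(line.split()[0])   # first word = the command name
        lines += 1
        errors += is_error
        history_uses += used_history
    return {"sessions": len(sessions), "lines": lines,
            "commands": len(commands), "errors": errors,
            "history": history_uses, "directories": len(directories)}

trace = [
    (1, "/home/n1", "ls -l", False, False),
    (1, "/home/n1", "cd src", False, False),
    (2, "/home/n1/src", "ls -l", False, True),
    (2, "/home/n1/src", "xyzzy", True, False),
]
print(summarize(trace))
# e.g. {'sessions': 2, 'lines': 4, 'commands': 3, 'errors': 1,
#       'history': 1, 'directories': 2}
```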
Basically, this workbench is composed of a pair of storage cabinets, on which rests a rugged work top. The exact design of the storage cabinets depends on the kind of work you do, the kind of tools you use, the amount of space you have.
— Homeowner's How-to Treasury, Popular Science, 1976
This chapter describes a design and implementation of a user support tool that embodies the reuse properties suggested in Chapters 4 through 7, and the workspace organization of Chapter 8. Called WORKBENCH, the system is a graphical window-based front end to UNIX csh. The facilities and user interface are described in the first section, along with the rationale behind its design. WORKBENCH is not an end in itself. Although recently made available to selected members of the University of Calgary's Department of Computer Science and now used by several people, it serves here as an exploration of a workspace design. It is not formally evaluated; experimental appraisal is neither credible nor necessary at this early stage. Rather, the intent is to discover how feasible it is to build a workspace, to note initial pragmatic considerations arising from its use, and to suggest research areas motivated by problems encountered or envisaged. These issues are covered in the second section.
The WORKBENCH system
WORKBENCH is a window-based facility that allows people to reuse and structure their on-line UNIX csh activities.
The two preceding chapters analyzed command line recurrences in dialogs with the UNIX csh. Based on the empirical results, the first section of this chapter formulates general principles that characterize how users repeat their activities on computers. Some guidelines are also tabulated for designing a reuse facility that allows users to take advantage of their previous transaction history. The second section corroborates these principles by a post hoc study of user traces obtained from another quite different command line system. The final section steps back from the empirical findings and presents a broader view of reuse.
Principles and guidelines
This section abstracts empirical principles governing how people repeat their activities from the UNIX study described earlier. They are summarized and reformulated in Table 7.1 as empirically based general guidelines for the design of reuse facilities. Although there is no guarantee that these guidelines generalize to all recurrent systems, they do provide a more principled design approach than uninformed intuition.
Principles: how users repeat their activities
A substantial portion of each user's previous activities is repeated. In spite of the large number of options and arguments that could qualify a command, command lines in UNIX csh are repeated surprisingly often by all classes of users. On average, three out of every four command lines entered by a user have appeared previously. UNIX is classified as a recurrent system by the definition in Section 5.1.
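This figure can be stated as a simple computation over a user's command stream. The sketch below (the function name and sample history are illustrative, not taken from the study) counts the fraction of entries that duplicate an earlier line verbatim:

```python
def recurrence_rate(lines):
    """Fraction of command lines that repeat an earlier entry verbatim."""
    seen, repeats = set(), 0
    for line in lines:
        if line in seen:
            repeats += 1
        seen.add(line)
    return repeats / len(lines) if lines else 0.0

history = ["ls", "cd src", "ls", "make", "ls", "make", "cd src", "ls"]
print(recurrence_rate(history))  # 5 of 8 entries recur -> 0.625
```

A rate around 0.75, sustained across user classes, is what justifies calling the dialog "recurrent" and a reuse facility worthwhile.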
The quest for programming languages which are more readable and expressive has led to many developments in programming languages, one of which is the logic programming paradigm. In theory, logic programming languages are more readable and expressive because they borrow some of the expressive power of the language of mathematical logic – a language which was developed specifically in order to model some of the deductive processes of the human mind.
This theoretical goal has been achieved to only a limited extent in practice, because the implementations of logic programming languages differ from the ideal theoretical model in many ways. One of the most basic and profound of the differences is that the theory concerns languages which can be implemented completely only by parallel (breadth-first) interpreters, while most practical implementations use incomplete, sequential (depth-first) strategies.
This incompleteness in itself would not necessarily be a problem; but unfortunately, the exact set of terminating sequential logic programs is hard to characterise in a logical way. Sequentiality also affects reasoning about programs, disrupting the hope that the identification of program with logical formula would make this straightforward. These problems tend to weaken claims that practical and truly logical programming is possible.
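The difference shows up even in a two-clause propositional program. The following toy interpreter (a Python simulation written for illustration; the representation and function names are mine, not a quoted system) runs the program consisting of the clause p :- p. followed by the fact p. under both disciplines: fair breadth-first search finds the fact, while depth-first search commits to the looping clause and never terminates (here cut off by a step bound):

```python
from collections import deque

def bfs_solve(program, goals, max_steps=1000):
    """Breadth-first (parallel) search: explores all derivations fairly."""
    queue = deque([tuple(goals)])
    steps = 0
    while queue and steps < max_steps:
        state = queue.popleft()
        steps += 1
        if not state:                 # all goals discharged: success
            return True
        head, rest = state[0], state[1:]
        for body in program.get(head, []):
            queue.append(tuple(body) + rest)
    return False

def dfs_solve(program, goals, max_steps=1000):
    """Depth-first (sequential) search in textual clause order."""
    stack = [tuple(goals)]
    steps = 0
    while stack:
        if steps >= max_steps:
            return None               # presumed nontermination
        state = stack.pop()
        steps += 1
        if not state:
            return True
        head, rest = state[0], state[1:]
        for body in reversed(program.get(head, [])):
            stack.append(tuple(body) + rest)
    return False

# The clause  p :- p.  followed by the fact  p.
program = {"p": [["p"], []]}
print(bfs_solve(program, ["p"]))  # True: the fair search finds the fact
print(dfs_solve(program, ["p"]))  # None: depth-first loops on p :- p.
```

A query can thus be a logical consequence of the program, and succeed under every fair strategy, yet diverge under the sequential strategy actually implemented.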
This thesis is intended as a step towards mending this rift between theory and practice, between parallel and sequential systems. In the thesis, I present a homogeneous operational characterisation of the parallel and sequential versions of a basic logic programming language; I then use proof systems to characterise, in a logical manner, the sets of queries which terminate in the various parallel, sequential, and mixed control disciplines.
This thesis has taken as its object of study the control-discipline variants of a simple logic programming language equivalent to Horn clause logic programming. It has classified and logically characterised the set of successful and failing queries of these variants of the language.
I have given an operational semantics, SOS, of which variants correspond to the parallel “and” and “or”, sequential “and”, sequential “or”, and sequential “and” and “or” control disciplines. This operational semantics homogenises the treatment of the control disciplines by incorporating control information (such as the failure-backtrack mechanism of sequential systems) into the operational semantics. (Some of the variants of SOS have equivalent compositional operational semantics, which I have given.) I have also classified the queries into those succeeding and those failing in each of the control disciplines, and have proven the equivalence of some of these classes.
I have then used a sequent calculus framework, in which the elements of sequents are assertions about the success or failure of queries, to give a logical analysis of these classes of queries. Three calculi are given; they share a common set LKE of rules for classical logic with equality as syntactic identity, and differ in the set of axioms which characterise the behaviour of queries.
LKE+PAR characterises the queries which succeed in parallel-or systems, and those which fail in parallel-and systems;
LKE+SEQ characterises the queries which succeed in the sequential-and, sequential-or system, and those which fail in sequential-and systems;
LKE+PASO characterises the queries which succeed in the parallel-and, sequential-or system.
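To convey the flavour of the difference between the axiom sets (the forms below are schematic sketches of the intended readings, written for this summary — they are not quotations of the actual rules, and the predicate symbols are mine), a parallel-or success axiom can simply distribute success over the disjuncts, whereas a sequential-or axiom must record that the second disjunct is attempted only after the first finitely fails:

```latex
% Parallel or: a disjunction succeeds iff either disjunct succeeds.
S(A \vee B) \;\leftrightarrow\; S(A) \lor S(B)
% Sequential or: B is tried only after A finitely fails.
S(A \vee B) \;\leftrightarrow\; S(A) \lor \bigl(F(A) \land S(B)\bigr)
```

where $S$ and $F$ assert the success and finite failure of a query. The extra failure conjunct is what makes the sequential axioms more complex and introduces the additional side-conditions mentioned in the chapters that follow.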
In this chapter, I will give a characterisation of the two inner circles of the Venn diagram in Figure 2.6 in the same way as I characterised the two outer circles. That is, I will give a proof-theoretic characterisation of sequential logic programming (in particular, the operational semantics SP) in the form of a sequent calculus.
For this sequent calculus, we can use the rules LKE from the last chapter unchanged; we need only give a new group of axioms, SEQ, corresponding to PAR from the last chapter. These axioms, however, are more complex than those in PAR, have more side-conditions, and in particular involve the concept of disjunctive unfoldings of formulae.
Nevertheless, we can prove the same things about SEQ that we can about PAR: the laws are sound, and the proof system LKE+SEQ characterises sequential logic programming in several useful ways.
I will also give a characterisation of the last circle in Figure 2.6, namely the middle success circle. This set contains all queries which succeed in SOS/so, and can be characterised by a set of axioms, PASO, which combines axioms from PAR and from SEQ in a simple and intuitively clear way.
Approaches to Semantics
I begin by going into more detail about why we want a semantics for sequential logic programming, and what approaches have been taken so far to giving one.
The assumptions made about search strategies in most research on foundations of logic programming (for instance, SLD-resolution with a fair search rule) are not satisfied by sequential logic programming.
Operational (or “procedural”) semantics, as I mentioned in the Introduction, are used to provide characterisations of programming languages which meet certain “computational” criteria: giving a detailed description of the language for implementation purposes, and giving a computational model to which programmers can refer.
For logic programming, operational semantics are particularly important because it is in them that the innovations of logic programming lie. The notions of resolution and unification are not immediately apparent; unification, though defined by Herbrand in his thesis [44], was virtually ignored until Prawitz's work [62], and resolution was not defined until 1965 [66]. These notions must be explained within the context of a full description of the computational model of the language.
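To ground the discussion, here is a minimal sketch of syntactic unification over first-order terms. The term representation is an illustrative choice of my own (uppercase strings for variables, tuples `(functor, arg, ...)` for compound terms, lowercase strings for constants), not drawn from any of the cited systems:

```python
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    """Follow variable bindings to the representative term."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    """Occurs check: does variable v appear inside term t?"""
    t = walk(t, subst)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, x, subst) for x in t[1:])

def unify(a, b, subst=None):
    """Most general unifier of terms a and b, or None if none exists."""
    if subst is None:
        subst = {}
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return None if occurs(a, b, subst) else {**subst, a: b}
    if is_var(b):
        return None if occurs(b, a, subst) else {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) \
            and len(a) == len(b) and a[0] == b[0]:
        for x, y in zip(a[1:], b[1:]):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

# Unifying f(X, b) with f(a, Y) binds X to a and Y to b.
print(unify(("f", "X", "b"), ("f", "a", "Y")))  # {'X': 'a', 'Y': 'b'}
```

Resolution then repeatedly unifies a selected goal with a clause head and replaces it by the clause body under the resulting substitution; it is this machinery, rather than the declarative reading, that an operational semantics must pin down.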
If we want to do such things as soundness and completeness proofs, or indeed any formal comparison of the operational semantics to other characterisations of the language, the operational semantics must also be mathematically precise – for instance, in the form of a formal system. (Plotkin [58] has explored the idea of structured operational semantics in detail, and gives a taxonomy to which I will refer in this chapter.) SLD-resolution [49], SLDNF-resolution [50], and the operational semantics in this chapter are just a few examples of formal operational semantics for logic programming. Other examples include Voda's tree-rewriting system [76], Deransart and Ferrand's [29] and Börger's [13] standardisation efforts, and the abstract computation engines for Andorra Prolog [43] and the “Pure Logic Language”, PLL [10, 52].