We (Ergonomists) borrow and invent techniques to serve our special needs.
A. Chapanis, 1990.
Current human factors input to system development is effected through methods, tools and guidelines. Although the input prompts the consideration of human factors concerns during system development, reports have highlighted inadequacies with respect to the scope, granularity, format and timing of the contributions (see Smith, 1986; Chapanis and Burdurka, 1990; Sutcliffe, 1989; etc.).
To improve the effectiveness of human factors input to system development, problems with existing approaches need to be examined. Such an examination would:
highlight requirements pertaining to the role of human factors in system development, such as the ‘who’, ‘what’, ‘when’ and ‘how’ of human factors input;
support the enhancement of existing approaches for human factors input;
facilitate the specification of new and more promising solutions to existing problems of human factors input.
This book argues that current problems of human factors input to system development cannot be solved by early human factors involvement alone. Instead, it is emphasised that the problems can be solved only by ensuring early human factors involvement that is then sustained throughout system development. To achieve this objective, human factors designers must also contribute actively to system specification, as opposed to system evaluation only. In addition, the requirements and activities of human factors specification should be made explicit. Both software engineering and human factors needs may then be represented and accommodated appropriately by an overall system development agenda, and intersecting design concerns between the disciplines may be identified and addressed more effectively.
Annexes A and B are provided here for the sake of completeness.
Annex A completes the set of human factors descriptions that may be derived for the case-study described in Chapter Four. The extant system descriptions shown are applicable only if the system to be developed is very similar to an existing system. In addition, note that the procedures described here are simplified versions of the sets presented in Chapters Four to Six.
Annex B provides the reader with an advance view of possible enhancements of the design descriptions and notations of MUSE. The case-study descriptions presented are for illustration only.
Annex A: Case-study Illustration of Secondary Activities and Products of the Extant Systems Analysis (ESA) Stage (Network Security Management System)
This account completes an illustration of the Extant Systems Analysis (ESA) Stage, for the case-study used in Chapters Three to Six; namely the Network Security Management System. Specifically, secondary human factors activities and products applicable in a variant design scenario are described. Note that the extent to which such activities and products are addressed depends largely on how similar the domain characteristics and implementation technology are between the extant and target systems. Consequently, illustrations of design products are provided only when their derivation was considered appropriate during the case-study.
The end of our foundation is the knowledge of causes, and the secret motions of things; and the enlarging of the bounds of human Empire, to the effecting of all things possible.
Francis Bacon, 1627, New Atlantis
Following the derivation of a conceptual design of the target system, user interface specification is undertaken in the Design Specification Phase of the method. The design stages comprising the phase, namely the Interaction Task Model, Interface Model and Display Design Stages, are described below in the sequence in which they are performed during design. As before, human factors design activities and products of each stage are summarised using a block diagram, and case-study examples are provided where appropriate.
The Interaction Task Model (ITM) Stage
Having defined the on-line task conceptually in a Target System Task Model, the high-level cycles of human-computer interaction may be decomposed further. The human factors description derived at this stage is termed a Target Interaction Task Model (or ITM(y)). Note that the model is concerned primarily with the description of error-free user-computer interaction; potential user errors are addressed at later stages of the method. Figure 6-1 shows the location of the Interaction Task Model Stage relative to other stages of the method.
The objective of deriving a Target Interaction Task Model is to specify the device level interactions required to achieve on-line task goals on the computer. The model is described in terms of expected user interactions with the designated hardware, and with bespoke, variant and standard objects and actions of the chosen user interface environment.
Really we create nothing. We merely plagiarise nature.
Jean Baitaillon
Engineering attempts to fully constrain its outputs… Engineering investigates successful designs and adopts those ‘means’ that it finds generalisable.
Jim Carter
This chapter describes in detail the Information Elicitation and Analysis Phase of the method. The phase comprises two design stages, namely the Extant Systems Analysis Stage and the Generalised Task Model Stage. The stages, described in the order of their application (see Figure 4-1), are concerned with the generation and analysis of background information to support subsequent derivation of a system design.
As indicated in Chapter Three, intermediate design processes, products and documentation schemes for each stage of the method are described as follows:
(a) an overview of the stage is provided prior to a detailed description;
(b) a block diagram summary of the stage is provided to highlight the sub-processes involved in transforming stage input(s) into one or more intermediate product(s);
(c) case-study illustrations of the products, processes and documentation schemes of the stage are provided where appropriate.
The Extant Systems Analysis (ESA) Stage
The main objective of the ESA Stage is to generate background information to assist the design of the target system at later stages of the method (see Figure 4-1, where the ESA Stage is highlighted by a box outlined in bold). To this end, extant systems are analysed to characterise current user needs and problems; existing function allocation between the user and device; the design features and rationale of existing user interfaces; etc.
If at first you know where you are, and whither you are tending, you will better know what to do and how to do it.
Abraham Lincoln, 1809–1865
The primary objective of this chapter is to characterise the problem addressed by MUSE, a structured human factors Method for Usability Engineering. To this end, existing problems of human factors contributions to system development are reviewed; namely, that existing contributions are poorly timed and poorly matched to the support required at different stages of the system design cycle. As a result, the relevance, format and granularity of human factors contributions are not optimal for effective uptake during design. By establishing the nature of the problems, promising solutions may then be assessed. Arguments supporting a structured analysis and design method, such as MUSE, are thus exposed.
General Problems of Human Factors Contribution to System Development
Recent developments in computer technology (e.g. the availability and affordability of personal computers and the rapid diversification in computer applications) have resulted in a shift from mainframes to personal computers. Today, such interactive computers have made significant inroads into both the workplace and the home. Consequently, the user base of computers has widened considerably.
The extended user base, together with market forces, has highlighted the importance of designing computer applications that are appropriate in both functionality and usability. The success of Macintosh computers is an example (see also Shackel, 1985 and 1986b; CCTA (Draft) Report, 1988, Annex 1; Shuttleworth, 1987).
In the land of the blind, the one-eyed man is king.
Bottom-line argument for the method?
John Long, 1990
Good order is the foundation of all good things.
Edmund Burke, 1790
The objective of the present overview is to establish a conceptual foundation for a detailed stage-wise account of the method in Chapters Four to Six.
General Characteristics of the Human Factors Method
The primary focus of the method is on design specification because a literature survey indicated that current human factors contributions are well established at later stages of system development, e.g. human factors evaluation after design implementation. In contrast, human factors contributions to design specification are generally inadequate and implicit. Since the recruitment of human factors contributions is traditionally late, the discovery of design errors is also delayed. As a result, the required modifications are costly and difficult to implement (see Chapter One). Thus, greater emphasis is placed on ensuring human factors contributions to design specification. In this context, a participative design role for human factors is envisaged at system specification, followed by a consultative role at implementation. During the latter stage, existing techniques for human factors evaluation may be recruited to support the method. An overview of the method follows.
The method is structured into three phases, each of which comprises a number of design stages (Figure 2-8 is reproduced overleaf for reference).
The second substantially technical issue in the design of parallel computers concerns the complexity of each of the processing elements. To a very significant extent this will reflect the target functionality of the system under consideration. If a parallel computer is intended for use in scientific calculations, then in all likelihood floating-point operations will be a sine qua non and appropriate units will be incorporated. On the other hand, a neural network might be most efficiently implemented with threshold logic units which don't compute normal arithmetic functions at all. Often, the only modifier on such a natural relationship will be an economic or technological one. Some desirable configurations may be too expensive for a particular application, and the performance penalty involved in a non-optimum solution may be acceptable. Alternatively, the benefits of a particularly compact implementation may outweigh those of optimum performance. There may even be cases where the generality of application of the system may not permit an optimum solution to be calculated at all. In such cases, a variety of solutions may be supportable.
We will begin our analysis of processor complexity by considering the variables which are involved, and the boundaries of those variables.
Analogue or digital?
Although almost all modern computers are built from digital circuits, this was not always, and need not necessarily be, so. The original, overwhelming reason for using digital circuits was the vastly improved noise immunity that could be obtained, which is particularly important when using high-precision numbers.
This chapter describes a number of features that might be useful in practical work with qualified types. We adopt a less rigorous approach than in previous chapters and do not attempt to deal with all of the technical issues involved.
Section 6.1 suggests a number of techniques that can be used to reduce the size of the predicate set in the types calculated by the type inference algorithm, resulting in smaller types that are often easier to understand. As a further benefit, the number of evidence parameters in the translation of an overloaded term may also be reduced, leading to a potentially more efficient implementation.
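By way of a small illustration (mine, not the book's, using Haskell, whose class system instantiates this framework): a naive inference for the function below collects the predicate set (Eq a, Ord a), but since Eq is a superclass of Ord the set simplifies to Ord a alone, so the type is easier to read and its translation needs a single dictionary.

```haskell
-- Sketch: Eq a is entailed by Ord a, so the inferred context
-- (Eq a, Ord a) simplifies to Ord a, and the translation takes
-- one Ord dictionary rather than two evidence parameters.
search :: Ord a => a -> [a] -> Bool
search x ys = x `elem` ys      -- elem contributes Eq a
           && x <= maximum ys  -- (<=) and maximum contribute Ord a
```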
Section 6.2 shows how the use of information about satisfiability of predicate sets may be used to infer more accurate typings for some terms and reject others for which suitable evidence values cannot be produced.
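Again as an assumed Haskell illustration: the first definition below is accepted because evidence for Show Int exists, while the second (commented out) is rejected, since no evidence for Show (Int -> Int) can ever be produced.

```haskell
-- Accepted: the predicate Show Int is satisfiable (an instance exists).
ok :: String
ok = show (42 :: Int)

-- Rejected: the predicate Show (Int -> Int) is unsatisfiable, so no
-- suitable evidence value can be produced for this term.
-- bad = show (id :: Int -> Int)
```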
Finally, Section 6.3 discusses the possibility of adding the rule of subsumption to the type system of OML to allow the use of implicit coercions from one type to another within a given term.
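Standard Haskell has no rule of subsumption, so coercions must be written out; the following hypothetical sketch suggests what such a rule would buy, in that the explicit fromIntegral calls could, under an assumed Int-to-Double coercion, be inserted implicitly by the type system.

```haskell
-- Without subsumption, numeric coercions are explicit. With subsumption
-- and an Int-to-Double coercion, both uses of fromIntegral below could,
-- hypothetically, be inferred rather than written by the programmer.
average :: [Int] -> Double
average xs = fromIntegral (sum xs) / fromIntegral (length xs)
```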
It would also be useful to consider the task of extending the language of OML terms with constructs that correspond more closely to concrete programming languages, such as recursion, groups of local bindings and the use of explicit type signatures. One example where these features have been dealt with is the proposed static semantics for Haskell given in (Peyton Jones and Wadler, 1992) but, for reasons of space, we do not consider this here.
This chapter describes an ML-like language (i.e. implicitly typed λ-calculus with local definitions) and extends the framework of (Milner, 1978; Damas and Milner, 1982) with support for overloading using qualified types and an arbitrary system of predicates of the form described in the previous chapter. The resulting system retains the flexibility of the ML type system, while allowing more accurate descriptions of the types of objects. Furthermore, we show that this approach is suitable for use in a language based on type inference, in contrast, for example, with more powerful languages such as the polymorphic λ-calculus, which require explicit type annotations.
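For a concrete (assumed, Haskell-flavoured) taste of such typings: the definition below carries no annotation, and the inferred principal type records exactly the constraint that the overloaded (==) imposes.

```haskell
-- Inferred, not declared: the most general type is qualified by Eq a,
-- a more accurate description than plain ML's  a -> [a] -> Bool.
member :: Eq a => a -> [a] -> Bool
member x xs = any (== x) xs
```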
Section 3.1 introduces the basic type system and Section 3.2 describes an ordering on types, used to determine when one type is more general than another. This is used to investigate the properties of polymorphic types in the system.
The development of a type inference algorithm is complicated by the fact that there are many ways in which the typing rules in our original system can be applied to a single term, and it is not clear which of these (if any!) will result in an optimal typing. As an intermediate step, Section 3.3 describes a syntax-directed system in which the choice of typing rules is completely determined by the syntactic structure of the term involved, and investigates its relationship to the original system. Exploiting this relationship, Section 3.4 presents a type inference algorithm for the syntax-directed system which can then be used to infer typings in the original system.
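To make the flavour of a syntax-directed algorithm concrete, here is a minimal, assumed Haskell sketch of inference for a tiny implicitly typed λ-calculus. It omits let-bindings and the predicate sets that are the chapter's real subject, but it shows how the syntactic structure of the term fixes the choice of rule at every step.

```haskell
import qualified Data.Map as Map

-- Terms and types of a tiny implicitly typed lambda-calculus.
data Term = Var String | Lam String Term | App Term Term
data Type = TVar Int | TFun Type Type deriving (Eq, Show)

type Subst = Map.Map Int Type

-- Apply a substitution, chasing bound type variables.
apply :: Subst -> Type -> Type
apply s t@(TVar n) = maybe t (apply s) (Map.lookup n s)
apply s (TFun a b) = TFun (apply s a) (apply s b)

-- Unification: extend s so that the two types become equal, if possible.
unify :: Type -> Type -> Subst -> Maybe Subst
unify a b s = go (apply s a) (apply s b)
  where
    go (TVar n) t = bind n t
    go t (TVar n) = bind n t
    go (TFun a1 b1) (TFun a2 b2) = unify a1 a2 s >>= unify b1 b2
    bind n t
      | t == TVar n = Just s
      | occurs n t  = Nothing               -- occurs check fails
      | otherwise   = Just (Map.insert n t s)
    occurs n (TVar m)   = n == m
    occurs n (TFun x y) = occurs n x || occurs n y

-- Syntax-directed inference: exactly one rule per syntactic form, so
-- the term itself determines how the rules are applied.
infer :: Map.Map String Type -> Term -> Int -> Subst
      -> Maybe (Type, Int, Subst)
infer env (Var x) n s = do
  t <- Map.lookup x env
  pure (t, n, s)
infer env (Lam x e) n s = do
  let a = TVar n                            -- fresh argument type
  (b, n', s') <- infer (Map.insert x a env) e (n + 1) s
  pure (TFun a b, n', s')
infer env (App f e) n s = do
  (tf, n1, s1) <- infer env f n s
  (te, n2, s2) <- infer env e n1 s1
  let r = TVar n2                           -- fresh result type
  s3 <- unify tf (TFun te r) s2
  pure (apply s3 r, n2 + 1, s3)

-- Example: inferring \f -> \x -> f x yields (a -> b) -> (a -> b).
```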
In order to make sense of the way in which users control parallel computers, we shall have to adopt a rather wider definition of the idea of programming than that which is usually taken. This is principally because of the existence of systems which are trained rather than programmed. Later in this chapter I shall attempt to show the equivalence between the two approaches but it is probably best to begin by considering how the conventional idea of programming is applied to parallel systems.
Parallel programming
There are three factors which we should take into account in order to arrive at a proper understanding of the differences between one type of parallel language and another and to appreciate where the use of each type is appropriate. These are whether the parallelism is hidden or explicit, which paradigm is employed and the level of the language. Although there is, inevitably, a certain amount of overlap between these factors, I shall treat them as though they were independent.
The embodiment of parallelism
There is really only one fundamental choice to be made here – should the parallelism embodied in a language be implicit or explicit? That is, should the parallelism be hidden from the programmers, or should it be specified by them? The first alternative is usually achieved by allowing the use of data types, in a program, which themselves comprise multiple data entities.
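A small, assumed Haskell sketch (using par and pseq from GHC's parallel package) may help fix the distinction: in the first definition the programmer writes a single whole-structure operation and any parallelism is hidden in the implementation; in the second, the parallelism is explicit in the program text.

```haskell
import Control.Parallel (par, pseq)

-- Implicit style: one aggregate operation over the whole data structure;
-- whether and how it runs in parallel is the implementation's business.
sumSquares :: [Double] -> Double
sumSquares xs = sum (map (^ 2) xs)

-- Explicit style: the programmer marks the two halves as candidates
-- for parallel evaluation with par/pseq annotations.
parSumSquares :: [Double] -> Double
parSumSquares xs = left `par` (right `pseq` (left + right))
  where
    (as, bs) = splitAt (length xs `div` 2) xs
    left     = sumSquares as
    right    = sumSquares bs
```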
The purpose of this book has been to introduce the reader to the subject of parallel computing. In attempting to make the subject digestible, it is inevitable that a great deal of advanced material has been omitted. The reader whose interest has been kindled is directed to the bibliography, which follows this chapter, as a starting point for continued studies. In particular, a more rigorous treatment of advanced computer architectures is available in Advanced Computer Architectures by Hwang.
It should also be noted that parallel computing is a field in which progress occurs at a prodigious rate. So much work is being done that some startling new insight or technique is sure to be announced just after this book goes to press. In what follows, the reader should bear in mind the developing nature of the field. Nevertheless, a great deal of material has been presented here, and it is worthwhile making some attempt to summarise it in a structured manner.
I have attempted to make clear in the preceding chapters that understanding parallel computing is a hierarchical process. A valuable degree of insight into the subject can be obtained even at the level considered in the first chapter, where three basic classes or types of approach were identified. Thereafter, each stage of the process should augment the understanding already achieved. It is up to each reader to decide on the level of detail required.
As was indicated in Chapter 1, there is a prima facie case for supposing that parallel computers can be both more powerful and more cost-effective than serial machines. The case rests upon the twin supports of increased amount of computing power and, just as importantly, improved structure in terms of mapping to specific classes of problems and in terms of such parameters as processor-to-memory bandwidth.
This chapter concerns what is probably the most contentious area of the field of parallel computing – how to quantify the performance of these allegedly superior machines. There are at least two significant reasons why this should be difficult. First, parallel computers, of whatever sort, are attempts to map structures more closely to some particular type of data or problem. This immediately invites the question – on what set of data and problems should their performance be measured? Should it be only the set for which a particular system was designed, in which case how can one machine be compared with another, or should a wider range of tasks be used, with the immediate corollary – which set? Contrast this with the accepted view of the general-purpose serial computer, where a few convenient acronyms such as MIPS and MFLOPS (see Section 6.1.3) purport to tell the whole story. (That they evidently do not do so casts an interesting sidelight on our own problem.)
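For reference, these headline figures are simple ratios (the definitions below are standard, not drawn from the text), which is precisely why they can purport to tell the whole story while saying nothing about the representativeness of the workload: a machine executing, say, $5 \times 10^{8}$ floating-point operations in two seconds rates 250 MFLOPS whatever the program was.

$$\mathrm{MFLOPS} = \frac{\text{floating-point operations executed}}{\text{execution time (s)} \times 10^{6}}, \qquad \text{e.g. } \frac{5 \times 10^{8}}{2 \times 10^{6}} = 250.$$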
The second reason concerns the economic performance, or cost-effectiveness, of parallel systems.