If we could first know where we are, and whither we are tending, we could then better judge what to do, and how to do it.
Abraham Lincoln, 1809–1865
The primary objective of this chapter is to characterise the problem addressed by MUSE, a structured human factors Method for Usability Engineering. To this end, existing problems of human factors contributions to system development are reviewed; namely, that existing contributions are poorly timed and poorly matched to the support required at different stages of the system design cycle. As a result, the relevance, format and granularity of human factors contributions are not optimal for effective uptake during design. By establishing the nature of these problems, promising solutions may then be assessed. The arguments supporting a structured analysis and design method such as MUSE are thus set out.
General Problems of Human Factors Contribution to System Development
Recent developments in computer technology (e.g. the availability and affordability of personal computers and the rapid diversification in computer applications) have resulted in a shift from mainframes to personal computers. Today, such interactive computers have made significant inroads into both the workplace and the home. Consequently, the user base of computers has widened considerably.
The extended user base, together with market forces, highlighted the importance of designing computer applications that are appropriate in both functionality and usability. The success of Macintosh computers is an example (see also Shackel, 1985 and 1986b; CCTA (Draft) Report, 1988, Annex 1; Shuttleworth, 1987).
In the land of the blind, the one-eyed man is king. (Bottom-line argument for the method?)
John Long, 1990
Good order is the foundation of all good things.
Edmund Burke, 1790
The objective of the present overview is to establish a conceptual foundation for a detailed stage-wise account of the method in Chapters Four to Six.
General Characteristics of the Human Factors Method
The primary focus of the method is on design specification, because a literature survey indicated that current human factors contributions are well established at later stages of system development, e.g. human factors evaluation after design implementation. In contrast, human factors contributions to design specification are generally inadequate and implicit. Since the recruitment of human factors contributions is traditionally late, the discovery of design errors is also delayed. As a result, the required modifications are costly and difficult to implement (see Chapter One). Thus, greater emphasis is placed on ensuring human factors contributions to design specification. In this context, a participative design role for human factors is envisaged at system specification, followed by a consultative role at implementation. During the latter stage, existing techniques for human factors evaluation may be recruited to support the method. An overview of the method follows.
The method is structured into three phases, each of which comprises a number of design stages (Figure 2-8 is reproduced overleaf for reference).
This chapter describes a number of features that might be useful in practical work with qualified types. We adopt a less rigorous approach than in previous chapters and do not attempt to deal with all of the technical issues involved.
Section 6.1 suggests a number of techniques that can be used to reduce the size of the predicate set in the types calculated by the type inference algorithm, resulting in smaller types that are often easier to understand. As a further benefit, the number of evidence parameters in the translation of an overloaded term may also be reduced, leading to a potentially more efficient implementation.
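As a hedged illustration (the example is ours, not the algorithm of Section 6.1), consider a definition whose inferred predicate set contains redundant entries:

    -- Type inference might initially collect the predicate set
    -- {Eq a, Eq a, Ord a}, one predicate for each overloaded operator:
    --
    --   f :: (Eq a, Eq a, Ord a) => a -> a -> Bool
    --
    -- Removing the duplicate, and noting that Eq a is entailed by Ord a
    -- (Eq is a superclass of Ord), leaves a single predicate, and hence a
    -- single evidence parameter in the translation:
    f :: Ord a => a -> a -> Bool
    f x y = x == y || x < y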
Section 6.2 shows how information about the satisfiability of predicate sets may be used to infer more accurate typings for some terms and to reject others for which suitable evidence values cannot be produced.
Finally, Section 6.3 discusses the possibility of adding the rule of subsumption to the type system of OML to allow the use of implicit coercions from one type to another within a given term.
It would also be useful to consider the task of extending the language of OML terms with constructs that correspond more closely to concrete programming languages, such as recursion, groups of local bindings and the use of explicit type signatures. One example where these features have been dealt with is the proposed static semantics for Haskell given in (Peyton Jones and Wadler, 1992) but, for reasons of space, we do not consider this here.
This chapter describes an ML-like language (i.e. implicitly typed λ-calculus with local definitions) and extends the framework of (Milner, 1978; Damas and Milner, 1982) with support for overloading using qualified types and an arbitrary system of predicates of the form described in the previous chapter. The resulting system retains the flexibility of the ML type system, while allowing more accurate descriptions of the types of objects. Furthermore, we show that this approach is suitable for use in a language based on type inference, in contrast, for example, with more powerful languages such as the polymorphic λ-calculus that require explicit type annotations.
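As a small illustration (our example, written in Haskell-style notation), a qualified type pairs an ML-style polymorphic type with a predicate restricting the types at which it may be used:

    -- member may be used at any type a for which the predicate Eq a holds:
    member :: Eq a => a -> [a] -> Bool
    member x xs = any (== x) xs

The predicate Eq a records the requirement for equality on the element type, while the type otherwise remains as general as in ML.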
Section 3.1 introduces the basic type system and Section 3.2 describes an ordering on types, used to determine when one type is more general than another. This is used to investigate the properties of polymorphic types in the system.
The development of a type inference algorithm is complicated by the fact that there are many ways in which the typing rules in our original system can be applied to a single term, and it is not clear which of these (if any!) will result in an optimal typing. As an intermediate step, Section 3.3 describes a syntax-directed system in which the choice of typing rules is completely determined by the syntactic structure of the term involved, and investigates its relationship to the original system. Exploiting this relationship, Section 3.4 presents a type inference algorithm for the syntax-directed system which can then be used to infer typings in the original system.
One of the main goals in preparing this book for publication was to preserve the thesis, as much as possible, in the form that it was originally submitted. With this in mind, we have restricted ourselves to making only very minor changes to the body of the thesis, for example, correcting typographical errors.
On the other hand, we have continued to work with the ideas presented here, to find new applications and to investigate some of the areas identified as topics for further research. In this short chapter, we comment briefly on some examples of this, illustrating both the progress that has been made and some of the new opportunities for further work that have been exposed.
We should emphasize once again that this is the only chapter that was not included as part of the original thesis.
Constructor classes
The initial ideas for a system of constructor classes, as sketched in Section 9.2, have been developed in (Jones, 1993b), and full support for these ideas is now included in the standard Gofer distribution (versions 2.28 and later). The two main technical extensions that the system of constructor classes makes to the work described here are as follows (a small sketch in Haskell follows the list):
The use of kind inference to determine suitable kinds for all the user-defined type constructors appearing in a given program.
The extension of the unification algorithm to ensure that it calculates only kind-preserving substitutions. This is necessary to ensure soundness and is dealt with by ensuring that constructor variables are only ever bound to constructors of the corresponding kind. Fortunately, this has a very simple and efficient implementation.
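As a minimal sketch of these ideas (Haskell-style notation; the primed names are ours, chosen to avoid clashing with the standard prelude), consider a constructor class whose parameter ranges over type constructors rather than types:

    class Functor' f where
      fmap' :: (a -> b) -> f a -> f b     -- forces f to have kind * -> *

    instance Functor' [] where
      fmap' = map

    instance Functor' Maybe where
      fmap' g Nothing  = Nothing
      fmap' g (Just x) = Just (g x)

Kind inference determines that f must have kind * -> * from its applications in the signature of fmap', and kind-preserving unification ensures that f is only ever bound to constructors of that kind, never to a constructor such as Int of kind *.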
While the results of the preceding chapter provide a satisfactory treatment of type inference with qualified types, we have not yet made any attempt to discuss the semantics or evaluation of overloaded terms. For example, given a generic equality operator (==) of type ∀a.Eq a ⇒ a → a → Bool and integer valued expressions E and F, we can determine that the expression E == F has type Bool in any environment which satisfies Eq Int. However, this information is not sufficient to determine the value of E == F; this is only possible if we are also provided with the value of the equality operator which makes Int an instance of Eq.
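To make this concrete, here is a hedged sketch (the encoding and names are ours) in which the required equality operator is passed as an explicit parameter, anticipating the notion of evidence introduced below:

    -- Evidence for a predicate Eq a, modelled as an equality function:
    type EqEvidence a = a -> a -> Bool

    -- Translation of the overloaded expression E == F, with the evidence
    -- supplied as an explicit parameter:
    eqExpr :: EqEvidence a -> a -> a -> Bool
    eqExpr eq e f = eq e f

    -- Supplying the evidence that makes Int an instance of Eq fixes the value:
    eqInt :: EqEvidence Int
    eqInt x y = x == y          -- standing in for primitive integer equality

    result :: Bool
    result = eqExpr eqInt 2 3   -- evaluates to False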
Our aim in the next two chapters is to present a general approach to the semantics and implementation of objects with qualified types based on the concept of evidence. The essential idea is that an object of type π ⇒ σ can only be used if we are also supplied with suitable evidence that the predicate π does indeed hold. In this chapter we concentrate on the role of evidence for the systems of predicates described in Chapter 2 and then, in the following chapter, extend the results of Chapter 3 to give a semantics for OML.
As an introduction, Section 4.1 describes some simple techniques used in the implementation of particular forms of overloading and shows why these methods are unsuitable for the more general systems considered in this thesis.
This chapter describes GTC, an alternative approach to the use of type classes that avoids the problems associated with context reduction, while retaining much of the flexibility of HTC. In addition, GTC benefits from a remarkably clean and efficient implementation that does not require sophisticated compile-time analysis or transformation. As in the previous chapter, we concentrate more on implementation details than on formal properties of GTC.
An early description of GTC was distributed to the Haskell mailing list in February 1991 and subsequently used as a basis for Gofer, a small experimental system based on Haskell and described in (Jones, 1991c). The two languages are indeed very close, and many programs that are written with one system in mind can be used with the other with little or no change. On the other hand, the underlying type systems are slightly different: using explicit type signature declarations, it is possible to construct examples that are well typed in one but not in the other.
Section 8.1 describes the basic principles of GTC and its relationship to HTC. The only significant differences between the two systems are in the methods used to simplify the context part of an inferred type. While HTC relies on the use of context reduction, GTC adopts a weaker form of simplification that does not make use of the information provided in instance declarations.
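A hedged illustration of the difference (the function is ours):

    -- The use of (==) at list type in the body gives rise to the predicate
    -- Eq [a].  Under HTC, context reduction uses the instance declaration
    -- `instance Eq a => Eq [a]' to rewrite the inferred context:
    --
    --   memq :: Eq a => a -> [a] -> Bool     -- HTC (Haskell)
    --
    -- GTC's weaker simplification, making no use of instance declarations,
    -- leaves the predicate in the form in which it arises:
    --
    --   memq :: Eq [a] => a -> [a] -> Bool   -- GTC (Gofer)
    memq x xs = [x] == take 1 xs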
Section 8.2 describes the implementation of dictionaries used in the current version of Gofer. As an alternative to the treatment of dictionaries as tuples of values in the previous chapter, we give a representation which guarantees that the translation of each member function definition requires at most one dictionary parameter.
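As a hedged model of this idea (not Gofer's actual runtime representation), a dictionary can package the methods of a class together with the dictionaries of its superclasses, so that a single parameter suffices:

    -- Dictionaries as ordinary data structures (illustrative names):
    data EqD a  = EqD  { eq :: a -> a -> Bool }
    data OrdD a = OrdD { superEq :: EqD a, lt :: a -> a -> Bool }

    -- Translation of: smaller x y = if x < y then x else y
    -- One OrdD dictionary suffices; equality, if needed, is reached through
    -- the superclass field rather than through a second parameter:
    smaller :: OrdD a -> a -> a -> a
    smaller d x y = if lt d x y then x else y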
The principal aim of this chapter is to show how the concept of evidence can be used to give a semantics for OML programs with implicit overloading.
Outline of chapter
We begin by describing a version of the polymorphic λ-calculus called OP that includes the constructs for evidence application and abstraction described in the previous chapter (Section 5.1). One of the main uses of OP is as the target of a translation from OML, with the semantics of each OML term defined by those of its translation. In Section 5.2 we show how the OML typing derivations for a term E can be interpreted as OP derivations for terms with explicit overloading, each of which is a potential translation for E. It is immediate from this construction that every well-typed OML term has a translation and that all translations obtained in this way are well-typed in OP.
Given that each OML typing typically has many distinct derivations, it follows that there will also be many distinct translations for a given term, and it is not clear which should be chosen to represent the original term. The OP term corresponding to the derivation produced by the type inference algorithm in Section 3.4 gives one possible choice, but it seems rather unnatural to base a definition of semantics on any particular type inference algorithm. A better approach is to show that any two translations of a term are semantically equivalent, so that an implementation is free to use whichever translation is more convenient in a particular situation while retaining the same, well-defined semantics.
This chapter expands on the implementation of type classes in Haskell using dictionary values, as proposed by Wadler and Blott (1989) and sketched in Section 4.5. For brevity, we refer to this approach to the use of type classes as HTC. The main emphasis in this chapter is on concrete implementation and we adopt a less rigorous approach to formal properties of HTC than in previous chapters. In particular, we describe a number of optimisations that are necessary to obtain an efficient implementation of HTC, i.e. to minimise the cost of overloading. We do not consider the more general problems associated with the efficient implementation of non-strict functional languages like Haskell, which are beyond the scope of this thesis.
Section 7.1 describes an important restriction in the Haskell system of type classes: only a particularly simple form of predicate expression can be used in the type signature of an overloaded function. The set of predicates in a Haskell type signature is usually referred to as the context, and hence we use the term context reduction to describe the process of reducing the context to an acceptable form. Context reduction usually results in a small context, acts as a partial check of satisfiability and helps to guarantee the decidability of predicate entailment. Unfortunately, it can also interfere with the use of data abstraction and limits the possibilities for extending the Haskell system of type classes.
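For example (a hedged illustration of our own):

    -- The use of (==) at type [[a]] in the body generates the predicate
    -- Eq [[a]], which is not in the simple form (a class name applied to a
    -- type variable) required in a Haskell signature.  Context reduction
    -- applies the instance `instance Eq a => Eq [a]' twice, reducing the
    -- context to the acceptable form Eq a:
    headEq :: Eq a => [[a]] -> [[a]] -> Bool
    headEq xss yss = take 1 xss == take 1 yss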
The main ideas used in the implementation of HTC are described in Section 7.2, including the treatment of default definitions, which were omitted from our previous descriptions.
In this thesis we have developed a general formulation of overloading based on the use of qualified types. Applications of qualified types can be described by choosing an appropriate system of predicates and we have illustrated this with particular examples including Haskell type classes, explicit subtyping and extensible records. We have shown how these ideas can be extended to construct a system that combines ML-style polymorphism and overloading in an implicitly typed programming language. Using the concept of evidence we have extended this work to describe the semantics of overloading in this language, establishing sufficient conditions to guarantee that the meaning of a given term is well-defined. Finally, we have described techniques that can be used to obtain efficient concrete implementations of systems based on this framework.
From a theoretical perspective, some of the main contributions of this thesis are:
The formulation of a general purpose system that can be used to describe a number of different applications of overloading.
The extension of standard results, for example the existence of principal types, to the type system of OML.
A new approach to the proof of coherence, based on the use of conversions.
From a practical perspective, we mention:
The implementation of overloading using the template-based approach, and the closely related implementation of type class overloading in Gofer.
A new implementation for extensible records, based on the use of evidence.
The use of information about satisfiability of predicate sets to obtain more informative inferred types.
The key feature of a system of qualified types that distinguishes it from other systems based solely on parametric polymorphism is the use of a language of predicates to describe sets of types (or more generally, relations between types). Exactly which sets of types and relations are useful will (of course) vary from one application to another and it does not seem appropriate to base a general theory on any particular choice. Our solution, outlined in this chapter, is to work in a framework where the properties of a (largely unspecified) language of predicates are described in terms of an entailment relation that is expected to satisfy a few simple laws. In this way, we are able to treat the choice of a language of predicates as a parameter for each of the type systems described in subsequent chapters. This approach also has the advantage that it enables us to investigate how the properties of particular type systems are affected by properties of the underlying systems of predicates.
The basic notation for predicates and entailment is outlined in Section 2.1. The remaining sections illustrate this general framework with applications to: Haskell-style type classes (Section 2.2), subtyping (Section 2.3) and extensible records (Section 2.4). Although we consider each of these examples independently, this work opens up the possibility of combining elements of each in a single concrete programming language.
Basic definitions
For much of this thesis we deal with an abstract language of predicates on types.
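As a rough model (the laws paraphrase the framework of this chapter; the Haskell encoding and names are ours):

    -- Types, and predicates on types; a predicate such as IsIn "Eq" t
    -- asserts membership of the type t in the class named "Eq":
    data Type = TVar String | TCon String [Type]   deriving Eq
    data Pred = IsIn String Type                   deriving Eq

    -- An entailment relation P ||- Q between finite sets of predicates is
    -- expected to satisfy a few simple laws: monotonicity (if Q is a subset
    -- of P then P ||- Q), transitivity, and closure under substitution of
    -- types for type variables.  The smallest example satisfies just the
    -- first of these:
    entails :: [Pred] -> [Pred] -> Bool
    entails ps qs = all (`elem` ps) qs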
Many programming languages rely on the use of a system of types to distinguish between different kinds of value. This in turn is used to identify two classes of program: those which are well-typed and accepted by the type system, and those that it rejects. Many different kinds of type system have been considered but, in each case, the principal benefits are the same:
The ability to detect program errors at compile time: A type discipline can often help to detect simple program errors such as passing an inappropriate number of parameters to a function.
Improved performance: If, by means of the type system, it is possible to ensure that the result of a particular calculation will always be of a certain type, then it is possible to omit the corresponding runtime checks that would otherwise be needed before using that value. The resulting program will typically be slightly shorter and faster.
Documentation: The types of the values defined in a program are often useful as a simple form of documentation. Indeed, in some situations, just knowing the type of an object can be enough to deduce properties about its behaviour (Wadler, 1989).
The main disadvantage is that no effective type system is complete; there will always be programs that are rejected by the type system, even though they would have produced well-defined results if executed without consideration of the types of the terms involved.