This chapter introduces elementary definitions, concepts and results concerning arbitrary Petri nets. We start with a short section on mathematical notation. Section 2.2 is devoted to the definition and properties of nets, markings, the occurrence rule and incidence matrices. Section 2.3 defines net systems as nets with a distinguished initial marking. We give formal definitions of some behavioural properties of systems: liveness, deadlock-freedom, place-liveness, boundedness. Section 2.4 introduces S- and T-invariants, an analysis technique used throughout the book. The relationship between these invariants and the behavioural properties of Section 2.3 is discussed.
The chapter includes six simple but important results, which are used very often in later chapters. They are the Monotonicity, Marking Equation, Exchange, Boundedness, and Reproduction Lemmas, and the Strong Connectedness Theorem. We encourage the reader to become familiar with them before moving to the next chapters.
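The marking equation mentioned above can be illustrated with a small executable sketch. The net below (a two-place cycle) and all names in it are our own illustration, not taken from the book; the point is only that firing transitions counted in a vector x transforms a marking m0 into m0 + C·x, where C is the incidence matrix.

```haskell
-- A minimal sketch of the marking equation, under our own encoding:
-- a net is given by its incidence matrix C, with one row per place
-- and one column per transition.
type Marking   = [Int]    -- token count per place
type Incidence = [[Int]]  -- C, indexed as row (place), column (transition)

-- Hypothetical example net: a single token circulating between two
-- places. Transition t1 moves the token from place 1 to place 2,
-- and t2 moves it back.
c :: Incidence
c = [ [-1,  1]
    , [ 1, -1] ]

-- Marking equation: firing the transitions counted in the vector x
-- from marking m0 yields m = m0 + C * x.
fire :: Incidence -> Marking -> [Int] -> Marking
fire cm m0 x = zipWith (+) m0 [ sum (zipWith (*) row x) | row <- cm ]
```

For instance, firing t1 once from the marking [1,0] gives fire c [1,0] [1,0], i.e. the marking [0,1], and firing both transitions once returns to [1,0].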
Mathematical preliminaries
We use the standard definitions concerning sets, numbers, relations, sequences, vectors and matrices. The purpose of this section is to fix some additional notation.
Notation 2.1 (Sets, numbers, relations)
Let X and Y be sets. We write X ⊆ Y if X is a subset of Y, including the case X = Y. X ⊂ Y denotes that X is a proper subset of Y, i.e., X ⊆ Y and X ≠ Y. X\Y denotes the set of elements of X that do not belong to Y. |X| denotes the cardinality of X.
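The notation above is purely mathematical, but for readers who prefer an executable reference, the same operations are available in Haskell's Data.Set (a sketch of correspondences, not part of the book's notation):

```haskell
import qualified Data.Set as S

-- X ⊆ Y, X ⊂ Y, X\Y and |X| rendered with Data.Set.
subsetOf, properSubsetOf :: Ord a => S.Set a -> S.Set a -> Bool
subsetOf       = S.isSubsetOf
properSubsetOf = S.isProperSubsetOf

-- Hypothetical example sets:
x, y :: S.Set Int
x = S.fromList [1, 2]
y = S.fromList [1, 2, 3]
```

Here x `subsetOf` y and x `properSubsetOf` y both hold, S.difference y x is the set {3}, and S.size y gives |Y| = 3.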
Free-choice Petri nets have been around for more than twenty years, and are a successful branch of net theory. Nearly all the introductory texts on Petri nets devote some pages to them. This book is intended for those who wish to go further. It brings together the classical theorems of free-choice theory obtained by Commoner and Hack in the seventies, and a selection of new results, like the Rank Theorem, which were previously scattered among papers, reports and theses, some of them difficult to access.
Much of the recent research which found its way into the book was funded by the ESPRIT II BRA Action DEMON, and the ESPRIT III Working Group CALIBAN. The book is self-contained, in the sense that no previous knowledge of Petri nets is required. We assume that the reader is familiar with naïve set theory and with some elementary notions of graph theory (e.g. path, circuit, strong connectedness) and linear algebra (e.g. linear independence, rank of a matrix). One result of Chapter 4 requires some knowledge of the theory of NP-completeness.
The book can be the subject of an undergraduate course of one semester if the proofs of the most difficult theorems are omitted. If they are included, we suggest the course be restricted to Chapters 1 through 5, which contain most of the classical results on S- and T-systems and free-choice Petri nets. A postgraduate course could cover the whole book.
All chapters are accompanied by a list of exercises.
This chapter describes a number of features that might be useful in practical work with qualified types. We adopt a less rigorous approach than in previous chapters and we do not attempt to deal with all of the technical issues that are involved.
Section 6.1 suggests a number of techniques that can be used to reduce the size of the predicate set in the types calculated by the type inference algorithm, resulting in smaller types that are often easier to understand. As a further benefit, the number of evidence parameters in the translation of an overloaded term may also be reduced, leading to a potentially more efficient implementation.
Section 6.2 shows how information about the satisfiability of predicate sets may be used to infer more accurate typings for some terms, and to reject others for which suitable evidence values cannot be produced.
Finally, Section 6.3 discusses the possibility of adding the rule of subsumption to the type system of OML to allow the use of implicit coercions from one type to another within a given term.
It would also be useful to consider the task of extending the language of OML terms with constructs that correspond more closely to those of concrete programming languages, such as recursion, groups of local bindings and the use of explicit type signatures. One example where these features have been dealt with is in the proposed static semantics for Haskell given in (Peyton Jones and Wadler, 1992) but, for reasons of space, we do not consider this here.
This chapter describes an ML-like language (i.e. implicitly typed λ-calculus with local definitions) and extends the framework of (Milner, 1978; Damas and Milner, 1982) with support for overloading using qualified types and an arbitrary system of predicates of the form described in the previous chapter. The resulting system retains the flexibility of the ML type system, while allowing more accurate descriptions of the types of objects. Furthermore, we show that this approach is suitable for use in a language based on type inference, in contrast for example with more powerful languages such as the polymorphic λ-calculus that require explicit type annotations.
Section 3.1 introduces the basic type system and Section 3.2 describes an ordering on types, used to determine when one type is more general than another. This is used to investigate the properties of polymorphic types in the system.
The development of a type inference algorithm is complicated by the fact that there are many ways in which the typing rules in our original system can be applied to a single term, and it is not clear which of these (if any!) will result in an optimal typing. As an intermediate step, Section 3.3 describes a syntax-directed system in which the choice of typing rules is completely determined by the syntactic structure of the term involved, and investigates its relationship to the original system. Exploiting this relationship, Section 3.4 presents a type inference algorithm for the syntax-directed system which can then be used to infer typings in the original system.
One of the main goals in preparing this book for publication was to preserve the thesis, as much as possible, in the form that it was originally submitted. With this in mind, we have restricted ourselves to making only very minor changes to the body of the thesis, for example, correcting typographical errors.
On the other hand, we have continued to work with the ideas presented here, to find new applications and to investigate some of the areas identified as topics for further research. In this short chapter, we comment briefly on some examples of this, illustrating both the progress that has been made and some of the new opportunities for further work that have been exposed.
We should emphasize once again that this is the only chapter that was not included as part of the original thesis.
Constructor classes
The initial ideas for a system of constructor classes as sketched in Section 9.2 have been developed in (Jones, 1993b), and full support for these ideas is now included in the standard Gofer distribution (versions 2.28 and later). The two main technical extensions of the system of constructor classes over the work described here are:
The use of kind inference to determine suitable kinds for all the user-defined type constructors appearing in a given program.
The extension of the unification algorithm to ensure that it calculates only kind-preserving substitutions. This is necessary to ensure soundness and is dealt with by ensuring that constructor variables are only ever bound to constructors of the corresponding kind. Fortunately, this has a very simple and efficient implementation.
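Both extensions can be illustrated with a small example. The class below is our own paraphrase of the standard Functor class, not code from the thesis; kind inference determines from the member signature that the class variable f must have kind * -> *, and kind-preserving unification ensures f is only ever bound to constructors of that kind.

```haskell
-- A constructor class in the style of (Jones, 1993b). From the way
-- f is applied in the signature of myFmap, kind inference assigns
-- f the kind * -> *.
class MyFunctor f where
  myFmap :: (a -> b) -> f a -> f b

-- Instances at constructors of the inferred kind:
instance MyFunctor Maybe where
  myFmap _ Nothing  = Nothing
  myFmap g (Just x) = Just (g x)

instance MyFunctor [] where
  myFmap = map

-- An attempt to write "instance MyFunctor Int" would be rejected,
-- since Int has kind * rather than * -> *: this is exactly the
-- soundness condition enforced by kind-preserving unification.
```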
While the results of the preceding chapter provide a satisfactory treatment of type inference with qualified types, we have not yet made any attempt to discuss the semantics or evaluation of overloaded terms. For example, given a generic equality operator (==) of type ∀a.Eq a ⇒ a → a → Bool and integer-valued expressions E and F, we can determine that the expression E == F has type Bool in any environment which satisfies Eq Int. However, this information is not sufficient to determine the value of E == F; this is only possible if we are also provided with the value of the equality operator which makes Int an instance of Eq.
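The role of evidence in this example can be sketched concretely. The names below (EqD, eqInt, eqExpr) are our own illustration: the evidence for a predicate Eq a is simply the equality function at type a, and the overloaded term E == F is translated into a term that applies whatever evidence it is given.

```haskell
-- Evidence for the predicate Eq a, in this sketch, is an equality
-- function at type a.
type EqD a = a -> a -> Bool

-- The evidence that makes Int an instance of Eq:
eqInt :: EqD Int
eqInt = (==)

-- A translation of the overloaded term E == F: the evidence is
-- passed as an explicit extra parameter and applied directly.
eqExpr :: EqD a -> a -> a -> Bool
eqExpr evEq e f = evEq e f
```

Without the evidence parameter, eqExpr has no way to compare its arguments; supplying eqInt recovers the intended meaning, e.g. eqExpr eqInt 2 (1+1) evaluates to True.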
Our aim in the next two chapters is to present a general approach to the semantics and implementation of objects with qualified types based on the concept of evidence. The essential idea is that an object of type π ⇒ σ can only be used if we are also supplied with suitable evidence that the predicate π does indeed hold. In this chapter we concentrate on the role of evidence for the systems of predicates described in Chapter 2 and then, in the following chapter, extend the results of Chapter 3 to give a semantics for OML.
As an introduction, Section 4.1 describes some simple techniques used in the implementation of particular forms of overloading and shows why these methods are unsuitable for the more general systems considered in this thesis.
This chapter describes GTC, an alternative approach to the use of type classes that avoids the problems associated with context reduction, while retaining much of the flexibility of HTC. In addition, GTC benefits from a remarkably clean and efficient implementation that does not require sophisticated compile-time analysis or transformation. As in the previous chapter we concentrate more on implementation details than on formal properties of GTC.
An early description of GTC was distributed to the Haskell mailing list in February 1991 and subsequently used as a basis for Gofer, a small experimental system based on Haskell and described in (Jones, 1991c). The two languages are indeed very close, and many programs that are written with one system in mind can be used with the other with little or no change. On the other hand, the underlying type systems are slightly different: using explicit type signature declarations it is possible to construct examples that are well typed in one but not in the other.
Section 8.1 describes the basic principles of GTC and its relationship to HTC. The only significant differences between the two systems are in the methods used to simplify the context part of an inferred type. While HTC relies on the use of context reduction, GTC adopts a weaker form of simplification that does not make use of the information provided in instance declarations.
Section 8.2 describes the implementation of dictionaries used in the current version of Gofer. As an alternative to the treatment of dictionaries as tuples of values in the previous chapter, we give a representation which guarantees that the translation of each member function definition requires at most one dictionary parameter.
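One simple way to picture a dictionary, under our own naming and ignoring the superclass components that a real implementation must also carry, is as a record holding the member functions of a class. Each member definition then needs at most the single dictionary for the class it belongs to:

```haskell
-- A sketch of dictionaries as records of member functions.
-- EqDict and its fields are hypothetical names, not Gofer's actual
-- representation, which is described in Section 8.2.
data EqDict a = EqDict
  { eq  :: a -> a -> Bool
  , neq :: a -> a -> Bool
  }

-- The dictionary witnessing the Eq Int instance:
eqDictInt :: EqDict Int
eqDictInt = EqDict { eq = (==), neq = (/=) }

-- An overloaded function in translated form: one dictionary
-- parameter suffices, since every member it needs is a field.
member :: EqDict a -> a -> [a] -> Bool
member d x = any (eq d x)
```

Here member eqDictInt 2 [1,2,3] selects the eq field of the supplied dictionary and applies it, just as an implementation would after translation.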
The principal aim of this chapter is to show how the concept of evidence can be used to give a semantics for OML programs with implicit overloading.
Outline of chapter
We begin by describing a version of the polymorphic λ-calculus called OP that includes the constructs for evidence application and abstraction described in the previous chapter (Section 5.1). One of the main uses of OP is as the target of a translation from OML with the semantics of each OML term being defined by those of its translation. In Section 5.2 we show how the OML typing derivations for a term E can be interpreted as OP derivations for terms with explicit overloading, each of which is a potential translation for E. It is immediate from this construction that every well-typed OML term has a translation and that all translations obtained in this way are well-typed in OP.
Given that each OML typing typically has many distinct derivations it follows that there will also be many distinct translations for a given term and it is not clear which should be chosen to represent the original term. The OP term corresponding to the derivation produced by the type inference algorithm in Section 3.4 gives one possible choice but it seems rather unnatural to base a definition of semantics on any particular type inference algorithm. A better approach is to show that any two translations of a term are semantically equivalent so that an implementation is free to use whichever translation is more convenient in a particular situation while retaining the same, well-defined semantics.
This chapter expands on the implementation of type classes in Haskell using dictionary values as proposed by Wadler and Blott (1989) and sketched in Section 4.5. For brevity, we refer to this approach to the use of type classes as HTC. The main emphasis in this chapter is on concrete implementation and we adopt a less rigorous approach to formal properties of HTC than in previous chapters. In particular, we describe a number of optimisations that are necessary to obtain an efficient implementation of HTC, i.e. to minimise the cost of overloading. We do not consider the more general problems associated with the efficient implementation of non-strict functional languages like Haskell, which are beyond the scope of this thesis.
Section 7.1 describes an important aspect of the system of type classes in Haskell which means that only a particularly simple form of predicate expression can be used in the type signature of an overloaded function. The set of predicates in a Haskell type signature is usually referred to as the context and hence we will use the term context reduction to describe the process of reducing the context to an acceptable form. Context reduction usually results in a small context, acts as a partial check of satisfiability and helps to guarantee decidability of predicate entailment. Unfortunately, it can also interfere with the use of data abstraction and limits the possibilities for extending the Haskell system of type classes.
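A standard example of context reduction, using only instances we know exist in Haskell, is the derived instance Eq a => Eq [a]. A function whose body demands list equality initially acquires the context (Eq [a]); context reduction uses the instance declaration to replace it with the simpler (Eq a), which is the form Haskell requires in a signature:

```haskell
-- The body of elemList compares whole lists, so type inference
-- first produces the context (Eq [a]).
-- With the instance  Eq a => Eq [a]  in scope, context reduction
-- simplifies that context to (Eq a), giving the signature below.
elemList :: Eq a => [a] -> [[a]] -> Bool
elemList xs = any (== xs)
```

This also illustrates the drawback noted above: the reduction step depends on the instance declaration for lists, so if [a] were replaced by an abstract type, the reduced context would expose representation details that data abstraction was meant to hide.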
The main ideas used in the implementation of HTC are described in Section 7.2 including the treatment of default definitions which were omitted from our previous descriptions.
In this thesis we have developed a general formulation of overloading based on the use of qualified types. Applications of qualified types can be described by choosing an appropriate system of predicates and we have illustrated this with particular examples including Haskell type classes, explicit subtyping and extensible records. We have shown how these ideas can be extended to construct a system that combines ML-style polymorphism and overloading in an implicitly typed programming language. Using the concept of evidence we have extended this work to describe the semantics of overloading in this language, establishing sufficient conditions to guarantee that the meaning of a given term is well-defined. Finally, we have described techniques that can be used to obtain efficient concrete implementations of systems based on this framework.
From a theoretical perspective, some of the main contributions of this thesis are:
The formulation of a general purpose system that can be used to describe a number of different applications of overloading.
The extension of standard results, for example the existence of principal types, to the type system of OML.
A new approach to the proof of coherence, based on the use of conversions.
From a practical perspective, we mention:
The implementation of overloading using the template-based approach, and the closely related implementation of type class overloading in Gofer.
A new implementation for extensible records, based on the use of evidence.
The use of information about satisfiability of predicate sets to obtain more informative inferred types.