Types are the central organizing principle of the theory of programming languages. In this innovative book, Professor Robert Harper offers a fresh perspective on the fundamentals of these languages through the use of type theory. Whereas most textbooks on the subject emphasize taxonomy, Harper instead emphasizes genetics, examining the building blocks from which all programming languages are constructed. Language features are manifestations of type structure. The syntax of a language is governed by the constructs that define its types, and its semantics is determined by the interactions among those constructs. The soundness of a language design – the absence of ill-defined programs – follows naturally. Professor Harper's presentation is simultaneously rigorous and intuitive, relying on elementary mathematics. The framework he outlines scales easily to a rich variety of language concepts and is directly applicable to their implementation. The result is a lucid introduction to programming theory that is both accessible and practical.
The inductive and the coinductive types are two important forms of recursive type. Inductive types correspond to least, or initial, solutions of certain type isomorphism equations, and coinductive types correspond to their greatest, or final, solutions. Intuitively, the elements of an inductive type are those that may be obtained by a finite composition of its introductory forms. Consequently, if we specify the behavior of a function on each of the introductory forms of an inductive type, then its behavior is determined for all values of that type. Such a function is called a recursor, or catamorphism. Dually, the elements of a coinductive type are those that behave properly in response to a finite composition of its elimination forms. Consequently, if we specify the behavior of an element on each elimination form, then we have fully specified that element as a value of that type. Such an element is called a generator, or anamorphism.
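As a concrete illustration, the following is a minimal Haskell sketch of this duality: finite lists with their recursor (a catamorphism) and infinite streams with a generator (an anamorphism). The names cataList, Stream, anaStream, and nats are illustrative, not taken from the text.

```haskell
-- Lists are inductive: every finite list is a finite composition of the
-- introductory forms [] and (:), so a case for each determines a function
-- on all lists (a recursor, or catamorphism).
cataList :: r -> (a -> r -> r) -> [a] -> r
cataList nil _    []       = nil
cataList nil cons (x : xs) = cons x (cataList nil cons xs)

-- Streams are coinductive: an element is determined by its responses to
-- the elimination forms shd and stl (a generator, or anamorphism).
data Stream a = Cons { shd :: a, stl :: Stream a }

anaStream :: (s -> a) -> (s -> s) -> s -> Stream a
anaStream h t seed = Cons (h seed) (anaStream h t (t seed))

-- Example: the stream of natural numbers, generated from the seed 0.
nats :: Stream Integer
nats = anaStream id (+ 1) 0
```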
Motivating Examples
The most important example of an inductive type is the type of natural numbers as formalized in Chapter 9. The type nat is defined to be the least type containing z and closed under s(−). The minimality condition is witnessed by the existence of the recursor, iter e {z ⇒ e₀ ∣ s(x) ⇒ e₁}, which transforms a natural number into a value of type τ, given its value for zero, and a transformation from its value on a number to its value on the successor of that number.
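A hedged Haskell rendering of this recursor, with an assumed datatype Nat standing in for nat and iterNat standing in for iter e {z ⇒ e₀ ∣ s(x) ⇒ e₁}:

```haskell
-- A sketch of the recursor for nat; the names Nat, Z, S, and iterNat are
-- illustrative stand-ins for the text's notation.
data Nat = Z | S Nat

-- The value at zero and a transformation from the value at a number to
-- the value at its successor determine the result for every natural number.
iterNat :: t -> (t -> t) -> Nat -> t
iterNat e0 _  Z     = e0
iterNat e0 e1 (S n) = e1 (iterNat e0 e1 n)

-- Example: doubling, by iterating "add two" starting from zero.
double :: Nat -> Nat
double = iterNat Z (S . S)
```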
The dynamics of a language is a description of how programs are to be executed. The most important way to define the dynamics of a language is by the method of structural dynamics, which defines a transition system that inductively specifies the step-by-step process of executing a program. Another method for presenting dynamics, called contextual dynamics, is a variation of structural dynamics in which the transition rules are specified in a slightly different manner. An equational dynamics presents the dynamics of a language equationally by a collection of rules for deducing when one program is definitionally equal to another.
Transition Systems
A transition system is specified by the following four forms of judgment:
s state, asserting that s is a state of the transition system,
s final, where s state, asserting that s is a final state,
s initial, where s state, asserting that s is an initial state,
s ↦ s′, where s state and s′ state, asserting that state s may transition to state s′.
In practice we always arrange things so that no transition is possible from a final state: If s final, then there is no s′ state such that s ↦ s′. A state from which no transition is possible is sometimes said to be stuck. Whereas all final states are, by convention, stuck, there may be stuck states in a transition system that are not final.
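The following Haskell sketch models these judgments for a hypothetical two-state system; step plays the role of the transition judgment s ↦ s′, a Nothing result marks a stuck state, and final picks out the final states. All of the names are invented for the example.

```haskell
-- A toy transition system realizing the four judgments over a
-- hypothetical state type.
data State = Add Int Int | Done Int
  deriving Show

step :: State -> Maybe State          -- s |-> s'
step (Add m n) = Just (Done (m + n))
step (Done _)  = Nothing              -- no transition possible: stuck

final :: State -> Bool                -- s final
final (Done _) = True
final _        = False

-- Iterating step from an initial state yields the execution trace. Every
-- final state is stuck by convention; a richer system could also contain
-- stuck states that are not final (run-time errors).
run :: State -> [State]
run s = s : maybe [] run (step s)
```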
Many programs can be seen as instances of a general pattern applied to a particular situation. Very often the pattern is determined by the types of the data involved. For example, in Chapter 9 the pattern of computing by recursion over a natural number is isolated as the defining characteristic of the type of natural numbers. This concept will itself emerge as an instance of the concept of type-generic, or just generic, programming.
Suppose that we have a function f of type ρ → ρ′ that transforms values of type ρ into values of type ρ′. For example, f might be the doubling function on natural numbers. We wish to extend f to a transformation from type [ρ/t]τ to type [ρ′/t]τ by applying f to various spots in the input where a value of type ρ occurs to obtain a value of type ρ′, leaving the rest of the data structure alone. For example, τ might be bool × ρ, in which case f could be extended to a function of type bool × ρ → bool × ρ′ that sends the pair ⟨a, b⟩ to the pair ⟨a, f(b)⟩.
This example glosses over a significant problem of ambiguity of the extension. Given a function f of type ρ → ρ′, it is not obvious in general how to extend it to a function mapping [ρ/t]τ to [ρ′/t]τ.
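The ambiguity is resolved by making the type operator t.τ itself explicit, rather than working only from its instances; in Haskell terms the operator corresponds to a Functor and the extension to fmap. The sketch below carries out the bool × ρ example, using the illustrative wrapper BoolPair.

```haskell
-- The type operator t.(bool x t), represented as a Haskell Functor; the
-- extension of f acts only where the operator's variable occurs.
newtype BoolPair t = BoolPair (Bool, t)
  deriving Show

instance Functor BoolPair where
  fmap f (BoolPair (a, b)) = BoolPair (a, f b)   -- <a, b> |-> <a, f(b)>

-- Doubling on the natural numbers, extended from rho to bool x rho.
double :: Integer -> Integer
double = (* 2)

example :: BoolPair Integer
example = fmap double (BoolPair (True, 3))       -- BoolPair (True, 6)
```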
Parallel computation seeks to reduce the running times of programs by allowing many computations to be carried out simultaneously. For example, if we wish to add two numbers, each given by a complex computation, we may consider evaluating the addends simultaneously, then computing their sum. The ability to exploit parallelism is limited by the dependencies among parts of a program. Obviously, if one computation depends on the result of another, then we have no choice but to execute them sequentially so that we may propagate the result of the first to the second. Consequently, the fewer dependencies among subcomputations, the greater the opportunities for parallelism. This argues for functional models of computation, because the possibility of mutation of shared assignables imposes sequentialization constraints on imperative code.
In this chapter we discuss nested parallelism in which we nest parallel computations within one another in a hierarchical manner. Nested parallelism is sometimes called fork-join parallelism to emphasize the hierarchical structure arising from forking two (or more) parallel computations, then joining these computations to combine their results before proceeding. We consider two forms of dynamics for nested parallelism. The first is a structural dynamics in which a single transition on a compound expression may involve multiple transitions on its constituent expressions. The second is a cost dynamics (introduced in Chapter 7) that focuses attention on the sequential and parallel complexity (also known as the work and depth) of a parallel program by associating a series-parallel graph with each computation.
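As a small illustration of fork-join parallelism for the addition example above, here is a hedged Haskell sketch using par and pseq from the parallel package's Control.Parallel module; the addend computations are invented for the example.

```haskell
import Control.Parallel (par, pseq)

addend1, addend2 :: Integer
addend1 = sum [1 .. 1000000]     -- first "complex computation"
addend2 = sum [1 .. 2000000]     -- second "complex computation"

-- Fork the evaluation of addend1 alongside addend2, then join by summing.
-- The two subcomputations are independent, so no data dependency forces
-- them to be sequentialized.
parSum :: Integer
parSum = addend1 `par` (addend2 `pseq` (addend1 + addend2))

main :: IO ()
main = print parSum
```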
An interface is a contract that specifies the rights of a client and the responsibilities of an implementor. Being a specification of behavior, an interface is a type. In principle any type may serve as an interface, but in practice it is usual to structure code into modules consisting of separable and reusable components. An interface specifies the behavior of a module expected by a client and imposed on the implementor. It is the fulcrum on which is balanced the tension between separability and integration. As a rule, a module should have a well-defined behavior that can be understood separately, but it is equally important that it be easy to combine modules to form an integrated whole.
A fundamental question is, what is the type of a module? That is, what form should an interface take? One long-standing idea is for an interface to be a labeled tuple of functions and procedures with specified types. The types of the fields of the tuple are traditionally called function headers, because they summarize the call and return types of each function. Using interfaces of this form is called procedural abstraction, because it limits the dependencies between modules to a specified set of procedures. We may think of the fields of the tuple as being the instruction set of an abstract machine. The client makes use of these instructions in its code, and the implementor agrees to provide their implementations.
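A hedged Haskell sketch of an interface as a labeled tuple of typed fields: the hypothetical Counter record is the interface, its field types are the "function headers", simpleCounter is one implementor, and client is code written against the interface alone. All names are invented for the example.

```haskell
-- The interface: a labeled tuple of functions with specified types,
-- playing the role of the abstract machine's instruction set.
data Counter = Counter
  { new       :: Int          -- create a fresh counter value
  , increment :: Int -> Int   -- advance the counter
  , current   :: Int -> Int   -- observe the counter
  }

-- One implementor of the interface; another could be substituted freely
-- without changing the client.
simpleCounter :: Counter
simpleCounter = Counter { new = 0, increment = (+ 1), current = id }

-- A client programmed against the interface, not the implementation.
client :: Counter -> Int
client c = current c (increment c (increment c (new c)))
```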
We saw in Chapter 17 that an untyped language may be viewed as a unityped language in which the so-called untyped terms are terms of a distinguished recursive type. In the case of the untyped λ-calculus this recursive type has a particularly simple form, expressing that every term is isomorphic to a function. Consequently, no run-time errors can occur that are due to the misuse of a value—the only elimination form is application, and its first argument can only be a function. This property breaks down once more than one class of value is permitted into the language. For example, if we add natural numbers as a primitive concept to the untyped λ-calculus (rather than defining them via Church encodings), then it is possible to incur a run-time error arising from attempting to apply a number to an argument or to add a function to a number. One school of thought in language design is to turn this vice into a virtue by embracing a model of computation that has multiple classes of value of a single type. Such languages are said to be dynamically typed, in purported opposition to statically typed languages. But the supposed opposition is illusory: Just as the so-called untyped λ-calculus turns out to be unityped, so dynamic languages turn out to be but restricted forms of static language. This remark is so important it bears repeating: Every dynamic language is inherently a static language in which we confine ourselves to a (needlessly) restricted type discipline to ensure safety.
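The unityped view can be sketched in Haskell by collapsing the values of a hypothetical dynamic language into one recursive sum type; the names Dyn, DNum, and DFun below are illustrative. Run-time errors arise exactly from eliminating a value at the wrong class.

```haskell
-- One type of dynamic values, with two classes.
data Dyn
  = DNum Integer          -- the class of numbers
  | DFun (Dyn -> Dyn)     -- the class of functions

apply :: Dyn -> Dyn -> Dyn
apply (DFun f) v = f v
apply (DNum _) _ = error "cannot apply a number"      -- run-time error

add :: Dyn -> Dyn -> Dyn
add (DNum m) (DNum n) = DNum (m + n)
add _        _        = error "cannot add a function" -- run-time error
```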
In Chapters 12 and 25 we investigated the use of sums for the classification of values of disparate type. Every value of a classified type is labeled with a symbol that determines the type of the instance data. A classified value is decomposed by pattern matching against a known class, which reveals the type of the instance data.
Under this representation the possible classes of an object are fully determined statically by its type. However, it is sometimes useful to allow the possible classes of a data value to be determined dynamically. There are many uses for such a capability, some less apparent than others. The most obvious is simply extensibility: we may wish to introduce new classes of data during execution (and, presumably, define how methods act on values of those new classes).
A less obvious application exploits the fact that the new class is guaranteed to be distinct from any other class that has already been introduced. The class itself is a kind of “secret” that can be disclosed only if the computation that creates the class discloses its existence to another computation. In particular, the class is opaque to any computation to which this disclosure has not been explicitly made. This capability has a number of practical applications.
One application is to use dynamic classification as a “perfect encryption” mechanism that guarantees that a value cannot be determined without access to the appropriate “keys.”
Modernized Algol, or ℒ{nat cmd ⇀}, is an imperative, block-structured programming language based on the classic language Algol. ℒ{nat cmd ⇀} may be seen as an extension to ℒ{nat ⇀} with a new syntactic sort of commands that act on assignables by retrieving and altering their contents. Assignables are introduced by declaring them for use within a specified scope; this is the essence of block structure. Commands may be combined by sequencing and may be iterated by recursion.
ℒ{nat cmd ⇀} maintains a careful separation between pure expressions, whose meaning does not depend on any assignables, and impure commands, whose meaning is given in terms of assignables. This ensures that the evaluation order for expressions is not constrained by the presence of assignables in the language, and allows for expressions to be manipulated, much as in PCF. Commands, on the other hand, have a tightly constrained execution order, because the execution of one may affect the meaning of another.
A distinctive feature of ℒ{nat cmd ⇀} is that it adheres to the stack discipline, which means that assignables are allocated on entry to the scope of their declaration, and deallocated on exit, using a conventional stack discipline. This avoids the need for more complex forms of storage management, at the expense of reducing the expressiveness of the language.
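A hedged Haskell sketch of the resulting abstract syntax, with illustrative constructor names rather than the text's concrete syntax, showing the separation of pure expressions from impure commands and the scoped declaration of assignables:

```haskell
type Var        = String   -- variables of the pure expression language
type Assignable = String   -- assignables form a separate syntactic sort

-- Pure expressions: their meaning does not depend on assignables.
data Exp
  = EVar Var
  | ENat Integer
  | EPlus Exp Exp
  | ECmd Cmd                 -- an encapsulated, unexecuted command

-- Impure commands: retrieve and alter assignables in a fixed order.
data Cmd
  = Ret Exp                  -- return the value of an expression
  | Bnd Var Exp Cmd          -- sequencing: execute, bind the result, continue
  | Get Assignable           -- retrieve the contents of an assignable
  | Set Assignable Exp       -- alter the contents of an assignable
  | Dcl Assignable Exp Cmd   -- declare an assignable for the scope of a
                             -- command; deallocated on exit (stack discipline)
```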
Basic Commands
The syntax of the language ℒ{nat cmd ⇀} of Modernized Algol distinguishes pure expressions from impure commands.