The aim of these notes is to explain how games can provide an intensional semantics for functional programming languages, and for a theory of proofs. From the point of view of program semantics, the rough idea is that we can move from modelling computable functions (which give the ‘extensional’ behaviour of programs) to modelling ‘intensional’ aspects of the algorithms themselves. In proof theory, the tradition has been to consider syntactic representations of (what are presumably intended to be ‘intensional’) proofs; so the idea is to give a more intrinsic account of a notion of proof.
Three main sections follow this Introduction. Section 2 deals with games and partial strategies; it includes a discussion of the application of these ideas to the modelling of algorithms. Section 3 is about games and total strategies; it runs parallel to the treatment in Section 2, and is quite compressed. Section 4 gives no more than an outline of more sophisticated notions of game, and discusses them as models for proofs. Exercises are scattered through the text.
I very much hope that the broad outline of these notes will be comprehensible on the basis of little beyond an understanding of sequences (lists) and trees. However, the statements of some results and some of the exercises presuppose a little knowledge of category theory, domain theory, and linear logic.
The “classical” paradigm for denotational semantics models data types as domains, i.e. structured sets of some kind, and programs as (suitable) functions between domains. The semantic universe in which the denotational modelling is carried out is thus a category with domains as objects, functions as morphisms, and composition of morphisms given by function composition. A sharp distinction is then drawn between denotational and operational semantics. Denotational semantics is often referred to as “mathematical semantics” because it exhibits a high degree of mathematical structure; this is in part achieved by the fact that denotational semantics abstracts away from the dynamics of computation—from time. By contrast, operational semantics is formulated in terms of the syntax of the language being modelled; it is highly intensional in character; and it is capable of expressing the dynamical aspects of computation.
The classical denotational paradigm has been very successful, but has some definite limitations. Firstly, fine-structural features of computation, such as sequentiality, computational complexity, and optimality of reduction strategies, have either not been captured at all denotationally, or not in a fully satisfactory fashion. Moreover, once languages with features beyond the purely functional are considered, the appropriateness of modelling programs by functions is increasingly open to question. Neither concurrency nor “advanced” imperative features such as local references have been captured denotationally in a fully convincing fashion.
Computational behaviours are often distributed, in the sense that they may be seen as spatially separated activities accomplishing a joint task. Many such systems are not meant to terminate, and hence it makes little sense to talk about their behaviours in terms of traditional input-output functions. Rather, we are interested in the behaviour of such systems in terms of their often complex patterns of stimuli/response relationships varying over time. For this reason such systems are often referred to as reactive systems.
Many structures for modelling reactive systems have been studied over the past 20 years. Here we present a few key models. Common to all of them is that they rest on an idea of atomic actions, over which the behaviour of a system is defined. The models differ mainly with respect to what behavioural features of systems are represented. Some models are more abstract than others, and this fact is often used in informal classifications of the models with respect to expressiveness. One of our aims is to present principal representatives of models, covering the landscape from the most abstract to the most concrete, and to formalise the nature of their relationships by explicitly representing the steps of abstraction that are involved in moving between them. In following through this programme, category theory is a convenient language for formalising the relationships between models.
To give an idea of the role categories play, let us focus attention on transition systems as a model of parallel computation.
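Before turning to the categorical structure, it may help to fix the basic object itself. The following is a minimal sketch of a labelled transition system: a set of states, an initial state, and a set of labelled transitions, with a check of whether a given sequence of actions can be performed. The vending-machine states and labels are invented for illustration, not taken from the text.

```python
class TransitionSystem:
    """A labelled transition system: initial state plus (source, label, target) triples."""

    def __init__(self, initial, transitions):
        self.initial = initial
        self.transitions = set(transitions)

    def successors(self, state, label):
        """States reachable from `state` by one `label`-transition."""
        return {t for (s, a, t) in self.transitions if s == state and a == label}

    def accepts(self, trace):
        """Can the system perform this sequence of labels from the initial state?"""
        frontier = {self.initial}
        for label in trace:
            frontier = {t for s in frontier for t in self.successors(s, label)}
            if not frontier:
                return False
        return True


# A tiny vending machine: insert a coin, then choose tea or coffee.
vm = TransitionSystem("idle", {
    ("idle", "coin", "paid"),
    ("paid", "tea", "idle"),
    ("paid", "coffee", "idle"),
})
assert vm.accepts(["coin", "tea", "coin", "coffee"])
assert not vm.accepts(["tea"])  # no drink before payment
```

Morphisms between such systems (maps of states and labels preserving the initial state and the transitions) are what turn the collection of transition systems into a category.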
Here is a sample of notations that might be useful to people who are considering Z. All are based on the discrete mathematics taught in Chapters 8 to 11.
In addition to Z itself, the Z family includes several object-oriented dialects including Object-Z, MooZ, OOZE, and Z++ [Stepney, Barden, and Cooper, 1992a; Stepney et al., 1992b; Lano and Haughton, 1993]. Some early contributors to Z went on to create a development method called B that includes a specification language and a tool for automating calculations and proofs [Lano and Haughton, 1995].
Of the other formal notations, VDM [Jones, 1990] is most similar to Z. Like Z, VDM is a model-based notation. You model a system by representing its state and a collection of operations that can change its state. VDM lacks the boxed paragraphs of Z and has nothing quite like the Z schema calculus. VDM stands for the Vienna Development Method. The VDM community emphasizes refinement, not just modelling. Z and VDM are compared in Hayes [1992b].
Combinations of conditions that define complex predicates can sometimes be made easier to grasp by presenting them in a two-dimensional tabular format. A particularly rigorous and comprehensive tabular notation was invented by Parnas and others [Parnas, 1994] and has been applied to nuclear reactor shutdown software. Leveson and colleagues invented a tabular notation called AND/OR tables and applied it to an aircraft collision avoidance system [Leveson et al., 1994].
This chapter presents a more realistic model for the graphical user interface we introduced in Chapter 6. It is based on the control console of a real medical device, but the same techniques can be applied to any system where the operator uses a pointing device such as a mouse to select items from on-screen windows and menus, and uses a keyboard to enter information into dialog boxes. Such facilities are provided by many software systems in wide use today, for example the X window system.
A graphical user interface is an example of a state transition system driven by events. This chapter explains how to model event-driven state transition systems in Z, and shows how to illustrate a Z text with a kind of state transition diagram called a statechart. This chapter also shows how to use Z to express designs that are partitioned into units or modules that are largely independent. In Z these units can include both data and the operations that act on it, so they can represent classes in object-oriented programming.
Events
A great advantage of a graphical user interface is that it allows the users to choose operations in whatever order makes the most sense to them; it does not force users through a fixed sequence determined by the designers. All operations are always potentially available, although some operations might have to be disabled at certain times.
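The idea of events that are always potentially available but sometimes disabled can be sketched as a small event-driven state transition system. The states and event names below are invented for illustration (and the sketch is in Python rather than Z): an event arriving in a state where it is disabled simply leaves the state unchanged.

```python
# (current state, event) -> next state; pairs not listed are disabled.
TRANSITIONS = {
    ("main", "open_dialog"): "dialog",
    ("dialog", "ok"): "main",
    ("dialog", "cancel"): "main",
}

def enabled(state):
    """The set of events the interface will respond to in `state`."""
    return {ev for (s, ev) in TRANSITIONS if s == state}

def step(state, event):
    """Apply one event; a disabled event is ignored."""
    return TRANSITIONS.get((state, event), state)


state = "main"
assert enabled(state) == {"open_dialog"}
state = step(state, "ok")            # "ok" is disabled in "main": ignored
assert state == "main"
state = step(state, "open_dialog")
assert state == "dialog"
```

A statechart presents the same information graphically: one node per state, one arrow per enabled (state, event) pair.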
This chapter teaches a practical method for writing code from Z specifications that supplements intuition and experience with formal derivation.
The preceding Chapters 26 and 27 on refinement and program derivation show how to get from Z to code by purely formal methods, where each development step is a formula manipulation. As you must have realized, it is rarely necessary to develop an entire system in this completely formal way. The programming problems that arise within a single project usually present a range of difficulty. Large parts of the project may be so routine that there is no need for any formal description other than the code itself. Only a portion requires specification in Z. In this portion, you might refine only a fraction to a detailed design in Z. And in this fraction you might derive and verify only a page or two of code. The rest is so obvious that it can be translated to code by intuition and then verified by inspection.
Nevertheless, you can choose a strategy for implementing Z that you could justify formally by the methods of Chapters 26 and 27 if you were challenged to do so. This chapter presents such a strategy. When you have a formal specification, you can check designs and code rigorously if doubts remain after informal inspection.
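As a small illustration of such a strategy (sketched here in Python for brevity, with an invented schema): a Z state schema becomes a data structure plus an explicit invariant-checking function, and each Z operation becomes a function whose precondition and invariant preservation can be checked at run time or verified by inspection.

```python
# Invented example: a "club" state schema with invariant |members| <= CAPACITY.
CAPACITY = 10

def invariant(members):
    """State invariant of the schema: at most CAPACITY members."""
    return len(members) <= CAPACITY

def join(members, person):
    """Z-style operation: add `person`, assuming there is room (precondition)."""
    assert invariant(members)          # state invariant holds before
    assert len(members) < CAPACITY     # operation precondition
    result = members | {person}
    assert invariant(result)           # invariant is preserved
    return result


club = set()
club = join(club, "alice")
assert club == {"alice"}
```

The assertions mirror the proof obligations of Chapters 26 and 27; in production code they can be retained as run-time checks or discharged once and removed.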
The examples in this chapter are in C. They could easily be adapted to other programming languages.
Formal methods are not project management methods, but some programmers fear that using formal methods would impose a burdensome and inflexible way of working. This chapter should dispel that misconception and reassure you that formal methods are compatible with many different development methods and management styles. This chapter discusses dividing projects into stages, learning users' requirements, translating informal requirements to formal specifications, and validating formal specifications.
Work in stages
There is one assumption that underlies all formal methods: A programming project is divided into stages, where each stage produces a product that can be examined, reviewed, and assessed for correctness and other qualities.
Three products that must be produced by almost any programming project are the specification, which describes the behavior of the product; the design, which describes its internal structure; and the code, which is executable and is expressed in some particular programming language. Most projects produce other products as well, such as manuals and other materials for instructing users, assurance evidence such as test plans and test results, and so forth.
Working in stages is a central concept in every systematic software development method. Formal methods add these innovations: express the specification and design (not just the code) in a formal notation, and use formula manipulations (such as calculation and proof) to derive the products and check that they are correct.
Experienced programmers are often skeptical of programming methods that proceed in stages.
Some problems present us with a large collection of facts and rules, but no underlying theory that we can use to design a compact algorithm for calculating a solution. Examples of such problems include medical diagnosis and treatment planning, scheduling jobs in a machine shop, diagnosis and repair of malfunctioning machinery, and determining customers' eligibility for financial credit. In these areas there are no simple first principles from which everything follows; instead, there are a lot of empirical observations and rules gleaned from hard experience or laid down by fiat. Sometimes you can find an acceptable solution by searching for relevant facts and applying the pertinent rules. Rule-based programming mechanizes this style of problem solving.
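The style of problem solving described above can be sketched as a minimal forward-chaining rule engine: facts are atoms, a rule pairs a set of premise facts with a concluded fact, and rules fire repeatedly until no new facts appear. The medical-style facts and rules are invented for illustration only.

```python
def forward_chain(facts, rules):
    """Repeatedly fire any rule whose premises all hold, until a fixed point."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts


rules = [
    ({"fever", "rash"}, "suspect_measles"),
    ({"suspect_measles"}, "order_blood_test"),
]
derived = forward_chain({"fever", "rash"}, rules)
assert "order_blood_test" in derived
```

Real rule-based systems add conflict resolution, variables in rules, and efficient matching, but the correctness question raised below applies already to this simple core.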
Rule-based programs are sometimes called expert systems and are said to display artificial intelligence, but they are just computer programs that employ some specialized techniques that have been found useful for certain kinds of problems. If such a program is intended for a serious purpose, it must meet the same standards of quality and correctness required of other programs. How can we tell if a rule-based program has computed the right answer?
Rule-based programs are often evaluated by submitting some sample results to a panel of human experts. This kind of validation can be helpful, but it does not provide sufficient coverage to detect every incorrect result, nor does it provide any guidance for design and implementation.