In Chapter 1 we saw that abstraction relations, rather than abstraction functions, are the natural concept with which to formulate proof principles for establishing data refinement, i.e., simulation. This impression was reinforced in Chapter 4 by establishing completeness of the combination of L- and L⁻¹-simulation for proving data refinement. How then is it possible that such an apparently practical method as VDM promotes the use of total abstraction functions instead? Notice that in our set-up such functions are the most restrictive version of abstraction relations, because for them the four versions of simulation are all equivalent. Should this not lead to a serious degree of incompleteness, in that it offers a much weaker proof method than L-simulation, which is already incomplete on its own? As we shall see in this chapter, this is not necessarily the case.

Combining total abstraction functions with so-called auxiliary variables allows the formulation of proof principles which are equal in power to L- and L⁻¹-simulation. Auxiliary variables are program variables to which assignments are added inside a program, not for influencing the flow of control but for achieving greater expressiveness in the formulation of abstraction functions and assertions. Following [AL91], such total abstraction functions are called refinement mappings. The chances for an abstraction relation (from a concrete data type to an abstract data type) to be functional can be increased by artificially inflating the concrete-level state space via the introduction of auxiliary variables on that level.
By recording part of the history of a computation in an auxiliary variable, called a history variable, and combining this with refinement mappings, a proof method equivalent to L-simulation is obtained.
During the process of stepwise, hierarchical program development, a step represents a transformation of a so-called abstract, higher-level result into a more concrete, lower-level one. In general, this development process corresponds to increasing the amount of detail required for the eventual implementation of the original specification on a given machine.
In the first part of this book we develop the relational theory of simulation and a general version of Hoare logic, show how data refinement can be expressed within this logic, extend these results to total correctness, and show how all this theory can be uniformly expressed inside the refinement calculus of Ralph Back, Paul Gardiner, Carroll Morgan and Joakim von Wright. We develop this theory as a reference point for comparing various existing data refinement methods in the second part, some of which are syntax-based methods. This is one of the main reasons why we are forced to clearly separate syntax from semantics.
The second part of this monograph focuses on the introduction of, and comparison between, various methods for proving correctness of such transformation steps. Although these methods are illustrated mainly by applying them to correctness proofs of implementations of data types, the techniques developed apply equally well to proving correctness of such steps in general, because all these methods are only variations on one central theme: that of proof by simulation, of which we analyze at least 13 different formulations.
Simulation, our main technique for proving data refinement, also works for proving refinement of total correctness between data types based on the semantic model introduced in the previous chapter. However, certain complications arise; for instance, L⁻¹-simulation is unsound when abstract operations exhibit infinite nondeterminism, which severely restricts the use of specification statements.
Section 9.1 extends the soundness and completeness results for simulation from Chapter 4 to total correctness. As the main result, we present in Section 9.2 a total correctness version of our L-simulation theorem from Chapter 7.
Simulation
The semantics-based notions of data type, data refinement, and simulation need not be defined anew. The only notion that changes is that of observation since, through our total correctness program semantics, nonterminating behaviors have also become observable. It is essential to the understanding of total correctness simulation between data types to realize that, semantically speaking, abstraction relations are directed. In particular, the relational inverse of an abstraction relation from level C to level A is not an abstraction relation in the opposite direction, as is the case for partial correctness. Now it becomes clear why several authors prefer the name downward simulation for L-simulation and upward simulation for L⁻¹-simulation [HHS87]: the direction of an L-simulation relation is downwards, from the abstract to the concrete level, whence a more descriptive name for it would be representation relation or downward simulation relation. For this reason we redefine the meaning of ⊆_L^β so that β itself (and not its inverse) is used in the inclusions characterizing L-simulation.
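In relational notation, writing ";" for forward composition of relations, the contrast can be sketched as follows (a simplified formulation; the full definitions also cover initialization and finalization, and the names β, γ, op_A, op_C are illustrative):

\[
\beta \,;\, op_C \;\subseteq\; op_A \,;\, \beta
\qquad \text{(L-simulation: $\beta$ runs downward, from the abstract to the concrete state space)}
\]
\[
op_C \,;\, \gamma \;\subseteq\; \gamma \,;\, op_A
\qquad \text{(L$^{-1}$-simulation: $\gamma$ runs upward, from the concrete to the abstract state space)}
\]

Here $op_A$ and $op_C$ denote corresponding abstract and concrete operations; the point is that in each inclusion the abstraction relation itself, not its inverse, appears.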
The definition of data refinement given in the previous chapter requires that an infinite number of inclusions hold, namely one for every choice of program involved. Consequently it does not yield an effective proof method. In this chapter we define such a method, called simulation, and investigate its soundness and completeness w.r.t. the criterion of proving data refinement. In order to define simulation one needs the concept of an abstraction relation, relating concrete data values to abstract ones. We briefly discuss why abstraction relations rather than abstraction functions are used, and how data invariants (characterizing the reachable values in a data type) solve one of the problems associated with converting abstraction relations into abstraction functions. Since ultimately proofs are carried out in predicate logic, this raises the question of how to express abstraction relations within that logic. As we shall see, this forces us to distinguish between those variables within a program that are unaffected by the data refinement step in question (called normal variables) and those that are affected by that step (called representation variables). This distinction raises a number of technical issues, which we discuss and for which we present a solution. Next, two methods presently in use for proving data refinement, namely Reynolds' method and VDM, are briefly introduced by way of an example. Finally, we discuss both the distinction between, and the relative merits of, syntax-based methods for proving data refinement and semantically oriented ones.
This chapter is devoted to the semantics of the functional language that we described in the previous chapters. The point of its semantics is to define the meaning of expressions in this language; that is, to define precisely the value of each expression. The association between an expression and its value is created by rewrite rules; that is, rules that transform expressions textually. Those rules are presented and discussed in Section 3.1.
These rewrite rules are non-deterministic. That is, in general, for any expression under consideration, there is more than one rule that may be applied to it. The consistency of these rules rests on the fact that they form a convergent system. In other words, whatever the non-deterministic choices made, at every step it is always possible to make any two different computations converge toward the same expression. This property does not exclude the existence of infinite computations, but it does exclude the possibility of an expression having two distinct values. The value of an expression (when it exists), is therefore unique. We assert this convergence property here, but we will not try to prove it. References about the proof of convergence are found at the end of this chapter.
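For instance, with ordinary arithmetic rewriting, two computations that make different non-deterministic choices still converge on the same value (an illustrative example, not one of the rules of Section 3.1):

(1 + 2) * (3 + 4) → 3 * (3 + 4) → 3 * 7 → 21
(1 + 2) * (3 + 4) → (1 + 2) * 7 → 3 * 7 → 21

Both computations reach 21; convergence guarantees that no choice of rewrites can lead to a different value.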
In practice, in order to implement an evaluator for a language, we have to define a strategy that, at every step, chooses one rewrite among the set of all possible rewrites.
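As a minimal sketch of such a strategy, consider a leftmost-innermost (call-by-value) evaluator for a toy expression language; the type and function names here are illustrative, not those used later in the book:

(* A toy language: integer constants and additions. *)
type expr =
  | Int of int
  | Add of expr * expr

(* One rewrite step under a leftmost-innermost strategy: among all
   possible rewrites, always choose the leftmost redex whose
   arguments are already values. *)
let rec step e = match e with
  | Add (Int a, Int b) -> Int (a + b)           (* the only rewrite rule *)
  | Add (Int a, e2)    -> Add (Int a, step e2)  (* left side done: go right *)
  | Add (e1, e2)       -> Add (step e1, e2)     (* reduce the left side first *)
  | Int n              -> Int n                 (* constants are normal forms *)

(* Iterate the strategy until a normal form is reached. *)
let rec eval e = match e with
  | Int _ -> e
  | _ -> eval (step e)

(* eval (Add (Add (Int 1, Int 2), Add (Int 3, Int 4))) yields Int 10. *)

Fixing the strategy removes the non-determinism: the evaluator always performs one particular computation among those the rewrite rules allow.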
In this chapter, we show you how to represent exact numbers of arbitrary size. In certain applications, our ability to compute with such numbers is indispensable, especially so in computer algebra. Formal systems of symbolic computation, such as Maple [14], Mathematica [44], or Axiom [19], rely on exact rational arithmetic. Moreover, programming languages oriented toward symbolic computation generally support exact computations. This is particularly the case for Caml, with the libraries bignum and ratio.
The sets of numbers that we will treat here are the natural numbers (that is, the counting numbers), the signed integers (both positive and negative), and the rational numbers. The natural numbers will be represented by the sequence of their digits in a given base. The sequence itself can be represented in various ways. We will represent natural numbers primarily by ordinary lists. This choice is not very efficient because a list can be traversed in only one direction. If we decide to put the least significant digits at the head of the list, then we can multiply and add fairly efficiently, but division will be inefficient because we must then reverse the lists.
Nevertheless, if we represent natural numbers as lists, then we can program the usual operations simply, and that model can serve later as the point of reference for getting into various other representations, such as representations by doubly linked circular lists or by arrays—representations used in “real” implementations.
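As an illustration of this representation, here is a sketch of addition on digit lists stored least significant digit first (we fix base 10 for the sketch; the names nat and add_nat are ours, not the book's):

(* A natural number is the list of its digits, least significant
   first: 123 is represented as [3; 2; 1], and zero as []. *)
type nat = int list

(* Addition with carry propagation, one digit at a time. *)
let rec add_carry c a b = match a, b with
  | [], [] -> if c = 0 then [] else [c]
  | [], d :: rest | d :: rest, [] ->
      let s = d + c in s mod 10 :: add_carry (s / 10) rest []
  | d1 :: r1, d2 :: r2 ->
      let s = d1 + d2 + c in s mod 10 :: add_carry (s / 10) r1 r2

let add_nat a b = add_carry 0 a b

(* add_nat [9; 9] [2] yields [1; 0; 1], i.e. 99 + 2 = 101. *)

Because both lists are consumed from the head, addition proceeds from the least significant digit onward, exactly as in schoolbook arithmetic.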
This book has a number of objectives. First, it provides the concepts and a language to produce sophisticated software. Second, the book tries to make you step back a bit from programming as an activity by highlighting basic problems linked to programming as a discipline. In the end, we hope to share our own pleasure in programming.
The language we use—Caml—makes it possible to achieve all these goals. Caml belongs to the family of “functional” languages, all of which have the following qualities:
They are particularly well adapted to writing applications for “symbolic computation”—the kind of computing that concerns computer scientists as well as mathematicians—in software engineering, artificial intelligence, formal computation, computer-aided proof, and so forth.
Functional languages are built on a fundamental theory that derives from mathematical logic. This basis provides these languages with their semantics as well as their systems of types and proof.
By the very way in which they are designed, these languages support a certain aesthetic in programming, an aesthetic which, like the aesthetic of a mathematical proof, is often an indication of their quality.
This book grew primarily out of a programming course given by Guy Cousineau at the Ecole Normale Supérieure between 1990 and 1995. The book also benefited from the teaching experience of Michel Mauny, who wrote Chapters 8 and 13 and contributed to the overall consistency of the book.
The spectacular development of the computing industry depends largely on progress in two very different areas: hardware and software. Progress in hardware has been fairly quantitative: miniaturized parts, increased performance, lower costs. Progress in software, by contrast, has been more qualitative: ease of use, friendliness, and so on.
In fact, most users see their computer only through interfaces that let them exploit the machine while ignoring practically all of its structure and internal details, just as we drive our cars without ever opening the hood, and enjoy the comfort of central heating without necessarily grasping thermodynamics.
This qualitative improvement was brought to us by progress in software as an independent discipline. It is based on a major research effort, in the course of which computer science has been structured little by little around its own concepts and methods. Those concepts and methods, of course, should be the basis for teaching computer science.
The most fundamental concept in computer science is, of course, computing. A computation is a set of transformations carried out “mechanically” by means of a finite number of predefined rules. A computation operates on formalized symbolic data (information) representing, for example, numbers (as in numeric computation), mathematical expressions (as in formal computation), or data or even knowledge of all kinds. The only characteristics common to all computations are the discreteness of their data (that is, the information is finite) and the mechanical way in which the rules are applied.
This last part of the book describes techniques to implement a language like Caml. We do not pretend to give a complete description here of an implementation of Caml, but rather a demonstration that such an implementation is feasible. We treat a subset of Caml to show the major difficulties in compilation and type synthesis.
Chapter 11 defines a Caml evaluator in Caml. It highlights the main ideas that make it possible to produce a compiler: the idea of an environment is used to manage variables, and the idea of closure is used to represent functional values.
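To give the flavor of these two ideas, here is a hedged sketch of the heart of such an evaluator for a tiny subset of the language (the names are illustrative; Chapter 11 treats a much richer language):

(* A tiny subset: variables, functions, and applications. *)
type expr =
  | Var of string
  | Fun of string * expr                    (* fun x -> body *)
  | App of expr * expr

(* A functional value is a closure: the parameter and body of the
   function, paired with the environment of its definition. *)
type value = Closure of string * expr * (string * value) list

(* An environment maps variables to their values. *)
let rec lookup x env = match env with
  | (y, v) :: rest -> if x = y then v else lookup x rest
  | [] -> failwith ("unbound variable " ^ x)

let rec eval env e = match e with
  | Var x -> lookup x env
  | Fun (x, body) -> Closure (x, body, env)  (* capture the environment *)
  | App (f, arg) ->
      let Closure (x, body, defenv) = eval env f in
      (* evaluate the body in the definition environment,
         extended with the value of the argument *)
      eval ((x, eval env arg) :: defenv) body

Keeping the definition environment inside the closure is what gives the evaluator static scoping: a function always sees the variables that were visible where it was defined.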
Chapter 12 tackles two topics simultaneously: compilation schemas and the techniques of memory management that come into play in the implementation of a functional language. With respect to memory management, only allocation is described precisely. Techniques for recovering memory (that is, garbage collection) are only briefly touched on.
The set of machine instructions we use is at a relatively abstract level compared to the instructions actually available in assembly language, but it can nevertheless be translated into true machine instructions quite directly.
Chapter 13 describes a type synthesizer. We give you a preliminary version of it in a purely functional style; then we move on to a more efficient version, one that uses a destructive variant of the unification algorithm. This version is quite close to the actual type synthesizer in Caml.
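The move from the functional to the destructive version can be sketched as follows: type variables become mutable cells, and instantiating a variable is an assignment, so the substitution is shared globally instead of being rebuilt and composed at every step (an illustrative sketch with the occurs check omitted for brevity; not the Chapter 13 code):

(* Types whose variables are mutable cells. *)
type ty =
  | TVar of tvar ref
  | TInt
  | Arrow of ty * ty
and tvar =
  | Unbound of int           (* an unknown, identified by an integer *)
  | Link of ty               (* instantiated: points to its value *)

(* Follow Link chains to the representative of a type. *)
let rec repr t = match t with
  | TVar r -> (match !r with Link t' -> repr t' | Unbound _ -> t)
  | _ -> t

(* Unify two types by side effect: binding a variable overwrites
   its cell (no occurs check here, for brevity). *)
let rec unify t1 t2 =
  match repr t1, repr t2 with
  | TVar r1, TVar r2 when r1 == r2 -> ()
  | TVar r, t | t, TVar r -> r := Link t
  | Arrow (a1, b1), Arrow (a2, b2) -> unify a1 a2; unify b1 b2
  | TInt, TInt -> ()
  | _ -> failwith "type clash"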
This book includes a great many examples. These examples are presented as if they were typed on the keyboard of a computer during an interactive session in Caml. In consequence, they include lines written by the user along with responses from the system. The character # that appears at the beginning of examples is the system prompt. Text written by the user begins after that character and ends with a double semi-colon (;;). Everything between the # and the double semi-colon is thus due to the user. The rest is the system response. This system response includes information about type and value.
In the Caml system, the type of expressions entered by the user is computed statically, that is, computed before evaluation. Any possible inconsistencies in type are detected automatically and reported by an error message. This kind of type-checking is carried out without the user ever having to give any indication to the system about types—no type declarations as in other languages, like Pascal or C.
Once the types have been synthesized satisfactorily, evaluation takes place, and then a result is computed and displayed. This display takes one of two different forms, depending on whether the text entered by the user is a simple expression or a definition.
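For instance, a short session in this style might look as follows (an illustrative fragment; the exact wording of the system's responses depends on the version of Caml used):

#1 + 2;;
- : int = 3
#let double x = 2 * x;;
double : int -> int = <fun>
#double 21;;
- : int = 42

The first and third entries are simple expressions, so the system answers with their type and value; the second is a definition, so the response recalls the name being defined along with its type.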
The examples given here differ a bit in typography from those that actually appear on screen during a real working session; we modified them for legibility and aesthetic reasons.
All those aspects of Caml that cannot be described in a purely functional view of the language are known as its imperative qualities, either because they make sense only with respect to a particular evaluation strategy, or because they refer to the machine representation of data structures.
Among the imperative aspects of the first kind are exceptions and input-output.
Among the second kind of imperative aspects, we find destructive operations such as assignment. The effect of such operations can be explained completely only by reference to formal semantics or to a description of the implementation of data structures. (We will get to those ideas later in Chapter 12.) However, we can still give you a reasonable description here based on examples.
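For instance, assignment on a mutable reference cell can be demonstrated directly in a session (an illustrative fragment; the exact response format depends on the version of Caml used):

#let r = ref 0;;
r : int ref = ref 0
#r := !r + 1;;
- : unit = ()
#!r;;
- : int = 1

In a purely functional reading, the value of !r could not change between two evaluations; here it depends on the current state of the memory cell denoted by r.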
Exceptions
In Section 2.3.4, we touched on the problem of writing partial functions. To do that, we introduced the type
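(* The standard Caml declaration of the option type: a value of type
   'a option is either None, meaning “no result”, or Some v for a
   result v of type 'a. *)
type 'a option =
  | None
  | Some of 'a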
This solution can hardly take into account all the situations where we need partial functions. For example, division is a partial operation (since division by 0 (zero) is not defined), but it would not be practical to replace the types int and float by the types int option and float option in every numeric calculation because doing so assumes that all arithmetic operations foresee the case where one of their arguments is undefined. The chief effect of that assumption would be to make numeric calculations impractical simply because they would be too inefficient to perform!
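To see the inconvenience concretely, here is what division would look like if it were forced into this style (a hedged sketch; safe_div and incr_quotient are our names, not the book's):

(* Division made total by returning an option instead of failing on 0. *)
let safe_div a b =
  if b = 0 then None else Some (a / b)

(* Every later operation must unpack its arguments, so even a small
   computation on the quotient becomes cluttered. *)
let incr_quotient total count =
  match safe_div total count with
  | None -> None
  | Some q -> Some (q + 1)

Each arithmetic step pays the cost of a pattern matching, which is precisely the overhead that exceptions let us avoid.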