In this chapter, we extend the simple imperative language and the methods for reasoning about its programs to include one-dimensional arrays with integer subscripts. Although more elaborate and varied forms of arrays are provided by many programming languages, such simple arrays are enough to demonstrate the basic semantical and logical properties of arrays.
There are two complementary ways to think about arrays. In the older view, which was first made explicit in early work on semantics by Christopher Strachey, an array variable is something that one can apply to an integer (called a subscript) to obtain an “array element” (in Strachey's terminology, an “L-value”), which in turn can be either evaluated, to obtain a value, or assigned, to alter the state of the computation. In the newer view, which is largely due to Hoare but has roots in the work of McCarthy, an array variable, like an ordinary variable, has a value — but this value is a function mapping subscripts into ordinary values. Strachey's view is essential for languages that are rich enough that arrays can share elements. But for the simple imperative language, and especially for the kind of reasoning about programs developed in the previous chapter, Hoare's view is much more straightforward.
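To make Hoare's reading concrete, here is a minimal sketch of our own in OCaml (not the book's formal semantics): an array value is simply a function from subscripts to integers, and the effect of the assignment a(i) := v on the value of a is modelled by producing an updated function. The names array_value and update are illustrative only.

  (* An array value is a function from subscripts to integers. *)
  type array_value = int -> int

  (* [update a i v] maps i to v and agrees with a elsewhere, mirroring the
     effect of the assignment a(i) := v on the value of a. *)
  let update (a : array_value) (i : int) (v : int) : array_value =
    fun j -> if j = i then v else a j

  (* Example: start from the constant-zero array and set subscript 3 to 7. *)
  let a0 : array_value = fun _ -> 0
  let a1 = update a0 3 7      (* a1 3 = 7, and a1 j = 0 for j <> 3 *)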
Abstract Syntax
Clearly, array variables are variables of a different type from the integer variables used in previous chapters.
Partial recursive, or computable, functions may be defined in a number of equivalent ways. This is what Church's thesis is about: all definitions of computability turn out to be equivalent. Church's thesis justifies some confidence in ‘semi-formal’ arguments, used to show that a given function is computable. Such an argument can be accepted only if, at any moment, upon request, its author is able to formalize it fully in one of the available axiomatizations.
In this summary, functions are always partial, unless otherwise specified.
Partial recursive functions
The most basic way of defining computable functions is by means of computing devices, of which Turing machines are the best known. A given Turing machine defines, for each n, a partial function f : ωⁿ → ω. More mathematical presentations are by means of recursive program schemes, or by means of combinations of basic recursive functions.
Theorem A1.1.1 (Gödel-Kleene). For any n, the set of Turing computable functions from ωⁿ to ω is the set of partial recursive functions from ωⁿ to ω, where by definition the class of partial recursive (p.r.) functions is the smallest class containing:
0 : ω → ω defined by 0(x) = 0 (the zero function).
succ : ω → ω (the successor function).
Projections πₙ,ᵢ : ωⁿ → ω defined by πₙ,ᵢ(x₁, …, xₙ) = xᵢ, and closed under the following constructions:
Composition: If f₁ : ωᵐ → ω, …, fₙ : ωᵐ → ω and g : ωⁿ → ω are partial recursive, then g ∘ 〈f₁, …, fₙ〉 : ωᵐ → ω is partial recursive.
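To make the partiality in the composition clause explicit, the following is a minimal OCaml sketch of our own (not part of the text), representing a partial function as an option-valued function; compose2 is a hypothetical name covering only the case n = 2.

  (* A partial function from 'a to 'b: None encodes "undefined". *)
  type ('a, 'b) pfun = 'a -> 'b option

  (* Composition g ∘ 〈f1, f2〉: the result is defined at x only when both
     f1 x and f2 x are defined and g is defined at the resulting pair. *)
  let compose2 (g : ('b * 'c, 'd) pfun) (f1 : ('a, 'b) pfun) (f2 : ('a, 'c) pfun)
      : ('a, 'd) pfun =
    fun x ->
      match f1 x, f2 x with
      | Some y1, Some y2 -> g (y1, y2)
      | _ -> None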
In this chapter, we provide a finer analysis of algebraicity. The central result – which was conjectured by Plotkin and was first proved in [Smy83a] – is that there exists a maximum cartesian closed full subcategory (full sub-CCC) of ωAcpo (the category of ω-algebraic cpo's). Jung has extended this result: he has characterized the maximal cartesian closed full subcategories of Acpo and Adcpo (and of ωAdcpo as well).
In section 5.1, we define continuous dcpo's, which are dcpo's where approximations exist without necessarily being compact. Continuous lattices have been investigated in depth from a mathematical perspective [GHK+80]. Our interest in continuous dcpo's arises from the fact that retracts of algebraic dcpo's are not algebraic in general, but are continuous. Much of the technical work involved in our quest for maximal cartesian closed full subcategories of (d)cpo's involves retracts. In section 5.2, we introduce two cartesian closed categories: the category of profinite dcpo's and the category of L-domains, both with continuous functions as morphisms. In section 5.3, we show that the algebraic L-domains and the bifinite domains form the two maximal cartesian closed full subcategories of Acpo, and derive Smyth's result for ωAcpo with little extra work. In section 5.4, we treat more sketchily the situation for Adcpo. The material of sections 5.3 and 5.4 is based on [Jun88]. In section 5.5, we show a technical result needed in section 5.3: a partial order is a dcpo if and only if all its well-founded subsets have a lub (least upper bound).
Girard's linear logic [Gir87] is an extension of propositional logic with new connectives providing a logical treatment of resource control. As a first hint, consider the linear λ-terms, which are the λ-terms defined with the following restriction: when an abstraction λx.M is formed, then x occurs exactly once in M. Linear λ-terms are normalized in linear time, that is, the number of reduction steps to their normal form is proportional to their size: a linear β-redex (λx.M)N involves no duplication of the argument N. Thus all the complexity of normalization comes from non-linearity.
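As a minimal illustration of our own, compare one reduction step of a linear and of a non-linear redex:

(λx.x) N → N          (the argument N is used exactly once)
(λx.x x) N → N N      (the argument N is duplicated)

Iterated duplications of the second kind are what make normalization of arbitrary λ-terms costly, whereas linear terms never duplicate.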
Linear logic pushes the limits of constructivity much beyond intuitionistic logic. A proper proof theoretical introduction to linear logic is beyond the scope of this book. In this chapter, we content ourselves with a semantic introduction. By doing so, we actually follow the historical thread: the connectives of linear logic arose from the consideration of (a particularly simple version of) the stable model.
In section 13.1, we examine stable functions between coherence spaces, and discover two decompositions. First, the function space E → E′ is isomorphic to a space (!E) ⊸ E′, where ⊸ constructs the space of linear functions, and where ! is a constructor which allows reuse of data. Intuitively, linear functions, like linear terms, can use their input only once. On the other hand, the explicit declaration of reusability, !, allows us to recover all functions and terms.
We introduce a fundamental duality that arises in topology from the consideration of points versus open sets. A lot of work in topology can be done by working at the level of open sets only. This subject is called pointless topology, and can be studied in [Joh82]. It generally leads to formulations and proofs of a more constructive nature than the ones ‘with points’. For the purposes of computer science, this duality is quite suggestive: points correspond to programs, and open sets to program properties. The investigation of Stone duality for domains was pioneered by Martin-Löf [ML83] and by Smyth [Smy83b]. The work on intersection types, particularly in relation to the D∞ models, as presented in chapter 3, appears as an even earlier precursor. We also recommend [Vic89], which offers a computer science oriented introduction to Stone duality.
In section 10.1, we introduce locales, and Stone duality in its most abstract form. In sections 10.2 and 10.4, we specialize the construction to various categories of dcpo's and continuous functions, most notably those of Scott domains and of profinite dcpo's (cf. definition 5.2.2). On the way, in section 10.3, we prove Stone's theorem: every Boolean algebra is order-isomorphic to an algebra of subsets of some set X, closed under set theoretical intersection, union, and complementation. The proof of Stone's theorem involves a form of the axiom of choice (Zorn's lemma), used in the proof of an important technical lemma known as the Scott open filter theorem.
Denotational semantics is concerned with the mathematical meaning of programming languages. Programs are to be interpreted in categories with structure, by which we mean initially sets and functions, and later suitable topological spaces and continuous functions. The main goals of this branch of computer science are, in our view:
To provide rigorous definitions that abstract away from implementation details, and that can serve as an implementation independent reference.
To provide mathematical tools for proving properties of programs: as in logic, semantic models are guides in designing sound proof rules, that can then be used in automated proof-checkers like LCF.
Historically, the first goal came first: in the sixties, Strachey was writing semantic equations involving recursively defined data types without knowing whether they had mathematical solutions. Scott provided the mathematical framework, and advocated its use in a formal proof system called LCF. Thus denotational semantics has from the beginning been applied to both goals.
In this book we aim to present, in an elementary and unified way, the theory of certain topological spaces, best presented as order theoretical structures, that have proved useful in modelling various families of typed λ-calculi, considered as core programming languages and as meta-languages for denotational semantics. This theory is now known as Domain Theory, and was founded as a subject by Scott and Plotkin.
The notion of continuity used in domain theory finds its origin in recursion theory.
Category theory has been tightly connected to abstract mathematics since the first paper on cohomology by Eilenberg and Mac Lane [EML45], which established its basic notions. This appendix is a reminder of a few elementary definitions and results in this branch of mathematics. We refer to [ML71, AL91] for adequate introductions and wider perspectives.
In mathematical practice, category theory is helpful in formalizing a problem, as it is a good habit to ask in which category we are working, whether a certain transformation is a functor, whether a given subcategory is reflective, etc. Using category theoretical terminology, one can often express a result in a more modular and abstract way. A list of ‘prescriptions’ for the use of category theory in computer science can be found in [Gog91].
Categorical logic is a branch of category theory that arises from the observation due to Lawvere that logical connectives can be suitably expressed by means of universal properties. In this way one represents the models of, say, intuitionistic propositional logic, as categories with certain closure properties where sentences are interpreted as objects and proofs as morphisms (cf. section 4.3).
The tools developed in categorical logic begin to play a central role in the study of programming languages. A link between these two apparently distant topics is suggested by:
The role of (typed) λ-calculi in the work of Landin, McCarthy, Strachey, and Scott on the foundations of programming languages.
In first approximation, typed λ-calculi are natural deduction presentations of certain fragments of minimal logic (a subsystem of intuitionistic logic). These calculi have a natural computational interpretation as the core of typed functional languages, where computation, understood as βη-reduction, corresponds to proof normalization. In this perspective, we reconsider in section 4.1 the simply typed λ-calculus studied in chapter 2. We exhibit a precise correspondence between the simply typed λ-calculus and a natural deduction formalization of the implicative fragment of propositional logic.
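To fix ideas with a minimal illustration of our own: under the propositions-as-types reading, the simply typed term

λx:A. λy:B. x  :  A → (B → A)

corresponds to the natural deduction proof of the formula A ⊃ (B ⊃ A): the two abstractions match two applications of implication introduction, and the variable x matches the use of the hypothesis A.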
Next, we address the problem of modelling the notions of βη-reduction and equivalence. It turns out that simple models can be found by interpreting types as sets and terms as functions between these sets. But, in general, what are the structural properties that characterize such models? The main problem considered in this chapter is understanding the model theory of the simply typed and untyped λ-calculi. In order to answer this question, we introduce in section 4.2 the notion of cartesian closed category (CCC). We present CCC's as a natural categorical generalization of certain adjunctions found in Heyting algebras. As a main example, we show that the category of directed complete partial orders and continuous functions is a CCC.
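As a concrete illustration of cartesian closure in the familiar category of sets (our sketch, not an excerpt from the text), the natural bijection between morphisms A × B → C and morphisms A → C^B is witnessed in OCaml by curry and uncurry:

  (* The bijection Hom(A * B, C) ≅ Hom(A, B -> C), given by mutually
     inverse functions. *)
  let curry (f : 'a * 'b -> 'c) : 'a -> 'b -> 'c = fun a b -> f (a, b)
  let uncurry (g : 'a -> 'b -> 'c) : 'a * 'b -> 'c = fun (a, b) -> g a b

  (* Example: addition on pairs versus its curried form. *)
  let add_pair (x, y) = x + y
  let add = curry add_pair          (* add 2 3 = add_pair (2, 3) = 5 *)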
The description of the models of a calculus by means of category theoretical notions will be a central and recurring topic of this book. We will not always fully develop the theory but in this chapter we can take advantage of the simplicity of the calculus to go into a complete analysis.
This chapter is devoted to the semantics of sequentiality. At first order, the notion of sequential function is well understood, as summarized in theorem 6.5.4. At higher orders, the situation is not as simple. Building on theorem 13.3.18, Ehrhard and Bucciarelli have developed a model of strongly stable functions, which we have described in section 13.3. But in the strongly stable model an explicit reference to a concept of sequentiality is lost at higher orders. Here there is an intrinsic difficulty: there does not exist a cartesian closed category of sequential (set theoretical) functions (see theorem 14.1.12). Berry suggested that replacing functions by morphisms of a more concrete nature, retaining information on the order in which the input is explored in order to produce a given part of the output, could be a way to develop a theory of higher order sequentiality. This intuition gave birth to the model of sequential algorithms of Berry and Curien, which is described in this chapter [BC82, BC85].
In section 14.1, we introduce Kahn and Plotkin's (filiform and stable) concrete data structures and sequential functions between concrete data structures [KP93]. This definition generalizes Vuillemin's definition 6.5.1. A concrete data structure consists of cells that can be filled with a value, much like a PASCAL record field can be given a value. A concrete data structure generates a cpo of states, which are sets of pairs (cell, value), also called events (cf. section 12.3).
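As a rough sketch of our own (definition 14.1.1 is the authoritative one), a state may be pictured in OCaml as a set of events, each event filling a cell with a value, with no cell filled twice; the type and function names are illustrative only.

  (* An event fills a cell with a value; a state is a set of events in which
     no cell is filled twice. *)
  type ('cell, 'value) event = 'cell * 'value
  type ('cell, 'value) state = ('cell, 'value) event list

  (* Adding an event preserves the "at most one value per cell" invariant. *)
  let fill (s : ('c, 'v) state) (c : 'c) (v : 'v) : ('c, 'v) state =
    if List.exists (fun (c', _) -> c' = c) s then s else (c, v) :: s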
This chapter presents general techniques for the solution of domain equations and the representation of domains and functors over a universal domain. Given a category of domains C we build the related category Cip (cf. chapter 3) that has the same objects as C and injection-projection pairs as morphisms (section 7.1). It turns out that this is a suitable framework for the solution of domain equations. The technique is applied in section 7.2 in order to solve a predicate equation. In turn, the solution of the predicate equation is used in proving an adequacy theorem for a simple declarative language with dynamic binding.
The category of injection-projection pairs is also a suitable framework for the construction of a universal homogeneous object (section 7.3). The latter is a domain in which every other domain (not exceeding a certain size) can be embedded. Once a universal object U is built, it is possible to represent the collection of domains as the domain FP(U) of finitary projections over U, and functors as continuous functions over FP(U). In this way, one obtains a rather handy poset theoretical framework for the solution of domain equations (section 7.4). If, moreover, FP(U) is itself (the image of a) projection, then projections can be used to give a model of second order typed λ-calculus (see exercise 7.4.8 and section 11.3).
A third approach to the solution of domain equations consists in working with concrete representations of domains like information systems, event structures, or concrete data structures (introduced in definitions 10.2.11, 12.3.3 and 14.1.1, respectively).
The main goal of this chapter is to introduce λ-calculi with dependent and second order types, to discuss their interpretation in the framework of traditional domain theory (chapter 15 will mention another approach based on realizability), and to present some of their relevant syntactic properties.
Calculi with dependent and second order types are rather complex syntactic objects. In order to master some of their complexity let us start with a discussion from a semantic viewpoint. Let T be a category whose objects are regarded as types. The category T contains atomic types like the singleton type 1, the type nat representing natural numbers, and the type bool representing boolean values. The collection T is also closed with respect to certain data type constructions. For example, if A and B are types then we can form new types such as a product type A × B, a sum type A + B, and an exponent type A → B.
In first approximation, a dependent type is a family of types indexed over another type A. We represent such a family as a transformation F from A into the collection of types T, say F : A → T. As an example of a dependent type we can think of a family Prod.bool : nat → T that, given a number n, returns the type bool × … × bool (n times).
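One natural way to spell out such a family, by recursion on n (our presentation, with 1 the singleton type mentioned above), is:

Prod.bool(0) = 1
Prod.bool(n + 1) = bool × Prod.bool(n)

so that Prod.bool(n) is the n-fold product bool × … × bool.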
This chapter introduces the untyped λ-calculus. We establish some of its fundamental theorems, among which we count the syntactic continuity theorem, which offers another indication of the relevance of Scott continuity (cf. section 1.1 and theorem 1.3.1).
The λ-calculus was introduced around 1930 by Church as part of an investigation into the formal foundations of mathematics and logic [Chu41]. The related formalism of combinatory logic had been introduced some years earlier by Schönfinkel and Curry. While the foundational program was later relativized by such results as Gödel's incompleteness theorem, λ-calculus nevertheless provided one of the concurrent formalizations of partial recursive functions. Logical interest in λ-calculus was revived by Girard's discovery of the second order λ-calculus in the early seventies (see chapter 11).
In computer science, the interest in λ-calculus goes back to Landin [Lan66] and Reynolds [Rey70]. The λ-notation is also important in LISP, designed around 1960 by McCarthy [Mac60]. These pioneering works eventually led to the development of functional programming languages like Scheme or ML. In parallel, Scott and Strachey used λ-calculus as a meta-language for the description of the denotational semantics of programming languages. The most comprehensive reference on λ-calculus is [Bar84]. A more introductory textbook is [HS86]. We refer to these books for further historical pointers.
In section 2.1, we present the untyped λ-calculus. The motivation to prove a strong normalization theorem leads us to the simply typed λ-calculus.
When considering the λ-calculus as the kernel of a programming language, it is natural to concentrate on weak reduction strategies, that is, strategies where evaluation stops at λ-abstractions. In presenting the semantic counterpart of these calculi it is useful to emphasize the distinction between value and computation. A first example, coming from recursion theory, relies on the notions of total and partial morphism. In our jargon, a total morphism, when given a value, always returns a value, whereas a partial morphism, when given a value, returns a possibly infinite computation. This example suggests that the denotation of a partial recursive algorithm is a morphism from values to computations, and that values are particular kinds of computations.
In domain theory the divergent computation is represented by a bottom element, say ⊥, that we add to the collection of values. This can be seen as the motivation for the shift from sets to flat domains. More precisely, we have considered three categories (cf. definition 1.4.17).
The category Dcpo, in which morphisms send values to values, say D → E. This category is adapted to a framework where every computation terminates.
The category pDcpo, which is equivalent to the category of cpo's and strict functions, and in which morphisms send values to computations, say D → (E)⊥. This category naturally models call-by-value evaluation, where a function's arguments are evaluated before application.
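A minimal OCaml sketch of our own of the values-to-computations reading, using an explicit lifting with a Bottom constructor as a crude stand-in for (E)⊥ and for the divergent computation ⊥:

  (* 'e lifted adds a bottom element to the values of type 'e. *)
  type 'e lifted = Bottom | Value of 'e

  (* A call-by-value morphism sends values to computations. *)
  type ('d, 'e) morphism = 'd -> 'e lifted

  (* The corresponding function on lifted inputs maps Bottom to Bottom,
     which is what "strict" means. *)
  let strictly (f : ('d, 'e) morphism) : 'd lifted -> 'e lifted = function
    | Bottom -> Bottom
    | Value d -> f d

The function strictly illustrates the equivalence mentioned above: a morphism from values D to computations (E)⊥ determines a strict function on the lifted domains, and conversely.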