In this chapter, we provide a finer analysis of algebraicity. The central result – which was conjectured by Plotkin and was first proved in [Smy83a] – is that there exists a maximum cartesian closed full subcategory (full sub-CCC) of ωAcpo (the category of ω-algebraic cpo's). Jung has extended this result: he has characterized the maximal cartesian closed full subcategories of Acpo and Adcpo (and of ωAdcpo as well).
In section 5.1, we define continuous dcpo's, which are dcpo's where approximations exist without necessarily being compact. Continuous lattices have been investigated in depth from a mathematical perspective [GHK+80]. Our interest in continuous dcpo's arises from the fact that retracts of algebraic dcpo's are not algebraic in general, but are continuous. Much of the technical work involved in our quest for maximal cartesian closed full subcategories of (d)cpo's involves retracts. In section 5.2, we introduce two cartesian closed categories: the category of profinite dcpo's and the category of L-domains, both with continuous functions as morphisms. In section 5.3, we show that the algebraic L-domains and the bifinite domains form the two maximal cartesian closed full subcategories of Acpo, and derive Smyth's result for ωAcpo with little extra work. In section 5.4, we treat more sketchily the situation for Adcpo. The material of sections 5.3 and 5.4 is based on [Jun88]. In section 5.5, we show a technical result needed in section 5.3: a partial order is a dcpo if and only if all its well-founded subsets have a lub.
Girard's linear logic [Gir87] is an extension of propositional logic with new connectives providing a logical treatment of resource control. As a first hint, consider the linear λ-terms, which are the λ-terms defined with the following restriction: when an abstraction λx.M is formed, then x occurs exactly once in M. Linear λ-terms are normalized in linear time, that is, the number of reduction steps to their normal form is proportional to their size: a linear β-redex (λx.M)N involves no duplication of the argument N. Thus all the complexity of normalization comes from non-linearity.
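As a small illustration of the definition (the terms chosen here are ours): λx.x and λf.λx.f x are linear, while λx.x x (x used twice) and λx.λy.x (y not used) are not. The contrast in reduction behaviour can be displayed as follows:

\[
(\lambda x.\,f\,x)\,N \;\to\; f\,N \quad\text{($N$ is used exactly once, never copied),}
\qquad
(\lambda x.\,x\,x)\,N \;\to\; N\,N \quad\text{(non-linear: $N$ is duplicated).}
\]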
Linear logic pushes the limits of constructivity much beyond intuitionistic logic. A proper proof theoretical introduction to linear logic is beyond the scope of this book. In this chapter, we content ourselves with a semantic introduction. By doing so, we actually follow the historical thread: the connectives of linear logic arose from the consideration of (a particularly simple version of) the stable model.
In section 13.1, we examine stable functions between coherence spaces, and discover two decompositions. First, the function space E → E′ is isomorphic to a space (!E) ⊸ E′, where ⊸ constructs the space of linear functions, and where ! is a constructor which allows reuse of data. Intuitively, linear functions, like linear terms, can use their input only once. On the other hand, the explicit declaration of reusability, !, allows us to recover all functions and terms.
We introduce a fundamental duality that arises in topology from the consideration of points versus open sets. A lot of work in topology can be done by working at the level of open sets only. This subject is called pointless topology, and can be studied in [Joh82]. It generally leads to formulations and proofs of a more constructive nature than the ones ‘with points’. For the purposes of computer science, this duality is quite suggestive: points correspond to programs, and open sets to program properties. The investigation of Stone duality for domains was pioneered by Martin-Löf [ML83] and by Smyth [Smy83b]. The work on intersection types, particularly in relation to the D∞ models, as presented in chapter 3, appears as an even earlier precursor. We also recommend [Vic89], which offers a computer science oriented introduction to Stone duality.
In section 10.1, we introduce locales, and Stone duality in its most abstract form. In sections 10.2 and 10.4, we specialize the construction to various categories of dcpo's and continuous functions, most notably those of Scott domains and of profinite dcpo's (cf. definition 5.2.2). On the way, in section 10.3, we prove Stone's theorem: every Boolean algebra is order-isomorphic to an algebra of subsets of some set X, closed under set theoretical intersection, union, and complementation. The proof of Stone's theorem involves a form of the axiom of choice (Zorn's lemma), used in the proof of an important technical lemma known as the Scott open filter theorem.
Denotational semantics is concerned with the mathematical meaning of programming languages. Programs are to be interpreted in categories with structure, by which we mean initially sets and functions, and later suitable topological spaces and continuous functions. The main goals of this branch of computer science are, in our belief:
To provide rigorous definitions that abstract away from implementation details, and that can serve as an implementation independent reference.
To provide mathematical tools for proving properties of programs: as in logic, semantic models are guides in designing sound proof rules, which can then be used in automated proof-checkers like LCF.
Historically, the first goal came first. In the sixties, Strachey was writing semantic equations involving recursively defined data types without knowing whether they had mathematical solutions. Scott provided the mathematical framework, and advocated its use in a formal proof system called LCF. Thus denotational semantics has from the beginning been applied to both goals.
In this book we aim to present, in an elementary and unified way, the theory of certain topological spaces, best presented as order theoretical structures, that have proved to be useful in the modelling of various families of typed λ-calculi, considered as core programming languages and as meta-languages for denotational semantics. This theory is now known as Domain Theory, and was founded as a subject by Scott and Plotkin.
The notion of continuity used in domain theory finds its origin in recursion theory.
Category theory has been tightly connected to abstract mathematics since the first paper on cohomology by Eilenberg and Mac Lane [EML45], which establishes its basic notions. This appendix is a reminder of a few elementary definitions and results in this branch of mathematics. We refer to [ML71, AL91] for adequate introductions and wider perspectives.
In mathematical practice, category theory is helpful in formalizing a problem, as it is a good habit to ask in which category we are working, whether a certain transformation is a functor, whether a given subcategory is reflective, etc. Using category theoretical terminology, one can often express a result in a more modular and abstract way. A list of ‘prescriptions’ for the use of category theory in computer science can be found in [Gog91].
Categorical logic is a branch of category theory that arises from the observation due to Lawvere that logical connectives can be suitably expressed by means of universal properties. In this way one represents the models of, say, intuitionistic propositional logic, as categories with certain closure properties where sentences are interpreted as objects and proofs as morphisms (cf. section 4.3).
The tools developed in categorical logic begin to play a central role in the study of programming languages. A link between these two apparently distant topics is suggested by:
The role of (typed) λ-calculi in the work of Landin, McCarthy, Strachey, and Scott on the foundations of programming languages.
To a first approximation, typed λ-calculi are natural deduction presentations of certain fragments of minimal logic (a subsystem of intuitionistic logic). These calculi have a natural computational interpretation as the core of typed functional languages, where computation, understood as βη-reduction, corresponds to proof normalization. From this perspective, we reconsider in section 4.1 the simply typed λ-calculus studied in chapter 2. We exhibit a precise correspondence between the simply typed λ-calculus and a natural deduction formalization of the implicative fragment of propositional logic.
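Schematically, abstraction corresponds to implication introduction and application to implication elimination (modus ponens), with β-reduction matching the normalization of a proof:

\[
\frac{\Gamma, x : A \vdash M : B}{\Gamma \vdash \lambda x.\,M : A \to B}\;(\to I)
\qquad
\frac{\Gamma \vdash M : A \to B \qquad \Gamma \vdash N : A}{\Gamma \vdash M\,N : B}\;(\to E)
\]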
Next, we address the problem of modelling the notions of βη-reduction and equivalence. It turns out that simple models can be found by interpreting types as sets and terms as functions between these sets. But, in general, what are the structural properties that characterize such models? The main problem considered in this chapter is that of understanding the model theory of the simply typed and untyped λ-calculi. In order to answer this question, we introduce in section 4.2 the notion of cartesian closed category (CCC). We present CCC's as a natural categorical generalization of certain adjunctions found in Heyting algebras. As a main example, we show that the category of directed complete partial orders and continuous functions is a CCC.
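The adjunction in question can be displayed as follows: in a Heyting algebra, implication is right adjoint to meet, and cartesian closure generalizes this from an order relation to hom-sets (currying):

\[
a \wedge b \le c \;\iff\; a \le b \Rightarrow c
\qquad\text{generalizes to}\qquad
\mathbf{C}[A \times B, C] \;\cong\; \mathbf{C}[A, C^B].
\]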
The description of the models of a calculus by means of category theoretical notions will be a central and recurring topic of this book. We will not always fully develop the theory but in this chapter we can take advantage of the simplicity of the calculus to go into a complete analysis.
This chapter is devoted to the semantics of sequentiality. At first order, the notion of sequential function is well understood, as summarized in theorem 6.5.4. At higher orders, the situation is not as simple. Building on theorem 13.3.18, Ehrhard and Bucciarelli have developed a model of strongly stable functions, which we have described in section 13.3. But in the strongly stable model an explicit reference to a concept of sequentiality is lost at higher orders. Here there is an intrinsic difficulty: there does not exist a cartesian closed category of sequential (set theoretical) functions (see theorem 14.1.12). Berry suggested that replacing functions by morphisms of a more concrete nature, and retaining information on the order in which the input is explored in order to produce a given part of the output, could be a way to develop a theory of higher order sequentiality. This intuition gave birth to the model of sequential algorithms of Berry and Curien, which is described in this chapter [BC82, BC85].
In section 14.1, we introduce Kahn and Plotkin's (filiform and stable) concrete data structures and sequential functions between concrete data structures [KP93]. This definition generalizes Vuillemin's definition 6.5.1. A concrete data structure consists of cells that can be filled with a value, much like a PASCAL record field can be given a value. A concrete data structure generates a cpo of states, which are sets of pairs (cell, value), also called events (cf. section 12.3).
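To fix intuitions, here is a minimal OCaml sketch of states, under our own (hypothetical) names; the actual definition 14.1.1 additionally equips a concrete data structure with an enabling relation constraining when a cell may be filled, which we omit here.

```ocaml
(* A state is a set of events (cell, value) in which each cell is
   filled at most once, much like record fields being assigned. *)
type cell = string
type value = Int of int | Bool of bool

type event = cell * value
type state = event list        (* invariant: cells are pairwise distinct *)

(* Filling a cell extends the state, provided the cell is not already
   filled. *)
let fill (s : state) (c : cell) (v : value) : state option =
  if List.mem_assoc c s then None   (* cell already filled: rejected *)
  else Some ((c, v) :: s)

(* States are ordered by inclusion of their sets of events; it is this
   order that makes the collection of states a cpo. *)
let leq (s1 : state) (s2 : state) : bool =
  List.for_all (fun e -> List.mem e s2) s1
```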
This chapter presents general techniques for the solution of domain equations and the representation of domains and functors over a universal domain. Given a category of domains C, we build the related category C^ip (cf. chapter 3) that has the same objects as C and injection-projection pairs as morphisms (section 7.1). It turns out that this is a suitable framework for the solution of domain equations. The technique is applied in section 7.2 in order to solve a predicate equation. In turn, the solution of the predicate equation is used in proving an adequacy theorem for a simple declarative language with dynamic binding.
The category of injection-projection pairs is also a suitable framework for the construction of a universal homogeneous object (section 7.3). The latter is a domain in which every other domain (not exceeding a certain size) can be embedded. Once a universal object U is built, it is possible to represent the collection of domains as the domain FP(U) of finitary projections over U, and functors as continuous functions over FP(U). In this way, one obtains a rather handy poset theoretical framework for the solution of domain equations (section 7.4). If, moreover, FP(U) is itself (the image of a) projection, then projections can be used to give a model of second order typed λ-calculus (see exercise 7.4.8 and section 11.3).
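For reference, here are the shapes of these notions (standard definitions; the precise finitary condition is the one of section 7.4). An injection-projection pair from D to E is a pair of continuous maps i : D → E and j : E → D with

\[
j \circ i = \mathrm{id}_D, \qquad i \circ j \sqsubseteq \mathrm{id}_E,
\]

and a projection over U is a continuous p : U → U with

\[
p \circ p = p \sqsubseteq \mathrm{id}_U,
\]

the qualification ‘finitary’ imposing a further condition ensuring that the image of p is again a domain of the class under consideration.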
A third approach to the solution of domain equations consists in working with concrete representations of domains like information systems, event structures, or concrete data structures (introduced in definitions 10.2.11, 12.3.3 and 14.1.1, respectively).
The main goal of this chapter is to introduce λ-calculi with dependent and second order types, to discuss their interpretation in the framework of traditional domain theory (chapter 15 will mention another approach based on realizability), and to present some of their relevant syntactic properties.
Calculi with dependent and second order types are rather complex syntactic objects. In order to master some of their complexity, let us start with a discussion from a semantic viewpoint. Let T be a category whose objects are regarded as types. The category T contains atomic types like the singleton type 1, the type nat representing natural numbers, and the type bool representing boolean values. The collection T is also closed with respect to certain data type constructions. For example, if A and B are types, then we can form new types such as a product type A × B, a sum type A + B, and an exponent type A → B.
To a first approximation, a dependent type is a family of types indexed over another type A. We represent such a family as a transformation F from A into the collection of types T, say F : A → T. As an example of a dependent type we can think of a family Prod_bool : nat → T that, given a number n, returns the type bool × … × bool (n times).
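Spelled out as a recursive family, using the atomic types introduced above:

\[
\mathit{Prod\_bool}(0) = 1, \qquad \mathit{Prod\_bool}(n+1) = \mathit{bool} \times \mathit{Prod\_bool}(n),
\]

so that, for instance, Prod_bool(2) = bool × (bool × 1).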
This chapter introduces the untyped λ-calculus. We establish some of its fundamental theorems, among which we count the syntactic continuity theorem, which offers another indication of the relevance of Scott continuity (cf. section 1.1 and theorem 1.3.1).
The λ-calculus was introduced around 1930 by Church as part of an investigation into the formal foundations of mathematics and logic [Chu41]. The related formalism of combinatory logic had been introduced some years earlier by Schönfinkel and Curry. While the foundational program was later relativized by such results as Gödel's incompleteness theorem, λ-calculus nevertheless provided one of the competing formalizations of the partial recursive functions. Logical interest in λ-calculus was revived by Girard's discovery of the second order λ-calculus in the early seventies (see chapter 11).
In computer science, the interest in λ-calculus goes back to Landin [Lan66] and Reynolds [Rey70]. The λ-notation is also important in LISP, designed around 1960 by McCarthy [Mac60]. These pioneering works eventually led to the development of functional programming languages like Scheme or ML. In parallel, Scott and Strachey used λ-calculus as a meta-language for the description of the denotational semantics of programming languages. The most comprehensive reference on λ-calculus is [Bar84]. A more introductory textbook is [HS86]. We refer to these books for more historical pointers.
In section 2.1, we present the untyped λ-calculus. The motivation to prove a strong normalization theorem leads us to the simply typed λ-calculus.
When considering the λ-calculus as the kernel of a programming language, it is natural to concentrate on weak reduction strategies, that is, strategies where evaluation stops at λ-abstractions. In presenting the semantic counterpart of these calculi it is useful to emphasize the distinction between value and computation. A first example, coming from recursion theory, relies on the notions of total and partial morphism. In our jargon, a total morphism, when given a value, always returns a value, whereas a partial morphism, when given a value, returns a possibly infinite computation. This example suggests that the denotation of a partial recursive algorithm is a morphism from values to computations, and that values are particular kinds of computations.
In domain theory the divergent computation is represented by a bottom element, say ⊥, that we add to the collection of values. This can be seen as the motivation for the shift from sets to flat domains. More precisely, we have considered three categories (cf. definition 1.4.17):
The category Dcpo in which morphisms send values to values, say D → E. This category is adapted to a framework where every computation terminates.
The category pDcpo, which is equivalent to the category of cpo's and strict functions, and in which morphisms send values to computations, say D → (E)⊥. This category naturally models call-by-value evaluation, where a function's arguments are evaluated before application; a sketch of this value/computation distinction follows.
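Here is a minimal OCaml sketch of the lifting construction (E)⊥ and of the composition of partial morphisms, under our own names (`lift`, `return`, `bind`); in OCaml itself divergence is of course not a value, so `Bot` merely stands in for ⊥.

```ocaml
(* 'a lift adjoins a bottom element to the values of type 'a:
   Bot plays the role of the divergent computation ⊥. *)
type 'a lift = Bot | Val of 'a

(* Values are particular kinds of computations: *)
let return (x : 'a) : 'a lift = Val x

(* Composition of partial morphisms propagates divergence, which is
   what call-by-value evaluation does with a function's argument. *)
let bind (c : 'a lift) (f : 'a -> 'b lift) : 'b lift =
  match c with
  | Bot   -> Bot
  | Val x -> f x
```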
In chapter 4, we have provided semantics for both typed and untyped λ-calculus. In this chapter we extend the approach to typed λ-calculus with fixpoints (λY-calculus), we suggest formal ways of reasoning with fixpoints, and we introduce a core functional language called PCF [Sco93, Plo77]. PCF has served as a basis for a large body of theoretical work in denotational semantics. We prove the adequacy of the interpretation with respect to the operational semantics, and we discuss the full abstraction problem, which has triggered a lot of research, both in syntax and semantics.
In section 6.1, we introduce the notion of cpo-enriched CCC's, which serves to interpret the λY-calculus. In section 6.2, we introduce fixpoint induction and show an application of this reasoning principle. In section 6.3, we introduce the language PCF, define its standard denotational semantics and its operational semantics, and we show a computational adequacy property: the meaning of a closed term of basic type is different from ⊥ if and only if its evaluation terminates. In section 6.4, we address a tighter correspondence between denotational and operational semantics, known as the full abstraction property. In section 6.5, we introduce Vuillemin's sequential functions, which capture first order PCF definability. In section 6.6, we show how a fully abstract model of PCF can be obtained by means of a suitable quotient of an (infinite) term model of PCF.
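In symbols, writing ⟦M⟧ for the denotation of M and M ⇓ for termination of its evaluation, the computational adequacy property of section 6.3 reads:

\[
[\![ M ]\!] \neq \bot \;\iff\; M \Downarrow \qquad (M \text{ a closed term of basic type}).
\]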
As the computation of a program proceeds, some (partial) information is read from the input, and portions of the output are gradually produced. This is true of mathematical reasoning too. Consider the following abstraction of a typical high school problem in simple equation solving. The student is presented with three numerical figures – the data of the problem (which might themselves be obtained as the results of previous problems). Call them u, v, and w. The problem has two parts. In part 1, the student is required to compute a quantity x, and in the second part, using part 1 as a stepping stone, he (or she) is required to compute a quantity y. After some reasoning, the student will have found that, say, x = 3u + 4, and that y = x − v. Abstracting away from the actual values of u, v, w, x, and y, we can describe the problem in terms of information processing. We consider that the problem consists in computing x and y as a function of u, v, w, i.e., (x, y) = f(u, v, w). A first remark is that w is not used. In particular, if computing w was itself the result of a long, or even diverging, computation, the student would still be able to solve his problem. A second remark is that x depends on u only. Hence, again, if finding v is very painful, the student may still achieve at least part 1 of his problem.
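This dependence structure can be made concrete with a small OCaml sketch (the function f and all names here are ours), passing the data lazily so that a diverging w does not prevent computing the answer:

```ocaml
(* x = 3u + 4 and y = x - v, with the data supplied lazily. *)
let f (u : int Lazy.t) (v : int Lazy.t) (_w : int Lazy.t) =
  let x = 3 * Lazy.force u + 4 in   (* part 1: only u is demanded  *)
  let y = x - Lazy.force v in       (* part 2: uses x and forces v *)
  (x, y)

(* Even with a diverging third argument, f still produces a result: *)
let rec diverge () : int = diverge ()
let result = f (lazy 3) (lazy 5) (lazy (diverge ()))
(* result = (13, 8); the computation of w is never demanded *)
```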
In this chapter we address the fundamental domain equation D = D → D which serves to define models of the untyped λ-calculus. By ‘equation’, we actually mean that we seek a D together with an order-isomorphism D ≅ D → D. Taking D = {⊥} certainly yields a solution, since there is exactly one function f : {⊥} → {⊥}. But we are interested in a non-trivial solution, that is, a D of cardinality at least 2, so that not all λ-terms will be identified! (Note that in the category of sets such a solution is impossible for cardinality reasons: for |D| ≥ 2 there are strictly more functions from D to D than elements of D. It is the restriction to continuous functions that leaves room for non-trivial solutions.) Domain equations will be treated in a general setting in chapter 7.
In section 3.1, we construct Scott's D∞ models as order theoretical limit constructions. In section 3.2, we first define a general notion of λ-model, and then discuss some specific properties of the D∞ models: Curry's fixpoint combinator is interpreted as the least fixpoint operator, and the theory induced by a D∞ model can be characterized syntactically, using Böhm trees. In section 3.3, we present a class of λ-models based on the idea that the meaning of a term should be the collection of properties it satisfies in a suitable ‘logic’. This point of view will be developed in more generality in chapter 10. In section 3.4, we relate the constructions of sections 3.1 and 3.3, following [CDHL82]. Finally, in section 3.5, we use intersection types as a tool for the syntactic theory of the λ-calculus [Kri91, RdR93].