Curry-style system F, that is, system F with no explicit types in terms, may be viewed as a core presentation of polymorphism from the point of view of programming languages.
This paper gives a characterisation of type isomorphisms for this language using a game model whose intuitions come both from the syntax and from the game-semantics universe. The model is composed of an untyped part to interpret terms, a notion of arena to interpret types, and a typed part to express the fact that an untyped strategy σ plays on an arena A.
By analysing isomorphisms in the model, we prove that the equational system corresponding to type isomorphisms for Curry-style system F is the extension of the equational system for Church-style isomorphisms with a new, non-trivial equation: ∀X.A ≅ A[∀Y.Y/X] if X appears only positively in A.
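As an illustration of the new equation (our example, not one from the paper): take A = B → X with X not occurring in B. Then X appears only positively in A, and the equation gives

∀X.(B → X) ≅ (B → X)[∀Y.Y/X] = B → ∀Y.Y.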
1. Philosophical background: iteration, ineffability, reflection. There are at least two heuristic motivations for the axioms of standard set theory, by which we mean, as usual, first-order Zermelo–Fraenkel set theory with the axiom of choice (ZFC): the iterative conception and limitation of size (see Boolos, 1989). Each strand provides a rather hospitable environment for the hypothesis that the set-theoretic universe is ineffable, which is our target in this paper, although the motivation is different in each case.
A common way to show the termination of the union of two abstract reduction systems, provided both systems terminate, is to prove that they enjoy a specific property (some sort of ‘commutation’, for instance). This property is used to show that, for the union not to terminate, one of the systems would itself have to be non-terminating, which yields a contradiction. Unfortunately, the property may be impossible to prove because some of the objects being reduced do not have an adequate shape.
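One standard property of this kind, stated here only as an illustration (the paper's precise condition may differ), is Bachmair and Dershowitz's quasi-commutation: whenever a →₂ b →₁ c, there exists d with a →₁ d (→₁ ∪ →₂)* c. If →₁ and →₂ both terminate and →₁ quasi-commutes over →₂ in this sense, then →₁ ∪ →₂ terminates.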
Hence the purpose of this paper is threefold:
– First, it introduces an operator enabling us to insert a reduction step on such an object, and therefore to change its shape, while still preserving the ability to use the property. Of course, some new properties will need to be verified.
– Second, as an instance of our technique, the operator is applied to relax a well-known lemma stating the termination of the union of two terminating abstract reduction systems.
– Finally, this lemma is applied first in a particular and then in a more general way to show the termination of some lambda calculi with inductive types augmented with specific reductions dealing with:
A simple type σ is retractable to a simple type τ if there are two terms C : σ → τ and D : τ → σ such that D ∘ C =βη λx.x. The retractability of types is affine if the terms C and D are affine, that is, when every bound variable occurs in them at most once in the scope of its declaration. This paper presents a system that derives affine retractability for simple types. It also studies the complexity of constructing these affine retractions. The problem of affine retractability is NP-complete even for the class of types over a single type atom with limited functional order. In addition, a polynomial algorithm for types of order less than three is presented.
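For a concrete illustration (ours, not from the paper): the atom a is affinely retractable to (a → a) → a via

C = λx.λf.f x : a → ((a → a) → a)  and  D = λh.h(λy.y) : ((a → a) → a) → a,

since D ∘ C = λx.D(C x), which β-reduces in a few steps to λx.x, and every bound variable of C and D occurs exactly once.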
As McKinsey and Tarski showed, the Stone representation theorem for Boolean algebras extends to algebras with operators to give topological semantics for (classical) propositional modal logic, in which the “necessity” operation is modeled by taking the interior of an arbitrary subset of a topological space. In this article, the topological interpretation is extended in a natural way to arbitrary theories of full first-order logic. The resulting system of S4 first-order modal logic is complete with respect to such topological semantics.
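Concretely, under this reading the characteristic S4 principles correspond to the standard laws of the interior operator I on a topological space X:

□(φ ∧ ψ) ↔ □φ ∧ □ψ corresponds to I(A ∩ B) = I(A) ∩ I(B);
□φ → φ corresponds to I(A) ⊆ A;
□φ → □□φ corresponds to I(A) ⊆ I(I(A));
□⊤ corresponds to I(X) = X.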
A new formal theory DT of truth extending PA is introduced, whose language is that of PA together with one new unary predicate symbol T(x), for truth applied to Gödel numbers of suitable sentences in the extended language. Falsity of x, F(x), is defined as truth of the negation of x; then the formula D(x), expressing that x is the number of a determinate meaningful sentence, is defined as the disjunction of T(x) and F(x). The axioms of DT are those of PA extended by (I) full induction, (II) strong compositionality axioms for D, and (III) the recursive defining axioms for T relative to D. By (II) is meant that a sentence satisfies D if and only if all its parts satisfy D; this holds in a slightly modified form for conditional sentences. The main result is that DT has a standard model. As an improvement over earlier systems developed by the author, DT meets a number of leading criteria for formal theories of truth that have been proposed in the recent literature, and comes closer to realizing the informal view that the domain of the truth predicate consists exactly of the determinate meaningful sentences.
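In symbols (writing neg(x) for the number of the negation of the sentence with number x; the notation is ours), the two defined notions of the abstract are F(x) := T(neg(x)) and D(x) := T(x) ∨ F(x). A typical compositionality axiom of kind (II) would then read, for conjunction, D(and(x, y)) ↔ D(x) ∧ D(y), with the clause for the conditional suitably modified as the abstract indicates.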
Mathematics and philosophy have historically enjoyed a mutually beneficial and productive relationship, as a brief review of the work of mathematician–philosophers such as Descartes, Leibniz, Bolzano, Dedekind, Frege, Brouwer, Hilbert, Gödel, and Weyl easily confirms. In the last century, it was especially mathematical logic and research in the foundations of mathematics which, to a significant extent, have been driven by philosophical motivations and carried out by technically minded philosophers. Mathematical logic continues to play an important role in contemporary philosophy, and mathematically trained philosophers continue to contribute to the literature in logic. For instance, modal logics were first investigated by philosophers and now have important applications in computer science and mathematical linguistics.
As we have seen in Chapter 10, the main property of typed systems not possessed by untyped systems is that all reductions are finite, and hence every typed term has a normal form. In this appendix we shall prove this theorem for the simply typed systems in Chapter 10, and for an extended system from which the consistency of first-order arithmetic can be deduced.
The proofs will be variations on a method due to W. Tait, [Tai67]. (See also [TS00, Sections 6.8, 6.12.2] or [SU06, Sections 5.3.2–5.3.6].) Simpler methods are known for pure λ and CL, but Tait's is the easiest to extend to more complex type-systems.
We begin with two definitions which have meaning for any reduction concept defined by sequences of replacements. The first is a repetition of Definition 10.14. The second is the key to Tait's method.
Definition A3.1 (Normalizable terms) A typed or untyped CL- or λ-term X is called normalizable or weakly normalizable or WN with respect to a given reduction concept, iff it reduces to a normal form. It is called strongly normalizable (SN) iff all reductions starting at X are finite.
As noted in Chapter 10, SN implies WN. Also the concept of SN involves the distinction between finite and infinite reductions, whereas WN does not, so SN is a fundamentally more complex concept than WN.
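A classic example separating the two notions (not given at this point in the text): let Ω ≡ (λx.x x)(λx.x x). The term (λx.y)Ω is WN, since (λx.y)Ω ⊳β y; but contracting the redex inside Ω leads from (λx.y)Ω back to itself, so there is an infinite reduction and the term is not SN.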
In this chapter, a sequence of pure terms will be chosen to represent the natural numbers. It is then reasonable to expect that some of the other terms will represent functions of natural numbers, in some sense. This sense will be defined precisely below. The functions so representable will turn out to be exactly those computable by Turing machines.
In the 1930s, three concepts of computability arose independently: ‘Turing-computable function’, ‘recursive function’ and ‘λ-definable function’. The inventors of these three concepts soon discovered that all three gave the same set of functions. Most logicians took this as strong evidence that the informal notion of ‘computable function’ had been captured exactly by these three formally-defined concepts.
Here we shall look at the recursive functions, and prove that all these functions can be represented in λ and CL. (We shall not work with the Turing-computable functions because their representability-proof is longer.)
An outline definition of the recursive functions will be given here; more details and background can be found in many textbooks on computability or textbooks on logic which include computability, for example [Coh87], [Men97] or the old but thorough [Kle52].
Notation 4.1 This chapter is written in the same neutral notation as the last one, and its results will hold for both λ and CL unless explicitly stated otherwise.
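To make the representation idea concrete before the formal development, here is a small Haskell sketch using Church numerals, one standard choice of numerals (the chapter's own numeral system may differ):

```haskell
{-# LANGUAGE RankNTypes #-}

-- A Church numeral n is the term  \f x -> f (f (... (f x)))  with n f's:
-- it represents n by applying its first argument n times.
type Church = forall a. (a -> a) -> a -> a

zero :: Church
zero _f x = x

suc :: Church -> Church            -- successor: one more application of f
suc n f x = f (n f x)

add :: Church -> Church -> Church  -- addition: apply f a total of m + n times
add m n f x = m f (n f x)

-- Read a numeral back as an ordinary Int, to inspect results.
toInt :: Church -> Int
toInt n = n (+ 1) 0

main :: IO ()
main = print (toInt (add (suc zero) (suc (suc zero))))   -- prints 3
```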
The λ-calculus and combinatory logic are two systems of logic which can also serve as abstract programming languages. They both aim to describe some very general properties of programs that can modify other programs, in an abstract setting not cluttered by details. In some ways they are rivals, in others they support each other.
The λ-calculus was invented around 1930 by the American logician Alonzo Church, as part of a comprehensive logical system which included higher-order operators (operators which act on other operators). In fact the language of λ-calculus, or some other essentially equivalent notation, is a key part of most higher-order languages, whether for logic or for computer programming. Indeed, the first uncomputable problems to be discovered were originally described, not in terms of idealized computers such as Turing machines, but in λ-calculus.
Combinatory logic has the same aims as λ-calculus, and can express the same computational concepts, but its grammar is much simpler. Its basic idea is due to two people: Moses Schönfinkel, who first thought of it in 1920, and Haskell Curry, who independently re-discovered it seven years later and turned it into a workable technique.
The purpose of this book is to introduce the reader to the basic methods and results in both fields.
The reader is assumed to have no previous knowledge of these fields, but to know a little about propositional and predicate logic and recursive functions, and to have some experience with mathematical induction.
In first-order logic, a common question to ask about a formal theory is ‘what are its models like?’. For the theories λβ and CLw the first person to ask this was Dana Scott in the 1960s, while he was working on extending the concept of ‘computable’ from functions of numbers to functions of functions. The first non-trivial model, D∞, was constructed by Scott in 1969.
Since then many other models have been made. The present chapter will set the scene by introducing a few basic general properties of models of CLw, and the next will do the same for λβ, whose concept of model is more complicated. Then Chapter 16 will describe the model D∞ in detail and give outlines and references for some other models. Scott's D∞ is not the simplest model known, but it is a good introduction, as the concepts used in building it are also involved in discussions of other models.
But first, a comment: although λ-calculus and combinatory logic were invented as long ago as the 1920s, there was a 40-year gap before their first model was constructed; why was there this long delay?
There are two main reasons. The first is the origin of λβ and CLw. Both Church and Curry viewed these theories, not from within the semantics that most post-1950 logicians were trained in, but from the alternative viewpoint described in Discussion 3.27.
In Chapter 1 the technicalities of bound variables, substitution and α-conversion were merely outlined. This is the best approach at the beginning. Indeed, most accounts of λ omit details of these, and simply assume that clashes between bound and free variables can always be avoided without problems; see, for example, the ‘variable convention’ in [Bar84, Section 2.1.13]. The purpose of this appendix is to show how that assumption can be justified.
Before starting, it is worth mentioning two points. First, there is a notation for λ-calculus that avoids bound variables completely. It was invented by N. G. de Bruijn, see [Bru72], and in it each bound variable-occurrence is replaced by a number showing its ‘distance’ from its binding λ, in a certain sense. De Bruijn's notation has been found useful when coding λ-terms for machine manipulation; examples are in [Alt93, Hue94, KR95]. But, as remarked in [Pol93, pp. 314–315], it does not lead to a particularly simple definition of substitution, and most human workers still find the classical notation easier to read.
For such workers, the details of α-conversion would not be avoided by de Bruijn's notation, but would simply be moved from the stage of manipulating terms to that of translating between the two notations.
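For illustration, here is a minimal Haskell rendering of de Bruijn's idea (the datatype and names are ours, not from the text):

```haskell
-- Terms in de Bruijn notation: a bound-variable occurrence is replaced
-- by the number of lambdas between it and its binding lambda.
data Term
  = Var Int        -- 0 = innermost enclosing lambda, 1 = next one out, ...
  | Lam Term       -- the binder itself carries no variable name
  | App Term Term
  deriving Show

-- The classical term  \x. \y. x (\z. z y)  becomes:
example :: Term
example = Lam (Lam (App (Var 1) (Lam (App (Var 0) (Var 1)))))
```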
The second point to note is shown by the following two examples: if we simply deleted ≡α from the rules of λ-calculus, we would lose the confluence of both ⊳βη and ⊳β.
In mathematics the definition of a particular function usually includes a statement of the kind of inputs it will accept, and the kind of outputs it will produce. For example, the squaring function accepts integers n as inputs and produces integers n² as outputs, and the zero-test function accepts integers and produces Boolean values (‘true’ or ‘false’ according as the input is zero or not).
Corresponding to this way of defining functions, λ and CL can be modified by attaching expressions called ‘types’ to terms, like labels to denote their intended input and output sets. In fact almost all programming languages that use λ and CL use versions with types.
This chapter and the next two will describe two different approaches to attaching types to terms: (i) Church-style, sometimes called explicit or rigid, and (ii) Curry-style, sometimes called implicit. Both are used extensively in programming.
The Church-style approach originated in [Chu40], and is described in the present chapter. In it, a term's type is a built-in part of the term itself, rather like a person's fingerprint or eye-colour is a built-in part of the person's body. (In Curry's approach a term's type will be assigned after the term has been built, like a passport or identity-card may be given to a person some time after birth.)
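The contrast is already visible for the identity function: in Church-style one writes λx:σ.x, a term that carries the type σ → σ by construction; in Curry-style one writes the untyped term λx.x and afterwards assigns it a type by a judgement such as ⊢ λx.x : σ → σ, one judgement for each type σ.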
Having looked at the abstract definition of ‘model’ in the last two chapters, let us now study one particular model in detail. It will be a variant of Dana Scott's D∞, which was the first non-trivial model invented, and has been a dominant influence on the semantics of λ-calculus and programming languages ever since.
Actually, D∞ came as quite a surprise to all workers in λ, even to Scott. In autumn 1969 he wrote a paper which argued vigorously that an interpretation of all untyped λ-terms in set theory was highly unlikely, and that those who were interested in making models of λ should limit themselves to the typed version. (For that paper, see [Sco93].) The paper included a sketch of a new interpretation of typed terms. Then, only a month later, Scott realized that, by altering this new interpretation only slightly, he could make it into a model of untyped λ; this was D∞.
D∞ is a model of both CLw and λβ, and is also extensional. The description below will owe much to accounts by Dana Scott and Gordon Plotkin, and to the well-presented account in [Bar84], but it will give more details than these and will assume the reader has a less mathematical background.
The construction of D∞ involves notions from topology. These will be defined below. They are very different from the syntactical techniques used in this book so far, but they are standard tools in the semantics of programming languages.