This book aims to be an introduction to model theory that can be used without any background in logic. We start from scratch, introducing first-order logic, structures, languages, etc., but move on fairly quickly to the fundamental results in model theory and stability theory. We also decided to cover simple theories and Hrushovski constructions, which over the last decade have developed into an important subject. We try to give the necessary background in algebra, combinatorics and set theory either in the course of the text or in the corresponding section of the appendices. The exercises form an integral part of the book. Some of them are used later on; others complement the text or present aspects of the theory that we felt should not be completely ignored. For the most important exercises (and the more difficult ones) we include (hints for) solutions at the end of the book. Those exercises which will be used in the text have their solution marked with an asterisk.
The book falls into four parts. The first three chapters introduce the basics, as would be contained in a course giving a general introduction to model theory. This first part ends with Chapter 4, which introduces and explores the notion of a type, the topology on the space of types, and a way to make sure that a certain type will not be realized in a model to be constructed. The chapter ends with Fraïssé's amalgamation method, a simple but powerful tool for constructing models.
This book is an up-to-date introduction to simple theories and hyperimaginaries, with special attention to Lascar strong types and the elimination of hyperimaginaries problem. Assuming only knowledge of general model theory, the foundations of forking, stability and simplicity are presented in full detail. The treatment of the topics is as general as possible, working with stable formulas and types and assuming stability or simplicity of the theory only when necessary. The author offers an introduction to independence relations as well as a full account of canonical bases of types in stable and simple theories. In the last chapters the notions of internality and analyzability are discussed and used to provide a self-contained proof of elimination of hyperimaginaries in supersimple theories.
This book is devoted to recursion in programming, the technique by which the solution to a problem is expressed partly in terms of the solution to a simpler version of the same problem. Ultimately the solution to the simplest version must be given explicitly. In functional programming, recursion has received its full due since it is quite often the only repetitive construct. However, the programming language used here is Pascal, and the examples have been chosen accordingly; this makes an interesting contrast with the use of recursion in functional and logic programming. The early chapters consider simple linear recursion using examples such as finding the highest common factor of a pair of numbers, and processing linked lists. Subsequent chapters move up through binary recursion, with examples which include the Towers of Hanoi problem and symbolic differentiation, to general recursion. The book contains well over 100 examples.
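The two patterns named above can be sketched in a few lines. The following fragment is an illustrative sketch in Python rather than the book's Pascal, and the function names are ours, not the book's: `hcf` exhibits linear recursion, with one recursive call per step, while `hanoi` exhibits binary recursion, with two.

```python
def hcf(a: int, b: int) -> int:
    """Highest common factor by linear recursion (Euclid's method):
    the solution for (a, b) is expressed via the simpler problem
    (b, a mod b); the simplest case b == 0 is solved explicitly."""
    if b == 0:                 # simplest version: given explicitly
        return a
    return hcf(b, a % b)       # one recursive call: linear recursion

def hanoi(n: int, src: str, dst: str, via: str) -> list:
    """Towers of Hanoi by binary recursion: move n discs from src
    to dst using via as spare; returns the list of moves made."""
    if n == 0:
        return []
    return (hanoi(n - 1, src, via, dst)    # first recursive call
            + [(src, dst)]                 # move the largest disc
            + hanoi(n - 1, via, dst, src)) # second recursive call
```

For three discs `hanoi` produces the familiar seven moves, and `hcf(48, 36)` unwinds through `hcf(36, 12)` to the explicit case `hcf(12, 0)`.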
Driven by the question, 'What is the computational content of a (formal) proof?', this book studies fundamental interactions between proof theory and computability. It provides a unique self-contained text for advanced students and researchers in mathematical logic and computer science. Part I covers basic proof theory, computability and Gödel's theorems. Part II studies and classifies provable recursion in classical systems, from fragments of Peano arithmetic up to Π¹₁-CA₀. Ordinal analysis and the (Schwichtenberg–Wainer) subrecursive hierarchies play a central role and are used in proving the 'modified finite Ramsey' and 'extended Kruskal' independence results for PA and Π¹₁-CA₀. Part III develops the theoretical underpinnings of the first author's proof assistant MINLOG. Three chapters cover higher-type computability via information systems, a constructive theory TCF of computable functionals, realizability, the Dialectica interpretation, computationally significant quantifiers and connectives, and polytime complexity in a two-sorted, higher-type arithmetic with linear logic.
This book is about the deep connections between proof theory and recursive function theory. Their interplay has continuously underpinned and motivated the more constructively orientated developments in mathematical logic ever since the pioneering days of Hilbert, Gödel, Church, Turing, Kleene, Ackermann, Gentzen, Péter, Herbrand, Skolem, Malcev, Kolmogorov and others in the 1930s. They were all concerned in one way or another with the links between logic and computability. Gödel's theorem utilized the logical representability of recursive functions in number theory; Herbrand's theorem extracted explicit loop-free programs (sets of witnessing terms) from existential proofs in logic; Ackermann and Gentzen analysed the computational content of ε-reduction and cut-elimination in terms of transfinite recursion; Turing not only devised the classical machine-model of computation, but (what is less well known) already foresaw the potential of transfinite induction as a method for program verification; and of course the Herbrand–Gödel–Kleene equation calculus presented computability as a formal system of equational derivation (with “call by value” being modelled by a substitution rule which itself is a form of “cut” but at the level of terms).
That these two fields—proof and recursion—have developed side by side over the intervening seventy-five years so as to form now a cornerstone in the foundations of computer science, testifies to the power and importance of mathematical logic in transferring what was originally a body of philosophically inspired ideas and results, down to the frontiers of modern information technology.
The treatment of our subject—proof and computation—would be incomplete if we could not address the issue of extracting computational content from formalized proofs. The first author has over many years developed a machine-implemented proof assistant, Minlog, within which this can be done and where, unlike in many other similar systems, the extracted content lies within the logic itself. Many non-trivial examples have been developed, illustrating both the breadth and the depth of Minlog, and some of them will be seen in what follows. Here we shall develop the theoretical underpinnings of this system. It will be a theory of computable functionals (TCF), a self-generating system built from scratch and based on minimal logic, whose intended model consists of the computable functions on partial continuous objects, as treated in the previous chapter. The main tool will be (iterated) inductive definitions of predicates and their elimination (or least-fixed-point) axioms. Its computational strength will be roughly that of ID<ω, but it will be more adaptable and computationally applicable.
After developing the system TCF, we shall concentrate on delicate questions to do with finding computational content in both constructive and classical existence proofs. We discuss three “proof interpretations” which achieve this task: realizability for constructive existence proofs and, for classical proofs, the refined A-translation and Gödel's Dialectica interpretation. After presenting these concepts and proving the crucial soundness theorem for each of them, we address the question of how to implement such proof interpretations.
The main subject of Mathematical Logic is mathematical proof. In this introductory chapter we deal with the basics of formalizing such proofs and, via normalization, analysing their structure. The system we pick for the representation of proofs is Gentzen's natural deduction from [1935]. Our reasons for this choice are twofold. First, as the name says this is a natural notion of formal proof, which means that the way proofs are represented corresponds very much to the way a careful mathematician writing out all details of an argument would go anyway. Second, formal proofs in natural deduction are closely related (via the so-called Curry–Howard correspondence) to terms in typed lambda calculus. This provides us not only with a compact notation for logical derivations (which otherwise tend to become somewhat unmanageable tree-like structures), but also opens up a route to applying (in part 3) the computational techniques which underpin lambda calculus.
Apart from classical logic we will also deal with more constructive logics: minimal and intuitionistic logic. This will reveal some interesting aspects of proofs, e.g., that it is possible and useful to distinguish between existential proofs that actually construct witnessing objects, and others that don't.
An essential point for Mathematical Logic is to fix a formal language to be used. We take implication → and the universal quantifier ∀ as basic. Then the logic rules correspond precisely to lambda calculus.
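The correspondence just mentioned can be made concrete in a small sketch (ours, not the book's notation): reading a proof of an implication A → B as a function taking proofs of A to proofs of B, implication introduction becomes lambda abstraction and implication elimination becomes application.

```python
# Curry-Howard, sketched: a proof of A -> B is modelled as a function
# from proofs of A to proofs of B.

# ->-introduction (discharging an assumption) is lambda abstraction.
k = lambda a: lambda b: a            # proves A -> (B -> A)
s = lambda f: lambda g: lambda a: f(a)(g(a))
                                     # proves (A->B->C) -> (A->B) -> A -> C

# ->-elimination (modus ponens) is function application: from proofs
# of A -> B and of A we obtain a proof of B.
def modus_ponens(implication, premise):
    return implication(premise)

# The derivation S K K of A -> A corresponds to the identity function.
identity = s(k)(k)
```

Normalizing the derivation S K K amounts to beta-reducing the corresponding term to λa.a, which is exactly the computational reading of proof normalization developed in this chapter.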
In this chapter we develop the basics of recursive function theory, or as it is more generally known, computability theory. Its history goes back to the seminal works of Turing, Kleene and others in the 1930s.
A computable function is one defined by a program whose operational semantics tell an idealized computer what to do to its storage locations as it proceeds deterministically from input to output, without any prior restrictions on storage space or computation time. We shall be concerned with various program styles and the relationships between them, but the emphasis throughout this chapter and in part 2 will be on one underlying data type, namely the natural numbers, since it is there that the most basic foundational connections between proof theory and computation are to be seen in their clearest light. This is not to say that computability over more general and abstract data types is less important. Quite the contrary. For example, from a logical point of view, Stoltenberg-Hansen and Tucker [1999], Tucker and Zucker [2000], [2006] and Moschovakis [1997] give excellent presentations of a more abstract approach, and our part 3 develops a theory in higher types from a completely general standpoint.
The two best-known models of machine computation are the Turing Machine and the (Unlimited) Register Machine of Shepherdson and Sturgis [1963]. We base our development on the latter since it affords the quickest route to the results we want to establish (see also Cutland [1980]).
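To make the model concrete, here is a minimal register machine interpreter, sketched in Python under the commonly used presentation of the Shepherdson–Sturgis instruction set (zero, successor, transfer, conditional jump); the encoding and the example program are our illustration, not the book's.

```python
def urm(program, registers):
    """Run an Unlimited Register Machine program (a sketch, using the
    usual instruction set: Z(n) zeroes register n, S(n) increments it,
    T(m, n) copies register m to n, J(m, n, q) jumps to instruction q
    when registers m and n are equal).

    `registers` maps register indices to natural numbers; registers
    not mentioned hold 0.  Instructions are numbered from 1, and the
    machine halts when the program counter leaves the program."""
    regs = dict(registers)
    pc = 1
    while 1 <= pc <= len(program):
        op, *args = program[pc - 1]
        if op == 'Z':
            regs[args[0]] = 0
        elif op == 'S':
            regs[args[0]] = regs.get(args[0], 0) + 1
        elif op == 'T':
            regs[args[1]] = regs.get(args[0], 0)
        elif op == 'J':
            m, n, q = args
            if regs.get(m, 0) == regs.get(n, 0):
                pc = q
                continue
        pc += 1
    return regs.get(1, 0)      # convention: the result is left in R1

# Addition: with x in R1 and y in R2, increment R1 while counting
# R3 up to R2; R1 finally holds x + y.
add = [('J', 2, 3, 5),   # 1: if R2 == R3, halt
       ('S', 1),         # 2: R1 := R1 + 1
       ('S', 3),         # 3: R3 := R3 + 1
       ('J', 1, 1, 1)]   # 4: unconditional jump back to 1
```

Running `add` with R1 = 3 and R2 = 4 leaves 7 in R1; the deterministic step-by-step operation on unbounded registers is exactly the informal picture of computation described above.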
In this final chapter we focus much of the technical/logical work of previous chapters onto theories with limited (more feasible) computational strength. The initial motivation is the surprising result of Bellantoni and Cook [1992] characterizing the polynomial-time functions by the primitive recursion schemes, but with a judiciously placed semicolon first used by Simmons [1988], separating the variables into two kinds (or sorts). The first “normal” kind controls the length of recursions, and the second “safe” kind marks the places where substitutions are allowed. Various alternative names have arisen for the two sorts of variables, which will play a fundamental role throughout this chapter, thus “normal”/“input” and “safe”/“output”; we shall use the input–output terminology. The important distinction here is that input and output variables will not just be of base type, but may be of arbitrary higher type.
We begin by developing a basic version of arithmetic which incorporates this variable separation. This theory EA(;) will have elementary recursive strength (hence the prefix E) and sub-elementary (polynomially bounded) strength when restricted to its Σ₁-inductive fragment. EA(;) is a first-order theory which we use as a means to illustrate the underlying principles available in such two-sorted situations. Our aim however is to extend the Bellantoni and Cook variable separation to also incorporate higher types. This produces a theory A(;) extending EA(;) with higher type variables and quantifiers, having as its term system a two-sorted version T(;) of Gödel's T. T(;) will thus give a functional interpretation for A(;), which has the same elementary computational strength, but is more expressive and applicable.
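The normal/safe separation can be illustrated at base type. The sketch below is in Python, which has no variable sorting, so comments mark where the semicolon would sit; it also recurses on unary numbers rather than binary notation, purely for brevity. The recursion in `add` is driven by its normal argument, and `mult` passes its previous value only into `add`'s safe slot, never into a position that controls a further recursion.

```python
# Bellantoni-Cook style "safe recursion", sketched on unary numbers.
# In f(x; a), arguments left of the semicolon are normal (may drive
# recursions); arguments right of it are safe (only carried along).

def succ(a):                 # succ(; a): a safe operation
    return a + 1

def add(x, a):               # add(x; a): recursion on the normal x;
    if x == 0:               # the recursive value occurs only in the
        return a             # safe position of succ
    return succ(add(x - 1, a))

def mult(x, y):              # mult(x, y;): recursion on the normal x;
    if x == 0:               # the previous value goes into add's SAFE
        return 0             # slot, so it can never feed a recursion
    return add(y, mult(x - 1, y))
```

Allowing the recursive value into a normal position would permit nesting recursions on previously computed results, which is exactly the source of exponential growth that the separation rules out.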
In this chapter we will develop a somewhat more general view of computability theory, where not only numbers and functions appear as arguments, but also functionals of any finite type.
Abstract computability via information systems
There are two principles on which our notion of computability will be based: finite support and monotonicity, both of which have already been used (at the lowest type level) in section 2.4.
It is a fundamental property of computation that evaluation must be finite. So in any evaluation of Φ(ϕ) the argument ϕ can be called upon only finitely many times, and hence the value—if defined—must be determined by some finite subfunction of ϕ. This is the principle of finite support (cf. section 2.4).
Let us carry this discussion somewhat further and look at the situation one type higher up. Let ℋ be a partial functional of type 3, mapping type-2 functionals Φ to natural numbers. Suppose Φ is given and ℋ(Φ) evaluates to a defined value. Again, evaluation must be finite. Hence the argument Φ can only be called on finitely many functions ϕ. Furthermore each such ϕ must be presented to Φ in a finite form (explicitly say, as a set of ordered pairs). In other words, ℋ and also any type-2 argument Φ supplied to it must satisfy the finite support principle, and this must continue to apply as we move up through the types.
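The finite support principle at type 2 can be observed directly: wrap the function argument so that every call is recorded, and the record is a finite subfunction that already determines the functional's value. A small sketch in Python follows; the functional `Phi` and all the names are our illustration, not the book's.

```python
def finite_support_of(Phi, f):
    """Record the (finitely many) arguments at which the type-2
    functional Phi actually calls its function argument f.  The
    returned dict is a finite subfunction of f which already
    determines Phi(f): the finite support principle."""
    queried = {}
    def probe(n):
        queried[n] = f(n)    # remember every point f is called at
        return queried[n]
    value = Phi(probe)
    return value, queried

# A sample type-2 functional: it calls its argument at 0 and at f(0).
Phi = lambda f: f(f(0)) + 1

value, support = finite_support_of(Phi, lambda n: n + 2)
# support is the finite subfunction {0: 2, 2: 4}; any g agreeing with
# f on these two points satisfies Phi(g) == Phi(f).
```

Since any total evaluation of `Phi` terminates after finitely many calls to its argument, the recorded dictionary is always finite, and monotonicity says that extending it to a larger subfunction cannot change the value.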