In the last chapter we gave informal but hopefully entirely persuasive arguments that key numerical properties and relations that arise from the arithmetization of the syntax of PA – such as Term, Wff and Prf – are primitive recursive.
Gödel, as we said, gives rigorous proofs of such results (or rather, he proves the analogues for his particular formal system). He shows how to define a sequence of more and more complex functions and relations by composition and recursion, eventually leading up to a p.r. definition of Prf. Inevitably, this is a laborious job: Gödel does it with masterly economy and compression but, even so, it takes him forty-five steps of function-building to show that Prf is p.r.
We have in fact already traced some of the first steps in Section 14.8. We showed, in particular, that extracting exponents of prime factors – the key operation used in decoding Gödel numbers – can be done by a p.r. function, exf. To follow Gödel further, we need to keep going in the same vein, defining ever more complex functions. What I propose to do in this chapter is to fill in the next few steps moderately carefully, and then indicate rather more briefly how the remainder go. This should be quite enough to give you a genuine feel for Gödel's demonstration and to indicate how it can be completed, without going into too much unnecessary detail.
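To fix ideas, here is a minimal sketch of the *behaviour* of exf in Python. (This is only an illustration of what the function computes; a genuine p.r. definition proceeds by composition and bounded search, not by an unbounded loop.)

```python
def exf(p, n):
    """Return the exponent of the prime p in the factorization of n.

    Illustrates what the p.r. function exf from the text computes;
    a proper p.r. definition would use bounded minimization instead
    of a while loop.
    """
    if n == 0 or p < 2:
        return 0
    e = 0
    while n % p == 0:
        n //= p
        e += 1
    return e

# Decoding a toy Goedel number: 600 = 2**3 * 3**1 * 5**2
print(exf(2, 600))  # 3
print(exf(3, 600))  # 1
print(exf(5, 600))  # 2
```

Reading off the exponents of successive primes in this way is exactly the decoding step the text describes.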
In the last chapter, we considered the theory IΔ0 built in the language LA, whose axioms are those of Q, plus (the universal closures of) all instances of the Induction Schema for Δ0 predicates. Now we lift that restriction on induction, and allow any LA predicate to appear in instances of the Schema. The result is (first-order) Peano Arithmetic.
Being generous with induction
(a) Given what we said in Section 9.1(a) about the motivation for the induction principle, any instance of the Induction Schema will be intuitively acceptable as an axiom, so long as we replace φ in the Schema by a suitable open wff which expresses a genuine property/relation.
We argued at the beginning of the last chapter that Δ0 wffs are eminently suitable, and we considered the theory you get by adding to Q the instances of the Induction Schema involving such wffs. But why should we be so very restrictive?
Take any open wff φ of LA at all. This will be built from no more than the constant term ‘0’, the familiar successor, addition and multiplication functions, plus identity and other logical apparatus. Therefore – you might very well suppose – it ought also to express a perfectly determinate arithmetical property or relation. So why not be generous and allow any open LA wff to be substituted for φ in the Induction Schema? The result of adding to Q (the universal closures of) every instance of the Schema is PA – First-order Peano Arithmetic.
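For reference, the Induction Schema under discussion can be displayed in a standard formulation as follows, where φ may be replaced by any suitable open LA wff:

```latex
\bigl(\varphi(0) \land \forall x\,(\varphi(x) \to \varphi(Sx))\bigr) \to \forall x\,\varphi(x)
```

Restricting the substituends for φ to Δ0 wffs gives IΔ0; allowing arbitrary open LA wffs gives PA.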
In this chapter, we introduce Turing's classic analysis of effective computability. And then – in the next chapter – we will establish the crucial result that the Turing-computable total functions are exactly the μ-recursive functions. This result is fascinating in its own right; it is hugely important historically; and it enables us later to establish some further results about recursiveness and incompleteness in a particularly neat way. So let's dive in without more ado.
The basic conception
Think of executing an algorithmic computation ‘by hand’, using pen and paper. We follow strict rules for writing down symbols in various patterns. To keep things tidy, let's write the symbols neatly one-by-one in the squares of some suitable square-ruled paper. Eventually – assuming that we don't find ourselves carrying on generating output forever – the computation process stops and the result of the computation is left written down in some block of squares on the paper.
Now, Turing suggests, using a two-dimensional grid for writing down the computation is not of the essence. Imagine cutting up the paper into horizontal strips a square deep, and pasting these together into one long tape. We could use that as an equivalent workspace.
Using a rich repertoire of symbols is not of the essence either. Suppose some computational system uses 27 symbols. Number these off using a five-digit binary code (so the 14th symbol, for example, gets the code ‘01110’). Then divide each of the original squares on our workspace tape into a row of five small cells.
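The re-coding step is easy to make concrete. Here is a small sketch (the particular 27-symbol alphabet is a hypothetical choice for illustration; only the numbering scheme matters):

```python
# Number off a 27-symbol alphabet with five-digit binary codes,
# as in the text: the 14th symbol gets '01110'.
# The alphabet itself (a-z plus a blank) is just an illustrative choice.
symbols = [chr(ord('a') + i) for i in range(26)] + ['_']   # 27 symbols

def code(sym):
    """Five-digit binary code for a symbol; numbering starts at 1."""
    n = symbols.index(sym) + 1
    return format(n, '05b')

print(code(symbols[13]))  # the 14th symbol -> '01110'
```

Each original square then corresponds to five binary cells, so a two-symbol alphabet (plus blank) suffices in principle.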
The previous chapter talked about functions rather generally. We now narrow the focus and concentrate more specifically on effectively computable functions. Later in the book, we will want to return to some of the ideas we introduce here and give sharper, technical, treatments of them. But for present purposes, informal intuitive presentations are enough.
We also introduce the crucial related notion of an effectively enumerable set, i.e. a set that can be enumerated by an effectively computable function.
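The idea of enumeration by a computable function can be illustrated very simply (the set of squares here is just an arbitrary example):

```python
# An effectively enumerable set is one whose members can be listed off
# as the values f(0), f(1), f(2), ... of some effectively computable
# function f. Here f enumerates the set of perfect squares.
def f(n):
    return n * n

listing = [f(n) for n in range(6)]
print(listing)  # [0, 1, 4, 9, 16, 25]
```

Running through f(0), f(1), f(2), … lists every member of the set, possibly with repetitions, which is all the definition requires.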
Effectively computable functions
(a) Familiar school-room arithmetic routines – e.g. for squaring a number or finding the highest common factor of two numbers – give us ways of effectively computing the value of some function for a given input: the routines are, we might say, entirely mechanical.
Later, in the logic classroom, we learn new computational routines. For example, there's a quite trivial syntactic computation which takes two well-formed formulae (wffs) and forms their conjunction, and there's an only slightly less trivial procedure for effectively computing the truth value of a propositional calculus wff as a function of the values of its atoms.
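The second of these routines, computing a wff's truth value from the values of its atoms, can be sketched as follows. (The nested-tuple representation of wffs is a hypothetical choice made for the illustration, not anything from the text.)

```python
# Sketch: effectively computing the truth value of a propositional
# wff as a function of the values of its atoms. Wffs are represented
# as atom names or nested tuples, e.g. ('->', ('and', 'P', 'Q'), 'P').
def evaluate(wff, valuation):
    """wff: an atom name, ('not', w), or ('and'/'or'/'->', w1, w2)."""
    if isinstance(wff, str):
        return valuation[wff]
    op = wff[0]
    if op == 'not':
        return not evaluate(wff[1], valuation)
    a, b = evaluate(wff[1], valuation), evaluate(wff[2], valuation)
    if op == 'and':
        return a and b
    if op == 'or':
        return a or b
    if op == '->':
        return (not a) or b
    raise ValueError(f'unknown connective {op!r}')

# (P & Q) -> P, evaluated under one valuation of its atoms
wff = ('->', ('and', 'P', 'Q'), 'P')
print(evaluate(wff, {'P': True, 'Q': False}))  # True
```

Each step of the evaluation is a trivial table look-up, which is just what makes the procedure effective in the intended sense.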
What is meant by talking of an effective computational procedure? The core idea is that an effective computation involves (1) executing an algorithm which (2) successfully terminates.
1. An algorithm is a set of step-by-step instructions (instructions which are pinned down in advance of their execution), with each small step clearly specified in every detail (leaving no room for doubt as to what does and what doesn't count as executing the step, and leaving no room for chance).
Gödel's Incompleteness Theorems tell us about the limits of theories of arithmetic. More precisely, they tell us about the limits of effectively axiomatized formal theories of arithmetic. But what exactly does that mean?
Formalization as an ideal
Rather than just dive into a series of definitions, it is well worth pausing to remind ourselves of why we might care about formalizing theories.
So let's get back to basics. In elementary logic classes, beginners are drilled in translating arguments into an appropriate formal language and then constructing formal deductions of the stated conclusions from given premisses.
Why bother with formal languages? Because everyday language is replete with redundancies and ambiguities, not to mention sentences which simply lack clear truth-conditions. So, in assessing complex arguments, it helps to regiment them into a suitable artificial language which is expressly designed to be free from obscurities, and where surface form reveals logical structure.
Why bother with formal deductions? Because everyday arguments often involve suppressed premisses and inferential fallacies. It is only too easy to cheat. Setting out arguments as formal deductions in one style or another enforces honesty: we have to keep a tally of the premisses we invoke, and of exactly what inferential moves we are using. And honesty is the best policy. For suppose things go well with a particular formal deduction. Suppose we get from the given premisses to some target conclusion by small inference steps each one of which is obviously valid (no suppressed premisses are smuggled in, and there are no suspect inferential moves).
As we noted at the end of Chapter 9, it is rather natural to suggest that the intuitive principle of arithmetical induction should be regimented as a second-order principle that quantifies over numerical properties, and which therefore can't be directly expressed in a first-order theory that only quantifies over numbers. So why not work with a second-order theory, rather than hobble our formal arithmetic by forcing it into a first-order straitjacket?
True, we have discovered that – so long as it stays consistent and effectively axiomatized – any theory containing enough arithmetic will be incomplete. But still, we ought to say at least a little about second-order arithmetics, and this is as good a place as any. Indeed, if you have done a university mathematics course, you might very well be feeling rather puzzled by now. Typically, at some point, you are introduced to axioms for a version of ‘Second-order Peano Arithmetic’ and are given the elementary textbook proof that these axioms are categorical, i.e. pin down a unique type of structure. But if this second-order arithmetic does pin down the structure of the natural numbers, then – given that any arithmetic sentence makes a determinate claim about this structure – it apparently follows that this theory does enough to settle the truth-value of every arithmetic sentence. Which makes it sound as if there can after all be a (consistent) negation-complete axiomatic theory of arithmetic richer than first-order PA, flatly contradicting the Gödel-Rosser Theorem.
The title of Gödel's great paper is ‘On formally undecidable propositions of Principia Mathematica and related systems I’. And as we noted in Section 23.4, his First Incompleteness Theorem does indeed undermine Principia's logicist ambitions. But logicism wasn't really Gödel's main target. For, by 1931, much of the steam had already gone out of the logicist project. Instead, the dominant project for showing that classical infinitary mathematics is in good order was Hilbert's Programme, which we mentioned at the outset (Section 1.6). This provided the real impetus for Gödel's early work; it is time we filled out more of the story.
However, this book certainly isn't the place for a detailed treatment of the ideas of Hilbert and his followers as they developed pre- and post-Gödel; nor is it the place for an extended discussion of the later fate of Hilbertian ideas.1 So our necessarily brief remarks will do no more than sketch the logical geography of some broadly Hilbertian territory: those with more of a bent for the history of logic can be left to fight over the question of Hilbert's precise path through the landscape.
Another, quite different, topic which we will take up in this Interlude is the vexed one of the impact of the incompleteness theorems, and in particular the Second Theorem, on the issue of mechanism: do Gödelian results show that minds cannot be machines?
In the last Interlude, we gave a five-stage map of our route to Gödel's First Incompleteness Theorem. The first two stages we mentioned are now behind us. They involved (1) introducing the standard theories Q and PA, then (2) defining the p.r. functions and – the hard bit! – proving Q's p.r. adequacy. In order to do the hard bit, we have already used one elegant idea from Gödel's epoch-making 1931 paper, namely the β-function trick. But most of his proof is still ahead of us: at the end of this Interlude, we will review the stages that remain.
But first, let's relax for a moment after all our labours, and pause to take a very short look at some of the scene-setting background. We will say more about the historical context in a later Interlude (Chapter 37). But for now, we'll say enough to explain the title of Gödel's great paper: ‘On formally undecidable propositions of Principia Mathematica and related systems I’.
Principia's logicism
Frege aimed in his Grundgesetze der Arithmetik to reconstruct arithmetic (and some analysis too) on a secure footing by deducing it from logic plus definitions. But as we noted in Section 13.4, Frege's overall logicist project – in its original form – founders on his disastrous fifth Basic Law. And the fatal contradiction that Russell exposed in Frege's system was not the only paradox to bedevil early treatments of the theory of classes.
Let's finish by taking stock one last time. At the end of the last Interlude, we gave a road-map for the final part of the book. So we won't repeat the gist of that detailed local guide to recent chapters; instead, we'll stand further back and give a global overview. And let's concentrate on the relationship between our various proofs of incompleteness. Think of the book, then, as falling into four main parts:
(a) The first part (Chapters 1 to 8), after explaining various key concepts, proves two surprisingly easy incompleteness theorems. Theorem 6.3 tells us that if T is a sound effectively axiomatized theory whose language is sufficiently expressive, then T can't be negation-complete. And Theorem 7.2 tells us that we can weaken the soundness condition and require only consistency if we strengthen the other condition (from one about what T can express to one about what it can prove): if T is a consistent effectively axiomatized theory which is sufficiently strong, then T again can't be negation-complete.
Here the ideas of being sufficiently expressive/sufficiently strong are defined in terms of expressing/capturing enough effectively decidable numerical properties or relations. So the arguments for our two initial incompleteness theorems depend on a number of natural assumptions about the intuitive idea of effective decidability. And the interest of those theorems depends on the assumption that being sufficiently expressive/sufficiently strong is a plausible desideratum on formalized arithmetics.
Back in Chapter 10, we introduced the weak arithmetic Q, and soon saw that it is boringly incomplete. In Chapter 12, the stronger arithmetic IΔ0 was defined, and this too can be seen to be incomplete without invoking Gödelian methods. Then in Chapter 13 we introduced the much stronger first-order theory PA, and remarked that we couldn't in the same easy way show that it fails to decide some elementary arithmetical claims. However, in the last chapter it turned out that PA also remains incomplete.
Still, that result in itself isn't yet hugely exciting, even if it is perhaps rather unexpected (see Section 13.3). After all, just saying that a particular theory T is incomplete leaves wide open the possibility that we can patch things up by adding an axiom or two more, to get a complete theory T+. As we said at the very outset, the real force of Gödel's arguments is that they illustrate general methods which can be applied to any theory satisfying modest conditions in order to show that it is incomplete. They reveal that a theory like PA is not only incomplete but in a good sense incompletable.
The present chapter explains these crucial points.
Generalizing the semantic argument
In Section 21.3, we showed that PA is incomplete on the semantic assumption that its axioms are true (given that its standard first-order logic is truth-preserving). In this section, we are going to extend the semantic argument for incompleteness to other theories.