In 1931, the young Kurt Gödel published his First and Second Incompleteness Theorems; very often, these are simply referred to as ‘Gödel's Theorems’. His startling results settled (or at least, seemed to settle) some of the crucial questions of the day concerning the foundations of mathematics. They remain of the greatest significance for the philosophy of mathematics – though just what that significance is continues to be debated. It has also frequently been claimed that Gödel's Theorems have a much wider impact on very general issues about language, truth and the mind.
This book gives proofs of the Theorems and related formal results, and touches – necessarily briefly – on some of their implications. Who is this book for? Roughly speaking, for those who want a lot more fine detail than you get in books for a general audience (the best of those is Franzén, 2005), but who find the rather forbidding presentations in classic texts in mathematical logic (like Mendelson, 1997) too short on explanatory scene-setting. So I hope philosophy students taking an advanced logic course will find the book useful, as will mathematicians who want a more accessible exposition.
But don't be misled by the relatively relaxed style; don't try to browse through too quickly. We do cover a lot of ground in quite a bit of detail, and new ideas often come thick and fast. Take things slowly!
We now, at long last, turn to considering the Second Incompleteness Theorem for PA.
We worked up to the First Theorem very slowly, spending a number of chapters proving various preliminary technical results before eventually taking the wraps off the main proofs in Chapters 16 and 17. But things seem to go rather more smoothly and accessibly if we approach the Second Theorem the other way about, working backwards from the target Theorem to proofs of the technical results needed to demonstrate it. So in this chapter, we simply assume a background technical result about PA which we will call the ‘Formalized First Theorem’: we then show that it immediately yields the Second Theorem for PA when combined with Theorem 20.2.
In the next chapter, we show that the Formalized First Theorem and hence the Second Theorem can similarly be derived in any arithmetic theory T for which certain ‘derivability conditions’ hold (or rather, hold in addition to the Diagonalization Lemma). Then in Chapter 26 we finally dig down to discover what it takes for those derivability conditions to obtain.
Defining Con
We begin with four reminders, and then motivate a pair of new definitions:
Recall, Prf(m, n) holds when m is the super g.n. of a PA-proof of the wff with g.n. n. And we defined Prov(n) to be true just when n is the g.n. of a PA theorem, i.e. just when ∃m Prf(m, n). Thus Prov(⌜ϕ⌝) is true iff PA ⊢ ϕ.
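Given these reminders, we can preview where the section is heading. As a sketch following the standard treatment (the particular absurd equation chosen is a common convention; the precise definition is given below):

```latex
% Fix some outright absurdity, say 0 = 1, as our marker of inconsistency.
% Then the consistency sentence for PA can be defined as
\mathsf{Con} \;=_{\text{def}}\;
  \neg\mathsf{Prov}(\ulcorner 0 = 1 \urcorner)
  \;=\;
  \neg\exists \mathsf{m}\,\mathsf{Prf}(\mathsf{m}, \ulcorner 0 = 1 \urcorner)
```

So Con says, via the Gödel coding, that no number is the super g.n. of a PA-proof of that absurdity.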
We now start exploring further around and about our incompleteness theorems. But do be careful! Don't let the developments over these next three chapters obscure the relatively simple lines of the classic Gödelian arguments which we have already given in Chapters 16 and 17.
The main business of this chapter is to present two key ways of strengthening our incompleteness theorems.
We show how to extend the reach of both the semantic and syntactic versions of our incompleteness results, so that they apply not just to p.r. axiomatized theories, but to any formal theory that counts as axiomatized in the intuitive sense. (We also draw an easy corollary of our newly extended semantic theorem, and prove that the truths of basic arithmetic can't be axiomatized.)
We explain how we can do away with the assumption of ω-consistency (as used in Gödel's original First Theorem): we can prove that any nice theory is incomplete and incompletable, whether it is ω-consistent or not. That's the Gödel-Rosser Theorem.
Then, after the main business, we explain another way of weakening the assumption of ω-consistency, this time involving the idea of so-called ‘1-consistency’.
Broadening the scope of the incompleteness theorems
Our intuitive characterization of a properly formalized theory T requires various properties like that of being an axiom of T to be effectively decidable. Or, what comes to the same thing given a sensible Gödel numbering scheme, the characteristic functions of numerical properties like that of numbering a T-axiom should be effectively computable (see Sections 3.3 and 11.6).
After the glory days of the 1930s, Gödel's comments on the details of his incompleteness theorems were few and far between. However, he did add a brief footnote to the 1967 translation of a much earlier piece on ‘Completeness and consistency’. And Gödel thought that his brisk remarks in that footnote were sufficiently important to repeat them in a short paper in 1972, in a section entitled ‘The best and most general version of the unprovability of consistency in the same system’.
Gödel makes two main points. We explain the first of them in Section 27.2. We then go on to prove some results about reflection principles which hopefully throw light on his second point. And we'll return to develop that second point further in the next chapter, where we touch on Hilbert's Programme.
It will do no harm at all, however, to begin with a summary of …
The Second Theorem: the story so far
(a) Start with the p.r. relation PrfT (m, n), which obtains when m is the super g.n. of a T-proof of the wff with g.n. n. Assuming T is p.r. adequate, this relation can be canonically captured in T by an open wff PrfT (x, y) whose components recapitulate step by step the natural p.r. definition of PrfT (along the lines we gave for the case of PA back in Section 15.9).
We now move on from the generalities of the previous chapters, and look at some particular formal arithmetics. In this chapter, we limber up by looking at Baby Arithmetic, and then we start exploring Robinson Arithmetic. Later, in Chapter 10, we'll be introducing Peano Arithmetic, the strongest of our initial range of formal arithmetics.
These theories differ in strength, but they do share one key feature: the theories' deductive apparatus is no richer than familiar first-order logic. So we can quantify, perhaps, over all numbers: but our theories will lack second-order quantifiers, i.e. we can't quantify over all numerical properties.
BA, Baby Arithmetic
We begin with a very simple theory which ‘knows’ about the addition of particular numbers, ‘knows’ its multiplication tables, but can't express general facts about numbers at all (it lacks the whole apparatus of quantification). Hence our label Baby Arithmetic, or BA for short. As with any formal theory, we need to characterize (a) its language, (b) its deductive apparatus, and (c) its axioms.
(a) BA's language is LB = 〈ℒB, IB〉. ℒB's non-logical vocabulary is the same as that of ℒA (Section 4.3): hence there is a single individual constant ‘0’, the one-place function symbol ‘s’, and the two-place function symbols ‘+’ and ‘×’. So ℒB contains the standard numerals. However, ℒB's logical apparatus is restricted. As we said, it lacks quantifiers and variables.
Gödel's Incompleteness Theorems tell us about the limits of theories of arithmetic. Or rather, more carefully, they tell us about the limits of axiomatized formal theories of arithmetic. But what exactly does this mean? This chapter starts exploring the idea and proves some elementary results about axiomatized formal theories in general.
Formalization as an ideal
Rather than just dive into a series of definitions, it is well worth pausing to remind ourselves of why we care about formalized theories.
Let's get back to basics. In elementary logic classes, we are drilled in translating arguments into an appropriate formal language and then constructing formal deductions of putative conclusions from given premisses. Why bother with formal languages? Because everyday language is replete with redundancies and ambiguities, not to mention sentences which simply lack clear truth-conditions. So, in assessing complex arguments, it helps to regiment them into a suitable artificial language which is expressly designed to be free from obscurities, and where surface form reveals logical structure.
Why bother with formal deductions? Because everyday arguments often involve suppressed premisses and inferential fallacies. It is only too easy to cheat. Setting out arguments as formal deductions in one style or another enforces honesty: we have to keep a tally of the premisses we invoke, and of exactly what inferential moves we are using. And honesty is the best policy. For suppose things go well with a particular formal deduction.
We are not going to write any more programs to show, case by case, that this or that particular function is Turing-computable, not just because it gets painfully tedious, but because we can now fairly easily establish that every µ-recursive function is Turing-computable and, conversely, every Turing-computable function is µ-recursive. This equivalence between our two different characterizations of computable functions is of key importance, and we'll be seeing its significance in the remaining chapters.
µ-Recursiveness entails Turing computability
Every µ-recursive function can be evaluated ‘by hand’, using pen and paper, prescinding from issues about the size of the computation. But we have tried to build into the idea of a Turing computation the essentials of any hand-computation. So we should certainly hope and expect to be able to prove:
Theorem 32.1 Every µ-recursive function is Turing-computable.
Proof sketch We'll say that a Turing program is dextral (i.e. ‘right-handed’) if
i. in executing the program – starting by scanning the leftmost of some block(s) of digits – we never have to write in any cell to the left of the initial scanned cell (or scan any cell more than one to the left of that initial cell); and
ii. if and when the program halts standardly, the final scanned cell is the same cell as the initial scanned cell (in other words, the input block(s) of digits at the beginning of a computation and the final output block start in the same cell).
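To make the idea of executing a Turing program concrete, here is a minimal simulator sketch in Python (the dictionary encoding of programs and all names here are our own illustrative choices, not the book's formalism). The sample program computes the successor of a unary numeral; it is dextral in the sense just defined, since it scans just one cell to the left of the initial cell (only writing the blank back) and halts with the head back on the initial cell.

```python
# A minimal sketch of executing a Turing program (our own encoding,
# not the book's formalism): a program maps (state, scanned symbol)
# to (symbol to write, head move, next state).
def run_turing(program, tape, max_steps=10_000):
    cells = dict(enumerate(tape))    # sparse tape; blank cells read '0'
    head, state = 0, 'q0'            # start scanning the leftmost cell
    for _ in range(max_steps):
        action = program.get((state, cells.get(head, '0')))
        if action is None:           # no instruction applies: halt
            break
        write, move, state = action
        cells[head] = write
        head += {'L': -1, 'R': 1}[move]
    return ''.join(cells[i] for i in sorted(cells)), head

# Successor on unary numerals: run right past the block of '1's,
# append one more '1', then walk back to the start of the block.
succ = {
    ('q0', '1'): ('1', 'R', 'q0'),   # skip right over the block
    ('q0', '0'): ('1', 'L', 'q1'),   # append a '1', turn around
    ('q1', '1'): ('1', 'L', 'q1'),   # walk back left
    ('q1', '0'): ('0', 'R', 'halt'), # step back onto the initial cell
}
```

Running `run_turing(succ, '111')` turns a block of three '1's into a block of four, with the final scanned cell the same as the initial one, as condition (ii) requires.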
Theorem 13.6 tells us that Q can capture all p.r. functions. Our next theorem shows that Q can in fact capture all µ-recursive functions. With a bit of help from Church's Thesis, our new stronger theorem enables us very quickly to prove two new Big Results: first, any nice theory is undecidable; and second, theoremhood in first-order logic is undecidable too.
The old Theorem 13.6 is, of course, the key result which underlies incompleteness theorems like Theorem 17.2 (if T is nice and ω-consistent, then T is incomplete). Our new theorem correspondingly underlies some easy variations on that earlier incompleteness theorem and its relatives. We'll also prove a formal counterpart to the informal theorem of Chapter 6.
Q is recursively adequate
Recall that we said that a theory is p.r. adequate if it captures each p.r. function as a function (Section 12.4). Let's likewise say that
A theory is recursively adequate iff it captures each µ-recursive function as a function.
We showed that Q is p.r. adequate in Chapter 13. Overall that took some ingenuity; but given the work we've already done, it is now very easy to go on to establish
Theorem 30.1 Q is recursively adequate.
Proof Theorem 13.3 tells us that Q can capture any Σ1 function as a function. To establish that Q is recursively adequate, it therefore suffices to show that recursive functions are Σ1 (i.e. are expressible by Σ1 wffs).
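The shape of the argument, assembled from the results just cited, can be displayed as a chain (with the second implication supplied by Theorem 13.3, and the first being what remains to prove):

```latex
f \text{ is } \mu\text{-recursive}
\;\Longrightarrow\;
f \text{ is expressible by a } \Sigma_1 \text{ wff}
\;\Longrightarrow\;
Q \text{ captures } f \text{ as a function.}
```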
Dependence logic introduces the concept of dependence into first order logic by adding a new kind of atomic formula. We call these new atomic formulas atomic dependence formulas. The definition of the semantics for dependence logic is reminiscent of the definition of the semantics for first order logic, presented in Chapter 2. But instead of defining satisfaction for assignments, we follow ref. [21] and jump one level up, considering sets of assignments. This leads us to formulate the semantics of dependence logic in terms of the concept of the type of a set of assignments.
The reason for the transition to a higher level is, roughly speaking, that one cannot manifest dependence, or independence for that matter, in a single assignment. To see a pattern of dependence, one needs a whole set of assignments.
This is because dependence notions can be best investigated in a context involving repeated actions by agents presumably governed by some possibly hidden rules. In such a context dependence is manifested by recurrence, and independence by lack of it.
Our framework consists of three components:
teams, agents, and features.
Teams are sets of agents. Agents are objects with features. Features are like variables which can have any value in a given fixed set.
If we have n features and m possible values for each feature, we have altogether mⁿ different agents. Teams are simply subsets of this space of all possible agents.
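The team/agent/feature framework, and the way dependence only becomes visible across a whole team, can be sketched in a few lines of Python (the encoding and all names here are our own illustration, not the book's notation):

```python
from itertools import product

# An agent is a tuple assigning one of m values to each of n features;
# a team is any set of such agents.
def all_agents(n_features, values):
    return set(product(values, repeat=n_features))

def depends(team, i, j):
    """In this team, is the value of feature j fixed by feature i?
    Dependence shows up only across the whole set of agents."""
    seen = {}
    for agent in team:
        if seen.setdefault(agent[i], agent[j]) != agent[j]:
            return False
    return True

# 2 possible values and 3 features give 2**3 = 8 possible agents.
agents = all_agents(3, range(2))

# A team in which feature 2 depends on feature 0 but not on feature 1.
team = {(0, 0, 0), (0, 1, 0), (1, 0, 1)}
```

Here `depends(team, 0, 2)` holds because agents agreeing on feature 0 also agree on feature 2, while no single agent, taken on its own, could witness that pattern.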
Although our treatment of dependence logic is entirely mathematical, our intuition of dependence phenomena comes from real-life examples, thinking of different ways dependence manifests itself in the real world.
We begin with a review of the well-known game theoretic semantics of first order logic (see, e.g., ref. [17]). This is the topic of Section 5.1. There are two ways of extending the first order game to dependence logic. The first, presented in Section 5.2, corresponds to the transition in semantics from assignments to teams. The second game theoretic semantics for dependence logic is closer to the original semantics of independence friendly logic presented in refs. [16] and [19]. In the second game theoretic formulation, the dependence relation =(x₀, …, xₙ) does not come up as an atomic formula but as the possibility of incorporating imperfect information into the game. A player who aims at securing =(x₀, …, xₙ) when the game ends has to be able to choose a value for xₙ only on the basis of what the values of x₀, …, xₙ₋₁ are. In this sense the player's information set is restricted to x₀, …, xₙ₋₁ when he or she chooses xₙ.
Semantic game of first order logic
The game theoretic semantics of first order logic has a long history. The basic idea is that if a sentence is true, its truth, asserted by us, can be defended against a doubter. A doubter can question the truth of a conjunction φ ∧ ψ by doubting the truth of, say, ψ. He can doubt the truth of a disjunction φ ∨ ψ by asking which of φ and ψ is the one that is true.
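The doubting game just described can be sketched for the propositional connectives (a toy illustration under our own encoding; atoms here simply carry a truth value, and quantifier moves are left out). The doubter picks whichever conjunct looks false to attack; the defender picks a disjunct to defend.

```python
# A toy sketch of the semantic game for the propositional connectives.
# The defender has a winning strategy exactly when the formula is true.
def defender_wins(formula):
    op = formula[0]
    if op == 'atom':
        return formula[1]            # the game ends at an atom
    if op == 'or':                   # defender chooses a disjunct
        return any(defender_wins(sub) for sub in formula[1:])
    if op == 'and':                  # doubter chooses a conjunct
        return all(defender_wins(sub) for sub in formula[1:])
    raise ValueError(f'unknown connective: {op}')

T, F = ('atom', True), ('atom', False)
```

For instance, the defender wins `('or', F, T)` by pointing at the true disjunct, but loses `('and', T, ('or', F, F))` because the doubter can attack the second conjunct.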