§1. Introduction. In this survey of the history of constructivism, more space has been devoted to early developments (up till ca. 1965) than to the work of the later decades. This is not only because most of the concepts and general insights emerged before 1965, but also for practical reasons: much of the work since 1965 is too technical and complicated to be described adequately within the limits of this article.
Constructivism is a point of view (or an attitude) concerning the methods and objects of mathematics which is normative: not only does it interpret existing mathematics according to certain principles, but it also rejects methods and results not conforming to such principles as unfounded or speculative (the rejection is not always absolute, but sometimes only a matter of degree: a decided preference for constructive concepts and methods). In this sense the various forms of constructivism are all ‘ideological’ in character.
Constructivism as a specific viewpoint emerges in the final quarter of the 19th century, and may be regarded as a reaction to the rapidly increasing use of highly abstract concepts and methods of proof in mathematics, a trend exemplified by the works of R. Dedekind and G. Cantor.
The mathematics before the last quarter of the 19th century is, from the viewpoint of today, in the main constructive, with the notable exception of geometry, where proof by contradiction was commonly accepted and widely employed.
The proof of the irrationality of √2 involves proving that there cannot be positive integers n and m such that n² = 2m². This can be proved with a simple number-theoretic argument: First we note that n must be even, whence m must also be even, and hence both are divisible by 2. Then we observe that this is a contradiction if we assume that n is chosen minimally. There is also a geometric proof known already to Euclid, but the proof given by Tennenbaum seems to be entirely new. It is as follows: In Picture 1 we have on the left hand side two squares superimposed, one solid and one dashed. Let us assume that the area of the solid square is twice the area of the dashed square. Let us also assume that the side of each square is an integer and moreover the side of the solid square is as small an integer as possible. In the right hand side of Picture 1 we have added another copy of the dashed square to the lower left corner of the solid square, thereby giving rise to a new square in the middle and two small squares in the corners. The combined area of the two copies of the original dashed square is the same as the area of the original big solid square. In the superimposed picture the middle square gets covered by a dashed square twice while the small corner squares are not covered by the dashed squares at all. Hence the area of the middle square must equal the combined area of the two corner squares.
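The algebra implicit in the picture can be made explicit; the following sketch (with n and m, my notation, for the sides of the solid and dashed squares) records the step from the picture to the contradiction:

```latex
% Assume n^2 = 2m^2 with n minimal. In the superimposed picture the
% middle square has side 2m - n and each corner square has side n - m,
% so the equality of areas observed in the picture reads
\[
  (2m - n)^2 \;=\; 2\,(n - m)^2 .
\]
% Since m < n < 2m, we have 0 < 2m - n < n, so the pair
% (2m - n, n - m) satisfies the same relation with a smaller side,
% contradicting the minimality of n.
```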
The work of Stanley Tennenbaum in set theory was centered on the investigation of Suslin's Hypothesis (SH), to which he made crucial contributions. In 1963 Tennenbaum established the relative consistency of ¬SH, and in 1965, together with Robert Solovay, the relative consistency of SH. In the formative period after Cohen's 1963 discovery of forcing when set theory was transmuting into a modern, sophisticated field of mathematics, this work on SH exhibited the power of forcing for elucidating a classical problem of mathematics and stimulated the development of new methods and areas of investigation.
§1 discusses the historical underpinnings of SH. §2 then describes Tennenbaum's consistency result for ¬SH and related subsequent work. §3 turns to Tennenbaum's work with Solovay on SH and the succeeding work on iterated forcing and Martin's Axiom. To cast an amusing sidelight on the life and the times, I relate the following reminiscence of Gerald Sacks from this period, no doubt apocryphal: Tennenbaum let it be known that he had come into a great deal of money, $30,000,000 it was said, and started to borrow money against it. Gerald convinced himself that Tennenbaum seriously believed this, but nonetheless asked Simon Kochen about it. Kochen replied “Well, with Stan he might be one percent right. But then, that's still $300,000.”
§1. Suslin's problem. At the end of the first volume of Fundamenta Mathematicae there appeared a list of problems with one attributed to Mikhail Suslin [1920], a problem that would come to be known as Suslin's Problem.
To the memory of our unforgettable friend Stanley Tennenbaum (1927-2005), Mathematician, Educator, Free Spirit.
In this first of a series of papers on ultrafinitistic themes, we offer a short history and a conceptual pre-history of ultrafinitism. While the ancient Greeks did not have a theory of the ultrafinite, they did have two words, murios and apeiron, that express an awareness of crucial and often underemphasized features of the ultrafinite, viz. feasibility, and transcendence of limits within a context. We trace the flowering of these insights in the work of Van Dantzig, Parikh, Nelson and others, concluding with a summary of requirements which we think a satisfactory general theory of the ultrafinite should satisfy.
First papers often tend to take on the character of manifestos, road maps, or both, and this one is no exception. It is the revised version of an invited conference talk, and was aimed at a general audience of philosophers, logicians, computer scientists, and mathematicians. It is therefore not meant to be a detailed investigation. Rather, some proposals are advanced, and questions raised, which will be explored in subsequent works of the series.
Our chief hope is that readers will find the overall flavor somewhat “Tennenbaumian”.
§1. Introduction: The radical wing of constructivism. In their Constructivism in Mathematics, A. Troelstra and D. van Dalen devote only a small section to Ultrafinitism (UF in the following). This is no accident: as they themselves explain therein, there is no consistent model theory for ultrafinitistic mathematics.
§1. Introduction. It is a unique feature of the field of mathematical logic, that almost any technical result from its various subfields: set theory, models of arithmetic, intuitionism and ultrafinitism, to name just a few of these, touches upon deep foundational and philosophical issues. What is the nature of the infinite? What is the significance of set-theoretic independence, and can it ever be eliminated? Is the continuum hypothesis a meaningful question? What is the real reason behind the existence of non-standard models of arithmetic, and do these models reflect our numerical intuitions? Do our numerical intuitions extend beyond the finite at all? Is classical logic the right foundation for contemporary mathematics, or should our mathematics be built on constructive systems? Proofs must be correct, but they must also be explanatory. How does the aesthetic of simplicity play a role in these two ideals of proof, and is there ever a “simplest” proof of a given theorem?
The papers collected here engage each of these questions through the veil of particular technical results. For example, the new proof of the irrationality of the square root of two, given by Stanley Tennenbaum in the 1960s and included here, brings into relief questions about the role simplicity plays in our grasp of mathematical proofs. In 1900 Hilbert contemplated a further problem, one not presented at the Paris conference but recently found in his notes for the list: find a criterion of simplicity in mathematics. The Tennenbaum proof is a particularly striking example of the phenomenon Hilbert contemplated in this 24th Problem.
§1. A tale of two problems. The formal independence of Cantor's Continuum Hypothesis from the axioms of Set Theory (ZFC) is an immediate corollary of the following two theorems, where the statement of Cohen's theorem is recast in the more modern formulation of the Boolean valued universe.
Theorem 1 (Gödel, [3]). Assume V = L. Then the Continuum Hypothesis holds.
Theorem 2 (Cohen, [1]). There exists a complete Boolean algebra, B, such that
V^B ⊨ “The Continuum Hypothesis is false”.
Is this really evidence (as is often cited) that the Continuum Hypothesis has no answer?
Another prominent problem from the early 20th century concerns the projective sets, [8]; these are the subsets of ℝⁿ which are generated from the closed sets in finitely many steps by taking images under continuous functions, f : ℝⁿ → ℝⁿ, and taking complements. A function, f : ℝ → ℝ, is projective if the graph of f is a projective subset of ℝ × ℝ. Let Projective Uniformization be the assertion:
For each projective set A ⊂ ℝ × ℝ there exists a projective function, f : ℝ → ℝ, such that for all x ∈ ℝ if there exists y ∈ ℝ such that (x, y) ∈ A then (x, f(x)) ∈ A.
The two theorems above concerning the Continuum Hypothesis have versions for Projective Uniformization. Curiously the Boolean algebra for Cohen's theorem is the same in both cases, but in case of the problem of Projective Uniformization an additional hypothesis on V is necessary. While Cohen did not explicitly note the failure of Projective Uniformization, it is arguably implicit in his results.
To honor and celebrate the memory of Stanley Tennenbaum
Stanley Tennenbaum's influential 1959 theorem asserts that there are no recursive nonstandard models of Peano Arithmetic (PA). This theorem first appeared in his abstract [42]; he never published a complete proof. Tennenbaum's Theorem has been a source of inspiration for much additional work on nonrecursive models. Most of this effort has gone into generalizing and strengthening this theorem by trying to find the extent to which PA can be weakened to a subtheory and still have no recursive nonstandard models. Kaye's contribution [12] to this volume has more to say about this direction.
This paper is concerned with another line of investigation motivated by two refinements of Tennenbaum's theorem in which not just the model is nonrecursive, but its additive and multiplicative reducts are each nonrecursive. For the following stronger form of Tennenbaum's Theorem credit should also be given to Kreisel [5] for the additive reduct and to McAloon [26] for the multiplicative reduct.
Tennenbaum's Theorem. If M = (M, +, ·, 0, 1, ≤) is a nonstandard model of PA, then neither its additive reduct (M, +) nor its multiplicative reduct (M, ·) is recursive.
What happens with other reducts? The behavior of the order reduct, as is well known, is quite different from that of the additive and multiplicative reducts. The order type of every countable nonstandard model is ω + (ω* + ω) · η, where ω and η are the order types of the nonnegative integers ℕ and the rationals ℚ, respectively.
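The contrast can be made concrete with a small sketch (the encoding is entirely my own, not from the paper): elements of an order of type ω + (ω* + ω) · η can be coded so that the ordering relation is decidable, which is why the order reduct, unlike addition and multiplication, poses no obstacle to recursiveness.

```python
from fractions import Fraction

# Hypothetical coding of the order type omega + (omega* + omega) * eta:
# standard elements are ('std', n) for n in N; each nonstandard Z-block
# is indexed by a rational q, its elements being ('ns', q, z) for z in Z.

def leq(a, b):
    if a[0] == 'std' and b[0] == 'std':
        return a[1] <= b[1]                 # initial copy of omega
    if a[0] == 'std':
        return True                         # standard part lies below all Z-blocks
    if b[0] == 'std':
        return False
    return (a[1], a[2]) <= (b[1], b[2])     # blocks ordered densely by q, then by z

# 5 lies below every element of every Z-block; blocks compare by rational index:
assert leq(('std', 5), ('ns', Fraction(-7, 3), -100))
assert leq(('ns', Fraction(1, 2), 10), ('ns', Fraction(2, 3), -10))
```

Since comparisons of integers and rationals are computable, this linear order is recursive, matching the well-known fact quoted above.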
Abstract. We completely characterize the logical hierarchy of various subsystems of weak arithmetic, namely: ZR, ZR + N, ZR + GCD, ZR + Bez, OI + N, OI + GCD, OI + Bez.
§1. Introduction. In 1964 Shepherdson [6] introduced a weak system of arithmetic, Open Induction (OI), in which the Tennenbaum phenomenon does not hold. More precisely, if we restrict induction to open formulas (with parameters), then we have a recursive nonstandard model. Since then several authors have studied Open Induction and its related fragments of arithmetic. For instance, since Open Induction is too weak to prove many true statements of number theory (it cannot even prove the irrationality of √2), a number of algebraic first-order properties have been proposed as additions to OI in order to obtain systems closer to number theory. These properties include: Normality [9] (abbreviated N), having the GCD property [8], being a Bezout domain [3, 8] (abbreviated Bez), and so on. We mention that GCD is stronger than N, Bez is stronger than GCD, and Bez is weaker than IE₁ (IE₁ is the fragment of arithmetic based on the induction scheme for bounded existential formulas; by a result of Wilmers [11], it does not have a recursive nonstandard model). Boughattas in [1, 2] studied the non-finite-axiomatizability problem and established several new results, including: (1) OI is not finitely axiomatizable; (2) OI + N is not finitely axiomatizable.
In the Prisoner’s Dilemma, the need to choose between different actions is generated by the need to solve an achievement goal, obtained as the result of a request from the police to turn witness against your friend. The achievement goal, triggered by the external event, is the motivation of the action you eventually choose.
But in classical decision theory, the motivation of actions is unspecified. Moreover, you are expected to evaluate the alternatives by considering only their likely consequences.
This additional chapter explores the semantics of classical logic and conditional logic. In classical logic, the semantics of a set of sentences S is determined by the set of all the interpretations (or semantic structures), called models, that make all the sentences in S true. The main concern of classical logic is with the notion of a sentence C being a logical consequence of S, which holds when C is true in all models of S.
Semantic structures in classical logic are arbitrary sets of individuals and relationships, which constitute the denotations of the symbols of the language in which sentences are expressed. In this chapter, I argue the case for restricting the specification of semantic structures to sets of atomic sentences, called Herbrand interpretations.
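The notion of logical consequence described above can be illustrated with a toy sketch (the atoms and helper names are my own, not the book's): over a finite Herbrand base, an interpretation is just a set of atomic sentences, and C is a logical consequence of S when C is true in every interpretation that makes all of S true.

```python
from itertools import chain, combinations

# A finite Herbrand base: every Herbrand interpretation is a subset of it.
base = ["rains", "wet"]

# Sentences as truth-functions of an interpretation (a set of true atoms):
S = [lambda I: "rains" not in I or "wet" in I,   # wet if rains
     lambda I: "rains" in I]                     # rains
C = lambda I: "wet" in I

def interpretations(atoms):
    # all subsets of the Herbrand base
    return chain.from_iterable(combinations(atoms, r) for r in range(len(atoms) + 1))

def consequence(S, C, atoms):
    # C is true in all models of S
    return all(C(set(I)) for I in interpretations(atoms)
               if all(s(set(I)) for s in S))

assert consequence(S, C, base)   # "wet" follows from {wet if rains, rains}
```

With only two atoms there are four interpretations; the single model of S is {rains, wet}, and C holds there.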
In this chapter we revisit the ancient Greek fable of the fox and the crow, to show how the proactive thinking of the fox outwits the reactive thinking of the crow. In later chapters, we will see how reactive and proactive thinking can be combined.
The fox and the crow are a metaphor for different kinds of people. Some people are proactive, like the fox in the story. They like to plan ahead, foresee obstacles, and lead an orderly life. Other people are reactive, like the crow. They like to be open to what is happening around them, take advantage of new opportunities, and be spontaneous. Most people are both proactive and reactive, at different times and to varying degrees.
I have made a case for a comprehensive, logic-based theory of human intelligence, drawing upon and reconciling a number of otherwise competing paradigms in Artificial Intelligence and other fields. The most important of these paradigms are production systems, logic programming, classical logic and decision theory.
The production system cycle, suitably extended, provides the bare bones of the theory: the observe–think–decide–act agent cycle. It also provides some of the motivation for identifying an agent’s maintenance goals as the driving force of the agent’s life.
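The bare bones of the cycle might be sketched as follows (a toy of my own devising, not the book's formulation), with maintenance goals represented as trigger–response pairs that generate actions from observations:

```python
# A hypothetical observe-think-decide-act loop: each cycle observes one
# event, derives achievement goals from any maintenance goals it triggers,
# and acts on the first derived goal (a stand-in for real deciding).

def agent_cycle(observations, maintenance_goals, max_steps=3):
    history = []
    for obs in observations[:max_steps]:
        goals = [goal for trigger, goal in maintenance_goals if trigger == obs]
        action = goals[0] if goals else "do nothing"
        history.append((obs, action))
    return history

rules = [("it rains", "carry umbrella"), ("hungry", "eat")]
print(agent_cycle(["it rains", "sunny", "hungry"], rules))
# -> [('it rains', 'carry umbrella'), ('sunny', 'do nothing'), ('hungry', 'eat')]
```

A real agent would interleave planning, preference and reactivity inside the "think" and "decide" steps; the sketch shows only the loop structure.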
It is a common view in some fields that logic has little to do with search. For example, Paul Thagard (2005) in Mind: Introduction to Cognitive Science states on page 45: “In logic-based systems, the fundamental operation of thinking is logical deduction, but from the perspective of rule-based systems, the fundamental operation of thinking is search.”
Similarly, Jonathan Baron (2008) in his textbook Thinking and Deciding writes on page 6: “Thinking about actions, beliefs and personal goals can all be described in terms of a common framework, which asserts that thinking consists of search and inference. We search for certain objects and then make inferences from and about the objects we have found.” On page 97, Baron states that formal logic is not a complete theory of thinking because it “covers only inference”.
This additional chapter shows that both forward and backward reasoning are special cases of the resolution rule of inference. Resolution also includes compiling two clauses, like:
In the propositional case, given two clauses of the form:

A or C if B
E if A and D

where B and D are conjunctions of atoms including the atom true, and C and E are disjunctions of atoms including the atom false, resolution derives the resolvent:

C or E if B and D

The two clauses from which the resolvent is derived are called the parents of the resolvent, and the atom A is called the atom resolved upon.
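The resolution step just described can be sketched in code (the representation is mine: a clause is a pair of sets, its head disjunction and its body conjunction):

```python
# A toy propositional resolution step: resolving "A or C if B" with
# "E if A and D" on the atom A yields "C or E if B and D".

def resolve(clause1, clause2, a):
    head1, body1 = clause1                  # A or C if B
    head2, body2 = clause2                  # E if A and D
    assert a in head1 and a in body2, "a must occur in head1 and body2"
    c = head1 - {a}                         # C
    d = body2 - {a}                         # D
    return (c | head2, body1 | d)           # C or E if B and D

parent1 = ({"a", "c"}, {"b"})               # a or c if b
parent2 = ({"e"}, {"a", "d"})               # e if a and d
assert resolve(parent1, parent2, "a") == ({"c", "e"}, {"b", "d"})
```

Forward and backward reasoning correspond to restricted choices of which parent contributes the atom resolved upon.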
To a first approximation, the negation as failure rule of inference is straightforward. Its name says it all:
to show that the negation of a sentence holds
try to show the sentence holds, and
if the attempt fails, then the negation holds.
But what does it mean to fail? Does it include infinite or only finite failure? To answer these questions, we need a better understanding of the semantics.
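A toy interpreter (my own sketch, not the book's) makes the rule concrete, with a depth bound standing in for the finite/infinite distinction: negation as failure succeeds only when every attempt to show the sentence fails finitely.

```python
# Definite clauses: atom -> list of alternative bodies (lists of atoms).
program = {
    "bob_goes": [["nobody_else_goes"]],   # bob goes if nobody else goes
    "nobody_else_goes": [[]],             # a fact: empty body means true
}

def solve(atom, program, depth=10):
    # backward reasoning: try each clause for the atom in turn
    if depth == 0:
        raise RecursionError("possible infinite failure")
    return any(all(solve(b, program, depth - 1) for b in body)
               for body in program.get(atom, []))

def naf(atom, program):
    # negation as (finite) failure: not-atom holds if atom cannot be shown
    return not solve(atom, program)

assert solve("bob_goes", program)
assert naf("mary_goes", program)   # no clauses for mary_goes: finite failure
```

A program with a clause like p if p would exhaust the depth bound rather than fail, which is the infinite-failure case the questions above are about.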
Consider, for example, the English sentence:
bob will go if no one goes
Ignore the fact that, if Bob were more normal, it would be more likely that bob will go if no one else goes. Focus instead on the problem of representing the sentence more formally as a logical conditional.
It’s easy to take negation for granted, and not give it a second thought. Either it will rain or it won’t rain. But definitely it won’t rain and not rain at the same time and in the same place. Looking at it like that, you can take your pick. Raining and not raining are on a par, like heads and tails. You can have one or the other, but not both.
So it may seem at first glance. But on closer inspection, the reality is different. The world is a positive, not a negative place, and human ways of organising our thoughts about the world are mainly positive too. We directly observe only positive facts, like this coin is showing heads, or it is raining. We have to derive the negation of a positive fact from the absence of the positive fact. The fact that this coin is showing heads implies that it is not showing tails, and the fact that it is sunny implies, everything else being equal, that it is not raining at the same place and the same time.
In this chapter, I will discuss two psychological experiments that challenge the view that people have an inbuilt ability to perform abstract logical reasoning. The first of these experiments, the “selection task”, has been widely interpreted as showing that, instead of logic, people use specialised procedures for dealing with problems that occur commonly in their environment. The second, the “suppression task”, has been interpreted as showing that people do not reason using rules of inference, like forward and backward reasoning, but instead construct a model of the problem and inspect the model for interesting properties. I will respond to some of the issues raised by these experiments in this chapter, but deal with them in greater detail in Chapter 16, after presenting the necessary background material.
Logical Extremism, which views life as all thought and no action, has given logic a bad name. It has overshadowed its near relation, Logical Moderation, which recognises that logic is only one way of thinking, and that thinking isn’t everything.
The antithesis of Logical Extremism is Extreme Behaviourism, which denies any “life of the mind” and views Life instead entirely in behavioural terms. Behaviourism, in turn, is easily confused with the condition–action rule model of thinking.