Suppose that, in addition to allowing quantifications over the elements of a domain, as in ordinary first-order logic, we allow also quantification over relations and functions on the domain. The result is called second-order logic. Almost all the major theorems we have established for first-order logic fail spectacularly for second-order logic, as is shown in the present short chapter. This chapter and those to follow generally presuppose the material in section 17.1. (They are also generally independent of each other, and the results of the present chapter will not be presupposed by later ones.)
Let us begin by recalling some of the major results we have established for first-order logic.
The compactness theorem: If every finite subset of a set of sentences has a model, the whole set has a model.
The (downward) Löwenheim–Skolem theorem: If a set of sentences has a model, it has an enumerable model.
The upward Löwenheim–Skolem theorem: If a set of sentences has an infinite model, it has a nonenumerable model.
The (abstract) Gödel completeness theorem: The set of valid sentences is semirecursive.
All of these results fail for second-order logic, which involves an extended notion of sentence, with a corresponding extension of the notion of truth of a sentence in an interpretation. In introducing these extended notions, we stress at the outset that we change neither the definition of language nor the definition of interpretation: a language is still an enumerable set of nonlogical symbols, and an interpretation of a language is still a domain together with an assignment of a denotation to each nonlogical symbol in the language.
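By way of illustration (an example of ours, not drawn from this chapter), the second-order induction axiom for a language with a constant 0 and a one-place function symbol s quantifies over all subsets X of the domain, something no first-order sentence can do:

```latex
\forall X \, \bigl( \bigl( X0 \land \forall x \, ( Xx \rightarrow Xsx ) \bigr) \rightarrow \forall x \, Xx \bigr)
```

Together with first-order axioms saying that s is injective and that 0 is not a successor, this sentence has only models isomorphic to the natural numbers, which is one standard way to see that the upward Löwenheim–Skolem theorem must fail for second-order logic.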
Modal logic extends ‘classical’ logic by adding new logical operators □ and ⋄ for ‘necessity’ and ‘possibility’. Section 27.1 is an exposition of the rudiments of (sentential) modal logic. Section 27.2 indicates how a particular system of modal logic GL is related to the kinds of questions about provability in P we considered in Chapters 17 and 18. This connection motivates the closer examination of GL then undertaken in section 27.3.
Modal Logic
Introductory textbooks in logic devote considerable attention to a part of logic we have not given separate consideration: sentential logic. In this part of logic, the only nonlogical symbols are an enumerable infinity of sentence letters, and the only logical operators are negation, conjunction, and disjunction: ~, &, ∨. Alternatively, the operators may be taken to be the constant false (⊥) and the conditional (→). The syntax of sentential logic is very simple: sentence letters are sentences, the constant ⊥ is a sentence, and if A and B are sentences, so is (A → B).
The semantics is also simple: an interpretation is simply an assignment ω of truth values, true (represented by 1) or false (represented by 0), to the sentence letters. The valuation is extended to formulas by letting ω(⊥) = 0, and letting ω(A → B) = 1 if and only if, if ω(A) = 1, then ω(B) = 1.
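The extension of a valuation from sentence letters to arbitrary formulas can be sketched directly. The following is a minimal illustration, not anything from the text; the class and function names are ours.

```python
# Sentential formulas built from sentence letters, the constant falsum (⊥),
# and the conditional (→), with a valuation extended recursively as in the text.

from dataclasses import dataclass

@dataclass(frozen=True)
class Letter:
    name: str

@dataclass(frozen=True)
class Falsum:
    pass

@dataclass(frozen=True)
class Cond:
    left: object
    right: object

def value(omega, formula):
    """Extend the valuation omega (a dict from letter names to 0 or 1)
    to arbitrary formulas."""
    if isinstance(formula, Letter):
        return omega[formula.name]
    if isinstance(formula, Falsum):
        return 0  # omega(⊥) = 0
    if isinstance(formula, Cond):
        # omega(A → B) = 1 iff: if omega(A) = 1, then omega(B) = 1
        left = value(omega, formula.left)
        right = value(omega, formula.right)
        return 1 if (left == 0 or right == 1) else 0
    raise TypeError("not a formula")

# Negation ~A is definable as A → ⊥:
def neg(a):
    return Cond(a, Falsum())

omega = {"p": 1, "q": 0}
print(value(omega, Cond(Letter("p"), Letter("q"))))  # p → q under omega: 0
print(value(omega, neg(Letter("q"))))                # ~q under omega: 1
```

Defining ~ from ⊥ and → in this way is exactly the reduction mentioned above: the two-operator basis suffices for the whole of sentential logic.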
The intuitive notion of an effectively computable function is the notion of a function for which there are definite, explicit rules, following which one could in principle compute its value for any given arguments. This chapter studies an extensive class of effectively computable functions, the recursively computable, or simply recursive, functions. According to Church's thesis, these are in fact all the effectively computable functions. Evidence for Church's thesis will be developed in this chapter by accumulating examples of effectively computable functions that turn out to be recursive. The subclass of primitive recursive functions is introduced in section 6.1, and the full class of recursive functions in section 6.2. The next chapter contains further examples. The discussion of recursive computability in this chapter and the next is entirely independent of the discussion of Turing and abacus computability in the preceding three chapters, but in the chapter after next the three notions of computability will be proved equivalent to each other.
Primitive Recursive Functions
Intuitively, the notion of an effectively computable function f from natural numbers to natural numbers is the notion of a function for which there is a finite list of instructions that in principle make it possible to determine the value f(x1, …, xn) for any arguments x1, …, xn. The instructions must be so definite and explicit that they require no external sources of information and no ingenuity to execute.
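The scheme of primitive recursion described in section 6.1 can be sketched directly in executable form. This is an illustration of ours, with function names of our choosing, not the book's notation.

```python
# The primitive recursion scheme: given f(x1..xn) and g(x1..xn, y, z),
# it yields the unique h with
#     h(x.., 0)     = f(x..)
#     h(x.., y + 1) = g(x.., y, h(x.., y))

def succ(x):
    return x + 1

def primitive_recursion(f, g):
    def h(*args):
        *xs, y = args
        acc = f(*xs)          # base case h(x.., 0)
        for i in range(y):    # unwind the recursion step by step
            acc = g(*xs, i, acc)
        return acc
    return h

# Addition defined by primitive recursion from the successor function:
#   add(x, 0) = x,  add(x, y + 1) = succ(add(x, y))
add = primitive_recursion(lambda x: x, lambda x, i, acc: succ(acc))

print(add(3, 4))  # 7
```

Iterating the same scheme gives multiplication from addition, exponentiation from multiplication, and so on, which is how the stock of examples mentioned above accumulates.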
Reflections, a symposium on the foundations of mathematics, was held at Stanford University on December 11–13, 1998. The symposium was organized to honor Solomon Feferman, who has played an enormous role in shaping the field over the last 40 years. It was timed so that its last day would coincide with Feferman's 70th birthday; this provided a very special occasion to celebrate him and his career-long dedication to foundational research.
Jon Barwise and Wilfried Sieg, both doctoral students of Feferman, took the initiative in early 1996 of planning what became playfully called the Feferfest; Carolyn Talcott and Rick Sommer soon joined as the local Stanford organizers. Jon was instrumental in our subsequent venture to shape a program; he opened the symposium and gave a lecture on his latest work; he helped with the initial steps towards this volume, even after he had been diagnosed with cancer. We miss him.
The symposium was structured around proof-theoretically inspired themes. True to their origin in the work of David Hilbert and Paul Bernays, proof-theoretic investigations have sustained a special emphasis on, or at least a genuine connection to, broad philosophical issues. Stanford University has had an important role in fostering such work through actively engaged faculty, doctoral students, and visitors. Feferman has been at the very center of these activities.
This was an opportune moment to reflect broadly on such investigations, but also to connect them systematically with topics in Feferman's work. His primary contributions have been to proof theory, recursion theory and, in more recent years, to an analysis of the development of mathematical logic in the twentieth century. Indeed, all of these matters are of intense interest in the current discussion concerning modern mathematical thought.
The symposium had six sessions. The details of the program, with the names of contributors and chairs, can be found at the very back of the book. The papers in this volume were submitted by symposium participants, as well as by some of Feferman's students and former collaborators, as a tribute to him. They are grouped, somewhat differently from the symposium program, into four parts: Proof-theoretic analysis, Logic and computation, Applicative and self-applicative theories, and Philosophy of modern mathematical and logical thought.
In this chapter we study a family of adapted spaces that has been widely and successfully used in the nonstandard approach to stochastic analysis, the hyperfinite adapted spaces. The results in the monograph “An Infinitesimal Approach to Stochastic Analysis”, Keisler [1984], prompted a natural question: Why are these spaces so “rich” or “well behaved”?
In order to answer this question, we built a probability logic adequate for the study of stochastic processes: adapted probability logic (see Keisler [1979], Keisler [1985], Keisler [1986a], Keisler [1986b], Hoover and Keisler [1984], Fajardo [1985a]). This is the origin of the theory we are describing in this book. We chose a somewhat different approach in Chapter 1 in order to introduce the theory in a smooth way, without any need for a background in logic.
Basic nonstandard probability theory is a necessary prerequisite for most of this chapter. This theory is readily available to the interested mathematician without going through the technical literature on nonstandard analysis (see, among others, Albeverio, Fenstad, Hoegh-Krohn, and Lindstrom [1986], Cutland [1983], Fajardo [1990b], Lindstrom [1988], Stroyan and Bayod [1986] and Keisler [1988]). Nonetheless, in the following section we collect the main definitions and results needed in this book.
This seems to be an appropriate place to add a remark about the use of nonstandard analysis. It has been very hard to convince the mathematical community of the attractive characteristics of nonstandard analysis and its possible uses as a mathematical tool. This chapter, among other things, continues the task of showing with direct evidence the enormous potential that we believe nonstandard analysis has to offer to mathematics. The paper Keisler [1994] examines some of the reasons why nonstandard analysis has developed in the way we know it today and discusses the perspectives and possibilities in the years to come.
In this chapter we will establish additional properties of rich adapted spaces which can be applied to stochastic analysis. In Sections 8.1 and 8.2 we give an overview of the theory of neometric spaces, culminating in the Approximation Theorem. Section 8.3 gives some typical applications of this theorem.
We briefly survey the evolution of the ideas in this chapter. The paper “From discrete to continuous time”, Keisler [1991], introduced a forcing procedure, resembling model theoretic forcing (see Hodges [1985]), which reduced statements about continuous time processes to approximate statements about discrete time processes without going through the full lifting procedure. After some refinements, in the series of papers Fajardo and Keisler [1996a] – Fajardo and Keisler [1995] we worked out a new theory, called the theory of neometric spaces, which has the following objectives: “First, to make the use of nonstandard analysis more accessible to mathematicians, and second, to gain a deeper understanding of why nonstandard analysis leads to new existence theorems. The neometric method is intended to be more than a proof technique—it has the potential to suggest new conjectures and new proofs in a wide variety of settings.” (From the introduction in Keisler [1995]).
This theory developed an axiomatic framework built around the notion of a neometric space, which is a metric space with a family of subsets called neocompact sets that are, like the compact sets, closed under operations such as finite union, countable intersection, and projection. In particular, for each adapted probability space there is an associated neometric space of random variables, and the adapted space is rich if the family of neocompact sets is countably compact.
In this book we take advantage of the results in the more recent paper Keisler [1997a] to give a simpler approach to the subject. In Chapter 7 we defined rich adapted spaces directly, without introducing neocompact sets. In this chapter we will give a quick overview of the theory of neometric spaces. The neocompact sets will be defined here as countable intersections of basic sections, and their closure properties will be proved as theorems which hold in any rich adapted space.