Set theory originated in an attempt to understand and somehow classify small, or negligible, sets of real numbers. Cantor’s early explorations into the realm of the transfinite were motivated by a desire to understand the points of convergence of trigonometric series. The basic ideas quickly became a fundamental part of analysis, in addition to permeating many other areas of mathematics. Since then, set theory has become a way to unify mathematical practice, and the way in which mathematicians grapple with the infinite in all areas of mathematics.
In many areas of mathematics (like partial orderings, groups, or graphs), we write down some axioms and immediately have several different models of these axioms in mind. In the setting of first-order logic, this corresponds to writing down a set Σ of sentences in a language and looking at the elementary class Mod(Σ) of its models. Since Mod(Σ) = Mod(Cn(Σ)) by Proposition 6.5.3, and Cn(Σ) is a theory by Proposition 6.5.4, we can view this situation as looking at the (elementary) class of models of a theory.
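As a brief illustration of this reduction (the notation below is the standard one for the class of models and the set of consequences, assumed here rather than quoted from the propositions cited):
\[
\operatorname{Mod}(\Sigma) = \{\mathcal{M} : \mathcal{M} \models \sigma \text{ for all } \sigma \in \Sigma\},
\qquad
\operatorname{Cn}(\Sigma) = \{\varphi : \Sigma \models \varphi\}.
\]
Every model of \(\Sigma\) satisfies every semantic consequence of \(\Sigma\), so \(\operatorname{Mod}(\Sigma) \subseteq \operatorname{Mod}(\operatorname{Cn}(\Sigma))\); and since \(\Sigma \subseteq \operatorname{Cn}(\Sigma)\), the reverse inclusion is immediate, giving \(\operatorname{Mod}(\Sigma) = \operatorname{Mod}(\operatorname{Cn}(\Sigma))\).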
Our development of a formal definition of computability in the previous chapter might have seemed out of place. We used our generation template and some simple references to propositional connectives and (bounded) quantifiers, but otherwise there was seemingly little connection to logic. In this chapter, we establish that computability and logic are fundamentally intertwined.
We now embark on a careful study of propositional logic. As described in Chapter 1, in this setting, we start with an arbitrary set P, which we think of as our collection of primitive statements. From here, we build up more complicated statements by repeatedly applying connectives. The corresponding process generates a set of syntactic objects that we call formulas. In order to assign meaning to these formulas, we introduce truth assignments, which are functions on P whose values then propagate upward through formulas of higher complexity.
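As a minimal sketch of that propagation (the bar notation for the extended function is an assumption, not necessarily the book's), a truth assignment \(v : P \to \{T, F\}\) extends to a function \(\overline{v}\) on all formulas by clauses such as
\[
\overline{v}(A) = v(A) \ \text{for } A \in P, \qquad
\overline{v}(\lnot \varphi) = T \iff \overline{v}(\varphi) = F, \qquad
\overline{v}(\varphi \land \psi) = T \iff \overline{v}(\varphi) = T \text{ and } \overline{v}(\psi) = T,
\]
with analogous clauses for the remaining connectives.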
Many of our powerful results about first-order logic, such as the Löwenheim–Skolem Theorem and the Łoś–Vaught Test, focused on countable structures in countable languages. Now that we have a well-developed theory of infinite cardinalities, we can extend these results into the uncountable realm. In addition to the satisfaction we obtain through such generalizations, we will be able to argue that some other important theories are complete, and further refine our intuition about the inability of first-order logic to distinguish between infinite cardinalities.
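One standard formulation of the Łoś–Vaught Test at this level of generality (stated here as an illustration, not as a quotation from the text) is:
\[
\text{if a theory } T \text{ in a language } \mathcal{L} \text{ has no finite models and is } \kappa\text{-categorical for some infinite } \kappa \geq |\mathcal{L}|, \text{ then } T \text{ is complete.}
\]
For example, the theory of algebraically closed fields of characteristic 0 has no finite models and is \(\kappa\)-categorical for every uncountable \(\kappa\), and hence is complete.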
Suppose that we have a (first-order) language ℒ. As emphasized earlier, the formulas of ℒ are just syntactic sequences of symbols, and we only attach meaning to these formulas once we provide an ℒ-structure together with a variable assignment. The fundamental separation between syntactic formulas and semantic structures is incredibly important, because it opens up an interesting way to find both commonalities and differences across structures. That is, given two structures with variable assignments, we can compare the sets of formulas that each satisfies. Although the two structures and variable assignments likely live in different worlds, these two sets both live inside the same set of ℒ-formulas. In other words, the syntactic nature of the formulas provides a shared substrate where we can perform comparisons.
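Concretely (the notation \(\mathit{Form}_{\mathcal{L}}\) for the set of \(\mathcal{L}\)-formulas is assumed here for illustration), given structures \(\mathcal{M}\) and \(\mathcal{N}\) with variable assignments \(s\) and \(t\), the two sets in question are
\[
\{\varphi \in \mathit{Form}_{\mathcal{L}} : (\mathcal{M}, s) \models \varphi\}
\quad \text{and} \quad
\{\varphi \in \mathit{Form}_{\mathcal{L}} : (\mathcal{N}, t) \models \varphi\},
\]
both subsets of the single set \(\mathit{Form}_{\mathcal{L}}\), so they can be compared directly, for instance by asking whether one contains the other or whether they coincide.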
Proofs by induction and definitions by recursion are fundamental tools when working with the natural numbers. However, there are many other places where variants of these ideas apply. In fact, more delicate and exotic proofs by induction and definitions by recursion are two central tools in mathematical logic. We will eventually develop transfinite versions of these ideas in Chapter 9 to give us ways to continue into exotic, infinite realms, and these techniques are essential in both set theory and model theory. In this chapter, we develop the more modest tools of induction and recursion along structures that are generated by one-step processes, like the natural numbers. Occasionally, these types of induction and recursion are called structural.
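As a small illustration of the shape these principles take (a generic schema, not a quotation from the chapter): if a set is generated from a base case by one-step operations, then to prove that a property holds of every element it suffices to verify it on the base case and to check that each generating operation preserves it. For the natural numbers, generated from \(0\) by the successor operation, this is ordinary induction:
\[
\bigl(\varphi(0) \land \forall n\,(\varphi(n) \rightarrow \varphi(n+1))\bigr) \rightarrow \forall n\,\varphi(n).
\]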
Now that we have successfully worked through several important aspects of propositional logic, it is time to move on to a much more substantial and important logic: first-order logic. We gave a basic overview of the fundamental ideas in the introduction. Fundamentally, many areas of mathematics deal with mathematical structures consisting of special constants, relations, and functions, together with certain axioms that these structures obey. We want our logic to be able to handle different types of situations, so we allow ourselves to vary the number and types of these objects. For example, in group theory, we have a special identity element and a binary function corresponding to the group operation. If we want, we can also add in a unary function corresponding to the inverse operation. For ring theory, we have two constants for 0 and 1, along with two binary functions for addition and multiplication (and possibly a unary function for additive inverses). For partial orderings, we just have one binary relation. Any such choice gives rise to a language.
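Written out as collections of symbols (the particular symbol choices below are illustrative, not the book's official definitions), these languages might look like
\[
\mathcal{L}_{\mathrm{grp}} = \{e, \cdot\} \ \text{or} \ \{e, \cdot, {}^{-1}\}, \qquad
\mathcal{L}_{\mathrm{ring}} = \{0, 1, +, \cdot\}, \qquad
\mathcal{L}_{\mathrm{po}} = \{\leq\},
\]
where \(e\), \(0\), and \(1\) are constant symbols, \(\cdot\), \(+\), and \({}^{-1}\) are function symbols, and \(\leq\) is a binary relation symbol.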
To understand logic is, first and foremost, to understand logical consequence. This Element provides an in-depth, accessible, up-to-date account of and philosophical insight into the semantic, model-theoretic conception of logical consequence, its Tarskian roots, and its ideas, grounding, and challenges. The topics discussed include: (i) the passage from Tarski's definition of truth (simpliciter) to his definition of logical consequence, (ii) the need for a non-proof-theoretic definition, (iii) the idea of a semantic definition, (iv) the adequacy conditions of preservation of truth, formality, and necessity, (v) the nature, structure, and totality of models, (vi) the logicality problem that threatens the definition of logical consequence (the problem of logical constants), (vii) a general solution to the logicality, formality, and necessity problems/challenges, based on the isomorphism-invariance criterion of logicality, (viii) philosophical background and justification of the isomorphism-invariance criterion, and (ix) major criticisms of the semantic definition and the isomorphism-invariance criterion.
One is often said to be reasoning well when one is reasoning logically. Many attempts have been made to say what logical reasoning is, and one commonly proposed system is first-order classical logic. This Element will examine the basics of first-order classical logic and discuss some surrounding philosophical issues. The first half of the Element develops a language for the system, as well as a proof theory and model theory. The authors provide theorems about the system they develop, such as unique readability and the Lindenbaum lemma. They also discuss the meta-theory for the system and provide several results there, including soundness and completeness theorems. The second half of the Element compares first-order classical logic to other systems: classical higher-order logic, intuitionistic logic, and several paraconsistent logics which reject the law of ex falso quodlibet.
This Element takes a deep dive into Gödel's 1931 paper giving the first presentation of the Incompleteness Theorems, completely opening up passages in it that might puzzle the student, such as the mysterious footnote 48a. It considers the main ingredients of Gödel's proof (arithmetization, strong representability, and the Fixed Point Theorem) in a layered fashion, returning to their various aspects (semantic, syntactic, computational, philosophical, and mathematical) as the topic arises. It samples some of the most important proofs of the Incompleteness Theorems, e.g. those due to Kuratowski, Smullyan, and Robinson, as well as newer proofs, also of other independent statements, due to H. Friedman, Weiermann, and Paris-Harrington. It examines the question of whether the incompleteness of, e.g., Peano Arithmetic immediately yields the undecidability of the Entscheidungsproblem, as Kripke has recently argued. It considers set-theoretical incompleteness, and finally surveys some of the philosophical consequences discussed in the literature.
This Element is an introduction to recent work on proofs and models in philosophical logic, with a focus on the semantic paradoxes and the sorites paradox. It introduces and motivates different proof systems and different kinds of models for a range of logics, including classical logic, intuitionistic logic, a range of three-valued and four-valued logics, and substructural logics. It also compares and contrasts the different approaches to substructural treatments of the paradoxes, showing how the structural rules of contraction, cut, and identity feature in paradoxical derivations. It then introduces model-theoretic treatments of the paradoxes, including a simple fixed-point model construction that generates three-valued models for theories of truth and can provide models for a range of different non-classical logics. The Element closes with a discussion of the relationship between proofs and models, arguing that both have their place in the philosophers' and logicians' toolkits.
This Element is an exposition of second- and higher-order logic and type theory. It begins with a presentation of the syntax and semantics of classical second-order logic, pointing up the contrasts with first-order logic. This leads to a discussion of higher-order logic based on the concept of a type. Section 2 contains an account of the origins and nature of type theory, and its relationship to set theory. Section 3 introduces Local Set Theory (also known as higher-order intuitionistic logic), an important form of type theory based on intuitionistic logic. In Section 4 a number of contemporary forms of type theory are described, all of which are based on the so-called 'doctrine of propositions as types'. We conclude with an Appendix in which the semantics for Local Set Theory, based on category theory, is outlined.