In previous chapters we have looked at the basic theory of knowledge and belief, along with some extensions and applications in the realms of computer science and artificial intelligence. The emphasis in this theory (or rather these theories and applications) was put upon the question of what is known or believed by the agent, and the logical systems that we have seen enable one to derive the knowledge or belief of such an agent.
In this chapter we shall switch the emphasis to the other side of the picture, namely whether one can also say something about the ignorance of an agent. This is not as easy as it might seem at first glance. Of course, we can employ epistemic logic to express ignorance of the agent as well as its knowledge, e.g. by formulas of the form ¬Kϕ, expressing that ϕ is not known, and that the agent is thus ignorant about the truth of ϕ. One may even express a kind of total ignorance of the agent about the assertion ϕ by the formula ¬Kϕ ∧ ¬K¬ϕ: the agent knows neither ϕ nor ¬ϕ. This is all perfectly fine, but how can one infer that the agent knows neither ϕ nor ¬ϕ in an actual situation? Of course, epistemic logic enables one to derive the agent's ignorance in some cases. For instance, since Kp → ¬K¬p is valid in S5, from Kp (the agent knows p) we can derive that the agent must be ignorant about ¬p (i.e. ¬K¬p). However, now consider the following situation.
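The semantic side of these formulas can be sketched concretely. The following is a minimal illustration, not part of the formal development: it assumes the standard possible-worlds reading of Chapter 1 (the agent knows ϕ iff ϕ holds in every world it considers possible), and the names `knows` and `ignorant_about` are ours.

```python
# Minimal possible-worlds sketch of knowledge and ignorance (S5 flavour).
# An epistemic state is the set of worlds the agent considers possible;
# the agent knows phi iff phi holds in all of them.

def knows(worlds, phi):
    """K(phi): phi is true in every world the agent considers possible."""
    return all(phi(w) for w in worlds)

def ignorant_about(worlds, phi):
    """~K(phi) & ~K(~phi): the agent knows neither phi nor its negation."""
    return not knows(worlds, phi) and not knows(worlds, lambda w: not phi(w))

# Worlds are truth assignments to the atoms p and q.
worlds = [{"p": True, "q": True}, {"p": True, "q": False}]

p = lambda w: w["p"]
q = lambda w: w["q"]

print(knows(worlds, p))                    # Kp holds: p is true in both worlds
print(knows(worlds, lambda w: not p(w)))   # K~p fails, as Kp -> ~K~p predicts
print(ignorant_about(worlds, q))           # total ignorance about q
```

Note that `ignorant_about` is exactly the formula ¬Kϕ ∧ ¬K¬ϕ evaluated against a fixed epistemic state; the difficulty discussed above is how to *derive* such ignorance when the state is not given explicitly.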
In this chapter we shall occupy ourselves with default reasoning, or reasoning by default. In fact we indicate how default logic can be based on epistemic logic, and particularly how we may employ Halpern & Moses' minimal epistemic states for this purpose. In this way we obtain a simple and natural S5-based logic for default reasoning that is well-behaved in a certain way. (We show the logic to be cumulative in the sense of Kraus, Lehmann and Magidor [KLM90].)
Default logic, autoepistemic logic (AEL) and other approaches to non-monotonic reasoning suffer from a technical complexity that is not in line with naive common-sense reasoning. They employ fixed-point constructions or higher-order logic in order to define the belief sets that one would like to associate with some base set of knowledge.
Here we present a modal logic, called EDL, which is an extension of the epistemic logic of Chapter 1. The logic EDL was introduced in [MH91a, MH92], and in [MH93a, 95] we connected it to the theory of Halpern & Moses, as treated in Section 3.1, to obtain a logic for default reasoning. The combined approach is relatively simple compared with AEL, but, more importantly, it is better suited as a default logic than AEL, as we shall show subsequently.
Our approach — unlike AEL — does not involve any fixed points or higher-order formulas. The basis for this logic is the simple S5-modal logic of Chapter 1. EDL contains a knowledge (certainty) operator and (dual) possibility operator.
The previous chapters dealt mostly with the metamathematical properties of the systems of bounded arithmetic and of the propositional proof systems. We studied the provability and the definability in these systems and their various relations. The reader has by now perhaps some feeling for the strength of the systems. In this chapter we shall consider the provability of several combinatorial facts in bounded arithmetic.
In the first section we study the counting functions for predicates in PH, the bounded PHP, approximate counting, and the provability of the infinitude of primes. In the second section we demonstrate that a lower bound on the size of constant-depth circuits can be meaningfully formalized and proved in bounded arithmetic. The last, third section studies some questions related to the main problem of whether there is a model of S2 in which the polynomial-time hierarchy does not collapse.
Counting
A crucial property that allows a theory to prove a lot of elementary combinatorial facts is counting. In the context of bounded arithmetic this would require having definitions of the counting functions for predicates.
Uniform counting, however, is not available.
Theorem 15.1.1. There is no -formula θ(a, a) that would define, for each set a and each n ∈ ω, the parity of the set {x ∈ n | a(x)}.
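The theorem concerns definability inside the theory, not computability: the parity in question is of course trivially computable. As a concrete illustration (in Python, with our own naming), this is the function whose bounded definition the theorem rules out:

```python
# The parity function of Theorem 15.1.1: given a predicate a and a
# bound n, the parity of the set {x < n | a(x)}.  Computing it is
# trivial; the point of the theorem is that no bounded formula of the
# indicated class *defines* it.

def parity(a, n):
    """Parity (0 or 1) of the set {x < n | a(x)}."""
    return sum(1 for x in range(n) if a(x)) % 2

# Example: a(x) holds iff x is divisible by 3.
a = lambda x: x % 3 == 0
print(parity(a, 10))  # {0, 3, 6, 9} has 4 elements, so parity 0
```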
Fundamental problem. Is bounded arithmetic S2 finitely axiomatizable?
As we shall see (Theorem 10.2.4), this question is equivalent to the question whether there is a model of S2 in which the polynomial time hierarchy PH does not collapse.
Finite axiomatizability of S and T
In this section we summarize the information about the fundamental problem that we have on the grounds of the knowledge obtained in the previous chapters.
Theorem 10.1.1. Each of the theories S and T is finitely axiomatizable for i ≥ 1.
Proof. By Lemma 6.1.4, for i ≥ 1 there is a formula UNIVi(x, y, z) that is a universal formula (provably in). This implies that and, for i ≥ 1, are finitely axiomatizable over.
To see that is also finitely axiomatizable, verify that only a finite part of is needed in the proof of Lemma 6.1.4.
The next statement generalizes this theorem.
Theorem 10.1.2. Let 1 ≤ i and 2 ≤ j. Then the set of the consequences of
Epistemic logic concerns the notions of knowledge and belief (ἐπιστήμη — episteme — is Greek for ‘knowledge’), and stems from philosophy, where it was developed to give a formal treatment of these notions. (Sometimes the logic of belief is separately referred to as doxastic logic, from the Greek word δόξα — doxa — meaning ‘surmise’ or ‘presumption’. In this book we shall use epistemic logic for the logic of knowledge and belief.) In [Hin62] the Finnish logician and philosopher Jaakko Hintikka presented a logic for knowledge and belief that was based on modal logic. Modal logic is a so-called philosophical logic dealing with the notions of necessity and contingency (possibility) ([Kri63], [Che80], [HC68, HC84]), and it appeared that epistemic logic could be viewed as an instance of this more general logic by interpreting necessity and possibility in an epistemic manner. For a thorough treatment of epistemic logic from the perspective of philosophy we refer to [Len80].
Especially in the last decade the use of logic and logical formalisms in artificial intelligence (AI) has increased enormously, including that of logics originally developed in and for philosophy. Epistemic logic is one of these so-called philosophical logics that has been ‘discovered’ by computer scientists and AI researchers. In particular, the relevance of epistemic logic has been realised by researchers interested in the formal description of the knowledge of agents in distributed and intelligent systems, in order to specify and verify protocols, represent knowledge, and formalise reasoning methods.
We shall study in this chapter the topic of hard tautologies: tautologies that are candidates for not having short proofs in a particular proof system. A closely related question is whether there is an optimal propositional proof system, that is, a proof system P such that no other system has more than a polynomial speed-up over P. We shall obtain a statement analogous to the NP-completeness results, characterizing any propositional proof system as an extension of EF by a set of axioms of a particular form. Recall the notions of a proof system and p-simulation from Section 4.1, the definitions of the translations of arithmetic formulas into propositional ones in Section 9.2, and the relation between reflection principles (consistency statements) and p-simulations established in Section 9.3. We shall also use the notation of Chapter 9.
Finitistic consistency statements and optimal proof systems
We shall denote by Taut(x) the formula Taut0(x) from Section 9.3 defining the set of (quantifier-free) tautologies; the set itself is denoted TAUT.
Recall from Section 9.2 the definition of the translation
producing from a formula a sequence of propositional formulas (Definition 9.2.1, Lemma 9.2.2).
This chapter presents important definability results for fragments of bounded arithmetic.
A Turing machine M will be given by its set of states Q, the alphabet Σ, the number of working tapes, the transition function, and its clocks, that is, an explicit time bound. Most results of the form “Given machine M, the theory T can prove …” could actually be proved in a slightly stronger form: “For any k, the theory T can prove that for any M running in time ≤ n^k …” A natural formulation for such results is in terms of models of T and computations within such models, but in this chapter we shall omit these formulations.
An instantaneous description of a computation of machine M on input x consists of the current state, the positions of the heads, the content of all tapes, and the current time; that is, it is a sequence of symbols whose length is proportional to the time bound for n := |x|.
A computation will be coded by the sequence of the consecutive instantaneous descriptions.
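The coding just described can be sketched as follows. This is an illustration under our own simplified conventions, not the formalization used in the theory: one working tape, and an instantaneous description recorded as (state, head position, tape contents, time). The toy machine and all names are ours.

```python
# Sketch of instantaneous descriptions (IDs) and the coding of a
# computation as the sequence of consecutive IDs.  One tape only;
# each ID has length proportional to the time bound, as in the text.

from dataclasses import dataclass

@dataclass
class ID:
    state: str
    head: int
    tape: list   # tape contents as a list of symbols
    time: int

def step(delta, d):
    """One transition: delta maps (state, symbol) to (state', symbol', move)."""
    sym = d.tape[d.head] if d.head < len(d.tape) else "_"
    q, w, mv = delta[(d.state, sym)]
    tape = list(d.tape)
    if d.head < len(tape):
        tape[d.head] = w
    else:
        tape.append(w)
    return ID(q, d.head + (1 if mv == "R" else -1), tape, d.time + 1)

def computation(delta, x, t):
    """Code a computation as the sequence of t+1 consecutive IDs on input x."""
    ids = [ID("q0", 0, list(x), 0)]
    for _ in range(t):
        ids.append(step(delta, ids[-1]))
    return ids

# Toy machine that overwrites symbols with '1' while moving right.
delta = {("q0", s): ("q0", "1", "R") for s in "01_"}
run = computation(delta, "00", 2)
print([d.tape for d in run])  # [['0', '0'], ['1', '0'], ['1', '1']]
```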
Now we shall consider several bounded formulas defining these elementary concepts. They are all in the language L+ and thus also (by Lemma 5.4.1) in L.
This chapter will present basic propositional calculus. By that I mean properties of propositional calculus established by direct combinatorial arguments, as distinguished from high-level arguments involving concepts (or motivations) from other parts of logic (bounded arithmetic) and complexity theory.
Examples of the former are various simulation results or the lower bound for resolution from Haken (1985). Examples of the latter are the simulation of the Frege system with substitution by the extended Frege system (Lemma 4.5.5 and Corollary 9.3.19), or the construction of the provably hardest tautologies from the finitistic consistency statements (Section 14.2).
We shall define basic propositional proof systems: resolution R, extended resolution ER, Frege system F, extended Frege system EF, Frege system with the substitution rule SF, quantified propositional calculus G, and Gentzen's sequent calculus LK. We begin with the general concept of a propositional proof system.
Propositional proof systems
A property of the usual textbook calculus is that it can be checked in deterministic polynomial time whether a string of symbols is a proof in the system or not. This is generalized into the following basic definition of Cook and Reckhow (1979).
Definition 4.1.1. Let TAUT be the set of propositional tautologies in the language with propositional connectives: constants 0 (FALSE) and 1 (TRUE), ¬ (negation), ∨ (disjunction), and & (conjunction), and atoms p1, p2,…
A propositional proof system is a polynomial time function P whose range is the set TAUT.
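A standard concrete instance of this definition is the “truth-table” proof system, which we sketch here under our own toy encoding (formulas as Python expressions over their atoms; all names are illustrative). A proof of a formula is the formula together with its full truth table; since the table is part of the proof, checking it takes time polynomial in the length of the proof, and garbage inputs are mapped to a fixed tautology so that the range of P is exactly TAUT.

```python
# The "truth-table" proof system as a polynomial time function P whose
# range is TAUT.  A proof is (formula, atoms, rows): the rows must list
# every truth assignment, and each row must satisfy the formula.

from itertools import product

def table(atoms):
    """All truth assignments to the given atoms, in a fixed order."""
    return [dict(zip(atoms, vals))
            for vals in product([False, True], repeat=len(atoms))]

def P(proof):
    f, atoms, rows = proof
    # Checking is linear in the number of rows, hence polynomial in the
    # length of the proof (the exponential-size table is inside the proof).
    ok = rows == table(atoms) and all(bool(eval(f, {}, r)) for r in rows)
    return f if ok else "p1 or not p1"   # fixed fallback tautology

print(P(("p1 or not p1", ["p1"], table(["p1"]))))           # accepted
print(P(("p1 and p2", ["p1", "p2"], table(["p1", "p2"]))))  # rejected: fallback
```

A P-proof of a tautology τ is then any string that P maps to τ; for the truth-table system every tautology has a proof, but its size is exponential in the number of atoms, which is why the system is so weak compared with those defined below.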
This chapter is devoted primarily to proving several definability and witnessing theorems for the second order system and analogous to those in Chapters 6 and 7.
Our tool is the RSUV isomorphism (Theorem 5.5.13), or rather the definition of (Definition 5.5.3), together with the model-theoretic construction of Lemma 5.5.4.
The first section discusses and defines second order computations. The second section proves some definability and witnessing theorems for the second order systems and further conservation results for first order theories (Corollaries 8.2.5–8.2.7). The proofs are sketched, and the details of the RSUV isomorphism arguments are left to the reader.
Second order computations
Let A(a, βt(b)) be a second order bounded formula and (K, X) a model of. By Definition 5.5.3 we may think of K as K = Log(M) for some M ⊨, with X being the subsets of K coded in M. Pick some a, b ∈ K of length n and some βt(b). Then
if and only if (see Theorem 5.5.13 for the notation)
In this chapter we briefly review the basic notions and facts from logic and complexity theory whose knowledge is assumed throughout the book. We shall always sketch important arguments, both from logic and from complexity theory, and so a determined reader can start with only a rough familiarity with the notions surveyed in the next two sections and pick the necessary material along the way.
For those readers who prefer to consult relevant textbooks we recommend the following books: the best introduction to logic are parts of Shoenfield (1967); for elements of structural complexity theory I recommend Balcázar, Díaz, and Gabarró (1988, 1990); for NP-completeness, Garey and Johnson (1979); and for a survey of lower bounds in Boolean complexity theory, Boppana and Sipser (1990) or the comprehensive monograph Wegener (1987). A more advanced (but self-contained) text on the logic of first order arithmetic theories is Hájek and Pudlák (1993).
Logic
We shall deal with first order and second order theories of arithmetic. The second order theories are, in fact, just two-sorted first order theories: one sort is numbers; the other is finite sets. This means that the underlying logic is always the first order predicate calculus; in particular, no set-theoretic assumptions are part of the underlying logic.
From basic theorems we shall use the Gödel completeness and incompleteness theorems, Tarski's undefinability of truth, and, in arithmetic, constructions of partial truth definitions.