The Medvedev lattice was introduced in [5] as an attempt to make precise the idea, due to Kolmogorov, of identifying true propositional formulas with identically “solvable” problems. A mass problem is any set of functions (throughout this paper “function” means total function from ω to ω; the small Latin letters f, g, h,… will be used as variables for functions). Mass problems correspond to informal problems in the following sense: given any “informal problem”, a mass problem corresponding to it is a set of functions which “solve” the problem, and at least one such function can be “obtained” by any “solution” to the problem (see [10]).
Example 1.1 If A, B ⊆ ω are sets, and φ is a partial function, then the following are mass problems:
{CA} (where CA is the characteristic function of A): this is called the problem of solvability of A; this mass problem will be denoted by the symbol SA;
{f : range(f) = A}: the problem of enumerability of A; this mass problem will be denoted by the symbol εA;
(Other examples) The problem of separability of A and B, i.e. {f : f−1(0) = A & f−1(1) = B}; of course, this mass problem is empty if A ∩ B ≠ Ø: it is absolutely impossible to “solve” the problem in this case. The problem of many-one reducibility of A to B: {f : f−1(B) = A}. The problem of extendibility of φ: {f : f ⊇ φ}.
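Although mass problems consist of total functions on ω, their defining conditions can be illustrated on finite fragments. The sketch below is only an illustration: the sets A and B, and the prefix lengths checked, are assumptions made for the example, not part of the definitions.

```python
# Illustrative witnesses for three of the mass problems above, checked on a
# finite prefix of omega (the actual objects are total functions on omega).

A = {n for n in range(100) if n % 2 == 0}   # assumed example set: even numbers
B = {n for n in range(100) if n % 2 == 1}   # odd numbers, disjoint from A

# S_A: the characteristic function C_A of A.
C_A = lambda n: 1 if n in A else 0

# The problem of enumerability of A: a function whose range is A.
f_enum = lambda n: 2 * n

# Separation of A and B: f^{-1}(0) = A and f^{-1}(1) = B, which is
# possible here precisely because A and B are disjoint.
f_sep = lambda n: 0 if n in A else 1

assert all(C_A(n) == (1 if n % 2 == 0 else 0) for n in range(100))
assert {f_enum(n) for n in range(50)} == A          # range(f) = A on this prefix
assert all(f_sep(n) == 0 for n in A) and all(f_sep(n) == 1 for n in B)
```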
We consider genericity in the context of arithmetic. A set A ⊆ ω is called n-generic if it is Cohen-generic for n-quantifier arithmetic. By degree we mean Turing degree (of unsolvability). We call a degree n-generic if it has an n-generic representative. For a degree a, let D(≤ a) denote the set of degrees which are recursive in a. Since the set of n-generic sets is comeager, if some property is satisfied in D(≤ a) for any generic degree a, then in the sense of Baire category we can say that it is satisfied in D(≤ a) for almost every degree a. So the structure of generic degrees plays an important role when we study the structure of D, the set of all degrees. For example, Slaman and Woodin [38] showed that there is a generic degree a such that if f is an automorphism of D and f(a) = a then f is the identity. In this paper we mainly survey D(≤ a) when a is n-generic, as well as the properties of generic degrees in D. We assume the reader is familiar with the basic results of degree theory and arithmetical forcing. Feferman [4], Hinman [8], Hinman [9], and Lerman [25] are good references in this area. Odifreddi [29] is a good survey of basic notions and results for forcing and reducibilities. Jockusch [11] is a pioneering work in this area.
One of the most efficient methods for proving that a problem is undecidable is to code a second problem which is known to be undecidable into the given problem; a decision procedure for the original problem would then yield one for the second problem, so no such decision procedure can exist. Turing [1939] noticed that this method succeeds because of an inherent notion of information content, coded by a set of integers in the countable situation. This led him to introduce the relation of relative computability between sets as a way of expressing that the information content contained in one set was sufficient to identify the members of the second set.
Post [1944], and Kleene and Post [1954] tried to capture the notion of relative computability algebraically. They noticed that the pre-order relation induced on sets of integers by relative computability gave rise to an equivalence relation, and that the equivalence classes form a poset with least element. This structure, known as the degrees of unsolvability or just the degrees, has since been intensively studied, and it is of interest whether the algebraic structure completely captures the notion of information content. This question reduces to determining whether the degrees are rigid, i.e., whether this algebraic structure has any non-trivial automorphisms, a question for which a positive answer has recently been announced by Cooper.
One of the major problems one encounters in trying to produce, or rule out, automorphisms of the degrees is that the structure is uncountable.
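The coding method described above can be sketched concretely. In the toy example below, the sets and the reduction are hypothetical choices made purely for illustration: the map x ↦ 2x carries the question “is x divisible by 3?” to “is 2x divisible by 6?”, so any decision procedure for the target set yields one for the source set; undecidability proofs use the contrapositive.

```python
# A many-one reduction f with f^{-1}(B) = A: a decider for B yields one for A.
# Toy instance: A = multiples of 3, B = multiples of 6, f(x) = 2x, since 2x is
# divisible by 6 exactly when x is divisible by 3.

def decide_B(x: int) -> bool:
    """Assumed decision procedure for the target set B."""
    return x % 6 == 0

def f(x: int) -> int:
    """The reduction: membership of x in A is coded as membership of f(x) in B."""
    return 2 * x

def decide_A(x: int) -> bool:
    """Decider for A obtained by composing the reduction with decide_B."""
    return decide_B(f(x))

assert all(decide_A(x) == (x % 3 == 0) for x in range(1000))
# Contrapositive: if A is known to be undecidable, no decide_B can exist.
```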
This volume is a collection of refereed research articles commemorating the Leeds Recursion Theory Year 1993-94. The year was funded principally by the (then) UK Science and Engineering Research Council, with additional support from the London Mathematical Society, European Twinning/Human Capital and Mobility Networks on ‘Complexity, Logic and Recursion Theory’, and on ‘Proof Theory and Computation’, a MURST-British Council travel grant, an EC PECO visiting fellowship, and with the backing of the Leeds University Department of Pure Mathematics. We thank them all for enabling an invigorating year.
It is fifteen years since the publication of the last Leeds Recursion Theory volume in this same series (LMS Lecture Notes 45). In that time the subject has made great strides. New methods have been developed and out of the immense technical machinery have finally emerged solutions to long-standing problems which originally motivated the pioneers some forty years ago, notably on definability, decidability and automorphisms for recursion theoretic structures. In addition the fundamental ideas concerning computation and recursion have naturally found their place at the interface between logic and theoretical computer science, and the feedback continues to motivate mathematical research in a variety of new directions. Thus the following contributions provide a picture of current ideas and methods in the ongoing investigations of the structure of the computable and non-computable universe. A number of the articles contain introductory and background material, which it is hoped will make the volume an invaluable source of information for specialist and non-specialist alike.
When can we embed one shift of finite type into another? When can we factor one shift of finite type onto another? The main results in this chapter, the Embedding Theorem and the Lower Entropy Factor Theorem, tell us the answers when the shifts have different entropies. For each theorem there is a simple necessary condition on periodic points, and this condition turns out to be sufficient as well. In addition, these periodic point conditions can be verified with relative ease.
We state and prove the Embedding Theorem in §10.1. The necessity of the periodic point condition here is easy. The sufficiency makes use of the fundamental idea of a marker set to construct sliding block codes. In §10.2 we prove the Masking Lemma, which shows how to represent embeddings in a very concrete form; we will use this in Chapter 11 to prove a striking application of symbolic dynamics to linear algebra. §10.3 contains the statement and proof of the Lower Entropy Factor Theorem, which is in a sense “dual” to the Embedding Theorem. The proof employs a marker construction similar to that of the Embedding Theorem. One consequence is an unequal entropy version of the Finite Equivalence Theorem.
The Embedding Theorem
Suppose that X and Y are irreducible shifts of finite type. When is there an embedding from X into Y? If the embedding is also onto, then X and Y are conjugate.
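For edge shifts, the entropy appearing in these theorems is the logarithm (base 2, as is customary in symbolic dynamics) of the spectral radius of the adjacency matrix, and so the entropy hypothesis can be checked numerically. The sketch below is illustrative: the two matrices and the use of plain power iteration are choices made for the example.

```python
import math

def spectral_radius(A, iters=500):
    """Estimate the Perron eigenvalue of a nonnegative matrix by power iteration."""
    n = len(A)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(w)                    # growth factor approximates the eigenvalue
        v = [x / lam for x in w]        # renormalize to avoid overflow
    return lam

golden = [[1, 1], [1, 0]]   # golden mean shift: no two consecutive 1s
full2  = [[2]]              # full shift on two symbols

h_golden = math.log2(spectral_radius(golden))   # log2((1 + sqrt(5))/2)
h_full2  = math.log2(spectral_radius(full2))    # exactly 1

# Strict entropy inequality, the situation covered by the Embedding Theorem.
assert h_golden < h_full2
```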
In previous chapters we have looked at the basic theory of knowledge and belief, along with some extensions and applications in the realms of computer science and artificial intelligence. The emphasis in this theory (or rather these theories and applications) was put upon the question of what is known or believed by the agent, and the logical systems that we have seen enable one to derive the knowledge or belief of such an agent.
In this chapter we shall switch the emphasis to the other side of the picture, namely whether one can say something about the ignorance of an agent as well. This is not as easy as it might seem at first glance. Of course, we can employ epistemic logic to express ignorance of the agent as well as its knowledge, e.g. by formulas of the form ¬Kϕ, expressing that ϕ is not known, and that the agent is thus ignorant about the truth of ϕ. One may even express a kind of total ignorance of the agent about the assertion ϕ by considering a formula of the form ¬Kϕ ∧ ¬K¬ϕ: the agent does not know ϕ nor does he know ¬ϕ. This is all perfectly fine, but how can one infer that the agent knows neither ϕ nor ¬ϕ in an actual situation? Of course, epistemic logic enables one to derive the agent's ignorance in some cases. For instance, since Kp → ¬K¬p is valid in S5, we can derive that, given Kp (the agent knows p), the agent must be ignorant about ¬p (i.e. ¬K¬p). However, now consider the following situation.
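Both the validity Kp → ¬K¬p and the total-ignorance pattern ¬Kϕ ∧ ¬K¬ϕ can be checked in a small S5 Kripke model, where accessibility is an equivalence relation and Kϕ holds at a world iff ϕ holds throughout that world's equivalence class. The model below is a minimal sketch whose worlds and valuation are invented for illustration.

```python
# Minimal S5 model: worlds partitioned into equivalence classes; the agent
# knows phi at w iff phi holds at every world indistinguishable from w.

classes = [{'w1', 'w2'}, {'w3'}]           # equivalence classes (illustrative)
val = {'p': {'w1', 'w2'}, 'q': {'w1'}}     # worlds where each atom is true

def cell(w):
    return next(c for c in classes if w in c)

def K(atom, w):
    """K atom: the atom is true at every world in w's equivalence class."""
    return all(u in val[atom] for u in cell(w))

def K_not(atom, w):
    """K not-atom: the atom is false at every world in w's equivalence class."""
    return all(u not in val[atom] for u in cell(w))

# Kp -> not K(not p): an instance of the S5 validity mentioned in the text.
for w in ('w1', 'w2', 'w3'):
    assert (not K('p', w)) or (not K_not('p', w))

# Total ignorance about q at w1: the agent knows neither q nor not-q,
# since q is true at w1 but false at the indistinguishable world w2.
assert not K('q', 'w1') and not K_not('q', 'w1')
```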
Invariants such as entropy, zeta function, and the dimension pair play an important role in studying shifts of finite type and sofic shifts. What values can these invariants take? Which numbers are entropies, which functions are zeta functions, which pairs are dimension pairs? Answers to these kinds of questions are called realization theorems.
In §11.1 we completely answer the entropy question. There is a simple algebraic description of the possible entropies of shifts of finite type and of sofic shifts. This amounts to characterizing the spectral radii of nonnegative integral matrices.
We focus on zeta functions in §11.2. Theorem 6.4.6 (see also Corollary 6.4.7) shows that the zeta function of an edge shift contains the same information as the nonzero spectrum of the adjacency matrix. Thus characterizing zeta functions of shifts of finite type is the same as characterizing the nonzero spectra of nonnegative integral matrices. We state an important partial result due to Boyle and Handelman [BoyH1].
The proof of this result is too complicated to include here, but we illustrate some of the main ideas involved by treating some special cases such as when all eigenvalues are integers. A remarkable feature of this work is that a significant theorem in linear algebra is proved by using important tools from symbolic dynamics: the Embedding Theorem and the Masking Lemma from Chapter 10. At the end of §11.2, we state a complete characterization of zeta functions of mixing sofic shifts [BoyH1].
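The identification of the zeta function with the nonzero spectrum can be checked numerically; the golden mean shift is used below purely as an illustrative example. With adjacency matrix A = [[1, 1], [1, 0]], the periodic point counts are p_n = tr(A^n), and ζ(t) = exp(Σ p_n tⁿ/n) should agree with 1/det(I − tA) = 1/(1 − t − t²).

```python
import math

A = [[1, 1], [1, 0]]   # golden mean shift (illustrative example)

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace_powers(A, N):
    """p_n = tr(A^n): the number of points of period n in the edge shift."""
    P, traces = A, []
    for _ in range(N):
        traces.append(sum(P[i][i] for i in range(len(A))))
        P = mat_mul(P, A)
    return traces

p = trace_powers(A, 40)        # 1, 3, 4, 7, 11, ... (the Lucas numbers)

t = 0.1                        # evaluate both sides inside the radius of convergence
zeta_from_traces = math.exp(sum(p[n] * t ** (n + 1) / (n + 1) for n in range(40)))
zeta_closed_form = 1 / (1 - t - t ** 2)    # 1/det(I - tA)

assert abs(zeta_from_traces - zeta_closed_form) < 1e-9
```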
In this chapter we shall occupy ourselves with default reasoning, or reasoning by default. In fact we indicate how default logic can be based on epistemic logic, and particularly how we may employ Halpern & Moses' minimal epistemic states for this purpose. In this way we obtain a simple and natural S5-based logic for default reasoning that is well-behaved in a certain way. (We show the logic to be cumulative in the sense of Kraus, Lehmann and Magidor [KLM90].)
Default logic, autoepistemic logic (AEL) and other approaches to non-monotonic reasoning suffer from a technical complexity that is not in line with naive common-sense reasoning. They employ fixed-point constructions or higher-order logic in order to define the belief sets that one would like to associate with some base set of knowledge.
Here we present a modal logic, called EDL, which is an extension of the epistemic logic of Chapter 1. The logic EDL was introduced in [MH91a, MH92], and in [MH93a, 95] we connected it to the theory of Halpern & Moses, as treated in Section 3.1, to obtain a logic for default reasoning. The combined approach is relatively simple compared with AEL, but, more importantly, it is better suited as a default logic than AEL, as we shall show subsequently.
Our approach — unlike AEL — does not involve any fixed points or higher-order formulas. The basis for this logic is the simple S5-modal logic of Chapter 1. EDL contains a knowledge (certainty) operator and (dual) possibility operator.
The previous chapters dealt mostly with the metamathematical properties of the systems of bounded arithmetic and of the propositional proof systems. We studied the provability and the definability in these systems and their various relations. The reader has by now perhaps some feeling for the strength of the systems. In this chapter we shall consider the provability of several combinatorial facts in bounded arithmetic.
In the first section we study the counting functions for predicates in PH, the bounded PHP (pigeonhole principle), approximate counting, and the provability of the infinitude of primes. In the second section we demonstrate that a lower bound on the size of constant-depth circuits can be meaningfully formalized and proved in bounded arithmetic. The third and last section studies some questions related to the main problem of whether there is a model of S2 in which the polynomial-time hierarchy does not collapse.
Counting
A crucial property that allows a theory to prove a lot of elementary combinatorial facts is counting. In the context of bounded arithmetic this would require having definitions of the counting functions for predicates.
The uniform counting is not available.
Theorem 15.1.1. There is no bounded formula θ(a, n) that would define, for each set a and each n ∈ ω, the parity of the set {x ∈ n | a(x)}.
Fundamental problem. Is bounded arithmetic S2 finitely axiomatizable?
As we shall see (Theorem 10.2.4), this question is equivalent to the question whether there is a model of S2 in which the polynomial time hierarchy PH does not collapse.
Finite axiomatizability of S^i_2 and T^i_2
In this section we summarize the information about the fundamental problem that we have on the grounds of the knowledge obtained in the previous chapters.
Theorem 10.1.1. Each of the theories S^i_2 and T^i_2 is finitely axiomatizable for i ≥ 1.
Proof. By Lemma 6.1.4, for i ≥ 1 there is a formula UNIVi(x, y, z) that is a universal formula (provably in S^i_2). This implies that S^i_2 and T^i_2, i ≥ 1, are finitely axiomatizable over S^1_2.
To see that S^1_2 is also finitely axiomatizable, verify that only a finite part of S^1_2 is needed in the proof of Lemma 6.1.4.
The next statement generalizes this theorem.
Theorem 10.1.2. Let 1 ≤ i and 2 ≥ j. Then the set of the consequences of
Epistemic logic concerns the notions of knowledge and belief (ἐπιστήμη — episteme — is Greek for ‘knowledge’), and stems from philosophy, where it has been developed to give a formal treatment of these notions. (Sometimes the logic of belief is separately referred to as doxastic logic, from the Greek word δόξα — doxa —, meaning ‘surmise’ or ‘presumption’. In this book we shall use epistemic logic for the logic of knowledge and belief.) In [Hin62] the Finnish logician and philosopher Jaakko Hintikka presented a logic for knowledge and belief that was based on modal logic. Modal logic is a so-called philosophical logic dealing with the notions of necessity and contingency (possibility) ([Kri63], [Che80], [HC68, HC84]), and it appeared that epistemic logic could be viewed as an instance of this more general logic by interpreting necessity and possibility in an epistemic manner. For a thorough treatment of epistemic logic from the perspective of philosophy we refer to [Len80].
Especially in the last decade the use of logic and logical formalisms in artificial intelligence (AI) has increased enormously, including that of those logics originally developed in and for philosophy. Epistemic logic is one of these so-called philosophical logics that has been ‘discovered’ by computer scientists and AI researchers. In particular, the relevance of epistemic logic has been realised by researchers interested in the formal description of the knowledge of agents in distributed and intelligent systems, in order to specify or verify protocols, and to represent knowledge and formalise reasoning methods, respectively.
We shall study in this chapter the topic of hard tautologies: tautologies that are candidates for not having short proofs in a particular proof system. The closely related question is whether there is an optimal propositional proof system, that is, a proof system P such that no other system has more than a polynomial speed-up over P. We shall obtain a statement analogous to the NP-completeness results characterizing any propositional proof system as an extension of EF by a set of axioms of particular form. Recall the notions of a proof system and p-simulation from Section 4.1, the definitions of translations of arithmetic formulas into propositional ones in Section 9.2, and the relation between reflection principles (consistency statements) and p-simulations established in Section 9.3. We shall also use the notation previously used in Chapter 9.
Finitistic consistency statements and optimal proof systems
We shall denote by Taut(x) the formula Taut0(x) from Section 9.3 defining the set of (quantifier-free) tautologies; the set itself is denoted TAUT.
Recall from Section 9.2 the definition of the translation producing from a formula a sequence of propositional formulas (Definition 9.2.1, Lemma 9.2.2).
This chapter presents important definability results for fragments of bounded arithmetic.
A Turing machine M will be given by its set of states Q, the alphabet Σ, the number of working tapes, the transition function, and its clock, that is, an explicit time bound. Most results of the form “Given machine M, the theory T can prove …” could actually be proved in a slightly stronger form: “For any k the theory T can prove that for any M running in time ≤ n^k …” A natural formulation for such results is in terms of models of T and computations within such models, but in this chapter we shall omit these formulations.
An instantaneous description of a computation of machine M on input x consists of the current state, the positions of the heads, the contents of all tapes, and the current time; that is, it is a sequence of symbols whose length is proportional to the time bound for n := |x|.
A computation will be coded by the sequence of the consecutive instantaneous descriptions.
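The coding of a computation as a sequence of instantaneous descriptions can be sketched directly. In the toy simulator below, the machine, its transition table, and the tuple encoding of an instantaneous description are illustrative assumptions; each description records the state, head position, tape contents, and current time, so the code of the whole computation has length proportional to the running time.

```python
def run_tm(inp, delta, q0='q0', halt='qh', blank='_', fuel=10_000):
    """Run a one-tape machine; return the final tape and the list of
    instantaneous descriptions (state, head position, tape, time)."""
    tape, head, state, t, ids = list(inp), 0, q0, 0, []
    while state != halt and t < fuel:
        if head >= len(tape):
            tape.append(blank)                        # extend tape on demand
        ids.append((state, head, ''.join(tape), t))   # one description per step
        state, write, move = delta[(state, tape[head])]
        tape[head] = write
        head += {'R': 1, 'L': -1, 'S': 0}[move]
        t += 1
    ids.append((state, head, ''.join(tape), t))       # halting description
    return ''.join(tape).rstrip(blank), ids

# Toy machine: sweep right, overwriting every 0 with 1, halt at the blank.
delta = {('q0', '0'): ('q0', '1', 'R'),
         ('q0', '1'): ('q0', '1', 'R'),
         ('q0', '_'): ('qh', '_', 'S')}

out, ids = run_tm('0010', delta)
assert out == '1111'
assert len(ids) == 6   # the coded computation: one description per time step
```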
Now we shall consider several bounded formulas defining these elementary concepts. They are all in the language L+ and thus also (by Lemma 5.4.1) in L.