In computability theory, Gödel's incompleteness theorem [1934] finds expression in the definition of the jump operator. Thus, the Friedberg [1957] jump inversion theorem completely characterises the scope of the Gödel undecidability phenomenon within the Kleene-Post [1954] degree structure for classifying unsolvable problems.
Minimal degrees of unsolvability naturally arise as the structural counterpart of decision problems whose solutions are extremely specialised (their solution does not have any other nontrivial applications). Spector [1956] showed that minimal degrees exist, while Sacks [1961] constructed one below 0′.
The first result concerning the jumps of minimal degrees was due to Yates [1970], who obtained a low minimal degree as a corollary to his construction of a minimal degree m below any given nonzero computably enumerable (or r.e.) degree. A global characterisation of such jumps was provided by the Cooper [1973] jump inversion theorem, while in the same paper it was shown that such a theorem could not hold locally (there are no high minimal degrees). The intuition that minimal degrees are in a sense close to 0 (the degree of the computable sets) was reinforced by Jockusch and Posner [1978], who found that all minimal degrees are in fact generalised low₂. This, together with Sasso's [1974] construction of a non-low minimal degree below 0′, supported Jockusch's conjecture (see Yates [1974], p. 235) that the jumps of minimal degrees below 0′ can be characterised as those 0′-REA degrees which are low over 0′.
The Medvedev lattice was introduced in [5] as an attempt to make precise the idea, due to Kolmogorov, of identifying true propositional formulas with identically “solvable” problems. A mass problem is any set of functions (throughout this paper “function” means total function from ω to ω; the small Latin letters f, g, h,… will be used as variables for functions). Mass problems correspond to informal problems in the following sense: given any “informal problem”, a mass problem corresponding to it is a set of functions which “solve” the problem, and at least one such function can be “obtained” by any “solution” to the problem (see [10]).
Example 1.1 If A, B ⊆ ω are sets, and φ is a partial function, then the following are mass problems:
{CA} (where CA is the characteristic function of A): this is called the problem of solvability of A; this mass problem will be denoted by the symbol SA;
{f : range(f) = A}: the problem of enumerability of A; this mass problem will be denoted by the symbol εA;
(Other examples) The problem of separability of A and B, i.e. {f : f−1(0) = A & f−1(1) = B}; of course, this mass problem is empty if A ∩ B ≠ Ø: it is absolutely impossible to “solve” the problem in this case. The problem of many-one reducibility of A to B: {f : f−1(B) = A}. The problem of extendibility of φ: {f : f ⊇ φ}.
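For orientation, it may help to recall the ordering that makes the mass problems into the Medvedev lattice (a standard definition, summarised here by us rather than stated in this passage): a mass problem A reduces to B when a single effective procedure turns every solution of B into a solution of A.

```latex
% Medvedev reducibility (standard definition; \Phi_e denotes the e-th Turing functional)
\mathcal{A} \leq_M \mathcal{B}
  \iff
  \exists e \;\forall g \in \mathcal{B}\;
    \bigl( \Phi_e(g) \text{ is total and } \Phi_e(g) \in \mathcal{A} \bigr)
```

Under this ordering the degrees of mass problems form a lattice, with the examples above occupying natural positions in it.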
We consider genericity in the context of arithmetic. A set A ⊆ ω is called n-generic if it is Cohen-generic for n-quantifier arithmetic. By degree we mean Turing degree (of unsolvability). We call a degree n-generic if it has an n-generic representative. For a degree a, let D(≤ a) denote the set of degrees which are recursive in a. Since the set of n-generic sets is comeager, if some property is satisfied in D(≤ a) whenever a is a generic degree, then, in the sense of Baire category, we can say that it is satisfied in D(≤ a) for almost every degree a. So the structure of generic degrees plays an important role when we study the structure of D, the set of all degrees. For example, Slaman and Woodin [38] showed that there is a generic degree a such that if f is an automorphism of D and f(a) = a then f is the identity. In this paper we mainly survey D(≤ a) when a is n-generic, as well as the properties of generic degrees in D. We assume the reader is familiar with the basic results of degree theory and arithmetical forcing. Feferman [4], Hinman [8], Hinman [9], and Lerman [25] are good references in this area. Odifreddi [29] is a good survey for basic notions and results for forcing and reducibilities. Jockusch [11] is a pioneering work in this area.
One of the most efficient methods for proving that a problem is undecidable is to code a second problem which is known to be undecidable into the given problem; a decision procedure for the original problem would then yield one for the second problem, so no such decision procedure can exist. Turing [1939] noticed that this method succeeds because of an inherent notion of information content, coded by a set of integers in the countable situation. This led him to introduce the relation of relative computability between sets as a way of expressing that the information content contained in one set was sufficient to identify the members of the second set.
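The reduction method described above has a simple computational shape: if a problem A reduces to a problem B via a computable function f, then any decision procedure for B yields one for A, and contrapositively, if A is undecidable, no decision procedure for B can exist. The following is a hedged toy sketch of that pattern; the function names and the trivially decidable toy sets are our own illustration, not from the text.

```python
# Toy sketch of decidability transferring down a many-one reduction:
# x is in A  iff  f(x) is in B, so a B-decider decides A.

def decide_via_reduction(x, f, decide_B):
    """Decide whether x is in A by asking the B-decider about f(x)."""
    return decide_B(f(x))

# Toy instance: A = the even numbers, B = {0}, reduction f(n) = n mod 2.
decide_B = lambda y: y == 0
f = lambda n: n % 2

even_4 = decide_via_reduction(4, f, decide_B)   # True: 4 is even
even_7 = decide_via_reduction(7, f, decide_B)   # False: 7 is odd
```

In the undecidability argument the direction is reversed: one codes a known undecidable A into B, so the impossibility of deciding A rules out any decider for B.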
Post [1944], and Kleene and Post [1954] tried to capture the notion of relative computability algebraically. They noticed that the pre-order relation induced on sets of integers by relative computability gave rise to an equivalence relation, and that the equivalence classes form a poset with least element. This structure, known as the degrees of unsolvability or just the degrees, has since been intensively studied, and it is of interest whether the algebraic structure completely captures the notion of information content. This question reduces to the determination of whether the degrees are rigid, i.e., whether this algebraic structure has any non-trivial automorphisms, a question for which a positive result has recently been announced by Cooper.
One of the major problems one encounters in trying to produce, or rule out, automorphisms of the degrees is that the structure is uncountable.
This volume is a collection of refereed research articles commemorating the Leeds Recursion Theory Year 1993-94. The year was funded principally by the (then) UK Science and Engineering Research Council, with additional support from the London Mathematical Society, European Twinning/Human Capital and Mobility Networks on ‘Complexity, Logic and Recursion Theory’, and on ‘Proof Theory and Computation’, a MURST-British Council travel grant, an EC PECO visiting fellowship, and with the backing of the Leeds University Department of Pure Mathematics. We thank them all for enabling an invigorating year.
It is fifteen years since the publication of the last Leeds Recursion Theory volume in this same series (LMS Lecture Notes 45). In that time the subject has made great strides. New methods have been developed and out of the immense technical machinery have finally emerged solutions to long-standing problems which originally motivated the pioneers some forty years ago, notably on definability, decidability and automorphisms for recursion theoretic structures. In addition the fundamental ideas concerning computation and recursion have naturally found their place at the interface between logic and theoretical computer science, and the feedback continues to motivate mathematical research in a variety of new directions. Thus the following contributions provide a picture of current ideas and methods in the ongoing investigations of the structure of the computable and non-computable universe. A number of the articles contain introductory and background material, which it is hoped will make the volume an invaluable source of information for specialist and non-specialist alike.
In previous chapters we have looked at the basic theory of knowledge and belief, along with some extensions and applications in the realms of computer science and artificial intelligence. The emphasis in this theory (or rather these theories and applications) was put upon the question of what is known or believed by the agent, and the logical systems that we have seen enable one to derive the knowledge or belief of such an agent.
In this chapter we shall switch the emphasis to the other side of the picture, namely whether one can say something about the ignorance of an agent as well. This is not as easy as it might seem at first glance. Of course, we can employ epistemic logic to express ignorance of the agent as well as its knowledge, e.g. by formulas of the form ¬Kϕ, expressing that ϕ is not known, and that the agent is thus ignorant about the truth of ϕ. One may even express a kind of total ignorance of the agent about the assertion ϕ by considering a formula of the form ¬Kϕ ∧ ¬K¬ϕ: the agent does not know ϕ nor does he know ¬ϕ. This is all perfectly fine, but how can one infer that the agent knows neither ϕ nor ¬ϕ in an actual situation? Of course, epistemic logic enables one to derive the agent's ignorance in some cases. For instance, since Kp → ¬K¬p is valid in S5, we can derive that, given Kp (the agent knows p), the agent must be ignorant about ¬p (i.e. ¬K¬p). However, now consider the following situation.
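The total-ignorance pattern ¬Kϕ ∧ ¬K¬ϕ can be checked mechanically in a Kripke model. Below is a minimal sketch in Python, under our own assumptions (the two-world model, the world names, and the function `K` are all illustrations of ours, not from the chapter): a single S5 agent who cannot distinguish two worlds is totally ignorant about p.

```python
# Minimal single-agent Kripke-model evaluation of the K operator
# (illustrative sketch; model and names are our own).

def K(accessible, valuation, w, p):
    """K p holds at world w iff p is true at every world accessible from w."""
    return all(valuation[v][p] for v in accessible[w])

# Two indistinguishable worlds: p true in w1, false in w2.
# In S5 the accessibility relation is an equivalence relation.
accessible = {"w1": ["w1", "w2"], "w2": ["w1", "w2"]}
valuation = {"w1": {"p": True}, "w2": {"p": False}}

# Total ignorance about p at w1: neither Kp nor K(not p) holds.
Kp = K(accessible, valuation, "w1", "p")                       # False
Knotp = all(not valuation[v]["p"] for v in accessible["w1"])   # False
ignorant = (not Kp) and (not Knotp)                            # True
```

This also illustrates why deriving ignorance is harder than deriving knowledge: the formula ¬Kϕ ∧ ¬K¬ϕ holds only relative to a model, and the logic alone does not tell us which model describes the actual situation.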
In this chapter we shall occupy ourselves with default reasoning, or reasoning by default. In fact we indicate how default logic can be based on epistemic logic, and particularly how we may employ Halpern & Moses' minimal epistemic states for this purpose. In this way we obtain a simple and natural S5-based logic for default reasoning that is well-behaved in a certain way. (We show the logic to be cumulative in the sense of Kraus, Lehmann and Magidor [KLM90].)
Default logic, autoepistemic logic (AEL) and other approaches to non-monotonic reasoning suffer from a technical complexity that is not in line with naive common-sense reasoning. They employ fixed-point constructions or higher-order logic in order to define the belief sets that one would like to associate with some base set of knowledge.
Here we present a modal logic, called EDL, which is an extension of the epistemic logic of Chapter 1. The logic EDL was introduced in [MH91a, MH92], and in [MH93a, 95] we connected it to the theory of Halpern & Moses, as treated in Section 3.1, to obtain a logic for default reasoning. The combined approach is relatively simple compared with AEL, but, more importantly, it is better suited as a default logic than AEL, as we shall show subsequently.
Our approach — unlike AEL — does not involve any fixed points or higher-order formulas. The basis for this logic is the simple S5-modal logic of Chapter 1. EDL contains a knowledge (certainty) operator and (dual) possibility operator.
Epistemic logic concerns the notions of knowledge and belief (ἐπιστήμη — episteme — is Greek for ‘knowledge’), and stems from philosophy where it has been developed to give a formal treatment of these notions. (Sometimes the logic of belief is separately referred to as doxastic logic, from the Greek word δόξα — doxa —, meaning ‘surmise’ or ‘presumption’. In this book we shall use epistemic logic for the logic of knowledge and belief.) In [Hin62] the Finnish logician and philosopher Jaakko Hintikka presented a logic for knowledge and belief that was based on modal logic. Modal logic is a so-called philosophical logic dealing with the notions of necessity and contingency (possibility) ([Kri63], [Che80], [HC68, HC84]), and it appeared that epistemic logic could be viewed as an instance of this more general logic by interpreting necessity and possibility in an epistemic manner. For a thorough treatment of epistemic logic from the perspective of philosophy we refer to [Len80].
Especially in the last decade the use of logic and logical formalisms in artificial intelligence (AI) has increased enormously, including that of those logics that were originally developed in and for philosophy. Epistemic logic is one of these so-called philosophical logics that has been ‘discovered’ by computer scientists and AI researchers. In particular, the relevance of epistemic logic has been realised by researchers interested in the formal description of knowledge of agents in distributed and intelligent systems, in order to specify or verify protocols, and to represent knowledge and formalise reasoning methods, respectively.
Knowledge and belief play an important role in everyday life. In fact, most of what we do has to do with the things we know or believe. Likewise, it is not so strange that when we have to specify the behaviour of artificial agents in order to program or implement them in some particular way, it is thought important to attend to the ‘knowledge’ and ‘belief’ of such an agent. In many areas of computer science and artificial intelligence one is concerned with the description or representation of knowledge of users or even the systems themselves. For example, in database theory one tries to model knowledge about parts of reality in certain formal ways to render it implementable and accessible to users. In AI one tries to design knowledge-based decision-support systems that are intended to assist professional users in some specialist field when making decisions, by providing pieces of knowledge and preferably some deductions from the input data by means of some inference mechanism. The representation and manipulation of knowledge of some sort is ubiquitous in the information sciences.
This book is not about knowledge representation in general, but rather concentrates on the logic of knowledge and belief. What (logical) properties do knowledge and belief have? What is the difference between knowledge and belief? We do not intend to answer these questions in a deep philosophical discussion of these notions.