In a public lecture, C. A. R. Hoare (1989a) described his algorithm for finding the ith smallest integer in a collection. This algorithm is subtle, but Hoare described it with admirable clarity as a game of solitaire. Each playing card carried an integer. By moving cards from pile to pile according to simple rules, the player could quickly find the required integer.
Then Hoare changed the rules of the game. Each card occupied a fixed position, and could only be moved if exchanged with another card. This described the algorithm in terms of arrays. Arrays have great efficiency, but they also have a cost. They probably defeated much of the audience, as they defeat experienced programmers. Mills and Linger (1986) claim that programmers become more productive when arrays are restricted to stacks, queues, etc., without subscripting.
Functional programmers often process collections of items using lists. Like Hoare's stacks of cards, lists allow items to be dealt with one at a time, with great clarity. Lists are easy to understand mathematically, and turn out to be more efficient than commonly thought.
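Hoare's card game translates directly into list processing. Since the chapter's Standard ML code is not reproduced here, the following is a hypothetical Python sketch of the idea: deal the items into two piles around a pivot, then search only the pile that can contain the ith smallest.

```python
def select(xs, i):
    """Return the i-th smallest element (0-based) of the list xs,
    in the spirit of Hoare's card game: deal the cards into two piles
    around a pivot, then search only the pile that can hold item i."""
    pivot = xs[0]
    smaller = [x for x in xs[1:] if x < pivot]
    larger = [x for x in xs[1:] if x >= pivot]
    if i < len(smaller):
        return select(smaller, i)
    elif i == len(smaller):
        return pivot
    else:
        return select(larger, i - len(smaller) - 1)
```

Note that, as in Hoare's solitaire description, no element ever changes its pile in place; each recursive step simply deals out fresh piles, which is what makes the list formulation so much clearer than the array version.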
Chapter outline
This chapter describes how to program with lists in Standard ML. It presents several examples that would normally involve arrays, such as matrix operations and sorting.
The chapter contains the following sections:
Introduction to lists. The notion of a list is introduced. Standard ML operates on lists using pattern-matching.
In this chapter we study the worst case setting. We shall present results already known as well as some new ones. As already mentioned in the Overview, precise information about what is known and what is new can be found in the Notes and Remarks.
Our major goal is to obtain tight complexity bounds for the approximate solution of linear continuous problems that are defined on infinite dimensional spaces. We first explain what is to be approximated and how an approximation is obtained. Thus we carefully introduce the fundamental concepts of solution operator, noisy information and algorithm. Special attention will be devoted to information, which is most important in our analysis. Information is, roughly speaking, what we know about the problem to be solved. A crucial assumption is that information is noisy, i.e., it is given not exactly, but with some error.
Since information is usually partial (i.e., many elements share the same information) and noisy, it is impossible to solve the problem exactly. We have to be satisfied with only approximate solutions. They are obtained by algorithms that use information as data. In the worst case setting, the error of an algorithm is given by its worst performance over all problem elements and possible information. A sharp lower bound on the error is given by a quantity called radius of information. We are obviously interested in algorithms with the minimal error.
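In symbols, writing S for the solution operator, φ for an algorithm, and y for the noisy information obtained about a problem element f, the worst case error and the radius of information take roughly the following form (a schematic sketch of the standard definitions, not necessarily the book's exact formulation):

```latex
e^{\mathrm{wor}}(\varphi) \;=\; \sup_{f \in F}\; \sup_{y \in \mathbb{N}(f)} \; \| S(f) - \varphi(y) \|,
\qquad
r(\mathbb{N}) \;=\; \inf_{\varphi}\; e^{\mathrm{wor}}(\varphi),
```

where N(f) denotes the set of all noisy information values that can be observed for f. The radius r(N) is thus, by definition, a lower bound on the error of any algorithm using the information N.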
In the process of doing scientific computations we always rely on some information. In practice, this information is typically noisy, i.e., contaminated by error. Sources of noise include
previous computations,
inexact measurements,
transmission errors,
arithmetic limitations,
an adversary's lies.
Problems with noisy information have always attracted considerable attention from researchers in many different fields: statistics, engineering, control theory, economics, applied mathematics. There is also a vast literature, especially in statistics, where noisy information is analyzed from different perspectives.
In this monograph, noisy information is studied in the context of the computational complexity of solving mathematical problems.
Computational complexity focuses on the intrinsic difficulty of problems as measured by the minimal amount of time, memory, or elementary operations necessary to solve them. Information-based complexity (IBC) is a branch of computational complexity that deals with problems for which the available information is
partial,
noisy,
priced.
Information being partial means that the problem is not uniquely determined by the given information. Information is noisy since it may be contaminated by error. Information is priced since we must pay for getting it. These assumptions distinguish IBC from combinatorial complexity, where information is complete, exact, and free.
Since information about the problem is partial and noisy, only approximate solutions are possible. Approximations are obtained by algorithms that use this information.
This chapter deals with the average case setting. In this setting, we are interested in the average error and cost of algorithms. The structure of this chapter is similar to that of the previous chapter. That is, we first deal with optimal algorithms, then we analyze the optimal information, and finally, we present some complexity results.
To study the average error and/or cost, we have to replace the deterministic assumptions of the worst case setting by stochastic assumptions. That is, we assume some probability distribution µ on the space F of problem elements as well as some distribution of the information noise. The latter means that information is corrupted by random noise. Basically, we consider Gaussian distributions (measures) which seem to be most natural and are most often used in modeling.
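Schematically, with µ a measure on the space F and the noisy information y distributed conditionally on f, the average case error of an algorithm φ replaces the suprema of the worst case setting by an expectation (again a sketch, assuming the standard L2-type definition rather than the book's exact formulation):

```latex
e^{\mathrm{avg}}(\varphi) \;=\; \left( \int_F \mathbb{E}_{\,y \mid f}\, \| S(f) - \varphi(y) \|^2 \;\mu(df) \right)^{1/2}.
```

With µ Gaussian and Gaussian noise on linear information, both the conditional distribution of f given y and the optimal algorithm turn out to have an explicit form, which is what makes the Gaussian assumptions so convenient.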
In Section 3.2, we give a general formulation of the average case setting. We also introduce the concept of the (average) radius of information which, as in the worst case, provides a sharp lower bound on the (average) error of algorithms.
Then we pass to linear problems with Gaussian measures. These are problems where the solution operator is linear, µ is a Gaussian measure, and information is linear with Gaussian noise. In Section 3.3, we recall the definition of a Gaussian measure on a Banach space, listing some important properties. In Sections 3.4 to 3.6 we study optimal algorithms.
In the modern world, the importance of information can hardly be overestimated. Information also plays a prominent role in scientific computations. A branch of computational complexity which deals with problems for which information is partial, noisy and priced is called information-based complexity.
In a number of information-based complexity books, the emphasis was on partial and exact information. In the present book, the emphasis is on noisy information. We consider deterministic and random noise. The analysis of noisy information leads to a variety of interesting new algorithms and complexity results.
The book presents a theory of the computational complexity of continuous problems with noisy information. A number of applications are also given. It is based on the results of many researchers in this area (including those of the author) as well as new results not published elsewhere.
This work would not have been completed if I had not received support from many people. My special thanks go to H. Woźniakowski who encouraged me to write such a book and was always ready to offer his help. I appreciate the considerable help of J.F. Traub. I would also like to thank M. Kon, A. Werschulz, E. Novak, K. Ritter and other colleagues for their valuable comments on various portions of the manuscript.
I wish to express my thanks to the Institute of Applied Mathematics and Mechanics at the University of Warsaw, where the book was almost entirely written.
In Chapters 2 to 5, we fixed the set of problem elements and were interested in finding a single information and algorithm which minimize the error or cost of approximation. Depending on the deterministic or stochastic assumptions on the problem elements and information noise, we studied four different settings: the worst, average, worst-average, and average-worst case settings.
In this chapter, we study the asymptotic setting, in which a problem element f is fixed and we wish to analyze the asymptotic behavior of algorithms. The aim is to construct a sequence of information and algorithms such that the error of successive approximations vanishes as fast as possible as the number of observations increases to infinity.
The asymptotic setting is often studied in computational practice. We mention only the Romberg algorithm for computing integrals, and finite element methods (FEM) for solving partial differential equations with the mesh size tending to zero. When dealing with these and other numerical algorithms, we are interested in how fast they converge to the solution.
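The Romberg algorithm mentioned above illustrates the asymptotic setting well: successive trapezoidal approximations are combined by Richardson extrapolation, and one asks how fast the error vanishes as the number of function evaluations grows. A minimal Python sketch (not taken from the book):

```python
def romberg(f, a, b, n):
    """Romberg integration of f over [a, b] using a table with n levels.
    R[k][0] is the composite trapezoidal rule with 2**k subintervals;
    each further column applies Richardson extrapolation to cancel the
    next term of the error expansion."""
    R = [[0.0] * (n + 1) for _ in range(n + 1)]
    h = b - a
    R[0][0] = h * (f(a) + f(b)) / 2
    for k in range(1, n + 1):
        h /= 2
        # refine the trapezoidal sum by adding the new midpoints
        total = sum(f(a + (2 * i - 1) * h) for i in range(1, 2 ** (k - 1) + 1))
        R[k][0] = R[k - 1][0] / 2 + h * total
        for j in range(1, k + 1):
            R[k][j] = R[k][j - 1] + (R[k][j - 1] - R[k - 1][j - 1]) / (4 ** j - 1)
    return R[n][n]
```

The sequence R[0][0], R[1][1], R[2][2], … is exactly a sequence of approximations whose error for a fixed (smooth) integrand f vanishes rapidly as the number of observations of f increases.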
One might hope that it will be possible to construct a sequence φn(yn) of approximations such that for the element f the error ∥S(f) − φn(yn)∥ vanishes much faster than the error over the whole set of problem elements (or, equivalently, faster than the corresponding radius of information). It turns out, however, that in many cases any attempts to construct such algorithms would fail. We show this by establishing relations between the asymptotic and other settings.
In previous chapters we have looked at the basic theory of knowledge and belief, along with some extensions and applications in the realms of computer science and artificial intelligence. The emphasis in this theory (or rather these theories and applications) was put upon the question of what is known or believed by the agent, and the logical systems that we have seen enable one to derive the knowledge or belief of such an agent.
In this chapter we shall switch the emphasis to the other side of the picture, namely whether one can say something about the ignorance of an agent as well. This is not as easy as it might seem at first glance. Of course, we can employ epistemic logic to express ignorance of the agent as well as its knowledge, e.g. by formulas of the form ¬Kϕ, expressing that ϕ is not known, and that the agent is thus ignorant about the truth of ϕ. One may even express a kind of total ignorance of the agent about the assertion ϕ by considering a formula of the form ¬Kϕ ∧ ¬K¬ϕ: the agent knows neither ϕ nor ¬ϕ. This is all perfectly fine, but how can one infer that the agent knows neither ϕ nor ¬ϕ in an actual situation? Of course, epistemic logic enables one to derive the agent's ignorance in some cases. For instance, since Kp → ¬K¬p is valid in S5, from Kp (the agent knows p) we can derive that the agent must be ignorant about ¬p (i.e. ¬K¬p). However, now consider the following situation.
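The formulas above can be checked concretely in possible-worlds semantics. Below is a hypothetical toy model checker (not from the book): an S5 model assigns to each world the set of worlds the agent considers possible there, and Kφ holds at a world w iff φ holds at every world accessible from w.

```python
def K(model, w, phi):
    """Knowledge operator: phi holds in every world accessible from w.
    `model` maps each world to the set of worlds the agent considers
    possible there (for S5, an equivalence class containing w)."""
    return all(phi(v) for v in model[w])

# A two-world S5 model: the agent cannot distinguish w1 from w2,
# and p is true in w1 but false in w2.
model = {"w1": {"w1", "w2"}, "w2": {"w1", "w2"}}
p = {"w1": True, "w2": False}.__getitem__

# Total ignorance about p at w1: neither Kp nor K(not p) holds.
ignorant = not K(model, "w1", p) and not K(model, "w1", lambda v: not p(v))
```

Here `ignorant` comes out true, witnessing ¬Kp ∧ ¬K¬p; and since every S5 accessibility relation is reflexive, Kφ at w forces φ at w itself and hence ¬K¬φ, mirroring the validity Kp → ¬K¬p used above.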
In this chapter we shall occupy ourselves with default reasoning, or reasoning by default. In fact we indicate how default logic can be based on epistemic logic, and particularly how we may employ Halpern & Moses' minimal epistemic states for this purpose. In this way we obtain a simple and natural S5-based logic for default reasoning that is well-behaved in a certain way. (We show the logic to be cumulative in the sense of Kraus, Lehmann and Magidor [KLM90].)
Default logic, autoepistemic logic (AEL) and other approaches to non-monotonic reasoning suffer from a technical complexity that is not in line with naive common-sense reasoning. They employ fixed-point constructions or higher-order logic in order to define the belief sets that one would like to associate with some base set of knowledge.
Here we present a modal logic, called EDL, which is an extension of the epistemic logic of Chapter 1. The logic EDL was introduced in [MH91a, MH92], and in [MH93a, 95] we connected it to the theory of Halpern & Moses, as treated in Section 3.1, to obtain a logic for default reasoning. The combined approach is relatively simple compared with AEL, but, more importantly, it is better suited as a default logic than AEL, as we shall show subsequently.
Our approach — unlike AEL — does not involve any fixed points or higher-order formulas. The basis for this logic is the simple S5-modal logic of Chapter 1. EDL contains a knowledge (certainty) operator and (dual) possibility operator.
The previous chapters dealt mostly with the metamathematical properties of the systems of bounded arithmetic and of the propositional proof systems. We studied the provability and the definability in these systems and their various relations. The reader has by now perhaps some feeling for the strength of the systems. In this chapter we shall consider the provability of several combinatorial facts in bounded arithmetic.
In the first section we study the counting functions for predicates in PH, the bounded PHP, approximate counting, and the provability of the infinitude of primes. In the second section we demonstrate that a lower bound on the size of constant-depth circuits can be meaningfully formalized and proved in bounded arithmetic. The third and last section studies some questions related to the main problem of whether there is a model of S2 in which the polynomial-time hierarchy does not collapse.
Counting
A crucial property that allows a theory to prove a lot of elementary combinatorial facts is counting. In the context of bounded arithmetic this would require having definitions of the counting functions for predicates.
The uniform counting is not available.
Theorem 15.1.1. There is no -formula θ(a, a) that would define, for each set a and each n ∈ ω, the parity of the set {x ∈ n | a(x)}.
Fundamental problem. Is bounded arithmetic S2 finitely axiomatizable?
As we shall see (Theorem 10.2.4), this question is equivalent to the question whether there is a model of S2 in which the polynomial time hierarchy PH does not collapse.
Finite axiomatizability of S and T
In this section we summarize the information about the fundamental problem that we have on the grounds of the knowledge obtained in the previous chapters.
Theorem 10.1.1. Each of the theories S and T is finitely axiomatizable for i ≤ 1.
Proof. By Lemma 6.1.4, for i ≤ 1 there is a formula UNIVi(x, y, z) that is a universal formula (provably in). This implies that and, i ≤ 1, are finitely axiomatizable over.
To see that is also finitely axiomatizable, verify that only a finite part of is needed in the proof of Lemma 6.1.4.
The next statement generalizes this theorem.
Theorem 10.1.2. Let 1 ≤ and 2 ≥ j. Then the set of the consequences of
Epistemic logic concerns the notions of knowledge and belief (ἐπιστήμη — episteme — is Greek for ‘knowledge’), and stems from philosophy, where it was developed to give a formal treatment of these notions. (Sometimes the logic of belief is separately referred to as doxastic logic, from the Greek word δόξα — doxa — meaning ‘surmise’ or ‘presumption’. In this book we shall use epistemic logic for the logic of knowledge and belief.) In [Hin62] the Finnish logician and philosopher Jaakko Hintikka presented a logic for knowledge and belief that was based on modal logic. Modal logic is a so-called philosophical logic dealing with the notions of necessity and contingency (possibility) ([Kri63], [Che80], [HC68, HC84]), and it appeared that epistemic logic could be viewed as an instance of this more general logic by interpreting necessity and possibility in an epistemic manner. For a thorough treatment of epistemic logic from the perspective of philosophy we refer to [Len80].
Especially in the last decade, the use of logic and logical formalisms in artificial intelligence (AI) has increased enormously, including that of logics originally developed in and for philosophy. Epistemic logic is one of these so-called philosophical logics that have been ‘discovered’ by computer scientists and AI researchers. In particular, the relevance of epistemic logic has been realised by researchers interested in the formal description of the knowledge of agents in distributed and intelligent systems, in order to specify or verify protocols, represent knowledge, and formalise reasoning methods.