An induced forest of a graph G is an acyclic induced subgraph of G. The present paper is devoted to the analysis of a simple randomized algorithm that grows an induced forest in a regular graph. The expected size of the forest it outputs provides a lower bound on the maximum number of vertices in an induced forest of G. When the girth is large and the degree is at least 4, our bound coincides with the best bound known to hold asymptotically almost surely for random regular graphs. This results in an alternative proof for the random case.
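As a concrete illustration (a minimal sketch of one natural such greedy, not necessarily the paper's exact procedure), one can scan the vertices in uniformly random order and keep a vertex precisely when all of its already-kept neighbours lie in distinct trees of the current forest, so that no cycle is induced:

```python
import random

def random_induced_forest(adj):
    """Grow an induced forest greedily in random vertex order.

    adj: dict mapping each vertex to a list of its neighbours.
    Returns a set of vertices inducing an acyclic subgraph.
    """
    parent = {}  # union-find structure over the kept vertices

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    kept = set()
    order = list(adj)
    random.shuffle(order)
    for v in order:
        nbrs = [u for u in adj[v] if u in kept]
        roots = {find(u) for u in nbrs}
        if len(roots) == len(nbrs):  # neighbours in distinct trees: no cycle
            parent[v] = v
            kept.add(v)
            for r in roots:
                parent[r] = v  # merge the touched trees through v
    return kept
```

On a random d-regular graph, the ratio of len(kept) to the number of vertices gives an empirical counterpart to the kind of lower bound analyzed in the paper.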
We show that for 0<α<1 and θ>−α, the Poisson–Dirichlet distribution with parameter (α, θ) is the unique reversible distribution of a rather natural fragmentation–coalescence process. This completes earlier results in the literature for certain split-and-merge transformations and the parameter α = 0.
We derive here the Friedland–Tverberg inequality for positive hyperbolic polynomials. This inequality is applied to give lower bounds for the number of matchings in r-regular bipartite graphs. It is shown that some of these bounds are asymptotically sharp. We improve the known lower bound for the three-dimensional monomer–dimer entropy.
What is this? Chicken Curry and Seafood Salad? Fine, but in the same plate? This is disgusting!
Johan Håstad at Grendel's, Cambridge (1985)
Summary: This appendix lumps together some preliminaries regarding probability theory and some advanced topics related to the role and use of randomness in computation. Needless to say, each of these topics appears in a separate section.
The probabilistic preliminaries include our conventions regarding random variables, which are used throughout the book. Also included are overviews of three useful probabilistic inequalities: Markov's Inequality, Chebyshev's Inequality, and the Chernoff Bound.
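For reference, one standard form of each (the book's statements may be parameterized differently): for any a > 0, Markov's Inequality (for non-negative X) and Chebyshev's Inequality read

$$\Pr[X \ge a] \le \frac{\mathbb{E}[X]}{a} \quad \text{(Markov)}, \qquad \Pr\big[|X - \mathbb{E}[X]| \ge a\big] \le \frac{\mathrm{Var}[X]}{a^2} \quad \text{(Chebyshev)},$$

and for independent X₁, …, Xₙ ∈ {0,1} with Pr[Xᵢ = 1] = p,

$$\Pr\Big[\Big|\sum_{i=1}^{n} X_i - pn\Big| \ge \varepsilon n\Big] \le 2e^{-2\varepsilon^2 n} \quad \text{(Chernoff–Hoeffding)}.$$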
The advanced topics include hashing, sampling, and randomness extraction. For hashing, we describe constructions of pairwise (and t-wise) independent hashing functions and (a few variants of) the Leftover Hashing Lemma (used a few times in the main text). We then review the “complexity of sampling”: that is, the number of samples and the randomness complexity involved in estimating the average value of an arbitrary function defined over a huge domain. Finally, we provide an overview of the question of extracting almost-perfect randomness from sources of weak (or defective) randomness.
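As a small illustration of the first topic, here is a hedged Python sketch of the classical pairwise-independent family h(x) = ((ax + b) mod p) mod m over a prime p; this is one standard construction, not necessarily the exact family treated in the appendix:

```python
import random

P = (1 << 61) - 1  # a Mersenne prime, taken larger than the domain

def sample_hash(m):
    """Sample h(x) = ((a*x + b) mod P) mod m.

    The family {h_{a,b}} is pairwise independent as a map into Z_P;
    the final reduction mod m introduces only a small bias when m << P.
    """
    a = random.randrange(1, P)
    b = random.randrange(P)
    return lambda x: ((a * x + b) % P) % m

h = sample_hash(1024)
print(h(12345), h(67890))  # two (nearly) pairwise-independent outputs
```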
Probabilistic Preliminaries
Probability plays a central role in Complexity Theory (see, for example, Chapters 6–10). We assume that the reader is familiar with the basic notions of probability theory. In this section, we merely present the probabilistic notations that are used throughout the book and three useful probabilistic inequalities.
A fresh view of the question of randomness has been taken by Complexity Theory: It has been postulated that a distribution is random (or rather pseudorandom) if it cannot be told apart from the uniform distribution by any efficient procedure. Thus, (pseudo)randomness is not an inherent property of an object, but is rather relative to the observer.
At the extreme, this approach says that the question of whether the world is deterministic or allows for some free choice (which may be viewed as sources of randomness) is irrelevant. What matters is how the world looks to us and to various computationally bounded devices. That is, if some phenomenon looks random, then we may just treat it as if it were random. Likewise, if we can generate sequences that cannot be told apart from the uniform distribution by any efficient procedure, then we can use these sequences in any efficient randomized application instead of the ideal coin tosses that are postulated in the design of this application.
The pivot of the foregoing approach is the notion of computational indistinguishability, which refers to pairs of distributions that cannot be told apart by efficient procedures. The most fundamental incarnation of this notion associates efficient procedures with polynomial-time algorithms, but other incarnations that restrict attention to other classes of distinguishing procedures also lead to important insights.
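In its most fundamental form (one common rendering; the main text fixes the exact conventions), probability ensembles {Xₙ} and {Yₙ} are computationally indistinguishable if for every probabilistic polynomial-time distinguisher D and every positive polynomial p, for all sufficiently large n,

$$\big|\Pr[D(1^n, X_n) = 1] - \Pr[D(1^n, Y_n) = 1]\big| < \frac{1}{p(n)}.$$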
This book consists of ten chapters and seven appendices. The chapters constitute the core of this book and are written in a style adequate for a textbook, whereas the appendices provide either relevant background or additional perspective and are written in the style of a survey article. The relative length and ordering of the chapters (and appendices) do not reflect their relative importance, but rather an attempt at the best logical order (i.e., minimizing the number of forward pointers).
Following are brief summaries of the book's chapters and appendices. These summaries are more novice-friendly than those provided in Section 1.1.3 but less detailed than the summaries provided at the beginning of each chapter.
Chapter 1: Introduction and Preliminaries. The introduction provides a high-level overview of some of the content of Complexity Theory as well as a discussion of some of the characteristic features of this field. In addition, the introduction contains several important comments regarding the approach and conventions of the current book. The preliminaries provide the relevant background on computability theory, which is the setting in which complexity-theoretic questions are studied. Most importantly, central notions such as search and decision problems, algorithms that solve such problems, and their complexity are defined. In addition, this part presents the basic notions underlying non-uniform models of computation (like Boolean circuits).
Chapter 2: P, NP, and NP-Completeness. The P versus NP Question can be phrased as asking whether or not finding solutions is harder than checking the correctness of solutions.
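In the standard verifier formulation (a common rendering; the chapter's own definitions are authoritative), a set S is in NP if there exist a polynomial p and a polynomial-time algorithm V such that

$$x \in S \iff \exists y \in \{0,1\}^{p(|x|)} \ \text{such that}\ V(x, y) = 1,$$

and the P-vs-NP Question asks whether every such S can be decided in polynomial time without the aid of the certificate y.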
A word of a Gentleman is better than a proof, but since you are not a Gentleman – please provide a proof.
Leonid A. Levin (1986)
The proofs presented in this appendix were not included in the main text for a variety of reasons (e.g., they were deemed too technical and/or out of place in the corresponding location). On the other hand, since our presentation of them is sufficiently different from the original and/or standard presentation, we see a benefit in including them in the current book.
Summary: This appendix contains proofs of the following results:
PH is reducible to #P (and in fact to ⊕P) via randomized Karp reductions. The proof follows the underlying ideas of Toda's original proof, but the actual presentation is quite different.
For any integral function f that satisfies f(n) ∈ {2, …, poly(n)}, it holds that IP(f) ⊆ AM(O(f)) and AM(O(f)) ⊆ AM(f). The proofs differ from the original ones only in secondary details, but these details seem significant.
Is it indeed the case that the more resources one has, the more one can achieve? The answer may seem obvious, but the obvious answer (of yes) actually presumes that the worker knows what resources are at his/her disposal. In this case, when allocated more resources, the worker (or computation) can indeed achieve more. But otherwise, nothing may be gained by adding resources.
In the context of Computational Complexity, an algorithm knows the amount of resources that it is allocated if it can determine this amount without exceeding the corresponding resources. This condition is satisfied in all “reasonable” cases, but it may not hold in general. The latter fact should not be that surprising: We already know that some functions are not computable, and if these functions are used to determine resources then the algorithm may be in trouble. Needless to say, this discussion requires some formalization, which is provided in the current chapter.
Summary: When using “nice” functions to determine an algorithm's resources, it is indeed the case that more resources allow for more tasks to be performed. However, when “ugly” functions are used for the same purpose, increasing the resources may have no effect. By nice functions we mean functions that can be computed without exceeding the amount of resources that they specify (e.g., t(n) = n² or t(n) = 2ⁿ). Naturally, “ugly” functions do not allow for presenting themselves in such nice forms. […]
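One standard way of formalizing “nice” (hedging on the chapter's exact definition) is time-constructibility: t is time-constructible if some algorithm computes

$$n \mapsto t(n) \quad \text{within } t(n) \text{ steps},$$

which is precisely the property of being computable without exceeding the resources one specifies; both t(n) = n² and t(n) = 2ⁿ qualify.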
Although we view specific (natural) computational problems as secondary to (natural) complexity classes, we do use the former for clarification and illustration of the latter. This appendix provides definitions of such computational problems, grouped according to the type of objects to which they refer (e.g., graphs, Boolean formula, etc.).
We start by addressing the central issue of the representation of the various objects that are referred to in the aforementioned computational problems. The general principle is that elements of all sets are “compactly” represented as binary strings (without much redundancy). For example, the elements of a finite set S (e.g., the set of vertices in a graph or the set of variables appearing in a Boolean formula) will be represented as binary strings of length log₂ |S|.
Graphs
Graph theory has long been recognized as one of the more useful mathematical subjects for the computer science student to master. The approach which is natural in computer science is the algorithmic one; our interest is not so much in existence proofs or enumeration techniques, as it is in finding efficient algorithms for solving relevant problems, or alternatively showing evidence that no such algorithms exist. Although algorithmic graph theory was started by Euler, if not earlier, its development in the last ten years has been dramatic and revolutionary.
So saying she donned her beautiful, glittering golden, ambrosial sandals, which carry her flying like the wind over the vast land and sea; she grasped the redoubtable bronze-shod spear, so stout and sturdy and strong, wherewith she quells the ranks of heroes who have displeased her, the [bright-eyed] daughter of her mighty father.
Homer, Odyssey, 1:96–101
The existence of natural computational problems that are (or seem to be) infeasible to solve is usually perceived as bad news, because it means that we cannot do things we wish to do. But this bad news has a positive side, because hard problems can be “put to work” to our benefit, most notably in cryptography.
It seems that utilizing hard problems requires the ability to efficiently generate hard instances, which is not guaranteed by the notion of worst-case hardness. In other words, we refer to the gap between “occasional” hardness (e.g., worst-case hardness or mild average-case hardness) and “typical” hardness (with respect to some tractable distribution). Much of the current chapter is devoted to bridging this gap, which is known by the term hardness amplification. The actual applications of typical hardness are presented in Chapter 8 and Appendix C.
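A classical instance of such amplification (stated informally here; the chapter gives precise versions) is Yao's XOR Lemma: if every small circuit errs on at least a δ fraction of inputs when computing f, then small circuits can compute

$$F(x_1, \ldots, x_k) = f(x_1) \oplus \cdots \oplus f(x_k)$$

only with advantage roughly (1 − 2δ)ᵏ over random guessing, an advantage that vanishes exponentially with k.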
Summary: We consider two conjectures that are related to P ≠ NP. The first conjecture is that there are problems that are solvable in exponential time (i.e., in E) but are not solvable by (non-uniform) families of small (say, polynomial-size) circuits. […]
Open are the double doors of the horizon; unlocked are its bolts.
Philip Glass, Akhnaten, Prelude
Whereas the number of steps taken during a computation is the primary measure of its efficiency, the amount of temporary storage used by the computation is also a major concern. Furthermore, in some settings, space is even more scarce than time.
In addition to the intrinsic interest in space complexity, its study provides an interesting perspective on the study of time complexity. For example, in contrast to the common conjecture by which NP ≠ coNP, we shall see that analogous space-complexity classes (e.g., Nℒ) are closed under complementation (e.g., Nℒ = coNℒ).
Summary: This chapter is devoted to the study of the space complexity of computations, while focusing on two rather extreme cases. The first case is that of algorithms having logarithmic space complexity. We view such algorithms as utilizing the naturally minimal amount of temporary storage, where the term “minimal” is used here in an intuitive (but somewhat inaccurate) sense, and note that logarithmic space complexity seems a more stringent requirement than polynomial time. The second case is that of algorithms having polynomial space complexity, which seems a strictly more liberal restriction than polynomial time complexity. Indeed, algorithms utilizing polynomial space can perform almost all the computational tasks considered in this book (e.g., the class PSPACE contains almost all complexity classes considered in this book). […]
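For orientation, the standard chain of inclusions relating these classes to time complexity (well known, though stated here in generic notation) is

$$\mathcal{L} \subseteq \mathcal{NL} \subseteq \mathcal{P} \subseteq \mathcal{NP} \subseteq \mathcal{PSPACE} \subseteq \mathcal{EXP},$$

complemented by Nℒ = coNℒ (the Immerman–Szelepcsényi Theorem) and by Savitch's Theorem, NSPACE(s) ⊆ DSPACE(s²) for space-constructible s(n) ≥ log n.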
The philosophers have only interpreted the world, in various ways; the point is to change it.
Karl Marx, “Theses on Feuerbach”
In light of the apparent infeasibility of solving numerous useful computational problems, it is natural to ask whether these problems can be relaxed such that the relaxation is both useful and allows for feasible solving procedures. We stress two aspects about the foregoing question: On the one hand, the relaxation should be sufficiently good for the intended applications; but, on the other hand, it should be significantly different from the original formulation of the problem so as to escape the infeasibility of the latter. We note that whether a relaxation is adequate for an intended application depends on the application, and thus much of the material in this chapter is less robust (or generic) than the treatment of the non-relaxed computational problems.
Summary: We consider two types of relaxations. The first type of relaxation refers to the computational problems themselves; that is, for each problem instance we extend the set of admissible solutions. In the context of search problems this means settling for solutions that have a value that is “sufficiently close” to the value of the optimal solution (with respect to some value function). Needless to say, the specific meaning of “sufficiently close” is part of the definition of the relaxed problem. […]
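For a maximization problem with value function v, one common way of making “sufficiently close” precise (one convention among several; the chapter fixes its own) is to require, for some factor c ≥ 1, a feasible solution y to instance x satisfying

$$v(x, y) \ge \frac{1}{c} \cdot \max_{y'} v(x, y'),$$

in which case the algorithm is called a c-factor approximation.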
Summary: This glossary includes self-contained definitions of most complexity classes mentioned in the book. Needless to say, the glossary offers a very minimal discussion of these classes, and the reader is referred to the main text for further discussion. The items are organized by topics rather than by alphabetic order. Specifically, the glossary is partitioned into two parts, dealing separately with complexity classes that are defined in terms of algorithms and their resources (i.e., time and space complexity of Turing machines) and complexity classes defined in terms of non-uniform circuits (and referring to their size and depth). The algorithmic classes include time complexity classes (such as P, NP, coNP, BPP, RP, coRP, PH, E, EXP, and NEXP) and the space complexity classes ℒ, Nℒ, Rℒ, and PSPACE. The non-uniform classes include the circuit classes P/poly as well as NCᵏ and ACᵏ.
Definitions (and basic results) regarding many other complexity classes are available at the constantly evolving Complexity Zoo.
Preliminaries
Complexity classes are sets of computational problems, where each class contains problems that can be solved with specific computational resources. To define a complexity class one specifies a model of computation, a complexity measure (like time or space), which is always measured as a function of the input length, and a bound on the complexity (of problems in the class).
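For example (in generic notation, hedging on the glossary's exact conventions), fixing deterministic Turing machines, the time measure, and a bound t yields

$$\mathrm{DTIME}(t) = \{\, S : \text{some deterministic TM decides } S \text{ within } O(t(n)) \text{ steps on length-}n \text{ inputs} \,\},$$

and P arises as the union of DTIME(nᵏ) over all constants k.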
Farewell, Hans – whether you live or end where you are! Your chances are not good. The wicked dance in which you are caught up will last a few more sinful years, and we would not wager much that you will come out whole. To be honest, we are not really bothered about leaving the question open. Adventures in the flesh and spirit, which enhanced and heightened your ordinariness, allowed you to survive in the spirit what you probably will not survive in the flesh. There were majestic moments when you saw the intimation of a dream of love rising up out of death and the carnal body. Will love someday rise up out of this worldwide festival of death, this ugly rutting fever that inflames the rainy evening sky all round?
Thomas Mann, The Magic Mountain, “The Thunderbolt.”
We hope that this work has succeeded in conveying the fascinating flavor of the concepts, results, and open problems that dominate the field of Computational Complexity. We believe that the new century will witness even more exciting developments in this field, and urge the reader to try to contribute to them. But before bidding good-bye, we wish to express a few more thoughts.
As noted in Section 1.1.1, so far Complexity Theory has been far more successful in relating fundamental computational phenomena than in providing definite answers regarding fundamental questions. Consider, for example, the theory of NP-completeness versus the P-vs-NP Question, or the theory of pseudorandomness versus establishing the existence of one-way functions (even under P ≠ NP).
It is easier for a camel to go through the eye of a needle, than for a rich man to enter into the kingdom of God.
Matthew, 19:24.
Complexity Theory provides a clear definition of the intuitive notion of an explicit construction. Furthermore, it also suggests a hierarchy of different levels of explicitness, referring to the ease of constructing the said object.
The basic levels of explicitness are provided by considering the complexity of fully constructing the object (e.g., the time it takes to print the truth table of a finite function). In this context, explicitness often means outputting a full description of the object in time that is polynomial in the length of that description. Stronger levels of explicitness emerge when considering the complexity of answering natural queries regarding the object (e.g., the time it takes to evaluate a fixed function at a given input). In this context, (strong) explicitness often means answering such queries in polynomial time.
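To illustrate the distinction (a minimal sketch, not the appendix's own example): in the 3-regular graph on Z_p, p prime, in which x is adjacent to x+1, x−1, and x⁻¹ (with 0 taken as its own inverse), a classical example of a strongly explicit expander family, a neighborhood query is answerable in time polynomial in log p, whereas printing the whole graph takes time polynomial in p:

```python
def neighbors(x, p):
    """Neighbours of x in the graph on Z_p with edges x ~ x+1, x ~ x-1,
    and x ~ x^{-1} (0 is its own 'inverse'). Runs in poly(log p) time:
    this is the strong level of explicitness."""
    inv = pow(x, p - 2, p) if x else 0  # modular inverse via Fermat
    return [(x + 1) % p, (x - 1) % p, inv]

def full_graph(p):
    """Write down the entire graph: poly(p) time, i.e., polynomial in the
    length of the object's full description -- the basic level."""
    return {x: neighbors(x, p) for x in range(p)}

print(neighbors(12, 101))  # fast even when p is astronomically large
```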
The aforementioned themes are demonstrated in our brief review of explicit constructions of error-correcting codes and expander graphs. These constructions are, in turn, used in various parts of the main text.
Summary: This appendix provides a brief overview of aspects of coding theory and expander graphs that are most relevant to Complexity Theory. Starting with coding theory, we review several popular constructions of error-correcting codes, culminating in the construction of a “good” binary code (i.e., a code that achieves constant relative distance and constant rate). […]
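Recall the standard parameters: a code C ⊆ {0,1}ⁿ encoding k-bit messages has rate and relative distance

$$R = \frac{k}{n}, \qquad \delta = \frac{1}{n}\min_{c_1 \neq c_2 \in C} \Delta(c_1, c_2),$$

where Δ denotes Hamming distance; “good” means that both R and δ are bounded below by positive constants as n grows.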