This book starts by providing the relevant background on computability theory, which is the setting in which complexity-theoretic questions are studied. Most importantly, this preliminary chapter (i.e., Chapter 1) provides a treatment of central notions, such as search and decision problems, algorithms that solve such problems, and their complexity. Special attention is given to the notion of a universal algorithm.
The main part of this book (i.e., Chapters 2–5) focuses on the P-vs-NP Question and on the theory of NP-completeness. Additional topics covered in this part include the general notion of an efficient reduction (with a special emphasis on reductions of search problems to corresponding decision problems), the existence of problems in NP that are neither NP-complete nor in P, the class coNP, optimal search algorithms, and promise problems. A brief overview of this main part follows.
The P-vs-NP Question. Loosely speaking, the P-vs-NP Question refers to search problems for which the correctness of solutions can be efficiently checked (i.e., there is an efficient algorithm that given a solution to a given instance determines whether or not the solution is correct). Such search problems correspond to the class NP, and the P-vs-NP Question corresponds to whether or not all these search problems can be solved efficiently (i.e., is there an efficient algorithm that given an instance finds a correct solution). Thus, the P-vs-NP Question can be phrased as asking whether finding solutions is harder than checking the correctness of solutions.
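As a concrete illustration (ours, not the text's), consider the Subset-Sum problem: checking a proposed solution is efficient, whereas the only solver sketched here is exhaustive search over all subsets. The function names and the instance are illustrative.

```python
from itertools import combinations

def check_solution(nums, target, subset):
    """Efficient check: does `subset` consist of given numbers summing to target?
    (For simplicity we ignore multiplicities in this sketch.)"""
    return all(x in nums for x in subset) and sum(subset) == target

def find_solution(nums, target):
    """Exhaustive search over all subsets; no polynomial-time
    algorithm is known for this search problem."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return list(subset)
    return None

sol = find_solution([3, 7, 1, 8, 4], 11)
assert sol is not None and check_solution([3, 7, 1, 8, 4], 11, sol)
```

The asymmetry between the two functions is exactly the asymmetry the P-vs-NP Question asks about.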
Overview: In light of the difficulty of settling the P-vs-NP Question, when faced with a hard problem H in NP, we cannot expect to prove that H is not in P (unconditionally), because this would imply P ≠ NP. The best we can expect is a conditional proof that H is not in P, based on the assumption that NP is different from P. The contrapositive is proving that if H is in P, then so is any problem in NP (i.e., NP equals P). One possible way of proving such an assertion is showing that any problem in NP is polynomial-time reducible to H. This is the essence of the theory of NP-completeness.
In this chapter we prove the existence of NP-complete problems, that is, the existence of individual problems that “effectively encode” a wide class of seemingly unrelated problems (i.e., all problems in NP). We also prove that deciding the satisfiability of a given Boolean formula is NP-complete. Other NP-complete problems include deciding whether a given graph is 3-colorable and deciding whether a given graph contains a clique of a given size. The core of establishing the NP-completeness of these problems is showing that each of them can encode any other problem in NP. Thus, these demonstrations provide a method of encoding instances of any NP problem as instances of the target NP-complete problem.
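A minimal sketch (ours, not the book's) of a Karp-reduction between two of the problems just mentioned: a graph has a clique of size k if and only if its complement has an independent set of size k. The brute-force deciders are included only so the mapping can be tested on toy instances; all names are illustrative.

```python
from itertools import combinations

def complement(n, edges):
    """Edge set of the complement graph on vertices 0..n-1."""
    edges = {tuple(sorted(e)) for e in edges}
    return {(u, v) for u in range(n) for v in range(u + 1, n)} - edges

def has_clique(n, edges, k):
    """Brute-force clique test (exponential; for illustration only)."""
    edges = {tuple(sorted(e)) for e in edges}
    return any(all(tuple(sorted((u, v))) in edges
                   for u, v in combinations(c, 2))
               for c in combinations(range(n), k))

def reduce_clique_to_indset(n, edges, k):
    """Karp-reduction: map a Clique instance to an Independent-Set
    instance; the map itself runs in polynomial time and makes no
    oracle calls."""
    return n, complement(n, edges), k

def has_indset(n, edges, k):
    """A set is independent in G iff it is a clique in the complement."""
    return has_clique(n, complement(n, edges), k)
```

Correctness of the reduction means the answer is preserved: the triangle has a 3-clique, and its complement (the empty graph on three vertices) has a 3-independent-set.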
Organization. We start by defining NP-complete problems (see Section 4.1) and demonstrating their existence (see Section 4.2). […]
The following brief overview is intended to give a flavor of the questions addressed by Complexity Theory. It includes a brief review of the contents of the current book, as well as a brief overview of several more advanced topics. The latter overview is quite vague, and is merely meant as a teaser toward further study (cf., e.g., [13]).
Absolute Goals and Relative Results
Complexity Theory is concerned with the study of the intrinsic complexity of computational tasks. Its “final” goals include the determination of the complexity of any well-defined task. Additional goals include obtaining an understanding of the relations between various computational phenomena (e.g., relating one fact regarding Computational Complexity to another). Indeed, we may say that the former type of goals is concerned with absolute answers regarding specific computational phenomena, whereas the latter type is concerned with questions regarding the relation between computational phenomena.
Interestingly, so far Complexity Theory has been more successful in coping with goals of the latter (“relative”) type. In fact, the failure to resolve questions of the “absolute” type led to the flourishing of methods for coping with questions of the “relative” type. Musing for a moment, let us say that, in general, the difficulty of obtaining absolute answers may naturally lead to a search for conditional answers, which may in turn reveal interesting relations between phenomena. Furthermore, the lack of absolute understanding of individual phenomena seems to facilitate the development of methods for relating different phenomena. Anyhow, this is what happened in Complexity Theory.
Overview: Reductions are procedures that use “functionally specified” subroutines. That is, the functionality of the subroutine is specified, but its operation remains unspecified and its running time is counted at unit cost. Thus, a reduction solves one computational problem by using oracle (or subroutine) calls to another computational problem. Analogously to our focus on efficient (i.e., polynomial-time) algorithms, here we focus on efficient (i.e., polynomial-time) reductions.
We present a general notion of (polynomial-time) reductions among computational problems, and view the notion of a “Karp-reduction” (also known as a “many-one reduction”) as an important special case that suffices (and is more convenient) in many cases. Reductions play a key role in the theory of NP-completeness, which is the topic of Chapter 4.
In the current chapter, we stress the fundamental nature of the notion of a reduction per se and highlight two specific applications: reducing search problems and optimization problems to decision problems. Furthermore, in these applications, it will be important to use the general notion of a reduction (i.e., “Cook-reduction” rather than “Karp-reduction”). We comment that the aforementioned reductions of search and optimization problems to decision problems further justify the common focus on the study of the decision problems.
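To make the reduction of search problems to decision problems concrete, here is a sketch of the classical Cook-reduction that finds a satisfying assignment for a CNF formula by fixing one variable at a time. The encoding (clauses as lists of signed integers) and the brute-force `decide_sat`, which merely stands in for the decision oracle, are our assumptions for this sketch.

```python
from itertools import product

def decide_sat(clauses, fixed):
    """Stand-in decision oracle: is the CNF satisfiable given the
    partial assignment `fixed`?  (Brute force here; the reduction
    only relies on its yes/no answers.)"""
    vars_ = sorted({abs(l) for c in clauses for l in c} - set(fixed))
    for bits in product([False, True], repeat=len(vars_)):
        a = {**fixed, **dict(zip(vars_, bits))}
        if all(any(a[abs(l)] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def find_assignment(clauses):
    """Cook-reduction: solve the *search* problem with polynomially
    many oracle calls to the *decision* problem."""
    if not decide_sat(clauses, {}):
        return None
    fixed = {}
    for v in sorted({abs(l) for c in clauses for l in c}):
        fixed[v] = True
        if not decide_sat(clauses, fixed):
            fixed[v] = False
        # invariant: the formula remains satisfiable under `fixed`
    return fixed
```

With n variables, the reduction makes n + 1 oracle calls, so it is efficient whenever the oracle is.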
Organization. We start by presenting the general notion of a polynomial-time reduction and important special cases of it (see Section 3.1). In Section 3.2, we present the notion of optimization problems and reduce such problems to corresponding search problems. […]
In this chapter we discuss three relatively advanced topics. The first topic, which was alluded to in previous chapters, is the notion of promise problems (Section 5.1). Next, we present an optimal algorithm for solving (“candid”) NP-search problems (Section 5.2). Finally, in Section 5.3, we briefly discuss the class (denoted coNP) of sets that are complements of sets in NP.
Teaching Notes
Typically, the foregoing topics are not mentioned in a basic course on complexity. Still, we believe that these topics deserve at least a mention in such a course. This holds especially with respect to the notion of promise problems. Furthermore, depending on time constraints, we recommend presenting all three topics in class (at least at an overview level).
We comment that the notion of promise problems was originally introduced in the context of decision problems, and is typically used only in that context. However, given the importance that we attach to an explicit study of search problems, we extend the formulation of promise problems to search problems as well. In that context, it is also natural to introduce the notion of a “candid search problem” (see Definition 5.2).
Promise Problems
Promise problems are natural generalizations of search and decision problems. These generalizations are obtained by explicitly considering a set of legitimate instances (rather than considering any string as a legitimate instance). As noted previously, this generalization provides a more adequate formulation of natural computational problems (and, indeed, this formulation is used in all informal discussions).
Although we view specific (natural) computational problems as secondary to (natural) complexity classes, we do use the former for clarification and illustration of the latter. This appendix provides definitions of such computational problems, grouped according to the type of objects to which they refer (i.e., graphs and Boolean formulae).
We start by addressing the central issue of the representation of the various objects that are referred to in the aforementioned computational problems. The general principle is that elements of all sets are “compactly” represented as binary strings (without much redundancy). For example, the elements of a finite set S (e.g., the set of vertices in a graph or the set of variables appearing in a Boolean formula) will be represented as binary strings of length ⌈log₂ |S|⌉.
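A possible rendering of this convention in code (a sketch; the helper name `encode` is ours): each element of a finite set is assigned a distinct binary string of length ⌈log₂ |S|⌉.

```python
from math import ceil, log2

def encode(elements):
    """Map each element of a finite set to a distinct binary string of
    length ceil(log2 |S|) -- a compact, essentially redundancy-free
    representation."""
    width = max(1, ceil(log2(len(elements))))
    return {e: format(i, f'0{width}b')
            for i, e in enumerate(sorted(elements))}

codes = encode({'u', 'v', 'w', 'x', 'y'})   # |S| = 5, so 3 bits each
```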
Graphs
Graph theory has long become recognized as one of the more useful mathematical subjects for the computer science student to master. The approach which is natural in computer science is the algorithmic one; our interest is not so much in existence proofs or enumeration techniques, as it is in finding efficient algorithms for solving relevant problems, or alternatively showing evidence that no such algorithms exist. Although algorithmic graph theory was started by Euler, if not earlier, its development in the last ten years has been dramatic and revolutionary.
Shimon Even, Graph Algorithms [8]
A simple graph G=(V,E) consists of a finite set of vertices V and a finite set of edges E, where each edge is an unordered pair of vertices; that is, E ⊆ {{u, v} : u, v∈V ∧ u≠v}.
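The definition can be checked mechanically; a small sketch (names ours), with edges represented as frozensets so that unordered pairs compare correctly and self-loops collapse to singletons:

```python
def is_simple_graph(V, E):
    """Check the definition: every edge is an unordered pair of two
    distinct vertices of V (so self-loops, which collapse to
    singletons, are rejected)."""
    V = set(V)
    return all(isinstance(e, frozenset) and len(e) == 2 and e <= V
               for e in E)
```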
According to a common opinion, the most important aspect of a scientific work is the technical result that it achieves, whereas explanations and motivations are merely redundancy introduced for the sake of “error correction” and/or comfort. It is further believed that, as with a work of art, the interpretation of the work should be left to the reader.
The author strongly disagrees with the aforementioned opinions, and argues that there is a fundamental difference between art and science, and that this difference refers exactly to the meaning of a piece of work. Science is concerned with meaning (and not with form), and in its quest for truth and/or understanding, science follows philosophy (and not art). The author holds the opinion that the most important aspects of a scientific work are the intuitive question that it addresses, the reason that it addresses this question, the way it phrases the question, the approach that underlies its answer, and the ideas that are embedded in the answer. Following this view, it is important to communicate these aspects of the work.
The foregoing issues are even more acute when it comes to Complexity Theory, first because conceptual considerations seem to play an even more central role in Complexity Theory than in other scientific fields, and second (and even more importantly) because Complexity Theory is extremely rich in conceptual content. Thus, communicating this content is of primary importance, and failing to do so misses the most important aspects of Complexity Theory.
Overview: We assume that the reader is familiar with computing devices but may associate the notion of computation with specific incarnations of it. Our first goal is to promote viewing computation as a general phenomenon, which may capture both artificial and natural processes. Loosely speaking, a computation is a process that modifies a relatively large environment via repeated applications of a simple and predetermined rule. Although each application of the rule has a very limited effect, the effect of many applications of the rule may be very complex.
We are interested in the transformation of the environment effected by the computational process (or computation), where the computation rule is designed to achieve a desired effect. Typically, the initial environment to which the computation is applied encodes an input string, and the end environment (i.e., at termination of the computation) encodes an output string. Thus, the computation defines a mapping from inputs to outputs, and such a mapping can be viewed as solving a search problem (i.e., given an instance x find a solution y that relates to x in some predetermined way) or a decision problem (i.e., given an instance x determine whether or not x has some predetermined property).
Indeed, our focus will be on solving computational tasks (mostly search and decision problems), where a computational task refers to an infinite set of instances such that each instance is associated with a set of valid solutions.
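A toy instance of this view of computation (entirely illustrative, not from the text): an environment consisting of a tape and a head position, repeatedly transformed by a simple local rule until the rule signals termination, together computing the binary successor function.

```python
def step(env):
    """One application of a simple, local rule: rewrite the scanned
    symbol and move the head (a toy machine for binary increment)."""
    tape, head = env
    if head < 0:                      # ran off the left end: carry out
        return ['1'] + tape, None
    if tape[head] == '1':             # flip a trailing 1 to 0, move left
        return tape[:head] + ['0'] + tape[head + 1:], head - 1
    return tape[:head] + ['1'] + tape[head + 1:], None   # flip 0 to 1, halt

def compute(bits):
    """Repeatedly apply `step` until the rule halts; the final
    environment encodes the output string."""
    env = (list(bits), len(bits) - 1)
    while env[1] is not None:
        env = step(env)
    return ''.join(env[0])
```

Each application of the rule touches a single tape cell, yet the overall mapping from initial to final environment is the successor function on binary numerals.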
Overview: Our daily experience is that it is harder to solve problems than it is to check the correctness of solutions to these problems. Is this experience merely a coincidence or does it represent a fundamental fact of life (or a property of the world)? This is the essence of the P versus NP Question, where P represents search problems that are efficiently solvable and NP represents search problems for which solutions can be efficiently checked.
Another natural question captured by the P versus NP Question is whether proving theorems is harder than verifying the validity of these proofs. In other words, the question is whether deciding membership in a set is harder than being convinced of this membership by an adequate proof. In this case, P represents decision problems that are efficiently solvable, whereas NP represents sets that have efficiently verifiable proofs of membership.
These two formulations of the P versus NP Question are indeed equivalent, and the common belief is that P is different from NP. That is, we believe that solving search problems is harder than checking the correctness of solutions for them and that finding proofs is harder than verifying their validity.
Organization. The two formulations of the P versus NP Question are rigorously presented and discussed in Sections 2.2 and 2.3, respectively. The equivalence of these formulations is shown in Section 2.4, and the common belief that P is different from NP is further discussed in Section 2.7. We start by discussing the notion of efficient computation (see Section 2.1).
The quest for efficiency is ancient and universal, as time and other resources are always in shortage. Thus, the question of which tasks can be performed efficiently is central to the human experience.
A key step toward the systematic study of the aforementioned question is a rigorous definition of the notion of a task and of procedures for solving tasks. These definitions were provided by computability theory, which emerged in the 1930s. This theory focuses on computational tasks, considers automated procedures (i.e., computing devices and algorithms) that may solve such tasks, and studies the class of solvable tasks.
In focusing attention on computational tasks and algorithms, computability theory has set the stage for the study of the computational resources (like time) that are required by such algorithms. When this study focuses on the resources that are necessary for any algorithm that solves a particular task (or a task of a particular type), it is viewed as belonging to the theory of Computational Complexity (also known as Complexity Theory). In contrast, when the focus is on the design and analysis of specific algorithms (rather than on the intrinsic complexity of the task), the study is viewed as belonging to a related area that may be called Algorithmic Design and Analysis. Furthermore, Algorithmic Design and Analysis tends to be sub-divided according to the domain of mathematics, science, and engineering in which the computational tasks arise.
In r-neighbour bootstrap percolation on a graph G, a set of initially infected vertices A ⊂ V(G) is chosen independently at random, with density p, and new vertices are subsequently infected if they have at least r infected neighbours. The set A is said to percolate if eventually all vertices are infected. Our aim is to understand this process on the grid, [n]^d, for arbitrary functions n = n(t), d = d(t) and r = r(t), as t → ∞. The main question is to determine the critical probability p_c([n]^d, r) at which percolation becomes likely, and to give bounds on the size of the critical window. In this paper we study this problem when r = 2, for all functions n and d satisfying d ≫ log n.
The bootstrap process has been extensively studied on [n]^d when d is a fixed constant and 2 ⩽ r ⩽ d, and in these cases p_c([n]^d, r) has recently been determined up to a factor of 1 + o(1) as n → ∞. At the other end of the scale, Balogh and Bollobás determined p_c([2]^d, 2) up to a constant factor, and Balogh, Bollobás and Morris determined p_c([n]^d, d) asymptotically if d ≥ (log log n)^{2+ε}, and gave much sharper bounds for the hypercube.
Here we prove the following result. Let λ be the smallest positive root of the equation […], so λ ≈ 1.166. Then […] if d is sufficiently large, and moreover […] as d → ∞, for every function n = n(d) with d ≫ log n.
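As a side illustration (not part of the paper), the 2-neighbour process is straightforward to simulate; the sketch below runs it on the two-dimensional grid [n]^2 with nearest-neighbour adjacency, which is an assumption of this example rather than the paper's high-dimensional setting.

```python
from itertools import product

def percolates(n, infected, r=2):
    """Run r-neighbour bootstrap percolation on the grid [n]^2: a
    healthy site becomes infected once at least r of its (up to 4)
    nearest neighbours are infected.  Returns True iff every site
    is eventually infected."""
    infected = set(infected)
    changed = True
    while changed:
        changed = False
        for v in product(range(n), repeat=2):
            if v in infected:
                continue
            nbrs = sum(((v[0] + dx, v[1] + dy) in infected)
                       for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))
            if nbrs >= r:
                infected.add(v)
                changed = True
    return len(infected) == n * n
```

For example, infecting the main diagonal of [n]^2 suffices to percolate with r = 2, which is the classical starting observation for this model in two dimensions.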
Gilles Kahn was one of the most influential figures in the development of computer science and information technology, not only in Europe but throughout the world. This volume of articles by several leading computer scientists serves as a fitting memorial to Kahn's achievements and reflects the broad range of subjects to which he contributed through his scientific research and his work at INRIA, the French National Institute for Research in Computer Science and Control. The authors also reflect upon the future of computing: how it will develop as a subject in itself and how it will affect other disciplines, from biology and medical informatics, to web and networks in general. Its breadth of coverage, topicality, originality and depth of contribution, make this book a stimulating read for all those interested in the future development of information technology.
This book offers a new, algebraic, approach to set theory. The authors introduce a particular kind of algebra, the Zermelo-Fraenkel algebras, which arise from the familiar axioms of Zermelo-Fraenkel set theory. Furthermore the authors explicitly construct such algebras using the theory of bisimulations. Their approach is completely constructive, and contains both intuitionistic set theory and topos theory. In particular it provides a uniform description of various constructions of the cumulative hierarchy of sets in forcing models, sheaf models and realisability models. Graduate students and researchers in mathematical logic, category theory and computer science should find this book of great interest, and it should be accessible to anyone with some background in categorical logic.
The line graph LG of a directed graph G has a vertex for every edge of G and an edge for every path of length 2 in G. In 1967, Knuth used the Matrix Tree Theorem to prove a formula for the number of spanning trees of LG, and he asked for a bijective proof [6]. In this paper, we give a bijective proof of Knuth's formula. As a result of this proof, we find a bijection between binary de Bruijn sequences of degree n and binary sequences of length 2^{n−1}. Finally, we determine the critical groups of all the Kautz graphs and de Bruijn graphs, generalizing a result of Levine [7].
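As a side note (this is not the paper's bijection), binary de Bruijn sequences are easy to construct greedily: the classical “prefer-one” rule yields a de Bruijn cycle of each order, in which every binary word of length n occurs exactly once cyclically.

```python
def de_bruijn(n):
    """Greedy 'prefer-one' construction of a binary de Bruijn cycle of
    order n: starting from n zeros, append a 1 whenever the resulting
    length-n window is new, else a 0, until neither is possible."""
    seq = [0] * n
    seen = {tuple(seq)}
    while True:
        for bit in (1, 0):
            window = tuple(seq[-(n - 1):] + [bit]) if n > 1 else (bit,)
            if window not in seen:
                seen.add(window)
                seq.append(bit)
                break
        else:
            break
    return seq[:2 ** n]   # drop the wrap-around suffix
```

For n = 3 this produces 00011101, whose eight cyclic windows of length 3 are exactly the eight binary words of that length.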