I owe this almost atrocious variety to an institution which other republics do not know or which operates in them in an imperfect and secret manner: the lottery.
Jorge Luis Borges, “The Lottery in Babylon”
So far, our approach to computing devices was somewhat conservative: We thought of them as executing a deterministic rule. A more liberal and quite realistic approach, which is pursued in this chapter, considers computing devices that use a probabilistic rule. This relaxation has an immediate impact on the notion of efficient computation, which is consequently associated with probabilistic polynomial-time computations rather than with deterministic (polynomial-time) ones. We stress that the association of efficient computation with probabilistic polynomial-time computation makes sense provided that the failure probability of the latter is negligible (which means that it may be safely ignored).
The quantitative nature of the failure probability of probabilistic algorithms provides one connection between probabilistic algorithms and counting problems. The latter are indeed a new type of computational problem, and our focus is on counting efficiently recognizable objects (e.g., the NP-witnesses for a given instance of a set in NP). Randomized procedures turn out to play an important role in the study of such counting problems.
Summary: Focusing on probabilistic polynomial-time algorithms, we consider various types of probabilistic failure of such algorithms (e.g., actual error versus failure to produce output). This leads to the formulation of complexity classes such as BPP, RP, and ZPP. […]
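As a concrete illustration of one-sided error and its reduction by repetition (in the spirit of RP and of the error reduction mentioned above), here is a minimal Python sketch of the standard Miller–Rabin compositeness test; the test, its per-round error bound of 1/4, and the amplification by independent repetitions are standard facts, not taken from the text above.

```python
import random

def miller_rabin_trial(n: int) -> bool:
    """One round of the Miller-Rabin test. Returns False only when a
    witness of compositeness is found; True means 'probably prime'
    (for composite n, the error probability is at most 1/4 per round)."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    # Write n - 1 = 2^r * d with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    a = random.randrange(2, n - 1)
    x = pow(a, d, n)
    if x in (1, n - 1):
        return True
    for _ in range(r - 1):
        x = pow(x, 2, n)
        if x == n - 1:
            return True
    return False

def is_probable_prime(n: int, k: int = 20) -> bool:
    """Amplification: k independent rounds push the one-sided error
    down to at most 4**(-k); a single 'composite' verdict is conclusive."""
    return all(miller_rabin_trial(n) for _ in range(k))
```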
It is possible to build a cabin with no foundations, but not a lasting building.
Eng. Isidor Goldreich (1906–95)
Summary: Cryptography is concerned with the construction of computing systems that withstand any abuse: Such a system is constructed so as to maintain a desired functionality, even under malicious attempts aimed at making it deviate from this functionality.
This appendix is aimed at presenting the foundations of cryptography, which are the paradigms, approaches, and techniques used to conceptualize, define, and provide solutions to natural security concerns. It presents some of these conceptual tools as well as some of the fundamental results obtained using them. The emphasis is on the clarification of fundamental concepts, and on demonstrating the feasibility of solving several central cryptographic problems. The presentation assumes basic knowledge of algorithms, probability theory, and complexity theory, but nothing beyond this.
The appendix augments the treatment of one-way functions, pseudorandom generators, and zero-knowledge proofs, given in Sections 7.1, 8.2, and 9.2, respectively. Using these basic primitives, the appendix provides a treatment of basic cryptographic applications such as encryption, signatures, and general cryptographic protocols.
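To make the role of pseudorandom generators in encryption concrete, here is a toy Python sketch of a "pseudorandom one-time pad": a short key is expanded into a long pad and XORed with the message. The hash-based generator and all names here are purely illustrative stand-ins (the theory constructs pseudorandom generators from one-way functions), and a key must never be reused under this scheme.

```python
import hashlib

def prg(seed: bytes, nbytes: int) -> bytes:
    """Illustrative pseudorandom generator: expand a short seed by
    hashing seed||counter. A stand-in only; the theory builds PRGs
    from any one-way function."""
    out, counter = b"", 0
    while len(out) < nbytes:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:nbytes]

def xor_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Pseudorandom one-time pad: XOR the message with PRG(key).
    Decryption is the same operation, since XOR is an involution."""
    pad = prg(key, len(plaintext))
    return bytes(p ^ q for p, q in zip(plaintext, pad))

msg = b"attack at dawn"
key = b"short secret seed"
ct = xor_encrypt(key, msg)
assert xor_encrypt(key, ct) == msg  # decrypt = encrypt
```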
Introduction and Preliminaries
The rigorous treatment and vast expansion of cryptography is one of the major achievements of theoretical computer science. In particular, classical notions such as secure encryption and unforgeable signatures were placed on sound grounds, and new (unexpected) directions and connections were uncovered.
Cast a cold eye
On life, on death.
Horseman, pass by!
W. B. Yeats, “Under Ben Bulben”
In this chapter we consider variations on the complexity classes P and NP. We refer specifically to the non-uniform version of P, and to the Polynomial-time Hierarchy (which extends NP). These variations are motivated by relatively technical considerations; still, the resulting classes are referred to quite frequently in the literature.
Summary: Non-uniform polynomial-time (P/poly) captures efficient computations that are carried out by devices that can each handle only inputs of a specific length. The basic formalism ignores the complexity of constructing such devices (i.e., a uniformity condition). A finer formalism that allows for quantifying the amount of non-uniformity refers to so-called “machines that take advice.”
The Polynomial-time Hierarchy (PH) generalizes NP by considering statements expressed by quantified Boolean formulae with a fixed number of alternations of existential and universal quantifiers. It is widely believed that each quantifier alternation adds expressive power to the class of such formulae.
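As a concrete illustration of quantifier alternation, the following Python sketch evaluates a Σ₂ statement ∃x∀y φ(x, y) by brute force; the function name and the toy formula are mine, for illustration only.

```python
from itertools import product

def holds_sigma2(phi, n: int, m: int) -> bool:
    """Brute-force evaluation of the Sigma_2 statement
    'there exists x in {0,1}^n such that for all y in {0,1}^m: phi(x, y)'.
    The running time grows like 2**(n + m): each quantifier
    alternation multiplies the search space."""
    return any(
        all(phi(x, y) for y in product((0, 1), repeat=m))
        for x in product((0, 1), repeat=n)
    )

# Toy formula phi(x, y) = x1 OR (NOT y1): choosing x = (1,) works for every y.
print(holds_sigma2(lambda x, y: bool(x[0] or not y[0]), n=1, m=1))  # True
```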
An interesting result that refers to both classes asserts that if NP is contained in P/poly then the Polynomial-time Hierarchy collapses to its second level. This result is commonly interpreted as supporting the common belief that non-uniformity is irrelevant to the P-vs-NP Question; that is, although P/poly extends beyond the class P, it is believed that P/poly does not contain NP.
The glory attached to the creativity involved in finding proofs makes us forget that it is the less glorified process of verification that gives proofs their value. Conceptually speaking, proofs are secondary to the verification process, whereas technically speaking, proof systems are defined in terms of their verification procedures.
The notion of a verification procedure presumes the notion of computation and furthermore the notion of efficient computation. This implicit stipulation is made explicit in the definition of NP, where efficient computation is associated with deterministic polynomial-time algorithms. However, as argued next, we can gain a lot if we are willing to take a somewhat non-traditional step and allow probabilistic verification procedures.
In this chapter, we shall study three types of probabilistic proof systems, called interactive proofs, zero-knowledge proofs, and probabilistically checkable proofs. In each of these three cases, we shall present fascinating results that cannot be obtained when considering the analogous deterministic proof systems.
Summary: The association of efficient procedures with deterministic polynomial-time procedures is the basis for viewing NP-proof systems as the canonical formulation of proof systems (with efficient verification procedures). Allowing probabilistic verification procedures and, moreover, ruling by statistical evidence gives rise to various types of probabilistic proof systems. Indeed, these probabilistic proof systems carry a probability of error (which is explicitly bounded and can be reduced by successive applications of the proof system), yet they offer various advantages over the traditional (deterministic and errorless) proof systems. […]
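A classic example behind this summary is the interactive proof for Graph Non-Isomorphism; the following Python sketch (the encoding and the function names are mine) shows the verifier's random challenge and how repetition drives the soundness error down to 2^(-rounds).

```python
import random
from itertools import permutations

def permuted(graph, perm):
    """Relabel the vertices of a graph, given as a frozenset of
    frozenset edges, according to the permutation list `perm`."""
    return frozenset(frozenset(perm[v] for v in e) for e in graph)

def unbounded_prover(G0, G1, H, n):
    """The prover is computationally unbounded: it decides by
    exhaustive search which input graph the challenge H came from."""
    if any(permuted(G0, perm) == H for perm in permutations(range(n))):
        return 0
    return 1

def gni_verifier(G0, G1, n, rounds=20):
    """Interactive proof for Graph Non-Isomorphism: the verifier sends
    a random isomorphic copy of a randomly chosen input graph. If G0
    and G1 are isomorphic, the challenge reveals nothing about the coin
    i, so any prover errs with probability 1/2 per round, and the
    soundness error falls to 2**(-rounds)."""
    for _ in range(rounds):
        i = random.randrange(2)
        perm = list(range(n))
        random.shuffle(perm)
        H = permuted((G0, G1)[i], perm)
        if unbounded_prover(G0, G1, H, n) != i:
            return False  # prover caught: reject
    return True  # accept: statistically convinced the graphs differ

# Triangle vs. path on 3 vertices (non-isomorphic).
G0 = frozenset({frozenset({0, 1}), frozenset({1, 2}), frozenset({0, 2})})
G1 = frozenset({frozenset({0, 1}), frozenset({1, 2})})
print(gni_verifier(G0, G1, n=3))  # True
```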
The quest for efficiency is ancient and universal, as time and other resources are always in short supply. Thus, the question of which tasks can be performed efficiently is central to the human experience.
A key step toward the systematic study of the aforementioned question is a rigorous definition of the notion of a task and of procedures for solving tasks. These definitions were provided by computability theory, which emerged in the 1930s. This theory focuses on computational tasks, and considers automated procedures (i.e., computing devices and algorithms) that may solve such tasks.
In focusing attention on computational tasks and algorithms, computability theory has set the stage for the study of the computational resources (like time) that are required by such algorithms. When this study focuses on the resources that are necessary for any algorithm that solves a particular task (or a task of a particular type), the study becomes part of the theory of Computational Complexity (also known as Complexity Theory).
Complexity Theory is a central field of the theoretical foundations of computer science. It is concerned with the study of the intrinsic complexity of computational tasks. That is, a typical complexity theoretic study refers to the computational resources required to solve a computational task (or a class of such tasks), rather than referring to a specific algorithm or an algorithmic schema. Actually, research in Complexity Theory tends to start with and focus on the computational resources themselves, and addresses the effect of limiting these resources on the class of tasks that can be solved.
Alas, Philosophy, Medicine, Law, and unfortunately also Theology, have I studied in detail, and still remained a fool, not a bit wiser than before. Magister and even Doctor am I called, and for a decade am I sick and tired of pulling my pupils by the nose and understanding that we can know nothing.
J. W. Goethe, Faust, lines 354–64
Summary: This appendix briefly surveys some attempts at proving lower bounds on the complexity of natural computational problems. In the first part, devoted to circuit complexity, we describe lower bounds on the size of (restricted) circuits that solve natural computational problems. This can be viewed as a program whose long-term goal is proving that P ≠ NP. In the second part, devoted to proof complexity, we describe lower bounds on the length of (restricted) propositional proofs of natural tautologies. This can be viewed as a program whose long-term goal is proving that NP ≠ coNP.
We comment that while the activity in these areas is aimed toward developing proof techniques that may be applied to the resolution of the “big problems” (such as P versus NP), the current achievements (though very impressive) seem very far from reaching this goal. Current crown-jewel achievements in these areas take the form of tight (or strong) lower bounds on the complexity of computing (resp., proving) “relatively simple” functions (resp., claims) in restricted models of computation (resp., proof systems).
Forasmuch as many have taken in hand to set forth in order a declaration of those things which are most surely believed among us; Even as they delivered them unto us, who from the beginning were eyewitnesses, and ministers of the word; It seemed good to me also, having had perfect understanding of all things from the very first, to write unto thee in order, most excellent Theophilus; That thou mightest know the certainty of those things, wherein thou hast been instructed.
Luke, 1:1–4
The main focus of this chapter is the P-vs-NP Question and the theory of NP-completeness. Additional topics covered in this chapter include the general notion of a polynomial-time reduction (with a special emphasis on self-reducibility), the existence of problems in NP that are neither NP-complete nor in P, the class coNP, optimal search algorithms, and promise problems.
Summary: Loosely speaking, the P-vs-NP Question refers to search problems for which the correctness of solutions can be efficiently checked (i.e., if there is an efficient algorithm that given a solution to a given instance determines whether or not the solution is correct). Such search problems correspond to the class NP, and the question is whether or not all these search problems can be solved efficiently (i.e., if there is an efficient algorithm that given an instance finds a correct solution).
Thus, the P-vs-NP Question can be phrased as asking whether or not finding solutions is harder than checking the correctness of solutions. […]
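The contrast between checking and finding can be made concrete with SAT. In the sketch below (the clause encoding and names are illustrative), verifying a candidate assignment takes time linear in the formula, while the obvious search tries all 2^n assignments; whether exhaustive search can always be avoided is exactly the P-vs-NP Question.

```python
from itertools import product

# A CNF formula is a list of clauses; a literal is (variable_index, negated?).
# Example: [(0, False), (1, True)] encodes the clause (x0 OR NOT x1).

def check(cnf, assignment) -> bool:
    """Verification: linear in the formula size. This easy direction
    is what places SAT in NP."""
    return all(
        any(assignment[var] != neg for var, neg in clause)
        for clause in cnf
    )

def find(cnf, n_vars):
    """Search: the obvious algorithm tries all 2**n_vars assignments."""
    for bits in product((False, True), repeat=n_vars):
        if check(cnf, bits):
            return bits
    return None

cnf = [[(0, False), (1, False)], [(0, True)], [(1, False)]]
print(find(cnf, 2))  # (False, True) satisfies (x0 or x1) and (not x0) and x1
```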
We show that any positive integer is the least period of a factor of the Thue–Morse word. We also characterize the set of least periods of factors of a Sturmian word. In particular, the corresponding set for the Fibonacci word is the set of Fibonacci numbers. As a by-product of our results, we give several new proofs and tightenings of well-known properties of Sturmian words.
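The first claim can be checked empirically on a prefix of the Thue–Morse word; the following Python sketch (a quick experiment of mine, not the paper's proof) computes the least periods of all short factors.

```python
def thue_morse(n: int) -> str:
    """First n letters of the Thue-Morse word: t[i] is the parity of
    the number of 1-bits in the binary expansion of i."""
    return "".join(str(bin(i).count("1") % 2) for i in range(n))

def least_period(w: str) -> int:
    """Smallest p >= 1 such that w[i] == w[i + p] for all valid i."""
    return next(p for p in range(1, len(w) + 1)
                if all(w[i] == w[i + p] for i in range(len(w) - p)))

def least_periods(w: str, max_len: int) -> set:
    """Least periods of all factors of w of length at most max_len."""
    return {least_period(w[i:i + L])
            for L in range(1, max_len + 1)
            for i in range(len(w) - L + 1)}

# By the paper's result every positive integer appears eventually;
# the small periods already show up in a modest prefix.
print(sorted(least_periods(thue_morse(200), 40)))
```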
We study the palindromic complexity of the infinite words u_β, the fixed points of the substitution over a binary alphabet, φ(0) = 0^a 1, φ(1) = 0^b 1, with a − 1 ≥ b ≥ 1, which are canonically associated with quadratic non-simple Parry numbers β.
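For a concrete feel for these fixed points, the sketch below (an illustration of mine, not the paper's method) generates a prefix of u_β for the sample parameters a = 2, b = 1 and counts distinct palindromic factors by length.

```python
def beta_fixed_point(a: int, b: int, length: int) -> str:
    """Prefix of the fixed point of phi(0) = 0^a 1, phi(1) = 0^b 1.
    Since a >= 1, phi(0) starts with 0, so iterating phi from '0'
    converges to a fixed infinite word."""
    w = "0"
    while len(w) < length:
        w = "".join("0" * a + "1" if c == "0" else "0" * b + "1" for c in w)
    return w[:length]

def palindromic_complexity(w: str, max_len: int) -> dict:
    """Number of distinct palindromic factors of w for each length."""
    return {L: len({w[i:i + L] for i in range(len(w) - L + 1)
                    if w[i:i + L] == w[i:i + L][::-1]})
            for L in range(1, max_len + 1)}

u = beta_fixed_point(a=2, b=1, length=500)  # a - 1 >= b >= 1 holds
print(palindromic_complexity(u, 10))
```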
A famous result of Freĭman describes the sets A of integers for which |A+A| ≤ K|A|. In this short note we address the analogous question for subsets of vector spaces over 𝔽₂. Specifically, we show that if A is a subset of a vector space over 𝔽₂ with |A+A| ≤ K|A|, then A is contained in a coset of size at most 2^{O(K^{3/2} log K)}|A|, which improves upon the previous best, due to Green and Ruzsa, of 2^{O(K^2)}|A|. A simple example shows that the size may need to be at least 2^{Ω(K)}|A|.
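The doubling constant K is easy to experiment with when vectors over 𝔽₂ are encoded as integers and addition is bitwise XOR; the following sketch (illustrative names of my choosing) shows that a subspace has K = 1 and that perturbing it inflates the sumset.

```python
def doubling_constant(A) -> float:
    """K = |A+A| / |A| for A a subset of F_2^n, with vectors encoded
    as integers and vector addition being bitwise XOR."""
    sumset = {x ^ y for x in A for y in A}
    return len(sumset) / len(A)

# A subspace is closed under addition, so K = 1; one extra point
# outside it blows the sumset up.
subspace = {0b000, 0b001, 0b010, 0b011}        # span of 001 and 010
print(doubling_constant(subspace))              # 1.0
print(doubling_constant(subspace | {0b100}))    # 1.6
```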
We analyse the weighted height of random tries built from independent strings of i.i.d. symbols on the finite alphabet {1, …, d}. The edges receive random weights whose distribution depends upon the number of strings that visit that edge. Such a model covers the hybrid tries of de la Briandais and the ternary search trees (TSTs) of Bentley and Sedgewick, where the search time for a string can be decomposed as a sum of processing times for each symbol in the string. Our weighted trie model also permits one to study maximal path imbalance. In all cases, the weighted height is shown to be asymptotic to c log n in probability, where c is determined by the behaviour of the core of the trie (the part where all nodes have a full set of children) and the fringe of the trie (the part where nodes have only one child and form spaghetti-like trees). It can be found by maximizing a function that is related to the Cramér exponent of the distribution of the edge weights.
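A quick simulation conveys the logarithmic height; the sketch below (names mine) treats the plain, unweighted trie on uniform binary strings, a special case of the model above, where the height concentrates near 2 log₂ n.

```python
import random

def lcp(a: str, b: str) -> int:
    """Length of the longest common prefix of two strings."""
    i = 0
    while i < min(len(a), len(b)) and a[i] == b[i]:
        i += 1
    return i

def trie_height(strings) -> int:
    """Depth of the deepest leaf in the trie storing the strings: each
    string is inserted one symbol past its longest common prefix with
    any other string (the strings are assumed distinct)."""
    return max(1 + max(lcp(s, t) for j, t in enumerate(strings) if j != i)
               for i, s in enumerate(strings))

# 1000 uniform binary strings: the height is near 2*log2(1000) ~ 20.
strs = ["".join(random.choice("01") for _ in range(64)) for _ in range(1000)]
print(trie_height(strs))
```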
We formulate and give partial answers to several combinatorial problems on volumes of simplices determined by n points in 3-space, and in general in d dimensions.
(i) The number of tetrahedra of minimum (non-zero) volume spanned by n points in ℝ³ is O(n³), and there are point sets for which this number is Ω(n³). We also present an O(n³)-time algorithm for reporting all tetrahedra of minimum non-zero volume, and thereby extend an algorithm of Edelsbrunner, O'Rourke and Seidel. In general, for every 1 ≤ k ≤ d, the maximum number of k-dimensional simplices of minimum (non-zero) volume spanned by n points in ℝ^d is Θ(n^k).
(ii) The number of unit-volume tetrahedra determined by n points in ℝ³ is O(n^{7/2}), and there are point sets for which this number is Ω(n³ log log n).
(iii) For every d, the minimum number of distinct volumes of all full-dimensional simplices determined by n points in ℝ^d, not all on a hyperplane, is Θ(n).
We analyse classes of planar graphs with respect to various properties such as polynomial-time solvability of the dominating set problem or boundedness of the tree-width. A helpful tool to address this question is the notion of boundary classes. The main result of the paper is that for many important properties there are exactly two boundary classes of planar graphs.
Let G be a graph with n vertices, and let k be an integer dividing n. G is said to be strongly k-colourable if, for every partition of V(G) into disjoint sets V_1 ∪ ··· ∪ V_r, all of size exactly k, there exists a proper vertex k-colouring of G with each colour appearing exactly once in each V_i. When k does not divide n, G is defined to be strongly k-colourable if the graph obtained by adding isolated vertices (so that k divides the number of vertices) is strongly k-colourable. The strong chromatic number of G is the minimum k for which G is strongly k-colourable. In this paper, we study the behaviour of this parameter for the random graph G_{n,p}. In the dense case, when p ≫ n^{−1/3}, we prove that the strong chromatic number is a.s. concentrated on the single value Δ + 1, where Δ is the maximum degree of the graph. We also obtain several weaker results for sparse random graphs.
Semi-graphoids are combinatorial structures that arise in statistical learning theory. They are equivalent to convex rank tests and to polyhedral fans that coarsen the reflection arrangement of the symmetric group Sn. In this paper we resolve two problems on semi-graphoids posed in Studený's book (2005), and we answer a related question of Postnikov, Reiner and Williams on generalized permutohedra. We also study the semigroup and the toric ideal associated with semi-graphoids.
A proper vertex colouring of a graph is equitable if the sizes of colour classes differ by at most one. We present a new shorter proof of the celebrated Hajnal–Szemerédi theorem: for every positive integer r, every graph with maximum degree at most r has an equitable colouring with r+1 colours. The proof yields a polynomial time algorithm for such colourings.
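The definition is easy to operationalize; the following Python sketch (names of my choosing) checks that a given colouring is proper and that its colour classes are equitable.

```python
from collections import Counter

def is_equitable_proper(adj, colour) -> bool:
    """Check that `colour` (a dict vertex -> colour) is a proper vertex
    colouring of the graph `adj` (a dict vertex -> neighbours) whose
    colour classes differ in size by at most one."""
    proper = all(colour[u] != colour[v] for u in adj for v in adj[u])
    sizes = Counter(colour.values()).values()
    return proper and max(sizes) - min(sizes) <= 1

# A 4-cycle, 2-coloured: proper, and the two classes have equal size.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(is_equitable_proper(adj, {0: "a", 1: "b", 2: "a", 3: "b"}))  # True
```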
We prove tail estimates for variables of the form ∑_i f(X_i), where (X_i)_i is a sequence of states drawn from a reversible Markov chain or, equivalently, from a random walk on an undirected graph. The estimates are in terms of the range of the function f, its variance, and the spectrum of the graph. The purpose of our estimates is to determine the number of chain/walk samples required for approximating the expectation of a distribution on the vertices of a graph, especially an expander. The estimates must therefore provide information for a fixed number of samples (as in Gillman's [4]), rather than just asymptotic information. Our proofs are more elementary than other proofs in the literature, and our results are sharper. We obtain Bernstein- and Bennett-type inequalities, as well as an inequality for sub-Gaussian variables.
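In the spirit of the stated purpose, the following Python sketch (a toy of mine, with illustrative names) estimates the stationary expectation of f by averaging along a random walk on a small odd cycle; the abstract's bounds govern how many steps such an estimate needs.

```python
import random

def walk_average(adj, f, start, steps):
    """Average f along a random walk on an undirected graph given as an
    adjacency dict; for a connected, non-bipartite, regular graph this
    converges to the stationary (uniform) expectation of f."""
    v, total = start, 0.0
    for _ in range(steps):
        v = random.choice(adj[v])
        total += f(v)
    return total / steps

# Odd cycle on 9 vertices (aperiodic); f = indicator of even-numbered states.
adj = {v: [(v - 1) % 9, (v + 1) % 9] for v in range(9)}
print(walk_average(adj, lambda v: v % 2 == 0, start=0, steps=100_000))  # ~5/9
```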