In what models of random graphs is it true that almost every graph is Hamiltonian? In particular, how large does M(n) have to be to ensure that a.e. G_M is Hamiltonian? This is one of the important questions Erdős and Rényi (1961a) raised in their fundamental paper on the evolution of random graphs. After several preliminary results due to Palásti (1969a, b, 1971a, b), Perepelica (1970), Moon (1972d), Wright (1973a, 1974b, 1975b, 1977b), and Komlós and Szemerédi (1975), a breakthrough was achieved by Pósa (1976) and Korshunov (1976). They proved that for some constant c almost every labelled graph with n vertices and at least cn log n edges is Hamiltonian. This result is essentially best possible, since even almost sure connectedness requires more than (n/2) log n edges. A great many extensions and improvements of the Korshunov–Pósa result above have been proved by D. Angluin and Valiant (1979), Korshunov (1977), Komlós and Szemerédi (1983), Shamir (1983, 1985), Bollobás (1983a, 1984a), Bollobás, Fenner and Frieze (1987), Bollobás and Frieze (1987) and Frieze (1985b).
Another basic problem concerns the maximal length of a path in G_{c/n}, where c is a constant. We know that for c > 1 a.e. G_{c/n} contains a giant component (in fact, a component of order {1 − t(c) + o(1)}n), but the results of Chapter 6 tell us nothing about the existence of long cycles.
Perhaps the most basic property of a graph is that of being connected. Thus it is not surprising that the study of connectedness of a r.g. has a vast literature. In fact, for fear of upsetting the balance of the book we cannot attempt to give an account of all the results in the area.
Appropriately, the very first random graph paper of Erdős and Rényi (1959) is devoted to the problem of connectedness, and so are two of the other earliest papers on r.gs: Gilbert (1959) and Austin et al. (1959). Erdős and Rényi proved that (n/2) log n is a sharp threshold function for connectedness. Gilbert gave recurrence formulae for the probability of connectedness of G_p (see Exx. 1 and 2). S. A. Stepanov (1969a, 1970a, b) and Kovalenko (1971) extended results of Erdős and Rényi to the model G{n, (p_ij)}, and Kelmans (1967a) extended the recurrence formulae of Gilbert. Other extensions are due to Ivchenko (1973b, 1975), Ivchenko and Medvedev (1973), Kordecki (1973) and Kovalenko (1975). In §1 we shall present some of these results in the context of the evolution of random graphs.
We know from Chapter 6 that a.e. graph process is such that a giant component appears shortly after time n/2, and the number of vertices not on the giant component decreases exponentially.
Random graph techniques are very useful in several areas of computer science. In this book we do not intend to present a great variety of such applications, but we shall study a small group of problems that can be tackled by random graph methods.
Suppose we are given n objects in a linear order unknown to us. Our task is to determine this linear order with as few probes as possible, i.e. by asking as few questions as possible. Each probe or question is a binary comparison: which of two given elements a and b is greater? Since there are n! possible orders and k questions result in at most 2^k different sequences of answers, log₂(n!) = {1 + o(1)}n log₂ n questions may be needed to determine the order completely. It is only a little less obvious that with {1 + o(1)}n log₂ n questions we can indeed determine the order. However, if we wish to use only {1 + o(1)}n log₂ n questions, then our later questions have to depend on the answers to the earlier questions. In other words, our questions have to be asked in many rounds, and in each round they have to be chosen according to the answers obtained in the previous rounds.
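The counting argument above is easy to check numerically. The sketch below (illustrative only; it counts the comparisons made by Python's built-in sort via a counting comparator, with n chosen arbitrarily) compares the number of probes an actual comparison sort performs against the information-theoretic bound log₂(n!).

```python
import math
import random
from functools import cmp_to_key

comparisons = 0

def probe(a, b):
    """One binary comparison ('probe'): which of a and b is greater?"""
    global comparisons
    comparisons += 1
    return (a > b) - (a < b)

n = 1000
items = list(range(n))
random.Random(0).shuffle(items)
result = sorted(items, key=cmp_to_key(probe))

# Any comparison sort needs at least log2(n!) probes in the worst case,
# since 2^k answer sequences must distinguish n! possible orders.
lower_bound = math.log2(math.factorial(n))
print(f"probes used: {comparisons}, log2(n!) ~ {lower_bound:.0f}")
```

For random inputs the probe count lands close to log₂(n!) ≈ {1 + o(1)}n log₂ n, in line with the bound in the text.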
For a sorting algorithm, define the width as the maximum number of probes we perform in any one round and the depth as the maximal number of rounds needed by the algorithm.
Most of the results in the previous chapters concern spaces of random graphs of order n as n → ∞: even our inequalities were claimed to hold only ‘if n is sufficiently large’. Nevertheless, asymptotic results are often applied for rather small values of n, so the question arises as to how good these approximations are when n is not too large. The main aim of this chapter is to reproduce some of the tables concerning graphs of fairly small order, given by Bollobás and Thomason (1985). These tables concern orders still far beyond the bound for which exact calculations are possible; for exact tables concerning graphs of small order (mostly general graphs of order at most 14 and trees of order at most 27), the reader should consult, among others, Bussemaker et al. (1976), Quintas, Stehlik and Yarmish (1979), Quintas, Schiano and Yarmish (1980), Brown et al. (1981) and Halberstam and Quintas (1982, 1984). For the use of information about small subgraphs see, among others, Frank and Frisch (1971), Frank (1977, 1979a, b), Frank and Harary (1980a, b), Gordon and Leonis (1976) and Quintas and Yarmish (1981).
Connectivity
We know from Theorem 7.3 that if c ∈ ℝ is a constant, p = (log n + c)/n and M = ⌊(n/2)(log n + c)⌋, then lim_{n→∞} P(G_p is connected) = lim_{n→∞} P(G_M is connected) = e^{−e^{−c}}.
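The limiting probability of connectedness can be compared against Monte Carlo estimates at moderate n. The sketch below (parameters n, c and the trial count are chosen arbitrarily for illustration) samples G(n, p) with p = (log n + c)/n directly and tests connectedness by depth-first search.

```python
import math
import random

def is_connected(n, p, rng):
    """Sample G(n, p) as adjacency lists and test connectedness by DFS."""
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

n, c, trials = 300, 1.0, 200
p = (math.log(n) + c) / n
rng = random.Random(1)
freq = sum(is_connected(n, p, rng) for _ in range(trials)) / trials
print(f"empirical: {freq:.2f}, limit e^(-e^(-c)): {math.exp(-math.exp(-c)):.2f}")
```

Already at n = 300 the empirical frequency is close to the limit e^{−e^{−1}} ≈ 0.69, reflecting the fact that near the threshold the obstruction to connectedness is essentially the presence of isolated vertices.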
This first volume contains only material on the basic tools of modern cryptography, that is, one-way functions, pseudorandomness, and zero-knowledge proofs. These basic tools are used in the construction of the basic applications (to be covered in the second volume). The latter will cover encryption, signatures, and general cryptographic protocols. In this appendix we provide brief summaries of the treatments of these basic applications.
Encryption: Brief Summary
Both private-key and public-key encryption schemes consist of three efficient algorithms: key generation, encryption, and decryption. The difference between the two types of schemes is reflected in the definition of security: The security of a public-key encryption scheme should also hold when the adversary is given the encryption key, whereas that is not required for private-key encryption schemes. Thus, public-key encryption schemes allow each user to broadcast its encryption key, so that any other user can send it encrypted messages (without needing to first agree on a private encryption key with the receiver). Next we present definitions of security for private-key encryption schemes. The public-key analogues can be easily derived by considering adversaries that get the encryption key as additional input. (For private-key encryption schemes, we can assume, without loss of generality, that the encryption key is identical to the decryption key.)
Definitions
For simplicity, we consider only the encryption of a single message; however, this message can be longer than the key (which rules out information-theoretic secrecy [200]). We present two equivalent definitions of security.
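To see why a message longer than the key forces us beyond information-theoretic secrecy, consider stretching a short key into a long key stream and XOR-ing it with the message. The sketch below is a toy illustration only, not a construction from this text: it uses SHA-256 in counter mode as a stand-in for a pseudorandom generator, and nothing here is a vetted or provably secure scheme.

```python
import hashlib

def prg(seed: bytes, length: int) -> bytes:
    """Toy stand-in for a pseudorandom generator: SHA-256 in counter mode."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def encrypt(key: bytes, message: bytes) -> bytes:
    """XOR the message with a key stream stretched from the short key."""
    return bytes(m ^ s for m, s in zip(message, prg(key, len(message))))

decrypt = encrypt  # XOR is its own inverse

key = b"sixteen byte key"
msg = b"a plaintext considerably longer than the sixteen-byte key above"
ct = encrypt(key, msg)
assert decrypt(key, ct) == msg
```

Security of such a scheme can only be computational: an unbounded adversary could enumerate all short keys, so everything rests on the pseudorandomness of the generator.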
In this chapter we define and study one-way functions. One-way functions capture our notion of “useful” computational difficulty and serve as a basis for most of the results presented in this book. Loosely speaking, a one-way function is a function that is easy to evaluate but hard to invert (in an average-case sense). (See the illustration in Figure 2.1.) In particular, we define strong and weak one-way functions and prove that the existence of weak one-way functions implies the existence of strong ones. The proof provides a good example of a reducibility argument, which is a strong type of “reduction” used to establish most of the results in the area. Furthermore, the proof provides a simple example of a case where a computational statement is much harder to prove than its “information-theoretic analogue.”
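A standard candidate illustration of "easy to evaluate but hard to invert" (not a construction from this text, and its one-wayness rests on the unproven hardness of factoring) is multiplication of two primes: the forward direction takes polynomial time, while the obvious inversion by trial division takes time exponential in the input length.

```python
import math

def f(p: int, q: int) -> int:
    """The easy direction: multiplying two integers is fast."""
    return p * q

def invert(n: int) -> tuple[int, int]:
    """The (conjecturally) hard direction: factor n by trial division."""
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return d, n // d
    raise ValueError("no nontrivial factor: n is prime")

p, q = 104723, 104729  # two primes; recovering them from n is the hard part
n = f(p, q)
print(invert(n))  # -> (104723, 104729), after ~10^5 trial divisions
```

Note the average-case flavour of the definition: inversion must be hard for a randomly chosen input in the range of f, not merely for some worst-case input.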
In addition, we define hard-core predicates and prove that every one-way function has a hard-core predicate. Hard-core predicates will play an important role in almost all subsequent chapters (the chapter on signature schemes being the exception).
Organization. In Section 2.1 we motivate the definition of one-way functions by arguing informally that it is implicit in various natural cryptographic primitives. The basic definitions are given in Section 2.2, and in Section 2.3 we show that weak one-way functions can be used to construct strong ones. A more efficient construction (for certain restricted cases) is postponed to Section 2.6.