The aim of this chapter is to present the definitions, formulae and results of probability theory we shall need in the main body of the book. Although we assume that the reader has had only rather limited experience with probability theory, and although we do define almost everything, if somewhat briefly, this chapter is not intended to be a systematic introduction to probability theory. The main purpose is to identify the facts we shall rely on, so only the most important, and perhaps not too easily accessible, results will be proved. Since the book is intended primarily for mathematicians interested in graph theory, combinatorics and computing, some of the results will not be presented in full generality. It is inevitable that for the reader who is familiar with probability theory this introduction contains too many basic definitions and familiar facts, while the reader who has not studied probability before will find the chapter rather difficult.
There are many excellent introductions to probability theory: Feller (1966), Breiman (1968), K. L. Chung (1974) and H. Bauer (1981), to name only four. The interested reader is urged to consult one of these texts for a thorough introduction to the subject.
Notation and Basic Facts
A probability space is a triple (Ω, ∑, P), where Ω is a set, ∑ is a σ-field of subsets of Ω, and P is a non-negative measure on ∑ with P(Ω) = 1.
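As a concrete instance (sketched here in our own notation, not quoted from the text), the basic random graph model fits this framework directly:

```latex
% The finite probability space underlying the random graph G_{n,p}:
% Omega is the set of all graphs on the vertex set {1, ..., n},
% Sigma is the power set of Omega, and a graph G with e(G) edges
% receives probability
\[
  \Omega = \bigl\{\, G : V(G) = \{1,\dots,n\} \,\bigr\}, \qquad
  \Sigma = \mathcal{P}(\Omega),
\]
\[
  \mathbb{P}\bigl(\{G\}\bigr)
  = p^{\,e(G)} \, (1-p)^{\binom{n}{2} - e(G)} .
\]
% Summing over all graphs amounts to choosing each of the binom(n,2)
% possible edges independently with probability p, so P(Omega) = 1.
```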
The period since the publication of the first edition of this book has seen the theory of random graphs go from strength to strength. Indeed, its appearance happened to coincide with a watershed in the subject; the emergence in the subsequent few years of significant new ideas and tools, perhaps most notably concentration methods, has had a major impact. It could be argued that the subject is now qualitatively different, insofar as results that would previously have been inaccessible are now regarded as routine. Several longstanding issues have been resolved, including the value of the chromatic number of a random graph Gn,p, the existence of Hamilton cycles in random cubic graphs, and precise bounds on certain Ramsey numbers. It remains the case, though, that most of the material in the first edition of the book is vital for gaining an insight into the theory of random graphs.
It would be impossible, in a single volume, to prove all the substantial new results that we would wish to, so we have chosen instead to give brief descriptions and to sketch a number of proofs.
In what models of random graphs is it true that almost every graph is Hamiltonian? In particular, how large does M(n) have to be to ensure that a.e. GM is Hamiltonian? This is one of the important questions Erdős and Rényi (1961a) raised in their fundamental paper on the evolution of random graphs. After several preliminary results due to Palásti (1969a, b, 1971a, b), Perepelica (1970), Moon (1972d), Wright (1973a, 1974b, 1975b, 1977b), and Komlós and Szemerédi (1975), a breakthrough was achieved by Pósa (1976) and Korshunov (1976). They proved that for some constant c almost every labelled graph with n vertices and at least cn log n edges is Hamiltonian. This result is essentially best possible since even almost sure connectedness needs more than ½n log n edges. A great many extensions and improvements of the Korshunov–Pósa result above have been proved by D. Angluin and Valiant (1979), Korshunov (1977), Komlós and Szemerédi (1983), Shamir (1983, 1985), Bollobás (1983a, 1984a), Bollobás, Fenner and Frieze (1987), Bollobás and Frieze (1987) and Frieze (1985b).
Another basic problem concerns the maximal length of a path in Gc/n, where c is a constant. We know that for c > 1 a.e. Gc/n contains a giant component—in fact a component of order {1 – t(c) + o(1)}n—but the results of Chapter 6 tell us nothing about the existence of long cycles.
Perhaps the most basic property of a graph is that of being connected. Thus it is not surprising that the study of connectedness of a random graph (r.g.) has a vast literature. Indeed, for fear of upsetting the balance of the book, we cannot attempt to give an account of all the results in the area.
Appropriately, the very first random graph paper of Erdős and Rényi (1959) is devoted to the problem of connectedness, and so are two others of the earliest papers on random graphs: Gilbert (1959) and Austin et al. (1959). Erdős and Rényi proved that (n/2) log n is a sharp threshold function for connectedness. Gilbert gave recurrence formulae for the probability of connectedness of Gp (see Exx. 1 and 2). S. A. Stepanov (1969a, 1970a, b) and Kovalenko (1971) extended results of Erdős and Rényi to the model G{n, (pij)}, and Kelmans (1967a) extended the recurrence formulae of Gilbert. Other extensions are due to Ivchenko (1973b, 1975), Ivchenko and Medvedev (1973), Kordecki (1973) and Kovalenko (1975). In §1 we shall present some of these results in the context of the evolution of random graphs.
We know from Chapter 6 that a.e. graph process is such that a giant component appears shortly after time n/2, and the number of vertices not on the giant component decreases exponentially.
Random graph techniques are very useful in several areas of computer science. In this book we do not intend to present a great variety of such applications, but we shall study a small group of problems that can be tackled by random graph methods.
Suppose we are given n objects in a linear order unknown to us. Our task is to determine this linear order by as few probes as possible, i.e. by asking as few questions as possible. Each probe or question is a binary comparison: which of two given elements a and b is greater? Since there are n! possible orders and k questions result in at most 2k different sequences of answers, at least log2(n!) = {1 + o(1)}n log2n questions may be needed to determine the order completely. It is a little less obvious that with {1 + o(1)}n log2n questions we can indeed determine the order. However, if we wish to use only {1 + o(1)}n log2n questions, then our later questions have to depend on the answers to the earlier questions. In other words, our questions have to be asked in many rounds, and in each round they have to be chosen according to the answers obtained in the previous rounds.
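The asymptotics log2(n!) = {1 + o(1)}n log2n used above follow from elementary bounds on the factorial (a routine verification, spelled out here for completeness):

```latex
% Upper bound: each factor of n! is at most n.
\[
  n! \le n^n
  \quad\Longrightarrow\quad
  \log_2 n! \le n \log_2 n .
\]
% Lower bound: from e^n = \sum_k n^k / k! \ge n^n / n! we get
% n! \ge (n/e)^n, hence
\[
  \log_2 n! \;\ge\; n \log_2 n - n \log_2 e
  \;=\; \bigl(1 + o(1)\bigr)\, n \log_2 n ,
\]
% since n \log_2 e = o(n \log_2 n). The two bounds together give
% \log_2 n! = (1 + o(1)) n \log_2 n.
```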
For a sorting algorithm, define the width as the maximum number of probes we perform in any one round and the depth as the maximum number of rounds needed by the algorithm.
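To illustrate the two extremes of this trade-off (our own remark, not a passage from the text): a depth-1 algorithm asks all its questions nonadaptively and is therefore forced to have maximal width, while adaptive one-comparison rounds attain width 1 at the cost of great depth.

```latex
% If some pair {a, b} is never compared in the single round, consider a
% linear order in which a and b are adjacent: swapping a and b changes
% the answer to no other comparison, so the order cannot be determined.
% Hence a sorting algorithm of depth 1 must have width
\[
  w \;\ge\; \binom{n}{2}.
\]
% At the other extreme, inserting the elements one by one with binary
% insertion asks one comparison per round, so it has width 1 and depth
\[
  \sum_{i=2}^{n} \lceil \log_2 i \rceil
  \;=\; \bigl(1 + o(1)\bigr)\, n \log_2 n .
\]
```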