This chapter develops some classical theory and fundamental tools for Gaussian random processes. We start with the basic definitions of Gaussian processes indexed by abstract parameter spaces and, by way of introduction to the subject, derive some elementary yet powerful properties. We present the isoperimetric and log-Sobolev inequalities for Gaussian measures in ℝⁿ and apply them to establish concentration properties for the supremum of a Gaussian process about its median and mean, which are some of the deepest and most useful results on Gaussian processes. Then we introduce Dudley's metric entropy bounds for moments of suprema of (sub-)Gaussian processes as well as for their a.s. modulus of continuity. The chapter also contains a thorough discussion of convexity and comparison properties of Gaussian measures and of reproducing kernel Hilbert spaces and ends with an exposition of the limit theory for suprema of stationary Gaussian processes.
Definitions, Separability, 0-1 Law, Concentration
We start with some preliminaries about stochastic processes, mainly to fix notation and terminology. These concepts are then specialised to Gaussian processes, and some first properties of Gaussian processes are developed. The fundamental observation is that a Gaussian process X indexed by a set T induces an intrinsic distance dX on T (dX(s,t) is the L2-distance between X(s) and X(t)), and all the probabilistic information about X is contained in the metric or pseudo-metric space (T,dX). This is tested on some of the first properties, such as the 0-1 law and the existence of separable versions of X. One of the main properties of Gaussian processes, namely their concentration about the mean, is also introduced; this subject will be treated in depth in the next section, but a first result on it, chosen for its simplicity rather than its sharpness, is given in this section.
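As a concrete illustration of the intrinsic distance, one can compute dX directly from the covariance function, since E[(X(s) − X(t))²] = Cov(s,s) − 2 Cov(s,t) + Cov(t,t) for a centered process. The sketch below (an assumption-laden example, not part of the text) uses standard Brownian motion on [0,1], whose covariance is min(s,t), so that dX(s,t) = √|s − t|:

```python
import math

def brownian_cov(s: float, t: float) -> float:
    """Covariance of standard Brownian motion: E[X(s)X(t)] = min(s, t)."""
    return min(s, t)

def intrinsic_distance(s: float, t: float, cov=brownian_cov) -> float:
    """The intrinsic pseudo-metric dX(s, t) = E[(X(s) - X(t))^2]^{1/2},
    computed from the covariance of a centered Gaussian process."""
    return math.sqrt(cov(s, s) - 2 * cov(s, t) + cov(t, t))

# For Brownian motion, dX(s, t) = sqrt(|s - t|):
print(intrinsic_distance(0.25, 1.0))  # sqrt(0.75) ≈ 0.866
```

Any other covariance function can be passed in place of `brownian_cov`; the resulting dX is the pseudo-metric through which entropy bounds such as Dudley's are expressed.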
From social networks such as Facebook, the World Wide Web and the Internet, to the complex interactions between proteins in the cells of our bodies, we constantly face the challenge of understanding the structure and development of networks. The theory of random graphs provides a framework for this understanding, and in this book the authors give a gentle introduction to the basic tools for understanding and applying the theory. Part I includes sufficient material, including exercises, for a one semester course at the advanced undergraduate or beginning graduate level. The reader is then well prepared for the more advanced topics in Parts II and III. A final part provides a quick introduction to the background material needed. All those interested in discrete mathematics, computer science or applied probability and their applications will find this an ideal introduction to the subject.
Our purpose in writing this book is to provide a gentle introduction to a subject that is enjoying a surge in interest. We believe that the subject is fascinating in its own right, but the increase in interest can be attributed to several factors. One factor is the realization that networks are “everywhere.” From social networks such as Facebook, the World Wide Web and the Internet to the complex interactions between proteins in the cells of our bodies, we face the challenge of understanding their structure and development. By and large, natural networks grow in an unpredictable manner, and this is often modeled by a random construction. Another factor is the realization by computer scientists that NP-hard problems are often easier to solve than their worst-case complexity suggests and that an analysis of running times on random instances can be informative.
History
Random graphs were used by Erdős [274] to give a probabilistic construction of a graph with large girth and large chromatic number. It was only later that Erdős and Rényi began a systematic study of random graphs as objects of interest in their own right. Early on they defined the random graph Gn,m and founded the subject. Often neglected in this story is the contribution of Gilbert [367] who introduced the model Gn,p, but clearly the credit for getting the subject started goes to Erdős and Rényi. Their seminal series of papers [275], [277], [278], [279] and, in particular, [276] on the evolution of random graphs laid the groundwork for other mathematicians to become involved in studying properties of random graphs.
In the early eighties the subject was beginning to blossom and it received a boost from two sources. First was the publication of the landmark book of Béla Bollobás [130] on random graphs. Around the same time, the Discrete Mathematics group at Adam Mickiewicz University began a series of conferences in 1983. This series continues biennially to this day and is now a conference attracting more and more participants.
The next important event in the subject was the start of the journal Random Structures and Algorithms in 1990 followed by Combinatorics, Probability and Computing a few years later. These journals provided a dedicated outlet for work in the area and are flourishing today.
The properties of various kinds of trees are among the main objects of study in graph theory, mainly due to their wide range of applications in various areas of science. Here we concentrate our attention on the “average” properties of two important classes of trees: labeled and recursive. The first class plays an important role in both the sub-critical and super-critical phases of the evolution of random graphs, while random recursive trees serve as an example of the very popular random preferential attachment models. In particular, we will point out an often overlooked fact: the first demonstration of a power law for the degree distribution in the preferential attachment model was given for a special class of inhomogeneous random recursive trees.
The families of random trees whose properties are analyzed in this chapter fall into two major categories according to the order of their heights: they are either of square-root (labeled trees) or logarithmic (recursive trees) height. While most square-root trees appear in a probabilistic context, most log-trees are encountered in algorithmic applications.
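The logarithmic-height category can be illustrated by a short simulation. In a random recursive tree, each new vertex attaches to a uniformly chosen earlier vertex; the sketch below (a minimal illustration, with vertex and function names chosen for this example) builds such a tree and measures its height, which is known to grow like e·log n, in contrast to the order-√n height of a uniform labeled tree:

```python
import random

def random_recursive_tree(n: int, rng: random.Random) -> list[int]:
    """Return parent[i] for vertices 1..n-1 of a random recursive tree
    on {0, ..., n-1}: vertex i attaches to a uniform earlier vertex."""
    return [rng.randrange(i) for i in range(1, n)]

def height(parents: list[int]) -> int:
    """Length of the longest root-to-vertex path."""
    n = len(parents) + 1
    depth = [0] * n
    for i in range(1, n):          # parents[i-1] < i, so depths fill in order
        depth[i] = depth[parents[i - 1]] + 1
    return max(depth)

rng = random.Random(0)
n = 10_000
# Height concentrates around e * log(n), which is about 25 for n = 10^4.
print(height(random_recursive_tree(n, rng)))
```

Replacing the uniform attachment rule with one proportional to current degrees turns this construction into a basic preferential attachment model, which is the connection the chapter exploits.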
Labeled Trees
Consider the family Tn of all n^{n−2} labeled trees on the vertex set [n] = {1, 2, …, n}. Let us choose a tree Tn uniformly at random from the family Tn. The tree Tn is called a random tree (random Cayley tree).
The Prüfer code [638] establishes a bijection between labeled trees on the vertex set [n] and the set [n]^{n−2} of sequences of length n−2 with items in [n]. This coding also implies a one-to-one correspondence between labeled trees on n vertices with a given degree sequence d1, d2, …, dn and ways of distributing n−2 particles into n cells such that the ith cell contains exactly di − 1 particles.