The Lévy Laplacian is an infinite-dimensional generalization of the well-known classical Laplacian. The theory has become well developed in recent years, and this book is the first systematic treatment of the Lévy–Laplace operator. The book describes the infinite-dimensional analogues of finite-dimensional results, and especially those features which appear only in the generalized context. It develops a theory of operators generated by the Lévy Laplacian and the symmetrized Lévy Laplacian, as well as a theory of linear and nonlinear equations involving these operators. Many problems lead to equations with Lévy Laplacians and to Lévy–Laplace operators, for example superconductivity theory, the theory of control systems, the theory of Gaussian random fields, and the Yang–Mills equation. The book is complemented by an exhaustive bibliography. The result is a work that will be valued by those working in functional analysis, partial differential equations and probability theory.
This book treats the very special and fundamental mathematical properties that hold for a family of Gaussian (or normal) random variables. Such random variables have many applications in probability theory, other parts of mathematics, statistics and theoretical physics. The emphasis throughout this book is on the mathematical structures common to all these applications. This will be an excellent resource for all researchers whose work involves random variables.
In this chapter we are mainly concerned with one-dimensional electrostatic problems; that is, with measures on the circle or the real line that represent charge distributions subject to logarithmic interaction and an external potential field. First we consider configurations of electrical charges on the circle and their equilibrium configuration. Then we review some classical results of function theory and introduce the notion of free entropy for suitable probability densities on the circle; these ideas extend naturally to spheres in Euclidean space. The next step is to introduce free entropy for probability distributions on the real line, and show that an equilibrium distribution exists for a very general class of potentials. For uniformly convex potentials, we present an effective method for computing the equilibrium distribution, and illustrate this by introducing the semicircle law. Then we present explicit formulæ for the equilibrium measures for quartic potentials with positive and negative leading term. Finally we introduce McCann's notion of displacement convexity for energy functionals, and show that uniform convexity of the potential implies a transportation inequality.
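As a numerical sketch of the equilibrium property (not the book's computation; the quadratic potential v(x) = x²/2 and the test points below are one standard normalization), one can check that the semicircle law ρ(x) = √(4 − x²)/(2π) on [−2, 2] is a probability density with unit second moment and makes v(x) − 2∫ log|x − y| ρ(y) dy constant on its support:

import numpy as np
from scipy.integrate import quad

# semicircle density on [-2, 2]
def rho(y):
    return np.sqrt(4.0 - y * y) / (2.0 * np.pi)

# normalization and second moment: both should equal 1
print(quad(rho, -2, 2)[0])
print(quad(lambda y: y * y * rho(y), -2, 2)[0])

# equilibrium condition for the quadratic potential v(x) = x^2/2:
# v(x) - 2 * integral of log|x - y| rho(y) dy is constant on [-2, 2]
def effective_potential(x):
    U = quad(lambda y: np.log(abs(x - y)) * rho(y), -2, 2, points=[x])[0]
    return 0.5 * x * x - 2.0 * U

for x in (-1.5, 0.0, 0.7, 1.5):
    print(x, effective_potential(x))   # approximately 1 at each test point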
Logarithmic energy and equilibrium measure
Suppose that N unit positive charges of strength β > 0 are placed upon a circular conductor of unit radius, and that the angles of the charges are 0 ≤ θ1 < θ2 < … < θN < 2π.
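In the electrostatic picture, stated here in one common normalization with β playing the role of an inverse temperature, the logarithmic interaction energy of such a configuration is

W(θ1, …, θN) = −∑_{1≤j<k≤N} log |e^{iθj} − e^{iθk}|,

and the associated Gibbs weight is proportional to e^{−βW} = ∏_{1≤j<k≤N} |e^{iθj} − e^{iθk}|^β.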
In this chapter we combine the results from Chapter 3 about concentration of measure with the notion of equilibrium from Chapter 4, and prove convergence to equilibrium of empirical eigenvalue distributions of n × n matrices from suitable ensembles as n → ∞. We introduce various notions of convergence for eigenvalue ensembles from generalized orthogonal, unitary and symplectic ensembles. Using concentration inequalities from Chapter 3, we prove that the empirical eigenvalue distributions, from ensembles that have uniformly convex potentials, converge almost surely to their equilibrium distribution as the number of charges increases to infinity. Furthermore, we obtain the Marchenko–Pastur distribution as the limit of singular numbers of rectangular Gaussian matrices. To illustrate how concentration implies convergence, the chapter starts with the case of compact groups, where the equilibrium measure is simply normalized arclength on the circle.
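As a minimal numerical sketch of the Marchenko–Pastur limit (the matrix sizes and aspect ratio below are illustrative choices, not taken from the text), one can compare the squared singular values of a rectangular Gaussian matrix with the Marchenko–Pastur density:

import numpy as np

rng = np.random.default_rng(0)
p, n = 1000, 4000                        # aspect ratio lam = p/n = 1/4
lam = p / n
X = rng.standard_normal((p, n))
evals = np.linalg.eigvalsh(X @ X.T / n)  # squared singular values of X / sqrt(n)

a, b = (1 - np.sqrt(lam)) ** 2, (1 + np.sqrt(lam)) ** 2
hist, edges = np.histogram(evals, bins=30, range=(a, b), density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
mp_density = np.sqrt((b - centres) * (centres - a)) / (2 * np.pi * lam * centres)
print(np.max(np.abs(hist - mp_density)))  # sup distance shrinks as p and n grow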
Convergence to arclength
Suppose that n unit positive charges of strength β > 0 are placed upon a circular conductor of unit radius, and that the angles of the charges are 0 ≤ θ1 < θ2 < … < θn < 2π. Suppose that the θj are random, subject to the joint distribution with density proportional to ∏_{1≤j<k≤n} |e^{iθj} − e^{iθk}|^β.
Then we would expect that the θj would tend to form a uniform distribution round the circle as n → ∞ since the uniform distribution appears to minimize the energy. We prove this for β = 2.
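For β = 2 this joint distribution is that of the eigenvalue angles of a Haar-distributed n × n unitary matrix (the circular unitary ensemble), which permits a quick numerical check; in the following minimal sketch the matrix size and the QR-based sampler are illustrative choices:

import numpy as np

def haar_unitary(n, rng):
    # QR of a complex Ginibre matrix, with the usual phase correction,
    # gives a Haar-distributed unitary matrix
    Z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    phases = np.diag(R) / np.abs(np.diag(R))
    return Q * phases

rng = np.random.default_rng(1)
n = 300
angles = np.sort(np.angle(np.linalg.eigvals(haar_unitary(n, rng))))

# compare the empirical distribution of the angles with normalized arclength
empirical_cdf = np.arange(1, n + 1) / n
arclength_cdf = (angles + np.pi) / (2 * np.pi)
print("sup distance to arclength:", np.max(np.abs(empirical_cdf - arclength_cdf)))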
In this chapter we introduce various functionals such as entropy and free entropy that are defined for suitable probability density functions on Rⁿ. Then we introduce the derivatives of such functionals in the style of the calculus of variations. This leads us to the gradient flows of probability density functions associated with a given functional; thus we recover the famous Fokker–Planck equation and the Ornstein–Uhlenbeck equation. A significant advantage of this approach is that the free analogues of the classical diffusion equations arise from the corresponding free functionals. We also prove logarithmic Sobolev inequalities, and use them to prove convergence to equilibrium of the solutions to gradient flows of suitable energy functionals. Positive curvature is a latent theme in this chapter; for recent progress in metric geometry has recovered analogous results on metric spaces with uniformly positive Ricci curvature, as we mention in the final section.
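To make the gradient-flow picture concrete, here is a minimal finite-difference sketch, in which the grid, time step and initial density are illustrative choices, of the Ornstein–Uhlenbeck Fokker–Planck equation ∂ρ/∂t = ∂(xρ)/∂x + ∂²ρ/∂x², whose equilibrium is the standard Gaussian density:

import numpy as np

x = np.linspace(-6.0, 6.0, 241)
dx = x[1] - x[0]
dt = 0.2 * dx ** 2                     # small step for stability of the explicit scheme

rho = np.exp(-0.5 * (x - 2.0) ** 2) / np.sqrt(2 * np.pi)   # start from N(2, 1)
gauss = np.exp(-0.5 * x ** 2) / np.sqrt(2 * np.pi)          # equilibrium N(0, 1)

for _ in range(int(4.0 / dt)):         # evolve up to time t = 4
    drift = np.gradient(x * rho, dx)
    diffusion = np.gradient(np.gradient(rho, dx), dx)
    rho = rho + dt * (drift + diffusion)

print("L1 distance to equilibrium:", dx * np.sum(np.abs(rho - gauss)))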
Variation of functionals and gradient flows
In this chapter we are concerned with evolutions of families of probability distributions under partial differential equations. We use ρ for a probability density function on Rⁿ and impose various smoothness conditions as required. For simplicity, the reader may suppose that ρ is C∞ and of compact support so that various functionals are defined. The fundamental examples of functionals are the following; their variational derivatives are recorded just after the list:
Shannon's entropy S(ρ) = -∫ ρ(x) log ρ(x) dx;
Potential energy F(ρ) = ∫ v(x)ρ(x)dx with respect to a potential function v;
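As a worked illustration, using standard normalizations that may differ in constants from those adopted later, the variational derivatives of these functionals are δS/δρ = −log ρ − 1 and δF/δρ = v, and the gradient flow of F(ρ) − S(ρ) with respect to the transportation metric is

∂ρ/∂t = ∇·(ρ ∇(v + log ρ)) = ∇·(ρ ∇v) + Δρ,

which is the Fokker–Planck equation mentioned above; for v(x) = |x|²/2 this is the Ornstein–Uhlenbeck case, with Gaussian equilibrium density proportional to e^{−v}.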
The purpose of this book is to introduce readers to certain topics in random matrix theory that specifically involve the phenomenon of concentration of measure in high dimension. This work was partly motivated by research in the EC network Phenomena in High Dimension, which applied results from functional analysis to problems in statistical physics. Pisier described this as the transfer of technology, and this book develops this philosophy by discussing applications to random matrix theory of:
(i) optimal transportation theory;
(ii) logarithmic Sobolev inequalities;
(iii) exponential concentration inequalities;
(iv) Hankel operators.
Recently some approaches to functional inequalities have emerged that make a unified treatment possible; in particular, optimal transportation links together seemingly disparate ideas about convergence to equilibrium. Furthermore, optimal transportation connects familiar results from the calculus of variations with the modern theory of diffusions and gradient flows.
I hope that postgraduate students will find this book useful and, with them in mind, have selected topics with potential for further development. Prerequisites for this book are linear algebra, calculus, complex analysis, Lebesgue integration, metric spaces and basic Hilbert space theory. The book does not use stochastic calculus or the theory of integrable systems, so as to widen the possible readership.
In their survey of random matrices and Banach spaces, Davidson and Szarek present results on Gaussian random matrices and then indicate that some of the results should extend to a wider context by means of the theory of concentration of measure [152].
The distribution of the eigenvalues of a random matrix gives a random point field. This chapter outlines Soshnikov's version of the general theory. Starting with kernels, we introduce correlation functions via determinants. Gaudin and Mehta developed a theory of correlation functions for the generalized unitary ensemble which is known as the orthogonal polynomial technique, and we show that it fits neatly into the theory of determinantal random point fields. Particular determinantal random point fields are generated by the sine kernel, the Airy kernel, and the continuous Bessel kernels; in random matrix theory these are widely held to be universal kernels in that they describe the possible asymptotic distribution of eigenvalues from large Hermitian matrices. In the final section we introduce an abstract framework in which one can describe the convergence of families of determinantal random point fields, and which we apply to fundamental examples in Chapters 9 and 10.
In Sections 8.1 and 8.2, we describe how kernels can be used to form determinantal random point fields, and in Section 8.3 we express some classical results about unitary ensembles in terms of determinantal random point fields. In Sections 9.3, 9.4 and 9.5 we look at these specific examples in more detail, and see how they arise in random matrix theory.
Determinantal random point fields
In this section, we introduce Soshnikov's theory of determinantal random point fields as it applies to point fields on Z.
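As a minimal illustration of how a kernel generates correlation functions through determinants (the discrete sine kernel and the density 1/2 below are illustrative choices for a point field on Z, not the book's running example), consider:

import numpy as np

def sine_kernel(m, n, density=0.5):
    # discrete sine kernel on Z with one-point density `density`
    if m == n:
        return density
    return np.sin(np.pi * density * (m - n)) / (np.pi * (m - n))

def correlation(points, density=0.5):
    # k-point correlation function as a k x k determinant of the kernel
    K = np.array([[sine_kernel(m, n, density) for n in points] for m in points])
    return np.linalg.det(K)

print(correlation([0]))        # one-point function equals the density, 0.5
print(correlation([0, 1]))     # neighbouring points: repulsion pushes this below 0.25
print(correlation([0, 50]))    # distant points: approximately 0.25 = 0.5 * 0.5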
The purpose of this chapter is to give a brief introduction to the theory of Lie groups and matrix algebras in a style that is suited to random matrix theory. Ensembles are probability measures on spaces of random matrices that are invariant under the action of certain compact groups, and the basic examples are known as the orthogonal, unitary and symplectic ensembles according to the group action. One of the main objectives is the construction of Dyson's circular ensembles in Sections 2.7–2.9, and the generalized ensembles from the affine action of classical compact Lie groups on suitable matrix spaces in Section 2.5. As our main interest is in random matrix theory, our discussion of the classification is patchy and focuses on the examples that are of greatest significance in RMT. We present some computations on connections and curvature, as these are important in the analysis in Chapter 3. The functional calculus of matrices is also significant, and Section 2.2 gives a brief treatment of this topic. The chapter begins with a list of the main examples and some useful results on eigenvalues and determinants.
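Since the functional calculus recurs below, here is a minimal sketch of the spectral-theorem version for Hermitian matrices, f(A) = U f(Λ) U*; the helper name and the 4 × 4 test matrix are illustrative, not taken from the text:

import numpy as np
from scipy.linalg import expm

def apply_function(A, f):
    # functional calculus via the spectral decomposition A = U diag(eigenvalues) U*
    eigenvalues, U = np.linalg.eigh(A)        # A is assumed Hermitian
    return U @ np.diag(f(eigenvalues)) @ U.conj().T

rng = np.random.default_rng(2)
X = rng.standard_normal((4, 4))
A = (X + X.T) / 2                              # a real symmetric test matrix

# exp(A) computed by the spectral theorem agrees with the matrix exponential
print(np.max(np.abs(apply_function(A, np.exp) - expm(A))))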
The classical groups, their eigenvalues and norms
Throughout this chapter,
R = real numbers;
C = complex numbers;
H = quaternions;
T = unit circle.
By a well-known theorem of Frobenius, R, C and H are the only finite-dimensional associative division algebras over R, and the dimensions are β = 1, 2 and 4 respectively; see [90].
The contents of this chapter are introductory and covered in many standard books on probability theory, but perhaps not all conveniently in one place. In Section 1.1 we give a summary of results concerning probability measures on compact metric spaces. Section 1.2 concerns the existence of invariant measure on a compact metric group, which we later use to construct random matrix ensembles. In Section 1.3, we resume the general theory with a discussion of weak convergence of probability measures on (noncompact) Polish spaces; the results here are technical and may be omitted on a first reading. Section 1.4 contains the Brunn–Minkowski inequality, which is our main technical tool for proving isoperimetric and concentration inequalities in subsequent chapters. The fundamental example of Gaussian measure and the Gaussian orthogonal ensemble appear in Section 1.5, then in Section 1.6 Gaussian measure is realised as the limit of surface area measure on spheres of high dimension. In Section 1.7 we state results from the general theory of metric measure spaces. Some of the proofs are deferred until later chapters, where they emerge as important special cases of general results. A recurrent theme of the chapter is weak convergence, which is defined in Sections 1.1 and 1.3 and used throughout the book. Section 1.8 shows how weak convergence gives convergence for characteristic functions, cumulative distribution functions and Cauchy transforms.
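To illustrate the result of Section 1.6 numerically (a minimal sketch; the dimension and sample size are illustrative choices), one can check that a single coordinate of a uniform random point on the sphere of radius √n in Rⁿ is approximately standard Gaussian for large n:

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n, samples = 400, 10000
G = rng.standard_normal((samples, n))
points = np.sqrt(n) * G / np.linalg.norm(G, axis=1, keepdims=True)  # uniform on sqrt(n) * S^(n-1)
first_coordinate = np.sort(points[:, 0])

# Kolmogorov-Smirnov style comparison with the standard normal distribution
empirical_cdf = np.arange(1, samples + 1) / samples
print(np.max(np.abs(empirical_cdf - norm.cdf(first_coordinate))))  # small for large n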