This book is concerned with random matrices. Given the ubiquitous role that matrices play in mathematics and their applications in the sciences and engineering, it seems natural that the evolution of probability theory would eventually pass through random matrices. The reality, however, has been more complicated (and interesting). Indeed, the study of random matrices, and in particular the properties of their eigenvalues, has emerged from applications, first in data analysis (in the early days of statistical sciences, going back to Wishart [Wis28]), and later as statistical models for heavy-nuclei atoms, beginning with the seminal work of Wigner [Wig55]. Still motivated by physical applications, at the able hands of Wigner, Dyson, Mehta and co-workers, a mathematical theory of the spectrum of random matrices began to emerge in the early 1960s, and links with various branches of mathematics, including classical analysis and number theory, were established. While much progress was initially achieved using enumerative combinatorics, sophisticated and varied mathematical tools were gradually introduced: Fredholm determinants (in the 1960s), diffusion processes (in the 1960s), integrable systems (in the 1980s and early 1990s), and the Riemann–Hilbert problem (in the 1990s) all made their appearance, as well as new tools such as the theory of free probability (in the 1990s). This wide array of tools, while attesting to the vitality of the field, presents several formidable obstacles to the newcomer, and even to the expert probabilist.
In this chapter, we introduce several tools useful in the study of matrix ensembles beyond the GUE, GOE and Wigner matrices. We begin by setting up in Section 4.1 a general framework for deriving the joint distribution of eigenvalues in matrix ensembles, and then use it to derive joint distribution results for several classical ensembles, namely the GOE/GUE/GSE, the Laguerre ensembles (corresponding to Gaussian Wishart matrices), the Jacobi ensembles (corresponding to random projectors) and the unitary ensembles (corresponding to random matrices uniformly distributed in classical compact Lie groups). In Section 4.2, we study a class of point processes that are determinantal; the eigenvalues of the GUE, as well as those of the unitary ensembles, fall within this class. We derive a representation for determinantal processes and deduce from it a CLT for the number of eigenvalues in an interval, as well as ergodic consequences. In Section 4.3, we analyze time-dependent random matrices, where the entries are replaced by Brownian motions. The introduction of Brownian motion allows us to use the powerful theory of Itô integration. Generalizations of the Wigner law, CLTs, and large deviations are discussed. We then present in Section 4.4 a discussion of concentration inequalities and their applications to random matrices, substantially extending Section 2.3. Concentration results for matrices with independent entries, as well as for matrices distributed according to Haar measure on compact groups, are discussed.
The Lévy Laplacian is an infinite-dimensional generalization of the well-known classical Laplacian. The theory has become well developed in recent years, and this book is the first systematic treatment of the Lévy–Laplace operator. The book describes the infinite-dimensional analogues of finite-dimensional results, and especially those features which appear only in the generalized context. It develops a theory of operators generated by the Lévy Laplacian and the symmetrized Lévy Laplacian, as well as a theory of linear and nonlinear equations involving them. Many problems lead to equations with Lévy Laplacians and to Lévy–Laplace operators, for example in superconductivity theory, the theory of control systems, the theory of Gaussian random fields, and the Yang–Mills equation. The book is complemented by an exhaustive bibliography. The result is a work that will be valued by those working in functional analysis, partial differential equations and probability theory.
This book treats the very special and fundamental mathematical properties that hold for a family of Gaussian (or normal) random variables. Such random variables have many applications in probability theory, other parts of mathematics, statistics and theoretical physics. The emphasis throughout this book is on the mathematical structures common to all these applications. This will be an excellent resource for all researchers whose work involves random variables.
In this chapter we are mainly concerned with one-dimensional electrostatic problems; that is, with measures on the circle or the real line that represent charge distributions subject to logarithmic interaction and an external potential field. First we consider configurations of electrical charges on the circle and their equilibrium configuration. Then we review some classical results of function theory and introduce the notion of free entropy for suitable probability densities on the circle; these ideas extend naturally to spheres in Euclidean space. The next step is to introduce free entropy for probability distributions on the real line, and show that an equilibrium distribution exists for a very general class of potentials. For uniformly convex potentials, we present an effective method for computing the equilibrium distribution, and illustrate this by introducing the semicircle law. Then we present explicit formulæ for the equilibrium measures for quartic potentials with positive and negative leading term. Finally we introduce McCann's notion of displacement convexity for energy functionals, and show that uniform convexity of the potential implies a transportation inequality.
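For orientation, the quadratic case is worth recording explicitly (a standard fact, supplied here for reference rather than quoted from the chapter): for the uniformly convex potential v(x) = x²/2 with logarithmic interaction on the real line, the equilibrium measure is Wigner's semicircle law,

```latex
d\mu_{\mathrm{sc}}(x) \;=\; \frac{1}{2\pi}\sqrt{4 - x^2}\,\mathbf{1}_{[-2,2]}(x)\,dx .
```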
Logarithmic energy and equilibrium measure
Suppose that N positive charges, each of strength β > 0, are placed upon a circular conductor of unit radius, and that the angles of the charges are 0 ≤ θ1 < θ2 < … < θN < 2π.
In this chapter we combine the results from Chapter 3 about concentration of measure with the notion of equilibrium from Chapter 4, and prove convergence to equilibrium of empirical eigenvalue distributions of n × n matrices from suitable ensembles as n → ∞. We introduce various notions of convergence for eigenvalue ensembles from generalized orthogonal, unitary and symplectic ensembles. Using concentration inequalities from Chapter 3, we prove that the empirical eigenvalue distributions, from ensembles that have uniformly convex potentials, converge almost surely to their equilibrium distribution as the number of charges increases to infinity. Furthermore, we obtain the Marchenko–Pastur distribution as the limit of singular numbers of rectangular Gaussian matrices. To illustrate how concentration implies convergence, the chapter starts with the case of compact groups, where the equilibrium measure is simply normalized arclength on the circle.
Convergence to arclength
Suppose that n positive charges, each of strength β > 0, are placed upon a circular conductor of unit radius, and that the angles of the charges are 0 ≤ θ1 < θ2 < … < θn < 2π. Suppose that the θj are random, subject to the joint distribution
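In the usual circular (Dyson) β-ensemble normalization, this joint distribution takes the standard form

```latex
p(\theta_1,\dots,\theta_n)
  \;=\; \frac{1}{Z_n(\beta)}
  \prod_{1 \le j < k \le n} \bigl| e^{i\theta_j} - e^{i\theta_k} \bigr|^{\beta},
```

where Z_n(β) is a normalizing constant.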
Then we would expect the θj to tend towards a uniform distribution round the circle as n → ∞, since the uniform distribution appears to minimize the energy. We prove this for β = 2.
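The energy-minimization heuristic can be checked numerically. The following sketch (a hypothetical illustration, not taken from the text) relaxes n randomly placed charges on the unit circle by gradient descent on the logarithmic interaction energy; the gaps between neighbouring charges converge to the uniform spacing 2π/n:

```python
import math
import random

# Logarithmic energy on the circle: E(θ) = -Σ_{j<k} log|2 sin((θj - θk)/2)|,
# so the gradient component is dE/dθj = -(1/2) Σ_{k≠j} cot((θj - θk)/2).
def grad(theta):
    n = len(theta)
    g = [0.0] * n
    for j in range(n):
        for k in range(n):
            if k != j:
                force = -0.5 / math.tan((theta[j] - theta[k]) / 2)
                g[j] += max(-20.0, min(20.0, force))  # clip near-collisions
    return g

random.seed(0)
n = 8
theta = [random.uniform(0, 2 * math.pi) for _ in range(n)]
for _ in range(5000):
    g = grad(theta)
    theta = [(t - 0.01 * gj) % (2 * math.pi) for t, gj in zip(theta, g)]

theta.sort()
# spacings between consecutive charges (including the wrap-around gap)
gaps = [(theta[(j + 1) % n] - theta[j]) % (2 * math.pi) for j in range(n)]
print(max(abs(gap - 2 * math.pi / n) for gap in gaps))  # near 0: uniform spacing
```

The equally spaced configuration is the unique minimizer up to rotation, so the descent settles into uniform spacing regardless of the random start.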
In this chapter we introduce various functionals such as entropy and free entropy that are defined for suitable probability density functions on Rn. Then we introduce the derivatives of such functionals in the style of the calculus of variations. This leads us to the gradient flows of probability density functions associated with a given functional; thus we recover the famous Fokker–Planck equation and the Ornstein–Uhlenbeck equation. A significant advantage of this approach is that the free analogues of the classical diffusion equations arise from the corresponding free functionals. We also prove logarithmic Sobolev inequalities, and use them to prove convergence to equilibrium of the solutions to gradient flows of suitable energy functionals. Positive curvature is a latent theme in this chapter; for recent progress in metric geometry has recovered analogous results on metric spaces with uniformly positive Ricci curvature, as we mention in the final section.
Variation of functionals and gradient flows
In this chapter we are concerned with evolutions of families of probability distributions under partial differential equations. We use ρ for a probability density function on Rn and impose various smoothness conditions as required. For simplicity, the reader may suppose that ρ is C∞ and of compact support so that various functionals are defined. The fundamental examples of functionals are:
Shannon's entropy S(ρ) = -∫ ρ(x) log ρ(x) dx;
Potential energy F(ρ) = ∫ v(x)ρ(x)dx with respect to a potential function v;
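These two functionals are easy to evaluate numerically. The sketch below (an illustration, not from the text) computes S(ρ) and F(ρ) for the standard Gaussian density on R, with the hypothetical choice of potential v(x) = x²/2; the entropy matches the closed form ½ log(2πe):

```python
import math

# standard Gaussian density on R
def rho(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

# Riemann sum over [-10, 10]; the tails beyond are negligible
N = 200000
dx = 20 / N
xs = [-10 + 20 * i / N for i in range(N + 1)]

# Shannon's entropy S(ρ) = -∫ ρ log ρ dx
S = -sum(rho(x) * math.log(rho(x)) * dx for x in xs if rho(x) > 0)

# Potential energy F(ρ) = ∫ v(x) ρ(x) dx with v(x) = x²/2, i.e. ½ E[X²] = ½
F = sum((x * x / 2) * rho(x) * dx for x in xs)

print(S, 0.5 * math.log(2 * math.pi * math.e))
print(F)
```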
The purpose of this book is to introduce readers to certain topics in random matrix theory that specifically involve the phenomenon of concentration of measure in high dimension. This work was partly motivated by research in the EC network Phenomena in High Dimension, which applied results from functional analysis to problems in statistical physics. Pisier described this as a transfer of technology, and this book develops that philosophy by discussing applications to random matrix theory of:
(i) optimal transportation theory;
(ii) logarithmic Sobolev inequalities;
(iii) exponential concentration inequalities;
(iv) Hankel operators.
Recently some approaches to functional inequalities have emerged that make a unified treatment possible; in particular, optimal transportation links together seemingly disparate ideas about convergence to equilibrium. Furthermore, optimal transportation connects familiar results from the calculus of variations with the modern theory of diffusions and gradient flows.
I hope that postgraduate students will find this book useful and, with them in mind, have selected topics with potential for further development. Prerequisites for this book are linear algebra, calculus, complex analysis, Lebesgue integration, metric spaces and basic Hilbert space theory. The book does not use stochastic calculus or the theory of integrable systems, so as to widen the possible readership.
In their survey of random matrices and Banach spaces, Davidson and Szarek present results on Gaussian random matrices and then indicate that some of the results should extend to a wider context by the theory of concentration of measure [152].
The distribution of the eigenvalues of a random matrix gives a random point field. This chapter outlines Soshnikov's version of the general theory. Starting with kernels, we introduce correlation functions via determinants. Gaudin and Mehta developed a theory of correlation functions for the generalized unitary ensemble which is known as the orthogonal polynomial technique, and we show that it fits neatly into the theory of determinantal random point fields. Particular determinantal random point fields are generated by the sine kernel, the Airy kernel, and the continuous Bessel kernels; in random matrix theory these are widely held to be universal kernels in that they describe the possible asymptotic distribution of eigenvalues from large Hermitian matrices. In the final section we introduce an abstract framework in which one can describe the convergence of families of determinantal random point fields, and which we apply to fundamental examples in Chapters 9 and 10.
In Sections 8.1 and 8.2, we describe how kernels can be used to form determinantal random point fields, and in Section 8.3 we express some classical results about unitary ensembles in terms of determinantal random point fields. In Sections 9.3, 9.4 and 9.5 we look at these specific examples in more detail, and see how they arise in random matrix theory.
Determinantal random point fields
In this section, we introduce Soshnikov's theory of determinantal random point fields as it applies to point fields on Z.
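As a concrete illustration of the determinantal structure (a standard example, not part of the excerpted text): for a determinantal random point field on Z with kernel K, the k-point correlation functions are ρ_k(x_1,…,x_k) = det[K(x_i, x_j)]. The sketch below evaluates the two-point function for the discrete sine kernel and exhibits the characteristic repulsion between nearby points:

```python
import math

# Discrete sine kernel on Z with one-point intensity a (an assumed value).
a = 0.5

def K(x, y):
    if x == y:
        return a
    return math.sin(math.pi * a * (x - y)) / (math.pi * (x - y))

# two-point correlation as a 2x2 determinant: ρ2(x,y) = det[[K(x,x), K(x,y)],
#                                                           [K(y,x), K(y,y)]]
def rho2(x, y):
    return K(x, x) * K(y, y) - K(x, y) * K(y, x)

# repulsion: joint intensity at adjacent sites falls below a² = 0.25
print(rho2(0, 1), a * a)
```

For independent (Poisson-like) points the two-point correlation would equal a²; the determinant subtracts |K(x,y)|², which is what suppresses configurations with close points.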