In the final two lectures we want to treat one of the most important and inspiring realizations of free independence. Canonical examples for free random variables appeared in the context of group algebras of free products of groups and in the context of creation and annihilation operators on full Fock spaces. These are two (closely related) examples where the occurrence of free independence is not very surprising, because its definition was just modeled according to the situation on the group (or von Neumann) algebra of the free group.
But there are objects from a quite different mathematical universe which are also free (at least asymptotically), namely special random matrices. A priori, random matrices have nothing to do with free independence and this surprising connection is one of the key results in free probability theory. It establishes links between quite different fields.
We will present in this and the next lecture the fundamental results of Voiculescu on the asymptotic free independence of special random matrices. Our approach will be quite combinatorial and fits well with our combinatorial description of free independence. In a sense, we will show that the combinatorics of free probability theory arises as the limit N → ∞ of the combinatorics of the considered N × N random matrices.
Moments of Gaussian random variables
Random matrices are matrices whose entries are classical random variables, and the most important class of random matrices is that of the so-called Gaussian random matrices, whose entries form a Gaussian family of classical random variables.
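As a purely numerical illustration (not part of the lectures themselves), one can sample a single Gaussian (GUE-type) random matrix, normalized so that its entries have variance 1/N, and compare the averaged eigenvalue moments with the Catalan numbers, which are the even moments of the limiting semicircle distribution on [−2, 2]. The normalization convention below is one common choice.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)
N = 1000

# GUE-type matrix: symmetrize a complex Ginibre matrix and normalize so that
# the off-diagonal entries have variance 1/N.
Z = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
A = (Z + Z.conj().T) / np.sqrt(2 * N)

eigs = np.linalg.eigvalsh(A)

def catalan(k):
    return comb(2 * k, k) // (k + 1)

# empirical even moments tr_N(A^{2k}) against the Catalan numbers C_k
for k in (1, 2, 3):
    m2k = np.mean(eigs ** (2 * k))
    print(f"m_{2*k}: empirical {m2k:.3f}, Catalan C_{k} = {catalan(k)}")
```

For N = 1000 the empirical moments already agree with C₁ = 1, C₂ = 2, C₃ = 5 to within a few percent, since fluctuations of such linear eigenvalue statistics are very small.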
Our main concern in this lecture will be the understanding and effective description of the sum of freely independent random variables. How can we calculate the distribution of a + b if a and b are free and if we know the distribution of a and the distribution of b? Of particular interest is the case of selfadjoint random variables x and y in a C*-probability space. In this case their distributions can be identified with probability measures on ℝ, and thus taking the sum of free random variables gives rise to a binary operation on probability measures on ℝ. We will call this operation “free convolution,” in analogy with the usual concept of convolution of probability measures, which corresponds to taking the sum of classically independent random variables. Our combinatorial approach to free probability theory, resting on the notion of free cumulants, will give us very easy access to the main results of Voiculescu on this free convolution via the so-called “R-transform.”
Free convolution
Definition 12.1. Let μ and ν be probability measures on ℝ with compact support. Let x and y be selfadjoint random variables in some C*-probability space such that x has distribution μ, y has distribution ν, and such that x and y are freely independent. Then the distribution of the sum x + y is called the free convolution of μ and ν and is denoted by μ ⊞ ν.
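A standard example of free convolution is that two symmetric Bernoulli measures ½(δ₋₁ + δ₁) free-convolve to the arcsine law on [−2, 2], whose even moments are the central binomial coefficients. The following sketch computes this combinatorially, assuming the free moment–cumulant recursion m_n = Σ_s κ_s Σ_{i₁+…+i_s = n−s} m_{i₁}⋯m_{i_s} and the additivity of free cumulants under ⊞ (both treated in detail in the lectures on free cumulants); the code is an illustration, not a construction from the book.

```python
from math import comb

def comp_sum(m, s, r):
    # sum over (i_1,...,i_s), i_j >= 0, i_1+...+i_s = r of m[i_1]*...*m[i_s],
    # i.e. the coefficient of x^r in (sum_i m[i] x^i)^s
    poly = [1.0]
    for _ in range(s):
        new = [0.0] * (r + 1)
        for a, ca in enumerate(poly):
            for b in range(r + 1 - a):
                new[a + b] += ca * m[b]
        poly = new
    return poly[r]

def cumulants_from_moments(m):
    # invert the recursion m_k = sum_{s=1}^{k} kappa_s * comp_sum(m, s, k-s)
    n = len(m) - 1
    kappa = [0.0] * (n + 1)
    for k in range(1, n + 1):
        kappa[k] = m[k] - sum(kappa[s] * comp_sum(m, s, k - s) for s in range(1, k))
    return kappa

def moments_from_cumulants(kappa):
    n = len(kappa) - 1
    m = [1.0] + [0.0] * n
    for k in range(1, n + 1):
        m[k] = sum(kappa[s] * comp_sum(m, s, k - s) for s in range(1, k + 1))
    return m

n = 8
bern = [1.0 if k % 2 == 0 else 0.0 for k in range(n + 1)]  # moments of (δ_{-1} + δ_1)/2
kappa = cumulants_from_moments(bern)
kappa_sum = [2 * x for x in kappa]     # free cumulants add under free convolution
m_conv = moments_from_cumulants(kappa_sum)

# even moments of the arcsine law on [-2, 2] are the central binomial coefficients
for k in (1, 2, 3, 4):
    print(f"m_{2*k} = {m_conv[2*k]:.0f}, C(2k, k) = {comb(2*k, k)}")
```

The computed moments 2, 6, 20, 70 match C(2k, k), confirming μ ⊞ μ is the arcsine law in this example.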
The original use of the R-transform was in connection with the problem of describing the distribution of a sum of free random variables (via the formula Ra+b = Ra + Rb, which always holds when a is free from b in some non-commutative probability space – cf. Lectures 12 and 16). Similarly, the S-transform was introduced to solve the problem of multiplication of free random variables (cf. Lecture 18). Following these lines, it is natural to ask what happens when one considers the commutator ab − ba, or the anti-commutator ab + ba, of two free elements. Some remarks about this have already been made in Lecture 15. In the present lecture we will continue the discussion started there, using the convenient language of the operation of boxed convolution.
The problem of the free commutator can be treated on two levels, which will be discussed separately.
First there is a level where one considers even random variables. At this level the problem can be solved as an application of the results on R-diagonal elements. One comes to a formula which is at the same time valid for the anti-commutator (of two free, even random variables), and which was already presented in Theorem 15.20.
Then there is the general level, where the assumption that the random variables are even is dropped. Quite surprisingly, it turns out that the free commutator (unlike the free anti-commutator) is still described in this case by the same formula as we had in the even case.
Free probability theory is a quite recent theory, bringing together many different fields of mathematics, for example operator algebras, random matrices, combinatorics, or representation theory of symmetric groups. So it has a lot to offer to various mathematical communities, and interest in free probability has steadily increased in recent years.
However, this diversity of the field also has the consequence that it is considered hard to access for a beginner. Most of the literature on free probability consists of a mixture of operator algebraic and probabilistic notions and arguments, interwoven with random matrices and combinatorics.
Whereas more advanced operator algebraic or probabilistic expertise might indeed be necessary for a deeper appreciation of special applications in the respective fields, the basic core of the theory, however, can be mostly freed from this and it is possible to give a fairly elementary introduction to the main notions, ideas and problems of free probability theory. The present lectures are intended to provide such an introduction.
Our main emphasis will be on the combinatorial side of free probability. Even when stripped of its analytical structure, the main features of free independence are still present; moreover, even on this more combinatorial level it is important to organize all relevant information about the considered variables in the right way. Anyone who has tried to perform computations of joint distributions for non-commuting variables will probably agree that they tend to be horribly messy if done in a naive way.
Another important random matrix ensemble is given by Haar unitary random matrices – these are unitary matrices equipped with the Haar measure as corresponding probability measure. We will see that one can get asymptotic freeness results for Haar unitary random matrices similar to those we derived for Gaussian random matrices in the last lecture. We will also see that we have asymptotic freeness between constant matrices which are randomly rotated by a Haar unitary random matrix. (This will follow from the fact that conjugation by a free Haar unitary can be used to make general random variables free.)
Our calculations for the unitary random matrices will be of a similar kind to those from the last lecture. The main ingredient is a Wick type formula for correlations of the entries of the Haar unitary random matrices.
Haar unitary random matrices
Remark 23.1. A fundamental fact in abstract harmonic analysis is that any compact group has an analog of the Lebesgue measure, the so-called Haar measure, which is characterized by the fact that it is invariant under translations by group elements. This Haar measure is finite and unique up to multiplication by a constant; thus we can normalize it to a probability measure – the unique Haar probability measure on the compact group.
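For the unitary group U(N), a standard numerical recipe (not taken from the lectures) samples from the Haar measure by a QR decomposition of a complex Ginibre matrix, with the phases of R's diagonal absorbed into Q so that the resulting distribution is exactly Haar. The sketch below also illustrates, at finite N, the two phenomena discussed above: the normalized trace moments of a Haar unitary tend to zero, and constant trace-zero matrices randomly rotated against each other have nearly vanishing mixed moments, as asymptotic freeness predicts.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500

def haar_unitary(n):
    # QR of a complex Ginibre matrix; rescaling the columns by the phases of
    # diag(R) removes the ambiguity in QR and makes the law exactly Haar
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))

U = haar_unitary(N)

# U is unitary, and tr_N(U^k) is close to 0 for k >= 1 ("Haar unitary" moments)
for k in (1, 2, 3):
    print(abs(np.trace(np.linalg.matrix_power(U, k)) / N))

# two constant trace-zero diagonal matrices; after rotating B by U, asymptotic
# freeness forces the mixed moment tr_N(A U B U*) -> tr(A) tr(B) = 0
A = np.diag([(-1) ** i for i in range(N)]).astype(complex)
B = np.diag([1] * (N // 2) + [-1] * (N // 2)).astype(complex)
mixed = np.trace(A @ U @ B @ U.conj().T) / N
print(abs(mixed))
```

At N = 500 the printed quantities are already of order 1/N, a finite-size shadow of the exact asymptotic statements.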
One of the main ideas in free probability theory is to consider the notion of free independence in analogy with the notion of classical or tensor independence. In this spirit, the first investigations of Voiculescu in free probability theory focused on free analogs of some of the most fundamental statements from classical probability theory. In particular, he proved a free analog of a central limit theorem and introduced and described a free analog of “convolution.” His investigations were quite analytical and centered around the concept of the “R-transform,” an analytic function which plays the same role in free probability theory as the logarithm of the Fourier transform in classical probability theory. However, in this analytic approach it is not so obvious why the R-transform and the logarithm of the Fourier transform should be analogous.
Our approach to free probability theory is much more combinatorial in nature and will reveal in a clearer way the parallelism between classical and free probability theory.
In order to see what kind of combinatorial objects are relevant for free probability theory, we will begin by giving an algebraic proof of the free central limit theorem. This approach will show the similar nature of classical and free probability theory very clearly, because the same kind of proof can be given for the classical central limit theorem. Most of the arguments will be the same, only in the very end one has to distinguish whether one is in the classical or in the free situation.
In the preceding lecture we saw that a special type of partitions seems to lie underneath the structure of free probability. These are the so-called “non-crossing” partitions. The study of the lattices of non-crossing partitions was started by combinatorialists quite some time before the development of free probability. In this and the next lecture we will introduce these objects in full generality and present their main combinatorial properties which are of relevance for us.
The preceding lecture has also told us that, from a combinatorial point of view, classical probability and free probability should behave as all partitions versus non-crossing partitions. Thus, we will also keep an eye on similarities and differences between these two cases.
Non-crossing partitions of an ordered set
Definitions 9.1. Let S be a finite totally ordered set.
(1) We call π = {V1, …, Vr} a partition of the set S if and only if the Vi (1 ≤ i ≤ r) are pairwise disjoint, non-void subsets of S such that V1 ∪ … ∪ Vr = S. We call V1, …, Vr the blocks of π. The number of blocks of π is denoted by |π|. Given two elements p, q ∈ S, we write p ∼π q if p and q belong to the same block of π.
(2) The set of all partitions of S is denoted by P(S). In the special case S = {1, …, n}, we denote this by P(n).
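The counting behind the slogan "all partitions versus non-crossing partitions" can be checked by brute force: |P(n)| is the Bell number and |NC(n)| the Catalan number. The following sketch (an illustration, not part of the text) enumerates all partitions of {1, …, n} recursively and filters out the crossing ones, where π is crossing if there are p < q < r < s with p, r in one block and q, s in a different block.

```python
def partitions(s):
    # all partitions of the list s, each partition given as a list of blocks
    if not s:
        yield []
        return
    first, rest = s[0], s[1:]
    for pi in partitions(rest):
        # put `first` into each existing block in turn ...
        for i in range(len(pi)):
            yield pi[:i] + [[first] + pi[i]] + pi[i + 1:]
        # ... or into a new singleton block
        yield [[first]] + pi

def is_noncrossing(pi):
    # crossing: p < q < r < s with p ~ r and q ~ s in two different blocks
    for b1 in pi:
        for b2 in pi:
            if b1 is b2:
                continue
            for p in b1:
                for r in b1:
                    for q in b2:
                        for s in b2:
                            if p < q < r < s:
                                return False
    return True

P4 = list(partitions([1, 2, 3, 4]))
NC4 = [pi for pi in P4 if is_noncrossing(pi)]
print(len(P4), len(NC4))   # Bell number B_4 = 15, Catalan number C_4 = 14
```

For n = 4 the only crossing partition is {{1, 3}, {2, 4}}, so exactly one of the 15 partitions is discarded, leaving the 14 non-crossing ones.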