This chapter develops a non-asymptotic theory of random matrices. It starts with a quick refresher on linear algebra, including perturbation theory for matrices and featuring a short proof of the Davis–Kahan inequality. Three key concepts are introduced – nets, covering numbers, and packing numbers – and linked to volume and error-correcting codes. Bounds on the operator norm and singular values of random matrices are established. Three applications are given: community detection in networks, covariance estimation, and spectral clustering. Exercises explore the power method to compute the top singular value, the Schur bound on the operator norm, Hermitian dilation, Walsh matrices, the Wedin theorem on matrix perturbations, a semidefinite relaxation of the cut norm, the volume of high-dimensional balls, and Gaussian mixture models.
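Among the exercises mentioned above is the power method for computing the top singular value. As a rough illustration (not the book's own code), the following sketch estimates the largest singular value of a matrix by repeatedly applying \(A^{\mathsf T}A\) to a random starting vector; the function name and iteration count are our own choices.

```python
import numpy as np

def power_method(A, num_iters=100, seed=0):
    """Estimate the top singular value of A by the power method.

    Repeatedly applies A^T A to a random vector and normalizes;
    the iterate aligns with the top right singular vector, so
    ||A x|| converges to the largest singular value.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[1])
    for _ in range(num_iters):
        x = A.T @ (A @ x)       # one step of the power iteration
        x /= np.linalg.norm(x)  # renormalize to avoid overflow
    return np.linalg.norm(A @ x)
```

The convergence rate depends on the gap between the top two singular values: each iteration shrinks the component along lower singular directions by a factor of roughly \((s_2/s_1)^2\).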
This chapter explores methods of concentration that do not rely on independence. We introduce the isoperimetric approach and discuss concentration inequalities across a variety of metric measure spaces – including the sphere, Gaussian space, discrete and continuous cubes, the symmetric group, Riemannian manifolds, and the Grassmannian. As an application, we derive the Johnson–Lindenstrauss lemma, a fundamental result in dimensionality reduction for high-dimensional data. We then develop matrix concentration inequalities, with an emphasis on the matrix Bernstein inequality, which extends the classical Bernstein inequality to random matrices. Applications include community detection in sparse networks and covariance estimation for heavy-tailed distributions. Exercises explore binary dimension reduction, matrix calculus, additional matrix concentration results, and matrix sketching.
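The Johnson–Lindenstrauss lemma mentioned above states that a random projection into a space of dimension roughly \(\varepsilon^{-2}\log n\) approximately preserves pairwise distances among \(n\) points. A minimal sketch, assuming a Gaussian projection matrix scaled by \(1/\sqrt{k}\) (one standard construction; the function name is ours):

```python
import numpy as np

def jl_project(X, k, seed=0):
    """Project the rows of X (n points in R^d) into R^k.

    Uses a random Gaussian matrix scaled by 1/sqrt(k), so that
    pairwise Euclidean distances are preserved up to a small
    multiplicative error with high probability (Johnson-Lindenstrauss).
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    G = rng.standard_normal((d, k)) / np.sqrt(k)  # random projection
    return X @ G
```

Note that the target dimension \(k\) depends only on the number of points and the desired accuracy, not on the ambient dimension \(d\).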