The culmination of years of teaching experience, this book provides a modern introduction to the mathematical theory of interacting particle systems. Assuming a background in probability and measure theory, it has been designed to support a one-semester course at Master's or Ph.D. level. It also provides a useful reference for researchers, containing several results that have not appeared in print in this form before. An emphasis is placed on graphical representations, which are used to give a construction that is intuitively easier to grasp than the traditional generator approach. Also included is an extensive look at duality theory, along with discussions of mean-field methods, phase transitions and critical behaviour. The text is illustrated with the results of numerical simulations and features exercises in every chapter. The theory is demonstrated on a range of models, reflecting the modern state of the subject and highlighting the scope of possible applications.
Based on courses taught at the University of Cambridge, this text presents core contemporary statistical methods and theory in an accessible, self-contained and rigorous fashion, with a focus on finite-sample guarantees as opposed to asymptotic arguments. Many of the topics and results have not appeared in book form previously, and some constitute new research. The prerequisites are relatively light (primarily a good grasp of linear algebra and real analysis) and complete solutions to all 250+ exercises are available online. It is the perfect entry point to the subject for master's and graduate-level students in statistics, data science and machine learning, as well as related disciplines such as artificial intelligence, signal processing, information theory, electrical engineering and econometrics. Researchers in these fields will also find it an invaluable resource. This title is also available as Open Access on Cambridge Core.
This comprehensive yet accessible guide to enterprise risk management for financial institutions contains all the tools needed to build and maintain an ERM framework. It discusses the internal and external contexts within which risk management must be carried out, and it covers a range of qualitative and quantitative techniques that can be used to identify, model and measure risks. This third edition has been thoroughly revised and updated to reflect new regulations and legislation. It includes additional detail on machine learning, a new section on vine copulas, and significantly expanded information on sustainability. A range of new case studies include Theranos and FTX. Suitable as a course book or for self-study, this book forms part of the core reading for the Institute and Faculty of Actuaries' examination in enterprise risk management.
From social networks to biological systems, networks are a fundamental part of modern life. Network analysis is increasingly popular across the mathematical, physical, life and social sciences, offering insights into a range of phenomena, from developing new drugs based on intracellular interactions, to understanding the influence of social interactions on behaviour patterns. This book provides a toolkit for analyzing random networks, together with theoretical justification of the methods proposed. It combines methods from both probability and statistics, teaching how to build and analyze plausible models for random networks, and how to validate such models, to detect unusual features in the data, and to make predictions. Theoretical results are motivated by applications across a range of fields, and classical data sets are used for illustration throughout the book. This book offers a comprehensive introduction to the field for graduate students and researchers.
This chapter presents the matrix deviation inequality, a uniform deviation bound for random matrices over general sets. Applications include two-sided bounds for random matrices, refined estimates for random projections, covariance estimation in low dimensions, and an extension of the Johnson–Lindenstrauss lemma to infinite sets. We prove two geometric results: the M* bound, which shows how random slicing shrinks high-dimensional sets, and the escape theorem, which shows how slicing can completely miss them. These tools are applied to a fundamental data science task – learning structured high-dimensional linear models. We extend the matrix deviation inequality to arbitrary norms and use it to strengthen the Chevet inequality and derive the Dvoretzky–Milman theorem, which states that random low-dimensional projections of high-dimensional sets appear nearly round. Exercises cover matrix and process-level deviation bounds, high-dimensional estimation techniques such as the Lasso for sparse regression, the Garnaev–Gluskin theorem on random slicing of the cross-polytope, and general-norm extensions of the Johnson–Lindenstrauss lemma.
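A minimal numerical sketch of the Johnson–Lindenstrauss phenomenon discussed in this chapter: a scaled Gaussian projection from 1000 down to 400 dimensions preserves all pairwise distances among 50 points up to small distortion. All dimensions, point counts, and the random seed are illustrative assumptions, not taken from the text.

```python
import numpy as np
from itertools import combinations

# Johnson-Lindenstrauss sketch: project N points from dimension d to k << d
# with a scaled Gaussian matrix and check that pairwise distances survive.
rng = np.random.default_rng(3)
N, d, k = 50, 1000, 400
X = rng.standard_normal((N, d))                # N points in high dimension
P = rng.standard_normal((k, d)) / np.sqrt(k)   # scaled random projection
Y = X @ P.T                                    # projected points
ratios = [np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j])
          for i, j in combinations(range(N), 2)]
max_distortion = max(max(ratios), 1 / min(ratios))
assert max_distortion < 1.25                   # all distances preserved within 25%
```

The typical distortion scales like 1/sqrt(k), which is why k = 400 already suffices here.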
This chapter introduces techniques for bounding random processes. We develop Gaussian interpolation to derive powerful comparison inequalities for Gaussian processes, including the Slepian, Sudakov–Fernique, and Gordon inequalities. We use this to get sharp bounds on the operator norm of Gaussian random matrices. We also prove the Sudakov lower bound using covering numbers. We introduce the concept of Gaussian width, which connects probabilistic and geometric perspectives, and apply it to analyze the size of random projections of high-dimensional sets. Exercises cover symmetrization and contraction inequalities for random processes, the Gordon min–max inequality, sharp bounds for Gaussian matrices, the nuclear norm, effective dimension, random projections, and matrix sketching.
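The sharp bound obtained from Gaussian comparison inequalities, E||A|| <= sqrt(m) + sqrt(n) for an m x n matrix with i.i.d. standard normal entries, can be checked numerically. The dimensions and trial count below are illustrative assumptions.

```python
import numpy as np

# Empirical check of the sharp bound E||A|| <= sqrt(m) + sqrt(n)
# for the operator norm of an m x n Gaussian random matrix.
rng = np.random.default_rng(2)
m, n, trials = 100, 50, 20
norms = [np.linalg.norm(rng.standard_normal((m, n)), ord=2)  # top singular value
         for _ in range(trials)]
avg_norm = float(np.mean(norms))
bound = float(np.sqrt(m) + np.sqrt(n))
assert avg_norm <= bound   # the average sits just below sqrt(m) + sqrt(n)
```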
This chapter introduces sub-Gaussian and sub-exponential distributions and develops basic concentration inequalities. We prove the Hoeffding, Chernoff, Bernstein, and Khintchine inequalities. Applications include robust mean estimation and analyzing degrees in random graphs. The exercises explore the Mills ratio, small ball probabilities, Le Cam’s two-point method, the expander mixing lemma for random graphs, stochastic dominance, Orlicz norms, and the Bennett inequality.
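Hoeffding's inequality for a Rademacher sum, P(|sum eps_i| >= t) <= 2 exp(-t^2 / 2n), is easy to verify by simulation. The sample sizes and threshold below are illustrative choices.

```python
import numpy as np

# Empirical check of Hoeffding's inequality for a sum of n
# independent Rademacher (+/-1) signs.
rng = np.random.default_rng(1)
n, trials, t = 200, 50000, 30.0
signs = 2 * rng.integers(0, 2, size=(trials, n)) - 1   # +/-1 entries
sums = signs.sum(axis=1)
empirical = float(np.mean(np.abs(sums) >= t))
hoeffding = float(2 * np.exp(-t**2 / (2 * n)))
assert empirical <= hoeffding   # the tail probability obeys the bound
```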
Most of the material in this chapter is from basic analysis and probability courses. Key concepts and results are recalled here, including convexity, norms and inner products, random variables and random vectors, union bound, conditioning, basic inequalities (Jensen, Minkowski, Cauchy–Schwarz, Hölder, Markov, and Chebyshev), the integrated tail formula, the law of large numbers, the central limit theorem, normal and Poisson distributions, and handy bounds on the factorial.
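As a quick illustration of the Chebyshev inequality recalled here, P(|X - mu| >= t sigma) <= 1/t^2, applied to the mean of fair coin flips. All sample sizes below are illustrative choices.

```python
import numpy as np

# Chebyshev's inequality for the sample mean of n fair coin flips:
# the empirical tail probability must sit below 1/t**2.
rng = np.random.default_rng(0)
n, trials, t = 100, 20000, 2.0
means = rng.integers(0, 2, size=(trials, n)).mean(axis=1)
mu, sigma = 0.5, 0.5 / np.sqrt(n)          # exact mean and std of the sample mean
empirical = float(np.mean(np.abs(means - mu) >= t * sigma))
chebyshev = 1.0 / t**2
assert empirical <= chebyshev              # the bound holds (loosely, as expected)
```

The bound is loose here (the CLT gives a much smaller tail), which previews why the sharper sub-Gaussian bounds of later chapters matter.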
This chapter presents some foundational methods for bounding random processes. We begin with the chaining technique and prove the Dudley inequality, which bounds a random process using covering numbers. Applications include Monte Carlo integration and uniform bounds for empirical processes. We then develop VC (Vapnik–Chervonenkis) theory, offering combinatorial insights into random processes and applying it to statistical learning. Building on chaining, we introduce generic chaining to obtain optimal two-sided bounds using Talagrand’s γ₂ functional. A key consequence is the Talagrand comparison inequality, a generalization of the Sudakov–Fernique inequality for sub-Gaussian processes. This is used to derive the Chevet inequality, a powerful tool for analyzing random bilinear forms over general sets. Exercises explore the Lipschitz law of large numbers in higher dimensions, one-bit quantization, and the small ball method for heavy-tailed random matrices.
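The Monte Carlo integration application mentioned above rests on the 1/sqrt(n) decay of the sampling error. A minimal sketch, with an illustrative integrand f(x) = x^2 on [0, 1] (true value 1/3):

```python
import numpy as np

# Monte Carlo integration: average f over n uniform samples;
# the error decays like 1/sqrt(n).
rng = np.random.default_rng(4)
n = 100_000
samples = rng.random(n)
estimate = float(np.mean(samples**2))   # approximates integral of x^2 = 1/3
error = abs(estimate - 1 / 3)
assert error < 5 / np.sqrt(n)           # well within a few standard deviations
```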
This chapter begins with Maurey’s empirical method – a probabilistic approach to constructing economical convex combinations. We apply it to bound covering numbers and the volumes of polytopes, revealing their counterintuitive behavior in high dimensions. The exercises refine these bounds and culminate in the Carl–Pajor theorem on the volume of polytopes.
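Maurey's empirical method can be sketched numerically: a point x in the convex hull of unit vectors v_1, ..., v_m is approximated by averaging k vertices sampled according to the convex-combination weights, with error decaying like 1/sqrt(k). The dimensions and weights below are illustrative assumptions.

```python
import numpy as np

# Maurey's empirical method: sample k vertices with the convex weights
# and average; the empirical mean is a k-sparse approximation of x.
rng = np.random.default_rng(5)
d, m, k = 50, 200, 400
V = rng.standard_normal((m, d))
V /= np.linalg.norm(V, axis=1, keepdims=True)   # vertices on the unit sphere
w = rng.random(m)
w /= w.sum()                                    # convex combination weights
x = w @ V                                       # target point in the hull
idx = rng.choice(m, size=k, p=w)                # sample vertices with weights w
x_hat = V[idx].mean(axis=0)                     # k-sparse empirical approximation
err = float(np.linalg.norm(x_hat - x))
assert err < 4 / np.sqrt(k)                     # error of order 1/sqrt(k)
```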
This chapter introduces several basic tools in high-dimensional probability: decoupling, concentration for quadratic forms (the Hanson–Wright inequality), symmetrization, and contraction. These techniques are illustrated through estimates of the operator norm of a random matrix. This is applied to matrix completion, where the goal is to recover a low-rank matrix from a random subset of its entries. Exercises explore variants of the Hanson–Wright inequality, mean estimation, concentration of the norm for anisotropic random vectors, distances to subspaces, graph cutting, the concept of type in normed spaces, non-Euclidean versions of the approximate Carathéodory theorem, and covariance estimation.
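The quadratic forms governed by the Hanson–Wright inequality concentrate around their mean, E x^T A x = tr(A) for standard Gaussian x. A minimal simulation of this (matrix scaling and trial counts are illustrative assumptions):

```python
import numpy as np

# Concentration of the quadratic form x^T A x around tr(A)
# for standard Gaussian vectors x -- the Hanson-Wright regime.
rng = np.random.default_rng(6)
n, trials = 300, 2000
A = rng.standard_normal((n, n)) / np.sqrt(n)
X = rng.standard_normal((trials, n))
forms = np.sum((X @ A) * X, axis=1)        # x_t^T A x_t for each sample x_t
gap = float(abs(forms.mean() - np.trace(A)))
assert gap < 5.0                           # empirical mean is close to tr(A)
```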
This chapter begins the study of random vectors in high dimensions, starting by showing their norm concentrates. We give a probabilistic proof of the Grothendieck inequality and apply it to semidefinite optimization. We explore a semidefinite relaxation for the maximum cut, presenting the Goemans–Williamson randomized approximation algorithm. We also give an alternative proof of the Grothendieck inequality with nearly the best known constant using the kernel trick, a method widely used in machine learning. The exercises explore invariant ensembles of random matrix theory, various versions of the Grothendieck inequality, semidefinite relaxations, and the notion of entropy.
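The key identity behind the random-hyperplane rounding in the Goemans–Williamson algorithm is that a uniformly random hyperplane through the origin separates two unit vectors with probability arccos(<v1, v2>)/pi. A simulation check, with an illustrative angle theta = 1 (in the actual algorithm the vectors come from the SDP solution):

```python
import numpy as np

# Goemans-Williamson rounding identity: a random hyperplane separates
# two unit vectors at angle theta with probability theta / pi.
rng = np.random.default_rng(7)
theta = 1.0
v1 = np.array([1.0, 0.0])
v2 = np.array([np.cos(theta), np.sin(theta)])
G = rng.standard_normal((100_000, 2))            # random hyperplane normals
separated = float(np.mean(np.sign(G @ v1) != np.sign(G @ v2)))
err_gw = abs(separated - theta / np.pi)
assert err_gw < 0.01                             # matches theta / pi closely
```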
This chapter develops a non-asymptotic theory of random matrices. It starts with a quick refresher on linear algebra, including the perturbation theory for matrices and featuring a short proof of the Davis–Kahan inequality. Three key concepts are introduced – nets, covering numbers, and packing numbers – and linked to volume and error-correcting codes. Bounds on the operator norm and singular values of random matrices are established. Three applications are given: community detection in networks, covariance estimation, and spectral clustering. Exercises explore the power method to compute the top singular value, the Schur bound on the operator norm, Hermitian dilation, Walsh matrices, the Wedin theorem on matrix perturbations, a semidefinite relaxation of the cut norm, the volume of high-dimensional balls, and Gaussian mixture models.
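The power method for the top singular value, mentioned in the exercises, can be sketched in a few lines: iterate v -> A^T(A v) and normalize. The 80 x 40 Gaussian matrix is an illustrative stand-in for the random matrices studied in this chapter.

```python
import numpy as np

# Power method for the top singular value: iterate with A^T A
# and normalize; converges geometrically when there is a spectral gap.
rng = np.random.default_rng(8)
A = rng.standard_normal((80, 40))
v = rng.standard_normal(40)
for _ in range(1000):
    v = A.T @ (A @ v)
    v /= np.linalg.norm(v)
sigma_top = float(np.linalg.norm(A @ v))              # estimated top singular value
exact = float(np.linalg.svd(A, compute_uv=False)[0])  # reference value
pm_err = abs(sigma_top - exact)
assert pm_err < 1e-6
```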
This chapter explores methods of concentration that do not rely on independence. We introduce the isoperimetric approach and discuss concentration inequalities across a variety of metric measure spaces – including the sphere, Gaussian space, discrete and continuous cubes, the symmetric group, Riemannian manifolds, and the Grassmannian. As an application, we derive the Johnson–Lindenstrauss lemma, a fundamental result in dimensionality reduction for high-dimensional data. We then develop matrix concentration inequalities, with an emphasis on the matrix Bernstein inequality, which extends the classical Bernstein inequality to random matrices. Applications include community detection in sparse networks and covariance estimation for heavy-tailed distributions. Exercises explore binary dimension reduction, matrix calculus, additional matrix concentration results, and matrix sketching.
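Concentration on the sphere can be illustrated directly: the first coordinate of a uniform random point on S^{n-1} is a 1-Lipschitz function that concentrates at scale 1/sqrt(n) around its mean, zero, as the isoperimetric approach predicts. The dimension and trial count below are illustrative choices.

```python
import numpy as np

# Concentration on the sphere: sample uniform points on S^{n-1}
# by normalizing Gaussians; each coordinate is O(1/sqrt(n)).
rng = np.random.default_rng(9)
n, trials = 1000, 2000
G = rng.standard_normal((trials, n))
X = G / np.linalg.norm(G, axis=1, keepdims=True)   # uniform points on the sphere
first = X[:, 0]                                    # a 1-Lipschitz coordinate function
max_coord = float(np.max(np.abs(first)))
assert max_coord < 6 / np.sqrt(n)                  # never strays far from zero
```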