This chapter introduces sub-Gaussian and sub-exponential distributions and develops basic concentration inequalities. We prove the Hoeffding, Chernoff, Bernstein, and Khintchine inequalities. Applications include robust mean estimation and analyzing degrees in random graphs. The exercises explore the Mills ratio, small ball probabilities, Le Cam’s two-point method, the expander mixing lemma for random graphs, stochastic dominance, Orlicz norms, and the Bennett inequality.
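As a quick numerical illustration of the Hoeffding inequality (a sketch under assumed parameters, not taken from the chapter), one can compare the empirical tail probability of a sample mean of bounded variables against the bound exp(−2nt²) for [0, 1]-valued variables:

```python
import numpy as np

rng = np.random.default_rng(0)
n, t, trials = 100, 0.1, 100_000

# X_i uniform on [0, 1]; deviations of the sample mean from its expectation 1/2
means = rng.random((trials, n)).mean(axis=1)
empirical = np.mean(means - 0.5 >= t)

# Hoeffding: P(mean - E[mean] >= t) <= exp(-2 n t^2) for [0, 1]-valued X_i
bound = np.exp(-2 * n * t**2)
print(empirical, "<=", bound)
```

The empirical tail (on the order of 10⁻⁴ here) sits well below the Hoeffding bound exp(−2) ≈ 0.135, reflecting that the bound holds for every bounded distribution, not just the uniform one.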
This paper proposes a method for reconstructing three-dimensional turbulent flows from sparse measurements without the need for ground-truth data during training. A weight-sharing network is developed to infer the full flow fields from velocity measurements sampled at three planes and boundary pressure at one additional plane, inspired by experimental configurations. The weight-sharing network shares identical parameters along homogeneous directions, which results in efficient data utilization and reduced computational memory requirements. First, we compare the weight-sharing network to the PC-DualConvNet, adapted from prior work, by reconstructing a 3D Kolmogorov flow from noise-free measurements with a snapshot-enforced loss. Both networks accurately recover time-averaged 3D flow fields and the correct energy spectrum up to wavenumber 10, and the weight-sharing network can infer flow structures distant from the measurement planes. Second, we carry out reconstruction from measurements corrupted with white noise (signal-to-noise ratio 15) using a mean-enforced loss. We show that, for the weight-sharing network, the validation sensor loss on unseen data decreases with the training sensor loss, unlike for the PC-DualConvNet; the training sensor loss is therefore a good estimate of the generalization error. The weight-sharing network offers good generalization, parameter efficiency, and hyperparameter robustness. The proposed method opens the possibility of three-dimensional flow reconstruction from experiments.
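The weight-sharing idea can be sketched in the abstract (a hypothetical toy illustration, not the paper’s actual architecture): one weight matrix is reused at every position along a homogeneous direction, instead of a separate matrix per position, dividing the parameter count by the number of positions.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical sketch: nz positions along a homogeneous axis, one shared map W
nz, n_in, n_out = 32, 64, 64
W = rng.normal(size=(n_in, n_out))   # shared parameters, reused at every z
x = rng.normal(size=(nz, n_in))      # one feature vector per z-position

y = x @ W                            # the same linear map applied at each z

shared_params = W.size               # parameters with sharing
unshared_params = nz * W.size        # nz times more without sharing
print(shared_params, unshared_params)
```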
A global increase in severe group A Streptococcus (GAS) infections has been reported following the COVID-19 pandemic, but data from Asia remain limited. We examined the epidemiology and clinical characteristics of severe paediatric GAS infections across 86 Japanese hospitals, focusing on patients under 18 years of age hospitalized between 1 January 2019 and 31 March 2024. Severe GAS infection was defined by the isolation of GAS from sterile sites, or from non-sterile sites under specific conditions, such as streptococcal toxic shock syndrome (STSS). A total of 83 cases were analysed. Cases increased from the summer of 2023, exceeding pre-pandemic levels. The median age was 4 (interquartile range: 1–8) years, with the highest number among 1-year-olds. No cases were reported in Hokkaido, northern Japan. Only 6% (5/83) of the cases had preceding GAS pharyngitis. Pneumonia was the most prevalent diagnosis (25%), with 76% of these cases being complicated by empyema, often necessitating intensive care or surgical intervention. Only 17% (14/83) of cases were reported as STSS in Japan’s national surveillance system. This study represents the first multicentre nationwide hospital-based investigation of severe paediatric GAS infections in Japan, identifying the recent increase in cases and highlighting the limitations of current STSS-based surveillance.
This chapter demonstrates that finite convex risk measures on Lp spaces for p ∈ [1, ∞) are inherently lower semicontinuous, ensuring the validity of their dual representations; for L∞ spaces, by contrast, the Fatou property is required.
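The dual representation in question typically takes the following standard form (notation may differ from the chapter’s):

```latex
\rho(X) \;=\; \sup_{Q \in \mathcal{Q}} \Bigl( \mathbb{E}_Q[-X] \;-\; \alpha(Q) \Bigr),
```

where $\alpha$ is a penalty function and $\mathcal{Q}$ ranges over probability measures whose densities lie in the dual space $L^q$, $1/p + 1/q = 1$; lower semicontinuity of $\rho$ is exactly what makes this supremum attain $\rho$ itself.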
Most of the material in this chapter is from basic analysis and probability courses. Key concepts and results are recalled here, including convexity, norms and inner products, random variables and random vectors, union bound, conditioning, basic inequalities (Jensen, Minkowski, Cauchy–Schwarz, Hölder, Markov, and Chebyshev), the integrated tail formula, the law of large numbers, the central limit theorem, normal and Poisson distributions, and handy bounds on the factorial.
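As a small numerical sanity check of the Chebyshev inequality (a sketch, not part of the chapter), one can compare empirical Gaussian tails with the bound 1/k²:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=1_000_000)  # mean 0, standard deviation 1

# Chebyshev: P(|X - mu| >= k*sigma) <= 1/k^2, for any distribution
tail_bounds = {k: (np.mean(np.abs(x) >= k), 1 / k**2) for k in (2, 3, 4)}
for k, (empirical, bound) in tail_bounds.items():
    print(k, empirical, "<=", bound)
```

For the normal distribution the empirical tails are far smaller than 1/k², which is precisely the gap that the sharper sub-Gaussian bounds of later chapters close.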
This chapter presents some foundational methods for bounding random processes. We begin with the chaining technique and prove the Dudley inequality, which bounds a random process using covering numbers. Applications include Monte Carlo integration and uniform bounds for empirical processes. We then develop VC (Vapnik–Chervonenkis) theory, offering combinatorial insights into random processes and applying it to statistical learning. Building on chaining, we introduce generic chaining to obtain optimal two-sided bounds using Talagrand’s γ2 functional. A key consequence is the Talagrand comparison inequality, a generalization of the Sudakov–Fernique inequality for sub-Gaussian processes. This is used to derive the Chevet inequality, a powerful tool for analyzing random bilinear forms over general sets. Exercises explore the Lipschitz law of large numbers in higher dimensions, one-bit quantization, and the small ball method for heavy-tailed random matrices.
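The Monte Carlo integration application can be sketched numerically (an assumed toy integrand, not the chapter’s example): the error of the estimate decays like 1/√n regardless of dimension.

```python
import numpy as np

rng = np.random.default_rng(2)

# Monte Carlo integration: estimate I = ∫_0^1 x^2 dx = 1/3 from n random samples
n = 100_000
samples = rng.random(n) ** 2
estimate = samples.mean()
stderr = samples.std(ddof=1) / np.sqrt(n)  # error decays like 1/sqrt(n)
print(estimate, "+/-", stderr)
```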
This chapter begins with Maurey’s empirical method – a probabilistic approach to constructing economical convex combinations. We apply it to bound covering numbers and the volumes of polytopes, revealing their counterintuitive behavior in high dimensions. The exercises refine these bounds and culminate in the Carl–Pajor theorem on the volume of polytopes.
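Maurey’s empirical method can be illustrated numerically (a hypothetical setup with assumed dimensions, not the chapter’s construction): a point x = Σ λᵢvᵢ in a convex hull is approximated by averaging k vertices sampled i.i.d. with probabilities λᵢ, giving an economical convex combination with error decaying like 1/√k.

```python
import numpy as np

rng = np.random.default_rng(3)

# Vertices v_1..v_m in R^d and convex weights lambda summing to 1
d, m, k = 50, 200, 100
V = rng.normal(size=(m, d))
lam = rng.random(m)
lam /= lam.sum()
x = lam @ V  # a point in the convex hull of the rows of V

# Empirical method: sample k vertices i.i.d. with probabilities lam and average
idx = rng.choice(m, size=k, p=lam)
x_hat = V[idx].mean(axis=0)  # a convex combination of only k vertices

err = np.linalg.norm(x_hat - x)  # expected error ~ sqrt(E||v - x||^2 / k)
print(err)
```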
This chapter introduces several basic tools in high-dimensional probability: decoupling, concentration for quadratic forms (the Hanson–Wright inequality), symmetrization, and contraction. These techniques are illustrated through estimates of the operator norm of a random matrix. This is applied to matrix completion, where the goal is to recover a low-rank matrix from a random subset of its entries. Exercises explore variants of the Hanson–Wright inequality, mean estimation, concentration of the norm for anisotropic random vectors, distances to subspaces, graph cutting, the concept of type in normed spaces, non-Euclidean versions of the approximate Carathéodory theorem, and covariance estimation.
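The operator-norm estimate can be checked numerically (a sketch, not from the chapter): for an n × n matrix with i.i.d. standard Gaussian entries, the operator norm concentrates near 2√n.

```python
import numpy as np

rng = np.random.default_rng(7)

# Operator norm (largest singular value) of an n x n Gaussian random matrix
n = 500
A = rng.normal(size=(n, n))
op_norm = np.linalg.norm(A, 2)  # matrix 2-norm = largest singular value
print(op_norm, "vs", 2 * np.sqrt(n))
```

For n = 500 the observed value typically lands within a few percent of 2√n ≈ 44.7, consistent with the concentration results the chapter’s techniques establish.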
This chapter introduces financial risk as a random variable representing uncertain future gains or losses. A risk measure quantifies this uncertainty by mapping random variables to real numbers. The prominent example of Value-at-Risk (V@R) is discussed.
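V@R can be computed from simulated outcomes in a few lines (a sketch with an assumed sign convention — conventions differ across texts — and hypothetical simulated data):

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated profit-and-loss outcomes (negative values are losses)
pnl = rng.normal(loc=0.0, scale=1.0, size=1_000_000)

def value_at_risk(x, alpha=0.05):
    """V@R_alpha: the loss level exceeded only with probability alpha
    (here: minus the alpha-quantile of profit and loss)."""
    return -np.quantile(x, alpha)

var95 = value_at_risk(pnl, 0.05)
print(var95)  # ≈ 1.645 for a standard normal P&L
```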
The Average-Value-at-Risk (AV@R) has emerged as a superior, coherent risk measure that accounts for the magnitude of potential losses beyond a given quantile and consistently favors diversification.
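The difference between the two measures is visible in a short sketch (assumed sign convention and hypothetical simulated data): AV@R averages the whole tail beyond the quantile, so it always dominates V@R at the same level.

```python
import numpy as np

rng = np.random.default_rng(4)
pnl = rng.normal(size=1_000_000)  # simulated profit-and-loss outcomes

def value_at_risk(x, alpha=0.05):
    return -np.quantile(x, alpha)

def average_value_at_risk(x, alpha=0.05):
    """AV@R_alpha (expected shortfall): average loss over the worst
    alpha-fraction of outcomes; coherent, unlike V@R."""
    worst = np.sort(x)[: max(1, int(alpha * len(x)))]
    return -worst.mean()

var95 = value_at_risk(pnl)
avar95 = average_value_at_risk(pnl)
print(var95, avar95)  # AV@R exceeds V@R: it sees the magnitude of tail losses
```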
This chapter begins the study of random vectors in high dimensions, starting by showing that their norm concentrates. We give a probabilistic proof of the Grothendieck inequality and apply it to semidefinite optimization. We explore a semidefinite relaxation for the maximum cut, presenting the Goemans–Williamson randomized approximation algorithm. We also give an alternative proof of the Grothendieck inequality with nearly the best known constant using the kernel trick, a method widely used in machine learning. The exercises explore invariant ensembles of random matrix theory, various versions of the Grothendieck inequality, semidefinite relaxations, and the notion of entropy.
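The heart of the Goemans–Williamson analysis is the hyperplane-rounding lemma: for unit vectors u, v and a standard Gaussian vector g, P(sign⟨g, u⟩ ≠ sign⟨g, v⟩) = arccos⟨u, v⟩ / π. A small numerical sketch (assumed test vectors, not the chapter’s code):

```python
import numpy as np

rng = np.random.default_rng(5)

# Two unit vectors at angle 2*pi/3; the lemma predicts separation prob. 2/3
d = 3
u = np.array([1.0, 0.0, 0.0])
theta = 2 * np.pi / 3
v = np.array([np.cos(theta), np.sin(theta), 0.0])

# Round with a random Gaussian hyperplane and count sign disagreements
g = rng.normal(size=(1_000_000, d))
separated = np.mean(np.sign(g @ u) != np.sign(g @ v))
predicted = np.arccos(u @ v) / np.pi
print(separated, "vs", predicted)
```

Comparing arccos⟨u, v⟩/π to the cut contribution (1 − ⟨u, v⟩)/2 over all angles is exactly what yields the 0.878 approximation guarantee.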