This comprehensive yet accessible guide to enterprise risk management for financial institutions contains all the tools needed to build and maintain an ERM framework. It discusses the internal and external contexts within which risk management must be carried out, and it covers a range of qualitative and quantitative techniques that can be used to identify, model and measure risks. This third edition has been thoroughly revised and updated to reflect new regulations and legislation. It includes additional detail on machine learning, a new section on vine copulas, and significantly expanded information on sustainability. A range of new case studies include Theranos and FTX. Suitable as a course book or for self-study, this book forms part of the core reading for the Institute and Faculty of Actuaries' examination in enterprise risk management.
From social networks to biological systems, networks are a fundamental part of modern life. Network analysis is increasingly popular across the mathematical, physical, life and social sciences, offering insights into a range of phenomena, from developing new drugs based on intracellular interactions to understanding the influence of social interactions on behaviour patterns. This book provides a toolkit for analyzing random networks, together with theoretical justification of the methods proposed. It combines methods from both probability and statistics, teaching how to build and analyze plausible models for random networks, how to validate such models, how to detect unusual features in the data, and how to make predictions. Theoretical results are motivated by applications across a range of fields, and classical data sets are used for illustration throughout the book. This book offers a comprehensive introduction to the field for graduate students and researchers.
'High-Dimensional Probability,' winner of the 2019 PROSE Award in Mathematics, offers an accessible and friendly introduction to key probabilistic methods for mathematical data scientists. Streamlined and updated, this second edition integrates theory, core tools, and modern applications. Concentration inequalities are central, including classical results like Hoeffding's and Chernoff's inequalities, and modern ones like the matrix Bernstein inequality. The book also develops methods based on stochastic processes – Slepian's, Sudakov's, and Dudley's inequalities, generic chaining, and VC-based bounds. Applications include covariance estimation, clustering, networks, semidefinite programming, coding, dimension reduction, matrix completion, and machine learning. New to this edition are 200 additional exercises, alongside extra hints to assist with self-study. Material on analysis, probability, and linear algebra has been reworked and expanded to help bridge the gap from a typical undergraduate background to a second course in probability.
This extensive revision of the 2007 book 'Random Graph Dynamics,' covering the current state of mathematical research in the field, is ideal for researchers and graduate students. It considers a small number of types of graphs, primarily the configuration model and inhomogeneous random graphs. However, it investigates a wide variety of dynamics. The author describes results for the convergence to equilibrium for random walks on random graphs as well as topics that have emerged as mature research areas since the publication of the first edition, such as epidemics, the contact process, voter models, and coalescing random walk. Chapter 8 discusses a challenging and largely uncharted new direction: systems in which the graph and the states of its vertices coevolve.
This chapter delves into the theory and application of reversible Markov chain Monte Carlo (MCMC) algorithms, focusing on their role in Bayesian inference. It begins with the Metropolis–Hastings algorithm and explores variations such as component-wise updates and the Metropolis-adjusted Langevin algorithm (MALA). The chapter also discusses Hamiltonian Monte Carlo (HMC) and the importance of scaling MCMC methods for high-dimensional models or large datasets. Key challenges in applying reversible MCMC to large-scale problems are addressed, with a focus on computational efficiency and algorithmic adjustments to improve scalability.
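The Metropolis–Hastings algorithm discussed above can be sketched in a few lines. The following is a minimal random-walk illustration targeting a standard normal; function names and tuning values are illustrative assumptions, not code from the book.

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_steps, step_size=1.0, rng=None):
    """Random-walk Metropolis-Hastings with a Gaussian proposal."""
    rng = np.random.default_rng(rng)
    x = x0
    samples = np.empty(n_steps)
    for i in range(n_steps):
        proposal = x + step_size * rng.standard_normal()
        # Symmetric proposal: the acceptance ratio reduces to the target ratio.
        log_alpha = log_target(proposal) - log_target(x)
        if np.log(rng.uniform()) < log_alpha:
            x = proposal
        samples[i] = x
    return samples

# Target: standard normal (log density up to an additive constant).
samples = metropolis_hastings(lambda x: -0.5 * x**2,
                              x0=0.0, n_steps=50_000, step_size=2.4)
```

The step size of roughly 2.4 reflects the classical optimal-scaling guidance for one-dimensional Gaussian targets; in practice it is tuned to achieve a moderate acceptance rate.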
This chapter provides a comprehensive overview of the foundational concepts essential for scalable Bayesian learning and Monte Carlo methods. It introduces Monte Carlo integration and its relevance to Bayesian statistics, focusing on techniques such as importance sampling and control variates. The chapter outlines key applications, including logistic regression, Bayesian matrix factorization, and Bayesian neural networks, which serve as illustrative examples throughout the book. It also offers a primer on Markov chains and stochastic differential equations, which are critical for understanding the advanced methods discussed in later chapters. Additionally, the chapter introduces kernel methods in preparation for their application in scalable Markov Chain Monte Carlo (MCMC) diagnostics.
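As a small illustration of the Monte Carlo techniques introduced here, the following sketch uses self-normalised importance sampling to estimate an expectation under one Gaussian using draws from a wider one. All names and values are illustrative assumptions, not taken from the text.

```python
import numpy as np

def importance_sampling_mean(h, log_target, log_proposal, sampler, n, rng=None):
    """Self-normalised importance sampling estimate of E_target[h(X)]."""
    rng = np.random.default_rng(rng)
    x = sampler(rng, n)
    log_w = log_target(x) - log_proposal(x)
    w = np.exp(log_w - log_w.max())   # subtract the max for numerical stability
    w /= w.sum()                      # self-normalise the weights
    return np.sum(w * h(x))

# Target N(0, 1), proposal N(0, 2^2); estimate E[X^2] = 1.
est = importance_sampling_mean(
    h=lambda x: x**2,
    log_target=lambda x: -0.5 * x**2,
    log_proposal=lambda x: -0.5 * (x / 2.0)**2,
    sampler=lambda rng, n: 2.0 * rng.standard_normal(n),
    n=100_000,
)
```

Self-normalisation means only the unnormalised log densities are needed, which is the situation that arises in Bayesian posteriors.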
This chapter focuses on continuous-time MCMC algorithms, particularly those based on piecewise deterministic Markov processes (PDMPs). It introduces PDMPs as a scalable alternative to traditional MCMC, with a detailed explanation of their simulation, invariant distribution, and limiting processes. Various continuous-time samplers, including the bouncy particle sampler and zig-zag process, are compared in terms of efficiency and performance. The chapter also addresses practical aspects of simulating PDMPs, including techniques for exploiting model sparsity and data subsampling. Extensions to these methods, such as handling discontinuous target distributions or distributions defined on spaces of different dimensions, are discussed.
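For intuition about PDMP-based samplers, here is a toy sketch of the zig-zag process for a one-dimensional standard normal target, where the event times of the inhomogeneous Poisson process can be simulated exactly by inverting the integrated rate. This is an illustrative assumption-laden example, not code from the book.

```python
import numpy as np

def zigzag_1d_gaussian(n_events, rng=None):
    """Zig-zag sampler for a standard normal target in one dimension.

    The velocity theta in {-1, +1} flips at events of an inhomogeneous
    Poisson process with rate max(0, theta * x); for a Gaussian target
    the event time solves the integrated-rate equation in closed form.
    """
    rng = np.random.default_rng(rng)
    x, theta = 0.0, 1.0
    times, positions = [0.0], [0.0]
    t = 0.0
    for _ in range(n_events):
        a = theta * x                     # current rate is max(0, a + s)
        e = rng.exponential()
        # Invert the integrated rate: Lambda(tau) = e.
        tau = -a + np.sqrt(max(a, 0.0) ** 2 + 2.0 * e)
        x += theta * tau                  # deterministic linear motion
        t += tau
        theta = -theta                    # flip the velocity at the event
        times.append(t)
        positions.append(x)
    return np.array(times), np.array(positions)
```

Expectations under the target are estimated by time averages along the piecewise-linear trajectory, not by averaging the event locations alone.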
The development of more sophisticated and, especially, approximate sampling algorithms aimed at improving scalability in one or more of the senses already discussed in this book raises important considerations about how a suitable algorithm should be selected for a given task, how its tuning parameters should be determined, and how its convergence should be assessed. This chapter presents recent solutions to these problems, whose starting point is to derive explicit upper bounds on an appropriate distance between the posterior and the approximation produced by MCMC. Further, we explain how these same tools can be adapted to provide powerful post-processing methods that can be used retrospectively to improve approximations produced by scalable MCMC.
This chapter explores the benefits of non-reversible MCMC algorithms in improving sampling efficiency. Revisiting Hamiltonian Monte Carlo (HMC), the chapter discusses the advantages of breaking detailed balance and introduces lifting schemes as a tool to enhance exploration of the parameter space. It reviews non-reversible HMC and alternative algorithms like Gustafson’s method. The chapter also covers techniques like delayed rejection and the discrete bouncy particle sampler, offering a comparison between reversible and non-reversible methods. Theoretical insights and practical implementations are provided to highlight the efficiency gains from non-reversibility.
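As a sketch of how a lifting scheme breaks detailed balance, the following implements the guided-walk idea associated with Gustafson's method: the state is augmented with a direction that persists across accepted moves and flips only on rejection. Names and tuning values are illustrative assumptions.

```python
import numpy as np

def gustafson_sampler(log_target, x0, n_steps, step_size=0.5, rng=None):
    """Non-reversible guided random walk (Gustafson-style lifting).

    Proposals always move in the current direction d; the direction is
    reversed only when a proposal is rejected, so the chain keeps
    momentum instead of diffusing back and forth.
    """
    rng = np.random.default_rng(rng)
    x, d = x0, 1.0
    samples = np.empty(n_steps)
    for i in range(n_steps):
        # Half-normal step magnitude, pushed in the persistent direction d.
        proposal = x + d * step_size * abs(rng.standard_normal())
        if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
            x = proposal          # accept and keep moving the same way
        else:
            d = -d                # reject: reverse direction (the "lift")
        samples[i] = x
    return samples

samples = gustafson_sampler(lambda x: -0.5 * x**2, x0=0.0, n_steps=100_000)
```

The chain satisfies a skew-detailed-balance condition on the lifted state (x, d) rather than detailed balance on x alone, which is what permits the persistent, less diffusive exploration described above.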
This chapter introduces stochastic gradient MCMC (SG-MCMC) algorithms, designed to scale Bayesian inference to large datasets. Beginning with the unadjusted Langevin algorithm (ULA), it extends to more sophisticated methods such as stochastic gradient Langevin dynamics (SGLD). The chapter emphasises controlling the stochasticity in gradient estimators and explores the role of control variates in reducing variance. Convergence properties of SG-MCMC methods are analysed, with experiments demonstrating their performance in logistic regression and Bayesian neural networks. It concludes by outlining a general framework for SG-MCMC and offering practical guidance for efficient, scalable Bayesian learning.
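A minimal sketch of SGLD on a toy Gaussian mean-estimation problem illustrates the basic recipe: replace the full-data gradient with an unbiased minibatch estimate and inject Gaussian noise scaled to the step size. All names, the toy model, and the tuning values are illustrative assumptions, not taken from the text.

```python
import numpy as np

def sgld(theta0, data, grad_log_prior, grad_log_lik,
         step, n_iters, batch_size, rng=None):
    """Stochastic gradient Langevin dynamics (unadjusted, minibatched)."""
    rng = np.random.default_rng(rng)
    n = len(data)
    theta = theta0
    samples = np.empty(n_iters)
    for i in range(n_iters):
        batch = data[rng.integers(0, n, size=batch_size)]
        # Unbiased estimate of the full-data gradient of the log posterior.
        grad = grad_log_prior(theta) \
             + (n / batch_size) * np.sum(grad_log_lik(theta, batch))
        # Langevin update: drift plus injected noise of variance `step`.
        theta = theta + 0.5 * step * grad + np.sqrt(step) * rng.standard_normal()
        samples[i] = theta
    return samples

# Toy model: y_i ~ N(theta, 1) with prior theta ~ N(0, 10).
data = np.random.default_rng(1).normal(2.0, 1.0, size=1000)
samples = sgld(
    theta0=0.0, data=data,
    grad_log_prior=lambda th: -th / 10.0,
    grad_log_lik=lambda th, y: y - th,
    step=1e-4, n_iters=20_000, batch_size=32,
)
```

Even in this toy example the minibatch gradient noise inflates the stationary variance relative to the true posterior, which is precisely the motivation for the variance-reduction and control-variate techniques the chapter emphasises.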
A graduate-level introduction to advanced topics in Markov chain Monte Carlo (MCMC), as applied broadly in the Bayesian computational context. The topics covered have emerged as recently as the last decade and include stochastic gradient MCMC, non-reversible MCMC, continuous time MCMC, and new techniques for convergence assessment. A particular focus is on cutting-edge methods that are scalable with respect to either the amount of data, or the data dimension, motivated by the emerging high-priority application areas in machine learning and AI. Examples are woven throughout the text to demonstrate how scalable Bayesian learning methods can be implemented. This text could form the basis for a course and is sure to be an invaluable resource for researchers in the field.
In this chapter we present two spatially dependent models: one based on defining a latent variable for each area, and the other on defining a latent variable for each pair of areas; we call the latter the latent edges model. We compare both models on a real data set. Extensions to spatio-temporal constructions are also considered.
In this chapter we define what a conjugate family is in a Bayesian analysis context and develop detailed examples of some cases; in particular, we review the beta and binomial case, the Pareto and inverse Pareto case, the gamma and gamma case and the gamma and Poisson case. We conclude by providing a list of conjugate models.
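The beta and binomial case reviewed here admits a one-line update: a Beta(a, b) prior combined with s successes and f failures yields a Beta(a + s, b + f) posterior. A minimal sketch with illustrative names:

```python
from fractions import Fraction

def beta_binomial_update(a, b, successes, failures):
    """Conjugate update: Beta(a, b) prior with binomial data gives
    a Beta(a + successes, b + failures) posterior."""
    return a + successes, b + failures

def beta_mean(a, b):
    """Mean of a Beta(a, b) distribution, a / (a + b), computed exactly."""
    return Fraction(a, a + b)

# Uniform Beta(1, 1) prior, observing 7 successes in 10 trials.
posterior = beta_binomial_update(1, 1, 7, 3)   # -> (8, 4)
```

With a uniform Beta(1, 1) prior and 7 successes in 10 trials, the posterior is Beta(8, 4), whose mean 8/12 = 2/3 sits between the prior mean 1/2 and the observed frequency 7/10.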