This extensive revision of the 2007 book 'Random Graph Dynamics,' covering the current state of mathematical research in the field, is ideal for researchers and graduate students. It considers a small number of graph types, primarily the configuration model and inhomogeneous random graphs, but investigates a wide variety of dynamics. The author describes results on convergence to equilibrium for random walks on random graphs, as well as topics that have emerged as mature research areas since the publication of the first edition, such as epidemics, the contact process, voter models, and coalescing random walks. Chapter 8 discusses a challenging and largely uncharted new direction: systems in which the graph and the states of its vertices coevolve.
Play of Chance and Purpose teaches probability, statistics, and stochastic processes through a pedagogical approach that develops intuition and fosters imagination. The book is meant for undergraduate and graduate students of the basic sciences, applied sciences, engineering, and social sciences as an introduction to fundamental as well as advanced topics. The text has evolved out of the author's experience of teaching courses on probability, statistics, and stochastic processes at both undergraduate and graduate levels in India and the United States. Readers will work through many examples drawn from real-life applications and pursue projects and case-study analyses as capstone exercises in each chapter. Many projects involve building visual simulations of complex stochastic processes, which deepens learners' comprehension of the subject and trains them to apply what they have learned to previously unseen problems in science and engineering.
This chapter delves into the theory and application of reversible Markov chain Monte Carlo (MCMC) algorithms, focusing on their role in Bayesian inference. It begins with the Metropolis–Hastings algorithm and explores variations such as component-wise updates and the Metropolis-adjusted Langevin algorithm (MALA). The chapter also discusses Hamiltonian Monte Carlo (HMC) and the importance of scaling MCMC methods to high-dimensional models and large datasets. Key challenges in applying reversible MCMC to large-scale problems are addressed, with a focus on computational efficiency and algorithmic adjustments that improve scalability.
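To make the starting point of the chapter concrete, here is a minimal random-walk Metropolis–Hastings sampler for a one-dimensional target. This is a sketch of the general idea under our own simplifying assumptions (Gaussian proposal, unnormalised log-density target), not code from the book:

```python
import math
import random

def metropolis_hastings(log_target, x0, n_samples, step=1.0, seed=0):
    """Random-walk Metropolis: propose x' = x + N(0, step^2) and
    accept with probability min(1, pi(x') / pi(x))."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        log_alpha = log_target(proposal) - log_target(x)
        if math.log(rng.random()) < log_alpha:
            x = proposal  # accept; otherwise keep the current state
        samples.append(x)
    return samples

# Example target: standard normal, known only up to a constant
log_pi = lambda x: -0.5 * x * x
draws = metropolis_hastings(log_pi, x0=0.0, n_samples=20000)
```

Because only the ratio of target densities appears, the normalising constant of the posterior is never needed — the property that makes Metropolis–Hastings so useful in Bayesian inference.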
This chapter provides a comprehensive overview of the foundational concepts essential for scalable Bayesian learning and Monte Carlo methods. It introduces Monte Carlo integration and its relevance to Bayesian statistics, focusing on techniques such as importance sampling and control variates. The chapter outlines key applications, including logistic regression, Bayesian matrix factorization, and Bayesian neural networks, which serve as illustrative examples throughout the book. It also offers a primer on Markov chains and stochastic differential equations, which are critical for understanding the advanced methods discussed in later chapters. Additionally, the chapter introduces kernel methods in preparation for their application in scalable Markov Chain Monte Carlo (MCMC) diagnostics.
This chapter focuses on continuous-time MCMC algorithms, particularly those based on piecewise deterministic Markov processes (PDMPs). It introduces PDMPs as a scalable alternative to traditional MCMC, with a detailed explanation of their simulation, invariant distribution, and limiting processes. Various continuous-time samplers, including the bouncy particle sampler and zig-zag process, are compared in terms of efficiency and performance. The chapter also addresses practical aspects of simulating PDMPs, including techniques for exploiting model sparsity and data subsampling. Extensions to these methods, such as handling discontinuous target distributions or distributions defined on spaces of different dimensions, are discussed.
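To give a feel for how a PDMP sampler works, here is a minimal one-dimensional zig-zag process targeting a standard normal. In one dimension the single velocity flips at every event, and for a Gaussian target the event times can be drawn by exact inversion of the integrated switching rate; this is our own toy sketch, not the chapter's implementation:

```python
import math
import random

def zigzag_1d(n_events, seed=0):
    """1-D zig-zag sampler for N(0,1): potential U(x) = x^2 / 2, so the
    switching rate is lambda(x, v) = max(0, v * x). Event times come
    from exact inversion of the integrated rate."""
    rng = random.Random(seed)
    x, v, t = 0.0, 1.0, 0.0
    events = []  # (event time, position at that event)
    for _ in range(n_events):
        a = v * x
        e = -math.log(rng.random())   # Exp(1) draw
        tau = -a + math.sqrt(max(a, 0.0) ** 2 + 2.0 * e)
        x += v * tau                  # deterministic linear drift
        t += tau
        v = -v                        # the single velocity always flips
        events.append((t, x))
    return events

def time_average_x(events):
    """Time average of x along the piecewise-linear trajectory
    (the trapezoid rule is exact here, since x(t) is linear)."""
    integral, x_prev, t_prev = 0.0, 0.0, 0.0
    for t, x in events:
        integral += (t - t_prev) * (x_prev + x) / 2.0
        x_prev, t_prev = x, t
    return integral / t_prev

traj = zigzag_1d(50000)
avg = time_average_x(traj)  # should be close to E[X] = 0
```

Note that, unlike discrete-time MCMC, estimates are time averages along the continuous trajectory rather than averages over a list of samples.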
The development of more sophisticated and, especially, approximate sampling algorithms aimed at improving scalability in one or more of the senses already discussed in this book raises important considerations about how a suitable algorithm should be selected for a given task, how its tuning parameters should be determined, and how its convergence should be assessed. This chapter presents recent solutions to these problems, whose starting point is to derive explicit upper bounds on an appropriate distance between the posterior and the approximation produced by MCMC. Further, we explain how these same tools can be adapted to provide powerful post-processing methods that can be used retrospectively to improve approximations produced by scalable MCMC.
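One well-known computable discrepancy of the kind described here is the kernel Stein discrepancy (KSD), which measures how far a set of samples is from a target known only through its score function. The following one-dimensional sketch, with an inverse-multiquadric base kernel and a standard normal target, is our own illustration of the idea rather than the chapter's code:

```python
import math
import random

def stein_kernel(x, y, score):
    """Langevin Stein kernel built from the IMQ base kernel
    k(x, y) = (1 + (x - y)^2)^(-1/2), in one dimension."""
    u = x - y
    r = 1.0 + u * u
    k = r ** -0.5
    dxk = -u * r ** -1.5                      # d/dx k
    dyk = u * r ** -1.5                       # d/dy k
    dxdyk = r ** -1.5 - 3.0 * u * u * r ** -2.5
    return (score(x) * score(y) * k
            + score(x) * dyk + score(y) * dxk + dxdyk)

def ksd(samples, score):
    """V-statistic estimate of the kernel Stein discrepancy."""
    n = len(samples)
    total = sum(stein_kernel(xi, xj, score)
                for xi in samples for xj in samples)
    return math.sqrt(max(total, 0.0)) / n

# Target: N(0,1), so the score is d/dx log pi(x) = -x.
score = lambda x: -x
rng = random.Random(1)
good = [rng.gauss(0.0, 1.0) for _ in range(300)]  # correct samples
bad = [rng.gauss(1.0, 1.0) for _ in range(300)]   # biased samples
# ksd(bad, score) exceeds ksd(good, score), flagging the biased set.
```

Only the score of the posterior is needed, not its normalising constant, which is what makes such discrepancies practical diagnostics for the approximate samplers discussed above.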