The culmination of years of teaching experience, this book provides a modern introduction to the mathematical theory of interacting particle systems. Assuming a background in probability and measure theory, it has been designed to support a one-semester course at master's or Ph.D. level. It also provides a useful reference for researchers, containing several results that have not appeared in print in this form before. An emphasis is placed on graphical representations, which are used to give a construction that is intuitively easier to grasp than the traditional generator approach. Also included is an extensive look at duality theory, along with discussions of mean-field methods, phase transitions and critical behaviour. The text is illustrated with the results of numerical simulations and features exercises in every chapter. The theory is demonstrated on a range of models, reflecting the modern state of the subject and highlighting the scope of possible applications.
This comprehensive yet accessible guide to enterprise risk management for financial institutions contains all the tools needed to build and maintain an ERM framework. It discusses the internal and external contexts within which risk management must be carried out, and it covers a range of qualitative and quantitative techniques that can be used to identify, model and measure risks. This third edition has been thoroughly revised and updated to reflect new regulations and legislation. It includes additional detail on machine learning, a new section on vine copulas, and significantly expanded information on sustainability. A range of new case studies include Theranos and FTX. Suitable as a course book or for self-study, this book forms part of the core reading for the Institute and Faculty of Actuaries' examination in enterprise risk management.
Based on courses taught at the University of Cambridge, this text presents core contemporary statistical methods and theory in an accessible, self-contained and rigorous fashion, with a focus on finite-sample guarantees as opposed to asymptotic arguments. Many of the topics and results have not appeared in book form previously, and some constitute new research. The prerequisites are relatively light (primarily a good grasp of linear algebra and real analysis) and complete solutions to all 250+ exercises are available online. It is the perfect entry point to the subject for master's and graduate-level students in statistics, data science and machine learning, as well as related disciplines such as artificial intelligence, signal processing, information theory, electrical engineering and econometrics. Researchers in these fields will also find it an invaluable resource. This title is also available as Open Access on Cambridge Core.
From social networks to biological systems, networks are a fundamental part of modern life. Network analysis is increasingly popular across the mathematical, physical, life and social sciences, offering insights into a range of phenomena, from developing new drugs based on intracellular interactions, to understanding the influence of social interactions on behaviour patterns. This book provides a toolkit for analyzing random networks, together with theoretical justification of the methods proposed. It combines methods from both probability and statistics, teaching how to build and analyze plausible models for random networks, and how to validate such models, to detect unusual features in the data, and to make predictions. Theoretical results are motivated by applications across a range of fields, and classical data sets are used for illustration throughout the book. This book offers a comprehensive introduction to the field for graduate students and researchers.
'High-Dimensional Probability,' winner of the 2019 PROSE Award in Mathematics, offers an accessible and friendly introduction to key probabilistic methods for mathematical data scientists. Streamlined and updated, this second edition integrates theory, core tools, and modern applications. Concentration inequalities are central, including classical results like Hoeffding's and Chernoff's inequalities, and modern ones like the matrix Bernstein inequality. The book also develops methods based on stochastic processes – Slepian's, Sudakov's, and Dudley's inequalities, generic chaining, and VC-based bounds. Applications include covariance estimation, clustering, networks, semidefinite programming, coding, dimension reduction, matrix completion, and machine learning. New to this edition are 200 additional exercises, alongside extra hints to assist with self-study. Material on analysis, probability, and linear algebra has been reworked and expanded to help bridge the gap from a typical undergraduate background to a second course in probability.
A graduate-level introduction to advanced topics in Markov chain Monte Carlo (MCMC), as applied broadly in the Bayesian computational context. The topics covered have emerged within the last decade and include stochastic gradient MCMC, non-reversible MCMC, continuous time MCMC, and new techniques for convergence assessment. A particular focus is on cutting-edge methods that are scalable with respect to either the amount of data or the data dimension, motivated by the emerging high-priority application areas in machine learning and AI. Examples are woven throughout the text to demonstrate how scalable Bayesian learning methods can be implemented. This text could form the basis for a course and is sure to be an invaluable resource for researchers in the field.
Bringing together years of research into one useful resource, this text empowers the reader to creatively construct their own dependence models. Intended for senior undergraduate and postgraduate students, it takes a step-by-step look at the construction of specific dependence models, including exchangeable, Markov, moving average and, in general, spatio-temporal models. All constructions maintain a desired property of pre-specifying the marginal distribution and keeping it invariant. They do not separate the dependence from the marginals, and the mechanisms used to induce dependence are so general that they can be applied to a very large class of parametric distributions. All the constructions are based on appropriate definitions of three building blocks: prior distribution, likelihood function and posterior distribution, in a Bayesian analysis context. All results are illustrated with examples and graphical representations. Applications with data and code are interspersed throughout the book, covering fields including insurance and epidemiology.
Brownian motion is an important topic in various applied fields where the analysis of random events is necessary. Introducing Brownian motion from a statistical viewpoint, this detailed text examines the distribution of quadratic plus linear or bilinear functionals of Brownian motion and demonstrates the utility of this approach for time series analysis. It also offers the first comprehensive guide on deriving the Fredholm determinant and the resolvent associated with such statistics. Presuming only a familiarity with standard statistical theory and the basics of stochastic processes, this book brings together a set of important statistical tools in one accessible resource for researchers and graduate students. Readers also benefit from online appendices, which provide probability density graphs and solutions to the chapter problems.
An emerging field in statistics, distributional regression facilitates the modelling of the complete conditional distribution, rather than just the mean. This book introduces generalized additive models for location, scale and shape (GAMLSS) – one of the most important classes of distributional regression. Taking a broad perspective, the authors consider penalized likelihood inference, Bayesian inference, and boosting as potential ways of estimating models and illustrate their usage in complex applications. Written by the international team who developed GAMLSS, the text's focus on practical questions and problems sets it apart. Case studies demonstrate how researchers in statistics and other data-rich disciplines can use the model in their work, exploring examples ranging from fetal ultrasounds to social media performance metrics. The R code and data sets for the case studies are available on the book's companion website, allowing for replication and further study.
Complex networks are key to describing the connected nature of the society that we live in. This book, the second of two volumes, describes the local structure of random graph models for real-world networks and determines when these models have a giant component and when they are small- and ultra-small worlds. This is the first book to cover the theory and implications of local convergence, a crucial technique in the analysis of sparse random graphs. Suitable as a resource for researchers and PhD-level courses, it uses examples of real-world networks, such as the Internet and citation networks, as motivation for the models that are discussed, and includes exercises at the end of each chapter to develop intuition. The book closes with an extensive discussion of related models and problems that demonstrate modern approaches to network theory, such as community structure and directed models.
Providing a graduate-level introduction to discrete probability and its applications, this book develops a toolkit of essential techniques for analysing stochastic processes on graphs, other random discrete structures, and algorithms. Topics covered include the first and second moment methods, concentration inequalities, coupling and stochastic domination, martingales and potential theory, spectral methods, and branching processes. Each chapter expands on a fundamental technique, outlining common uses and showing them in action on simple examples and more substantial classical results. The focus is predominantly on non-asymptotic methods and results. All chapters provide a detailed background review section, plus exercises and signposts to the wider literature. Readers are assumed to have undergraduate-level linear algebra and basic real analysis, while prior exposure to graduate-level probability is recommended. This much-needed broad overview of discrete probability could serve as a textbook or as a reference for researchers in mathematics, statistics, data science, computer science and engineering.
Actuaries must pass exams, but more than that: they must put knowledge into practice. This coherent book supports the Society of Actuaries' short-term actuarial mathematics syllabus while emphasizing the concepts and practical application of nonlife actuarial models. A class-tested textbook for undergraduate courses in actuarial science, it is also ideal for those approaching their professional exams. Key topics covered include loss modelling, risk and ruin theory, credibility theory and applications, and empirical implementation of loss models. Revised and updated to reflect curriculum changes, this second edition includes two brand new chapters on loss reserving and ratemaking. R replaces Excel as the computation tool used throughout – the featured R code is available on the book's webpage, as are lecture slides. Numerous examples and exercises are provided, with many questions adapted from past Society of Actuaries exams.
Now in its second edition, this accessible text presents a unified Bayesian treatment of state-of-the-art filtering, smoothing, and parameter estimation algorithms for non-linear state space models. The book focuses on discrete-time state space models and carefully introduces fundamental aspects related to optimal filtering and smoothing. In particular, it covers a range of efficient non-linear Gaussian filtering and smoothing algorithms, as well as Monte Carlo-based algorithms. This updated edition features new chapters on constructing state space models of practical systems, the discretization of continuous-time state space models, Gaussian filtering by enabling approximations, posterior linearization filtering, and the corresponding smoothers. Coverage of key topics is expanded, including extended Kalman filtering and smoothing, and parameter estimation. The book's practical, algorithmic approach assumes only modest mathematical prerequisites, suitable for graduate and advanced undergraduate students. Many examples are included, with Matlab and Python code available online, enabling readers to implement algorithms in their own projects.
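As a taste of the optimal filtering problem the book treats, the linear, scalar case can be sketched in a few lines: a random-walk state observed in Gaussian noise, filtered by alternating predict and update steps. The noise variances and the data below are illustrative assumptions, not taken from the book:

```python
def kalman_filter(ys, q=1.0, r=2.0, m0=0.0, p0=10.0):
    """Filtered means for x_k = x_{k-1} + w_k (var q), y_k = x_k + v_k (var r)."""
    m, p = m0, p0
    means = []
    for y in ys:
        # predict: propagate mean and variance through the random-walk dynamics
        p = p + q
        # update: combine the prediction with the new observation
        k = p / (p + r)          # Kalman gain
        m = m + k * (y - m)
        p = (1 - k) * p
        means.append(m)
    return means

ys = [1.2, 0.9, 1.5, 1.1]
print(kalman_filter(ys))
```

Each filtered mean is a variance-weighted compromise between the prediction and the latest observation; the non-linear Gaussian and Monte Carlo methods in the book generalize exactly this recursion.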
While the Poisson distribution is a classical statistical model for count data, the distributional model hinges on the constraining property that its mean equal its variance. This text instead introduces the Conway-Maxwell-Poisson distribution and motivates its use in developing flexible statistical methods based on its distributional form. This two-parameter model not only contains the Poisson distribution as a special case but, in its ability to account for data over- or under-dispersion, encompasses both the geometric and Bernoulli distributions. The resulting statistical methods serve in a multitude of ways, from an exploratory data analysis tool, to a flexible modeling impetus for varied statistical methods involving count data. The first comprehensive reference on the subject, this text contains numerous illustrative examples demonstrating R code and output. It is essential reading for academics in statistics and data science, as well as quantitative researchers and data analysts in economics, biostatistics and other applied disciplines.
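The two-parameter form described above can be written down directly: the Conway-Maxwell-Poisson pmf is P(X = k) = λ^k / (k!)^ν / Z(λ, ν), where Z(λ, ν) = Σ_j λ^j / (j!)^ν, and ν = 1 recovers the Poisson distribution. A minimal sketch (the truncation level for the normalizing constant is an illustrative choice adequate for moderate λ):

```python
import math

def cmp_pmf(k, lam, nu, max_terms=100):
    """Probability mass at k for the Conway-Maxwell-Poisson(lam, nu).

    The normalizing constant Z(lam, nu) = sum_j lam**j / (j!)**nu is
    truncated at max_terms, an illustrative choice that is ample for
    moderate lam.
    """
    z = sum(lam**j / math.factorial(j)**nu for j in range(max_terms))
    return lam**k / math.factorial(k)**nu / z

# nu = 1 recovers the Poisson pmf with mean lam
poisson = math.exp(-2.0) * 2.0**3 / math.factorial(3)
assert abs(cmp_pmf(3, 2.0, 1.0) - poisson) < 1e-9
```

Taking ν > 1 thins the tail (under-dispersion relative to Poisson) while ν < 1 fattens it (over-dispersion), which is the flexibility the text exploits.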
During the past half-century, exponential families have attained a position at the center of parametric statistical inference. Theoretical advances have been matched, and more than matched, in the world of applications, where logistic regression by itself has become the go-to methodology in medical statistics, computer-based prediction algorithms, and the social sciences. This book is based on a one-semester graduate course for first-year Ph.D. and advanced master's students. After presenting the basic structure of univariate and multivariate exponential families, their application to generalized linear models including logistic and Poisson regression is described in detail, emphasizing geometrical ideas, computational practice, and the analogy with ordinary linear regression. Connections are made with a variety of current statistical methodologies: missing data, survival analysis and proportional hazards, false discovery rates, bootstrapping, and empirical Bayes analysis. The book connects exponential family theory with its applications in a way that doesn't require advanced mathematical preparation.
As a result of the COVID-19 pandemic, medical statistics and public health data have become staples of newsfeeds worldwide, with infection rates, deaths, case fatality and the mysterious R figure featuring regularly. However, we don't all have the statistical background needed to translate this information into knowledge. In this lively account, Stephen Senn explains these statistical phenomena and demonstrates how statistics is essential to making rational decisions about medical care. The second edition has been thoroughly updated to cover developments of the last two decades and includes a new chapter on medical statistical challenges of COVID-19, along with additional material on infectious disease modelling and representation of women in clinical trials. Senn entertains with anecdotes, puzzles and paradoxes, while tackling big themes including: clinical trials and the development of medicines, life tables, vaccines and their risks or lack of them, smoking and lung cancer, and even the power of prayer.
This well-balanced introduction to enterprise risk management integrates quantitative and qualitative approaches and motivates key mathematical and statistical methods with abundant real-world cases – both successes and failures. Worked examples and end-of-chapter exercises support readers in consolidating what they learn. The mathematical level, which is suitable for graduate and senior undergraduate students in quantitative programs, is pitched to give readers a solid understanding of the concepts and principles involved, without diving too deeply into more complex theory. To reveal the connections between different topics, and their relevance to the real world, the presentation has a coherent narrative flow, from risk governance, through risk identification, risk modelling, and risk mitigation, capped off with holistic topics – regulation, behavioural biases, and crisis management – that influence the whole structure of ERM. The result is a text and reference that is ideal for graduate and senior undergraduate students, risk managers in industry, and anyone preparing for ERM actuarial exams.
This compact course is written for the mathematically literate reader who wants to learn to analyze data in a principled fashion. The language of mathematics enables clear exposition that can go quite deep, quite quickly, and naturally supports an axiomatic and inductive approach to data analysis. Starting with a good grounding in probability, the reader moves to statistical inference via topics of great practical importance – simulation and sampling, as well as experimental design and data collection – that are typically displaced from introductory accounts. The core of the book then covers both standard methods and such advanced topics as multiple testing, meta-analysis, and causal inference.
Heavy tails – extreme events or values more common than expected – emerge everywhere: the economy, natural events, and social and information networks are just a few examples. Yet after decades of progress, they are still treated as mysterious, surprising, and even controversial, primarily because the necessary mathematical models and statistical methods are not widely known. This book, for the first time, provides a rigorous introduction to heavy-tailed distributions accessible to anyone who knows elementary probability. It tackles and tames the zoo of terminology for models and properties, demystifying topics such as the generalized central limit theorem and regular variation. It tracks the natural emergence of heavy-tailed distributions from a wide variety of general processes, building intuition. And it reveals the controversy surrounding heavy tails to be the result of flawed statistics, then equips readers to identify and estimate with confidence. Over 100 exercises complete this engaging package.
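The power-law tail behavior underlying regular variation, P(X > x) ≈ x^(−α), is easy to check by simulation. This sketch draws from a standard Pareto distribution via the inverse transform; the tail index and threshold are illustrative choices, not taken from the book:

```python
import random

random.seed(1)

alpha = 1.5      # tail index; illustrative choice (infinite variance when alpha < 2)
n = 100_000

# Pareto(alpha) via inverse transform: if U ~ Uniform(0, 1], then
# X = U**(-1/alpha) satisfies P(X > x) = x**(-alpha) for x >= 1
xs = [(1.0 - random.random()) ** (-1.0 / alpha) for _ in range(n)]

x0 = 10.0
empirical_tail = sum(x > x0 for x in xs) / n
# The empirical tail frequency matches the theoretical power law closely
assert abs(empirical_tail - x0 ** -alpha) < 0.01
```

With α = 1.5 roughly 3% of draws exceed 10 – a far heavier tail than any Gaussian model would predict, which is the "more common than expected" phenomenon the blurb describes.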
Fay and Brittain present statistical hypothesis testing and compatible confidence intervals, focusing on application and proper interpretation. The emphasis is on equipping applied statisticians with enough tools – and advice on choosing among them – to find reasonable methods for almost any problem and enough theory to tackle new problems by modifying existing methods. After covering the basic mathematical theory and scientific principles, tests and confidence intervals are developed for specific types of data. Essential methods for applications are covered, such as general procedures for creating tests (e.g., likelihood ratio, bootstrap, permutation, testing from models), adjustments for multiple testing, clustering, stratification, causality, censoring, missing data, group sequential tests, and non-inferiority tests. New methods developed by the authors are included throughout, such as melded confidence intervals for comparing two samples and confidence intervals associated with Wilcoxon-Mann-Whitney tests and Kaplan-Meier estimates. Examples, exercises, and the R package asht support practical use.