Do the one-dimensional normal distribution and the one-dimensional central limit theorem allow for a generalization to dimension two or higher? The answer is yes. Just as the one-dimensional normal density is completely determined by its expected value and variance, the bivariate normal density is completely specified by the expected values and the variances of its marginal densities and by its correlation coefficient. The bivariate normal distribution appears in many applied probability problems. This probability distribution can be extended to the multivariate normal distribution in higher dimensions. The multivariate normal distribution arises as the approximate distribution of the sum of a large number of independent random vectors. To specify this distribution, all you have to do is compute a vector of expected values and a matrix of covariances. The multidimensional central limit theorem explains why so many natural phenomena have the multivariate normal distribution. A nice feature of the multivariate normal distribution is its mathematical tractability. The fact that any linear combination of multivariate normal random variables has a univariate normal distribution makes the multivariate normal distribution very convenient for financial portfolio analysis, among other applications.
The purpose of this chapter is to give a first introduction to the multivariate normal distribution and the multidimensional central limit theorem. Several practical applications will be discussed, including the drunkard's walk in higher dimensions and the chi-square test.
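To make the linear-combination property above concrete, here is a minimal NumPy sketch; the mean vector, covariance matrix, and weights below are hypothetical choices for illustration, not values taken from the chapter.

```python
import numpy as np

# Hypothetical parameters: a bivariate normal with correlated components.
mu = np.array([1.0, -2.0])           # vector of expected values
cov = np.array([[2.0, 0.6],          # matrix of covariances
                [0.6, 1.0]])
w = np.array([0.3, 0.7])             # weights of a linear combination

rng = np.random.default_rng(42)
samples = rng.multivariate_normal(mu, cov, size=100_000)

# Any linear combination of the components is again normally distributed,
# with mean w @ mu and variance w @ cov @ w.
lin = samples @ w
print("empirical mean:", lin.mean(), "  theoretical:", w @ mu)
print("empirical var: ", lin.var(), "  theoretical:", w @ cov @ w)
```

The two printed pairs should agree up to Monte Carlo error, illustrating that the weighted sum is univariate normal with mean wᵀμ and variance wᵀΣw.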
This eagerly awaited textbook covers everything the graduate student in probability wants to know about Brownian motion, as well as the latest research in the area. Starting with the construction of Brownian motion, the book then proceeds to sample path properties like continuity and nowhere differentiability. Notions of fractal dimension are introduced early and are used throughout the book to describe fine properties of Brownian paths. The relation of Brownian motion and random walk is explored from several viewpoints, including a development of the theory of Brownian local times from random walk embeddings. Stochastic integration is introduced as a tool and an accessible treatment of the potential theory of Brownian motion clears the path for an extensive treatment of intersections of Brownian paths. An investigation of exceptional points on the Brownian path and an appendix on SLE processes, by Oded Schramm and Wendelin Werner, lead directly to recent research themes.
Stein's method is a collection of probabilistic techniques that allow one to assess the distance between two probability distributions by means of differential operators. In 2007, the authors discovered that one can combine Stein's method with the powerful Malliavin calculus of variations, in order to deduce quantitative central limit theorems involving functionals of general Gaussian fields. This book provides an ideal introduction both to Stein's method and Malliavin calculus, from the standpoint of normal approximations on a Gaussian space. Many recent developments and applications are studied in detail, for instance: fourth moment theorems on the Wiener chaos, density estimates, Breuer–Major theorems for fractional processes, recursive cumulant computations, optimal rates and universality results for homogeneous sums. Largely self-contained, the book is perfect for self-study. It will appeal to researchers and graduate students in probability and statistics, especially those who wish to understand the connections between Stein's method and Malliavin calculus.
Rough path analysis provides a fresh perspective on Itô's important theory of stochastic differential equations. Key theorems of modern stochastic analysis (existence and limit theorems for stochastic flows, Freidlin–Wentzell theory, the Stroock–Varadhan support description) can be obtained with dramatic simplifications. Classical approximation results and their limitations (Wong–Zakai, McShane's counterexample) receive 'obvious' rough path explanations. Evidence is building that rough paths will play an important role in the future analysis of stochastic partial differential equations and the authors include some first results in this direction. They also emphasize interactions with other parts of mathematics, including Carathéodory geometry, Dirichlet forms and Malliavin calculus. Based on successful courses at the graduate level, this up-to-date introduction presents the theory of rough paths and its applications to stochastic analysis. Examples, explanations and exercises make the book accessible to graduate students and researchers from a variety of fields.
Probability theory is nowadays applied in a huge variety of fields including physics, engineering, biology, economics and the social sciences. This book is a modern, lively and rigorous account which has Doob's theory of martingales in discrete time as its main theme. It proves important results such as Kolmogorov's Strong Law of Large Numbers and the Three-Series Theorem by martingale techniques, and the Central Limit Theorem via the use of characteristic functions. A distinguishing feature is its determination to keep the probability flowing at a nice tempo. It achieves this by being selective rather than encyclopaedic, presenting only what is essential to understand the fundamentals; and it assumes certain key results from measure theory in the main text. These measure-theoretic results are proved in full in appendices, so that the book is completely self-contained. The book is written for students, not for researchers, and has evolved through several years of class testing. Exercises play a vital rôle. Interesting and challenging problems, some with hints, consolidate what has already been learnt, and provide motivation to discover more of the subject than can be covered in a single introduction.
This is a simple and concise introduction to probability theory. Self-contained and readily accessible, it is written in an informal tutorial style with concepts and techniques defined and developed as necessary. After an elementary discussion of chance, the central and crucial rules and ideas of probability including independence and conditioning are set out. Examples, demonstrations, and exercises are used throughout to explore the ways in which probability is motivated by, and applied to, real life problems in science, medicine, gaming and other subjects of interest. This book is suitable for students taking introductory courses in probability and will provide a solid foundation for more advanced courses in probability and statistics. It would also be a valuable reference to those needing a working knowledge of probability theory and will appeal to anyone interested in this endlessly fascinating and entertaining subject.
This classic introduction to probability theory for beginning graduate students covers laws of large numbers, central limit theorems, random walks, martingales, Markov chains, ergodic theorems, and Brownian motion. It is a comprehensive treatment concentrating on the results that are the most useful for applications. Its philosophy is that the best way to learn probability is to see it in action, so there are 200 examples and 450 problems. The fourth edition begins with a short chapter on measure theory to orient readers new to the subject.
Random walks are stochastic processes formed by successive summation of independent, identically distributed random variables and are one of the most studied topics in probability theory. This contemporary introduction evolved from courses taught at Cornell University and the University of Chicago by the first author, who is one of the most highly regarded researchers in the field of stochastic processes. This text meets the need for a modern reference to the detailed properties of an important class of random walks on the integer lattice. It is suitable for probabilists, mathematicians working in related fields, and for researchers in other disciplines who use random walks in modeling.
Percolation theory was initiated some fifty years ago as a mathematical framework for the study of random physical processes such as flow through a disordered porous medium. It has proved to be a remarkably rich theory, with applications beyond natural phenomena to topics such as network modelling. The aims of this book, first published in 2006, are twofold. First, to present classical results in a way that is accessible to non-specialists. Second, to describe results of Smirnov in conformal invariance, and outline the proof that the critical probability for random Voronoi percolation in the plane is 1/2. Throughout, the presentation is streamlined, with elegant and straightforward proofs requiring minimal background in probability and graph theory. Numerous examples illustrate the important concepts and enrich the arguments. All in all, it will be an essential purchase for mathematicians, physicists, electrical engineers and computer scientists working in this exciting area.
This second edition of Daniel W. Stroock's text is suitable for first-year graduate students with a good grasp of introductory, undergraduate probability theory and a sound grounding in analysis. It is intended to provide readers with an introduction to probability theory and the analytic ideas and tools on which the modern theory relies. It includes more than 750 exercises. Much of the content has undergone significant revision. In particular, the treatment of Lévy processes has been rewritten, and a detailed account of Gaussian measures on a Banach space is given.
This clear and lively introduction to probability theory concentrates on the results that are the most useful for applications, including combinatorial probability and Markov chains. Concise and focused, it is designed for a one-semester introductory course in probability for students who have some familiarity with basic calculus. Reflecting the author's philosophy that the best way to learn probability is to see it in action, there are more than 350 problems and 200 examples. The examples contain all the old standards such as the birthday problem and Monty Hall, but also include a number of applications not found in other books, from areas as broad ranging as genetics, sports, finance, and inventory management.
Statistics do not lie, nor is probability paradoxical. You just have to have the right intuition. In this lively look at both subjects, David Williams convinces mathematics students of the intrinsic interest of statistics and probability, and statistics students that the language of mathematics can bring real insight and clarity to their subject. He helps students build the intuition needed, in a presentation enriched with examples drawn from all manner of applications, e.g., genetics, filtering, the Black–Scholes option-pricing formula, quantum probability and computing, and classical and modern statistical models. Statistics chapters present both the Frequentist and Bayesian approaches, emphasising Confidence Intervals rather than Hypothesis Tests, and include Gibbs-sampling techniques for the practical implementation of Bayesian methods. A central chapter gives the theory of Linear Regression and ANOVA, and explains how MCMC methods allow greater flexibility in modelling. C or WinBUGS code is provided for computational examples and simulations. Many exercises are included; hints or solutions are often provided.
This comprehensive guide to stochastic processes gives a complete overview of the theory and addresses the most important applications. Pitched at a level accessible to beginning graduate students and researchers from applied disciplines, it is both a course book and a rich resource for individual readers. Subjects covered include Brownian motion, stochastic calculus, stochastic differential equations, Markov processes, weak convergence of processes and semigroup theory. Applications include the Black–Scholes formula for the pricing of derivatives in financial mathematics, the Kalman–Bucy filter used in the US space program and also theoretical applications to partial differential equations and analysis. Short, readable chapters aim for clarity rather than full generality. More than 350 exercises are included to help readers put their new-found knowledge to the test and to prepare them for tackling the research literature.
This introduction to some of the principal models in the theory of disordered systems leads the reader through the basics, to the very edge of contemporary research, with the minimum of technical fuss. Topics covered include random walk, percolation, self-avoiding walk, interacting particle systems, uniform spanning tree, random graphs, as well as the Ising, Potts, and random-cluster models for ferromagnetism, and the Lorentz model for motion in a random medium. Schramm–Löwner evolutions (SLE) arise in various contexts. The choice of topics is strongly motivated by modern applications and focuses on areas that merit further research. Special features include a simple account of Smirnov's proof of Cardy's formula for critical percolation, and a fairly full account of the theory of influence and sharp-thresholds. Accessible to a wide audience of mathematicians and physicists, this book can be used as a graduate course text. Each chapter ends with a range of exercises.
In this chapter, we show how Malliavin calculus and Stein's method may be combined into a powerful and flexible tool for studying probabilistic approximations. In particular, our aim is to use these two techniques to assess the distance between the laws of regular functionals of an isonormal Gaussian process and a one-dimensional normal distribution.
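For orientation, a representative estimate of this type (stated here informally; the chapter develops the precise assumptions and several refinements) bounds the total variation distance between a centred random variable F ∈ 𝔻^{1,2} and N ∼ 𝒩(0, 1) by

\[ d_{\mathrm{TV}}(F, N) \le 2\, E\bigl[\,\bigl| 1 - \langle DF, -DL^{-1}F \rangle_{\mathfrak{h}} \bigr|\,\bigr], \]

where D denotes the Malliavin derivative and L^{-1} the pseudo-inverse of the generator of the Ornstein–Uhlenbeck semigroup.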
The highlight of the chapter is arguably Section 5.2, where we deduce a complete characterization of Gaussian approximations inside a fixed Wiener chaos. As discussed below, the approach developed in this chapter yields results that are systematically stronger than the so-called ‘method of moments and cumulants’, which is the most popular tool used in the proof of central limit theorems for functionals of Gaussian fields.
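In rough form (a sketch only; Section 5.2 contains the precise statement and quantitative versions), the characterization says that, for a sequence F_n = I_q(f_n) in a fixed qth Wiener chaos with E[F_n²] → σ² > 0,

\[ F_n \xrightarrow{\; d \;} \mathcal{N}(0, \sigma^2) \quad \Longleftrightarrow \quad E[F_n^4] \to 3\sigma^4, \]

so that, inside a fixed chaos, checking a single fourth-moment condition replaces the control of all moments required by the method of moments and cumulants.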
Note that, in view of the chaos representation (2.7.8), any general result involving random variables in a fixed chaos is a key tool for studying probabilistic approximations of more general functionals of Gaussian fields. This last point is indeed one of the staples of the entire book, and will be abundantly illustrated in Section 5.3 as well as in Chapter 7.
Throughout the following, we fix an isonormal Gaussian process X = {X(h): h ∈ 𝔥}, defined on a suitable probability space (Ω, ℱ, P) such that ℱ = σ(X). We will also adopt the language and notation of Malliavin calculus introduced in Chapter 2.