In a first course on probability one typically works with a sequence of random variables X1, X2, … For stochastic processes, instead of indexing the random variables by the positive integers, we index them by t ∈ [0, ∞) and we think of Xt as being the value at time t. The random variable could be the location of a particle on the real line, the strength of a signal, the price of a stock, and many other possibilities as well.
We will also work with increasing families of σ-fields {ℱt}, known as filtrations. The σ-field ℱt is supposed to represent what we know up to time t.
Processes and σ-fields
Let (Ω, ℱ, ℙ) be a probability space. A real-valued stochastic process (or simply a process) is a map X from [0, ∞) × Ω to the reals. We write Xt = Xt(ω) = X(t, ω). We will impose stronger measurability conditions shortly, but for now we require that the random variables Xt be measurable with respect to ℱ for each t ≥ 0.
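To make the two-variable notation X(t, ω) concrete, here is a minimal simulation sketch (not from the text; the grid size, seed, and the choice of a scaled random walk as the process are illustrative assumptions). Each row of the array plays the role of a sample point ω, each column a time point on a finite grid, so the entry in row ω and column k approximates Xtk(ω).

```python
import numpy as np

# Minimal sketch: a discretized stochastic process X(t, omega).
# Rows index sample points omega, columns index time points t on a grid.
# The process here is a scaled random walk, a standard discrete
# approximation of Brownian motion.

rng = np.random.default_rng(seed=0)

n_paths = 5          # number of sample points omega to simulate
n_steps = 1000       # number of time steps
T = 1.0              # time horizon
dt = T / n_steps

# independent mean-zero increments with variance dt
increments = rng.normal(loc=0.0, scale=np.sqrt(dt), size=(n_paths, n_steps))

# X[omega, k] approximates X_{t_k}(omega) with t_k = k * dt and X_0 = 0
X = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(increments, axis=1)], axis=1)

t_grid = np.linspace(0.0, T, n_steps + 1)
print(X.shape)       # (5, 1001): one row per omega, one column per time t_k
```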
A collection of σ-fields ℱt such that ℱt ⊂ ℱ for each t and ℱs ⊂ ℱt if s ≤ t is called a filtration. Define ℱt+ = ∩ε>0 ℱt+ε. A filtration is right continuous if ℱt+ = ℱt for all t ≥ 0. The σ-field ℱt+ is supposed to represent what one knows if one looks ahead an infinitesimal amount. Most of the filtrations we will come across will be right continuous, but see Exercise 1.1.
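A standard concrete example, stated here only for illustration, is the filtration generated by the process itself: knowing ℱt amounts to knowing the path of X up to time t.

```latex
% Natural filtration of a process X (a standard example, not from the text):
\[
  \mathcal{F}_t = \sigma(X_s : s \le t),
  \qquad
  \mathcal{F}_{t+} = \bigcap_{\varepsilon > 0} \mathcal{F}_{t+\varepsilon}
                   = \bigcap_{\varepsilon > 0} \sigma(X_s : s \le t + \varepsilon),
\]
% so F_{t+} records, in addition to the path up to time t, anything that
% can be seen by looking ahead an arbitrarily small amount of time.
```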
Why study stochastic processes? This branch of probability theory offers sophisticated theorems and proofs, such as the existence of Brownian motion, the Doob–Meyer decomposition, and the Kolmogorov continuity criterion. At the same time stochastic processes also have far-reaching applications: the explosive growth in options and derivatives in financial markets throughout the world derives from the Black–Scholes formula, while NASA relies on the Kalman–Bucy method to filter signals from satellites and probes sent into outer space.
A graduate student taking a year-long course in probability theory first learns about sequences of random variables and topics such as laws of large numbers, central limit theorems, and discrete time martingales. In the second half of the course, the student will then turn to stochastic processes, which is the subject of this text. Topics covered here are Brownian motion, stochastic integrals, stochastic differential equations, Markov processes, the Black–Scholes formula of financial mathematics, the Kalman–Bucy filter, as well as many more.
The 42 chapters of this book can be grouped into seven parts. The first part consists of Chapters 1–8, where some of the basic processes and ideas are introduced, including Brownian motion. The next group of chapters, Chapters 9–15, introduces the theory of stochastic calculus, including stochastic integrals and Itô's formula. Chapters 16–18 explore jump processes. This requires a study of the foundations of stochastic processes, which is also known as the general theory of processes. Next we take up Markov processes in Chapters 19–23. A formidable obstacle to the study of Markov processes is the notation, and I have attempted to make this as accessible as possible. Chapters 24–29 involve stochastic differential equations.
Suppose we have a sequence of probabilities on a metric space S and we want to define what it means for the sequence to converge weakly. Alternatively, we may have a sequence of random variables and want to say what it means for the random variables to converge weakly. We will apply the results we obtain here in later chapters to the case where S is a function space such as C[0, 1] and obtain theorems on the convergence of stochastic processes.
For now our state space is assumed to be an arbitrary metric space, although we will soon add additional assumptions on S. We use the Borel σ-field on S, which is the σ-field generated by the open sets in S. We write A°, Ā, and ∂A for the interior, closure, and boundary of A, respectively.
The portmanteau theorem
Clearly the definition of weak convergence of real-valued random variables in terms of distribution functions (see Section A.12) has no obvious analog. The appropriate generalization is the following; cf. Proposition A.41.
Definition 30.1 A sequence of probabilities {ℙn} on a metric space S furnished with the Borel σ-field is said to converge weakly to ℙ if ∫ f dℙn → ∫ f dℙ for every bounded and continuous function f on S. A sequence of random variables {Xn} taking values in S converges weakly to a random variable X taking values in S if Ef(Xn) → Ef(X) whenever f is a bounded and continuous function.
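A standard illustration of this definition (not taken from the text): point masses at 1/n converge weakly to the point mass at 0, even though the measures of individual Borel sets need not converge.

```latex
% Take S = R, P_n = delta_{1/n}, P = delta_0.  For every bounded continuous f,
\[
  \int f \, d\mathbb{P}_n = f(1/n) \longrightarrow f(0) = \int f \, d\mathbb{P},
\]
% so P_n converges weakly to P.  Nevertheless, for the Borel set A = {0},
\[
  \mathbb{P}_n(\{0\}) = 0 \not\longrightarrow 1 = \mathbb{P}(\{0\}),
\]
% which is why the definition uses bounded continuous test functions rather
% than convergence on all Borel sets; note that P(\partial A) = P(\{0\}) = 1 > 0
% here, which ties in with the portmanteau theorem.
```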
The monotone class theorem is a result from measure theory used in the proof of the Fubini theorem.
Definition B.1 ℳ is a monotone class if ℳ is a collection of subsets of X such that
(1) if A1 ⊂ A2 ⊂ …, A = ∪iAi, and each Ai ∈ ℳ, then A ∈ ℳ;
(2) if A1 ⊃ A2 ⊃ …, A = ∩iAi, and each Ai ∈ ℳ, then A ∈ ℳ.
Recall that an algebra of sets is a collection 𝒜 of sets such that if A1, …, An ∈ 𝒜, then A1 ∪ ··· ∪ An and A1 ∩ ··· ∩ An are also in 𝒜, and if A ∈ 𝒜, then Ac ∈ 𝒜.
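A concrete example may help fix ideas (it is standard, not taken from the text): on (0, 1], finite unions of half-open intervals form an algebra that is neither a σ-field nor a monotone class, which is exactly the gap Theorem B.2 below bridges.

```latex
% On X = (0,1], let A_0 consist of the empty set together with finite unions
% of half-open intervals:
\[
  \mathcal{A}_0 = \Bigl\{ \textstyle\bigcup_{k=1}^{m} (a_k, b_k] :
      m \ge 0,\ 0 \le a_k < b_k \le 1 \Bigr\}.
\]
% Finite unions, finite intersections, and complements of such sets are again
% of this form, so A_0 is an algebra.  But for any x in (0,1],
\[
  \{x\} = \bigcap_{n > 1/x} \bigl(x - \tfrac{1}{n},\, x\bigr] \notin \mathcal{A}_0,
\]
% so A_0 is not closed under decreasing limits: it is neither a sigma-field
% nor a monotone class.
```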
The intersection of monotone classes is a monotone class, and the intersection of all monotone classes containing a given collection of sets is the smallest monotone class containing that collection.
Theorem B.2 Suppose 𝒜0 is an algebra of sets, 𝒜 is the smallest σ-field containing 𝒜0, and ℳ is the smallest monotone class containing 𝒜0. Then ℳ = 𝒜.
Proof A σ-field is clearly a monotone class, so ℳ ⊂ 𝒜. We must show 𝒜 ⊂ ℳ.
Let N1 = {A ∈ ℳ : Ac ∈ ℳ}. Note N1 is contained in ℳ, contains 𝒜0, and is a monotone class. Since ℳ is the smallest monotone class containing 𝒜0, then N1 = ℳ, and therefore ℳ is closed under the operation of taking complements.
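For context, here is a sketch of how Theorem B.2 is typically applied (a standard argument, separate from the proof above): two probability measures that agree on an algebra agree on the σ-field it generates.

```latex
% Suppose mu and nu are probability measures with mu = nu on the algebra A_0,
% and let A be the sigma-field generated by A_0.  Put
\[
  \mathcal{M}' = \{ A \in \mathcal{A} : \mu(A) = \nu(A) \}.
\]
% Continuity of finite measures along increasing unions and decreasing
% intersections shows that M' is a monotone class, and it contains A_0 by
% hypothesis; hence M' contains the smallest monotone class containing A_0,
% which by Theorem B.2 is all of A.  Therefore mu = nu on A.
```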