Martingale theory is one of the most powerful tools of the modern probabilist. Its intuitive appeal and intrinsic simplicity combine with an impressive array of stability properties which enables us to construct and analyse many concrete examples within an abstract mathematical framework. This makes martingales particularly attractive to the student with a good background in pure mathematics wishing to find a convenient route into modern probability theory. The range of applications is enhanced by the construction of stochastic integrals and a martingale calculus.
This text has grown out of graduate lecture courses given at the University of Hull to students with a strong background in analysis but with little previous exposure to stochastic processes. It represents an attempt to make the ‘general theory of processes’ and its application to the construction of stochastic integrals accessible to such readers. As may be expected, the material is drawn largely from the work of Meyer and Dellacherie, but the influence of such authors as Elliott, Kussmaul, Neveu and Kallianpur will also be evident. I have not attempted to give credit for particular results: most of the material covered can now be described as standard, and I make no claims of originality. The Appendix by Chris Barnett and Ivan Wilde contains recent work on non-commutative integrals, some of which is presented here for the first time.
In general I have tried to follow the simplest, though not always the shortest, route to the principal results, often pausing for motivation through familiar concepts.
This paper first presents a review of the connections between various rate of convergence results for Markov chains (including normal Harris ergodicity, geometric ergodicity and sub-geometric rates for a variety of rate functions ψ), and finiteness of appropriate moments of hitting times on small sets. We then present a series of criteria, analogous to Foster's criterion for ergodicity, which imply the finiteness of these moments and hence the rate of convergence of the Markov chain.
These results are applied to random walk on [0, ∞), and we deduce for example that this chain converges at rate ψ(n) = n^α(log n)^β if the increment variable has finite moment of order n^(α+1)(log n)^β. Similar results hold for more general storage models.
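Read schematically (this is my own symbolic paraphrase of the abstract, not a quotation of the paper's theorem; Z denotes a generic increment of the walk, π the stationary distribution, and ‖·‖ the total-variation norm), the claim has the form

\[
\mathbb{E}\!\left[(Z^{+})^{\alpha+1}\bigl(\log^{+} Z^{+}\bigr)^{\beta}\right] < \infty
\;\Longrightarrow\;
n^{\alpha}(\log n)^{\beta}\,\bigl\|P^{n}(x,\cdot) - \pi\bigr\| \;\longrightarrow\; 0 .
\]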
Introduction
It is not always possible to pinpoint the beginnings of a major research area, and yet there can be little doubt that for the bulk of the vast quantity of mathematics known as queueing theory one can trace both the types of problems and the methods of solution to the single fundamental paper of David Kendall [8]. In that paper he introduced the idea of analysing queueing systems using embedded Markov chains, and the interplay between Markov chain theory and queueing theory has since been of vital importance in the development of both areas.
One of the most immediate, simplest and most elegant products of this interplay was the discovery by F.G. Foster (then a student of Kendall) of a criterion for positive recurrence of Markov chains, ensuring that these chains would have stationary distributions.
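In a standard modern formulation (my own notation; the paper's precise statement may differ), Foster's criterion reads: an irreducible chain with transition probabilities P(x, y) on a countable state space is positive recurrent whenever there exist a function V ≥ 0, a finite set C and ε > 0 such that

\[
\sum_{y} P(x,y)\,V(y) \;\le\; V(x) - \varepsilon \quad (x \notin C),
\qquad\qquad
\sum_{y} P(x,y)\,V(y) \;<\; \infty \quad (x \in C).
\]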
We study in this paper the asymptotics as k → ∞ of the motion of a system of k particles located at sites labelled by the integers. This section gives an informal description of the particle system and our results, and the original motivation for the study.
The particles will be referred to as balls, and the sites as boxes. The motion may be described as follows. Initially the k balls are distributed amongst boxes in such a way that the set of occupied boxes is connected. (A box may contain many balls, but there is no empty box between two occupied boxes.) At each move, a ball is taken from the left-most occupied box and placed one box to the right of a ball chosen uniformly at random from among the k balls, the successive choices being mutually independent. It is clear that the set of occupied boxes remains connected, and that the collection of balls drifts off to infinity. It is easy to see that for each k the k-ball motion drifts off to infinity at an almost certain average speed s_k, defined formally by (2.3) below. Our main result is that s_k ~ e/k as k → ∞. To be more precise:
THEOREM 1.1. As k increases to infinity, ks_k increases to e.
This result was conjectured by Tovey (private communication), and informal arguments supporting the conjecture have been given by Keller (1980) and Weiner (1980). Our method of proof (Sections 2–4) is to use coupling to compare the k-ball process with a certain, more easily analysed, pure growth process (defined at (3.3)).
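As an illustration only (not part of the paper), the following is a minimal simulation sketch of the ball-and-box rule described above. The function name estimate_speed, the choice of starting all balls in one box, and the estimate of s_k via the displacement of the left-most occupied box per move are my own; the paper's formal definition (2.3) of s_k may differ.

```python
import random

def estimate_speed(k, n_moves=100_000, seed=0):
    """Monte Carlo estimate of the average speed s_k of the k-ball motion.

    The state is the list of the k balls' box indices.  All balls start in
    box 0, so the occupied set is trivially connected.  The speed is
    estimated as the displacement of the left-most occupied box per move.
    """
    rng = random.Random(seed)
    balls = [0] * k
    start = min(balls)
    for _ in range(n_moves):
        leftmost = min(balls)
        mover = balls.index(leftmost)      # take a ball from the left-most occupied box
        target = rng.randrange(k)          # a ball chosen uniformly from the k balls
                                           # (assumed here to possibly be the moved ball itself)
        balls[mover] = balls[target] + 1   # place it one box to the right of the chosen ball
    return (min(balls) - start) / n_moves

if __name__ == "__main__":
    for k in (5, 10, 20):
        # Theorem 1.1 suggests k * s_k should increase towards e ≈ 2.718
        print(k, k * estimate_speed(k))
```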
In a discussion of doubly stochastic population processes in continuous time, attention is concentrated on transition matrices, or equivalent operators, which are linear in the variable parameters. Difficulties with the extreme case of ‘white noise’ variability for the parameters are recalled by reference to ‘stochasticized’ deterministic models, but discussed here also in relation to a general ‘switching’ model.
The use of the ‘backward’ equations for determining extinction probabilities is illustrated by deriving various formulae for the (infinite) birth-and-death process with ‘white-noise’ variability for the birth-and-death coefficients.
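For orientation only (this illustrates the standard constant-coefficient case, not the ‘white-noise’ model analysed in the paper), the backward, or first-step, argument for a linear birth-and-death process with per-capita birth rate λ and death rate μ gives, for the extinction probability q starting from a single individual,

\[
q = \frac{\mu}{\lambda+\mu} + \frac{\lambda}{\lambda+\mu}\,q^{2},
\qquad\text{whence}\qquad
q = \min\!\left(1,\ \frac{\mu}{\lambda}\right),
\]

and extinction starting from i individuals then has probability q^i.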
INTRODUCTION
It seems very apt to discuss doubly stochastic (d.s.) population processes in this volume, as it recalls the mutual interest of David Kendall and myself in population processes many years ago (e.g. Bartlett, 1947, 1949; Kendall, 1948, 1949; Bartlett and Kendall, 1951). In more recent years d.s. population processes have been considered in discrete time in connection with extinction probabilities for random environments (see, for example, references in Bartlett, 1978, 2.31) and with genetic problems (e.g. Gillespie, 1974); but the concept of d.s. processes in continuous time has also become of obvious interest to mathematicians as well as to biologists (e.g. Kaplan, 1973; Keiding, 1975). I might note that my own interest in d.s. processes in continuous time first arose during a visit to Australia in early 1980, when I began to notice in the literature the use of such processes as approximations to genetic or other biological problems for discrete generations.
In my early days as a research student I was both impressed and influenced by David Kendall's enthusiasm for trying out ideas using computer-generated simulations. (His particular project at that time culminated in Kendall, 1974.) This can provide a bridge between what John Tukey calls “exploratory data analysis” and the more classical “confirmatory” aspects of statistics, model estimation and testing. Comparison of data with simulations is definitely “confirmatory” in that models are involved, yet it has much of the spirit of the “exploratory” phase, with human judgement replacing formal significance tests. Later I formalized this comparison to give Monte Carlo tests, discovered earlier by Barnard (1963) but apparently popularized by Ripley (1977).
Simulation has enabled progress to be made in the study of spatial patterns and processes which had previously seemed intractable. The starting point for all known simulation algorithms for spatial point patterns and random sets, such as those in Ripley (1981), is an algorithm to give independent uniformly distributed points in a unit square or cube. The purpose of this paper is to discuss the properties of these basic algorithms. Much of the material can be found scattered in the literature, but no one source gives a sufficiently complete picture.
CONGRUENTIAL RANDOM NUMBER GENERATORS
The usual way to produce approximately uniformly distributed random variables on (0, 1) in a computer is to sample integers x_i uniformly from {0, 1, …, M − 1} or {1, 2, …, M − 1} and set U_i = x_i/M. Here M is a large integer, usually of the form 2^β. The approximation made is comparable with that made to represent real numbers by a finite set.
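A minimal sketch of a congruential generator of this kind is given below. The constants a and c are the Numerical Recipes choices for M = 2^32; they are purely illustrative and are not taken from this paper.

```python
def lcg(seed, a=1664525, c=1013904223, M=2**32):
    """Linear congruential generator: x_{i+1} = (a * x_i + c) mod M.

    Yields U_i = x_i / M, approximately uniform on (0, 1).
    """
    x = seed
    while True:
        x = (a * x + c) % M
        yield x / M

# Example: pseudo-random points in the unit square, as used to simulate
# spatial point patterns.
gen = lcg(seed=12345)
points = [(next(gen), next(gen)) for _ in range(5)]
print(points)
```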