A model of a stochastic froth is introduced in which the rate of random coalescence of a pair of bubbles depends on an inverse power law of their sizes. The main question of interest is whether froths with a large number of bubbles can grow in a stable fashion; that is, whether under some time-varying change of scale the distributions of rescaled bubble sizes become approximately stationary. It is shown by way of a law of large numbers for the froths that the question can be re-interpreted in terms of a measure flow solving a nonlinear Boltzmann equation that represents an idealized deterministic froth. Froths turn out to be stable in the sense that there are scalings in which the rescaled measure flow is tight and, for a particular case, stable in the stronger sense that the rescaled flow converges to an equilibrium measure. Precise estimates are also given for the degree of tightness of the rescaled measure flows.
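As an illustration of the model, here is a minimal simulation sketch (not the paper's construction): it assumes the pair-coalescence rate of bubbles of sizes x and y is (xy)−α, that volumes add on coalescence, and that initial sizes are exponentially distributed; all of these are illustrative choices.

```python
import numpy as np

def simulate_froth(n_bubbles=60, alpha=1.0, n_coalescences=50, seed=0):
    """Marcus-Lushnikov-type simulation of a stochastic froth in which a pair of
    bubbles of sizes (x, y) coalesces at rate (x * y) ** (-alpha)."""
    rng = np.random.default_rng(seed)
    sizes = list(rng.exponential(1.0, n_bubbles))      # illustrative initial sizes
    t = 0.0
    for _ in range(n_coalescences):
        n = len(sizes)
        if n < 2:
            break
        pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
        rates = np.array([(sizes[i] * sizes[j]) ** (-alpha) for i, j in pairs])
        total = rates.sum()
        t += rng.exponential(1.0 / total)              # exponential waiting time to the next coalescence
        i, j = pairs[rng.choice(len(pairs), p=rates / total)]
        sizes = [s for k, s in enumerate(sizes) if k not in (i, j)] + [sizes[i] + sizes[j]]
    return t, np.array(sizes)

t, sizes = simulate_froth()
print(t, sizes.mean(), sizes.max())   # dividing the sizes by their mean gives one possible rescaling
```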
In an attempt to investigate the adequacy of the normal approximation for the number of nuclei in certain growth/coverage models, we consider a Markov chain which has properties in common with related continuous-time Markov processes (as well as being of interest in its own right). We establish that the rate of convergence to normality for the number of ‘drops’ during times 1, 2, …, n is of the optimal ‘Berry–Esséen’ form, as n → ∞. We also establish a law of the iterated logarithm and a functional central limit theorem.
Consider N random intervals I1, …, IN chosen by selecting endpoints independently from the uniform distribution. A packing of I1, …, IN is a disjoint sub-collection of these intervals: its wasted space is the measure of the set of points not covered by the packing. We investigate the random variable WN equal to the smallest wasted space among all packings. Coffman, Poonen and Winkler proved that EWN is of order (log N)2/N. We provide sharp estimates of log P(WN ≥ t (log N)2 / N) and log P(WN ≤ t (log N)2 / N) for all values of t.
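Since any packing consists of disjoint intervals, WN equals 1 minus the largest total length of a disjoint sub-collection, which can be computed exactly by the standard weighted-interval-scheduling dynamic programme; the sketch below (not from the paper) does this for uniform random intervals in [0, 1].

```python
import bisect
import math
import random

def min_wasted_space(intervals):
    """W_N = 1 - (largest total length of a disjoint sub-collection of the intervals),
    found by dynamic programming over the intervals sorted by right endpoint."""
    ivs = sorted(((min(a, b), max(a, b)) for a, b in intervals), key=lambda iv: iv[1])
    rights = [r for _, r in ivs]
    best = [0.0] * (len(ivs) + 1)                      # best[i]: max length covered by the first i intervals
    for i, (l, r) in enumerate(ivs, start=1):
        j = bisect.bisect_right(rights, l, 0, i - 1)   # earlier intervals ending at or before l
        best[i] = max(best[i - 1], best[j] + (r - l))
    return 1.0 - best[-1]

random.seed(1)
N = 1000
intervals = [(random.random(), random.random()) for _ in range(N)]
print(min_wasted_space(intervals))        # compare with the (log N)^2 / N scale:
print(math.log(N) ** 2 / N)
```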
Suppose t1, t2,… are the arrival times of units into a system. The kth entering unit, whose magnitude is Xk and whose lifetime is Lk, is said to be ‘active’ at time t if Ik,t = I(tk ≤ t < tk + Lk) = 1. The size of the active population at time t is thus given by At = ∑k≥1Ik,t. Let Vt denote the vector whose coordinates are the magnitudes of the active units at time t, in their order of appearance in the system. For n ≥ 1, suppose λn is a measurable function on n-dimensional Euclidean space. Of interest is the weak limiting behaviour of the process λ*(t) whose value is λm(Vt) or 0, according to whether At = m > 0 or At = 0.
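A direct simulation of the active-population process (illustrative only: Poisson arrivals, exponential magnitudes and lifetimes, and λm taken to be the sample mean are assumptions, not the paper's framework):

```python
import numpy as np

rng = np.random.default_rng(0)
K = 500
arrivals = np.cumsum(rng.exponential(1.0, K))      # t_1 < t_2 < ...
magnitudes = rng.exponential(1.0, K)               # X_k
lifetimes = rng.exponential(5.0, K)                # L_k

def lambda_star(t):
    """lambda_m(V_t) with lambda_m chosen as the sample mean (illustrative), or 0 if A_t = 0."""
    active = [x for tk, x, lk in zip(arrivals, magnitudes, lifetimes)
              if tk <= t < tk + lk]                 # the units with I_{k,t} = 1, in order of appearance
    return float(np.mean(active)) if active else 0.0

print([round(lambda_star(t), 3) for t in (10.0, 100.0, 400.0)])
```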
Let {Yn | n = 1, 2,…} be a stochastic process and M a positive real number. Define the time of ruin by T = inf{n | Yn > M} (T = +∞ if Yn ≤ M for n = 1, 2,…). Using the techniques of large deviations theory, we obtain rough exponential estimates for ruin probabilities for a general class of processes. Special attention is given to the probability that ruin occurs up to a certain time point. We also generalize the concept of safety loading and consider its importance for ruin probabilities.
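A Monte Carlo illustration of the finite-horizon ruin probability P(T ≤ n) and of its rough exponential decay in M (the negative-drift random walk with Exp(1) claims and premium 1.5 per period is an illustrative choice, not the paper's general class):

```python
import numpy as np

def ruin_probability(M, n_steps=1000, n_sims=10000, seed=0):
    """Estimate P(T <= n_steps) with T = inf{n : Y_n > M}, where Y_n is a random walk
    with steps (claim - premium), claim ~ Exp(1), premium = 1.5 (negative drift)."""
    rng = np.random.default_rng(seed)
    ruined = 0
    for _ in range(n_sims):
        y = np.cumsum(rng.exponential(1.0, n_steps) - 1.5)
        if y.max() > M:
            ruined += 1
    return ruined / n_sims

for M in (4.0, 8.0, 12.0):
    p = ruin_probability(M)
    print(M, p, -np.log(p) / M if p > 0 else float("inf"))   # decay exponent, roughly constant in M
```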
This paper investigates limit theorems for extremes with random sample size, under general conditions of dependence or independence between the samples and the random sample size indexes. Limit theorems of weak convergence type are obtained, as well as functional limit theorems for extremal processes with random sample size indexes.
We prove a central limit theorem for conditionally centred random fields, under a moment condition and strict positivity of the empirical variance per observation. We use a random normalization, which fits non-stationary situations. The theorem applies directly to Markov random fields, including the cases of phase transition and lack of stationarity. One consequence is the asymptotic normality of the maximum pseudo-likelihood estimator for Markov fields in complete generality.
We show that the GEM process has strong ordering properties: the probability that one of the k largest elements in the GEM sequence is beyond the first ck elements (c > 1) decays superexponentially in k.
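For intuition, the ordering property can be checked empirically with a stick-breaking simulation of GEM(θ); the truncation length, θ = 1 and the small values of k below are illustrative choices.

```python
import numpy as np

def gem_sequence(theta, length, rng):
    """First `length` weights of a GEM(theta) sequence: P_k = W_k * prod_{j<k}(1 - W_j),
    with W_j i.i.d. Beta(1, theta).  The neglected tail mass is of order (theta/(1+theta))**length."""
    w = rng.beta(1.0, theta, size=length)
    return w * np.concatenate(([1.0], np.cumprod(1.0 - w)[:-1]))

def tail_ordering_prob(theta=1.0, k=3, c=2.0, length=200, n_sims=20000, seed=0):
    """Estimate P(one of the k largest GEM weights has index beyond the first ck elements)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        p = gem_sequence(theta, length, rng)
        top_k = np.argsort(p)[-k:] + 1        # 1-based indices of the k largest weights
        hits += np.any(top_k > c * k)
    return hits / n_sims

print(tail_ordering_prob(k=3), tail_ordering_prob(k=6))   # decays rapidly in k
```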
Let Mn denote the size of the largest amongst the first n generations of a simple branching process. It is shown for near critical processes with a finite offspring variance that the law of Mn/n, conditioned on no extinction at or before n, has a non-defective weak limit. The proof uses a conditioned functional limit theorem deriving from the Feller-Lindvall (CB) diffusion limit for branching processes descended from increasing numbers of ancestors. Subsidiary results are given about hitting time laws for CB diffusions and Bessel processes.
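A simulation sketch of the conditioned law of Mn/n (the Poisson(1) offspring law is an illustrative critical choice with finite variance; the paper's near-critical setting is more general):

```python
import numpy as np

def max_generation_size(n, rng):
    """Largest generation size among generations 0, 1, ..., n of a Galton-Watson process
    with Poisson(1) offspring; also reports whether the process survived to generation n."""
    z, m = 1, 1
    for _ in range(n):
        if z == 0:
            return m, False
        z = int(rng.poisson(1.0, size=z).sum())
        m = max(m, z)
    return m, z > 0

rng = np.random.default_rng(0)
n, samples = 100, []
while len(samples) < 1000:                 # condition on no extinction at or before n
    m, alive = max_generation_size(n, rng)
    if alive:
        samples.append(m / n)
print(np.quantile(samples, [0.25, 0.5, 0.75]))   # empirical law of M_n / n given survival
```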
Let ζ be a Markov chain on a finite state space D, f a function from D to ℝd, and Sn = ∑k=1nf(ζk). We prove an invariance theorem for S and derive an explicit expression of the limit covariance matrix. We give its exact value for p-reinforced random walks on ℤ2 with p = 1, 2, 3.
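For a finite, ergodic chain the limit covariance matrix admits a closed form via the fundamental matrix Z = (I − P + 1π)−1; the sketch below computes Σ = F̄ᵀDπF̄ + C + Cᵀ with C = F̄ᵀDπ(Z − I)F̄ for an arbitrary 3-state chain and a function into ℝ2 (both illustrative, and the paper's explicit expression may be organised differently).

```python
import numpy as np

def limit_covariance(P, F):
    """Limit covariance of S_n / sqrt(n), S_n = sum_k f(zeta_k), for an ergodic finite
    chain with transition matrix P (m x m) and f given as an m x d matrix F of values."""
    m = P.shape[0]
    w, v = np.linalg.eig(P.T)                          # stationary distribution pi
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    pi = pi / pi.sum()
    Fbar = F - pi @ F                                  # centre f under pi
    D = np.diag(pi)
    Z = np.linalg.inv(np.eye(m) - P + np.outer(np.ones(m), pi))   # fundamental matrix
    C = Fbar.T @ D @ (Z - np.eye(m)) @ Fbar            # sum of lagged covariances
    return Fbar.T @ D @ Fbar + C + C.T

P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])
F = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 2.0]])
print(limit_covariance(P, F))
```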
In a real (n − 1)-dimensional affine space E, consider a tetrahedron T0, i.e. the convex hull of n points α1, α2, …, αn of E. Choose n independent points β1, β2, …, βn randomly and uniformly in T0, thus obtaining a new tetrahedron T1 contained in T0. Repeat the operation with T1 instead of T0, obtaining T2, and so on. The sequence of the Tk shrinks to a point Y of T0, and this note computes the distribution of the barycentric coordinates of Y with respect to (α1, α2, …, αn) (Corollary 2.3). We also obtain the explicit distribution of Y in more general cases. The technique used is to reduce the problem to the study of a random walk on the semigroup of stochastic (n, n) matrices, and this note is a geometrical application of an earlier result of Chamayou and Letac (1994).
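Since a uniform point of a simplex has Dirichlet(1, …, 1) barycentric coordinates, the construction amounts to multiplying i.i.d. random stochastic matrices; a short illustrative sketch that samples the limit point Y:

```python
import numpy as np

def limit_point_barycentric(n, n_steps=200, seed=0):
    """At each step the n new vertices are independent uniform points of the current
    simplex, i.e. their barycentric coordinates form an n x n stochastic matrix with
    i.i.d. Dirichlet(1,...,1) rows.  The product of these matrices converges to a
    rank-one matrix whose common row gives the barycentric coordinates of Y with
    respect to the original vertices."""
    rng = np.random.default_rng(seed)
    M = np.eye(n)
    for _ in range(n_steps):
        B = rng.dirichlet(np.ones(n), size=n)   # uniform points of the current simplex
        M = B @ M
    return M[0]                                 # all rows are (numerically) equal

print(limit_point_barycentric(4))               # one sample of the coordinates of Y
```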
In this paper we consider a Galton-Watson process in which particles move according to a positive recurrent Markov chain on a general state space. We prove a law of large numbers for the empirical position distribution and also discuss the rate of this convergence.
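A minimal sketch of the model on a finite state space (the 3-state chain and the supercritical Poisson(1.5) offspring law are illustrative; the paper works on a general state space): the empirical position distribution of a late generation is close to the stationary distribution of the chain.

```python
import numpy as np
from collections import Counter

def branching_markov_chain(P, rng, offspring_mean=1.5, generations=18):
    """Each particle takes one step of the chain with transition matrix P, then is
    replaced by a Poisson(offspring_mean) number of children at its new position.
    Returns the empirical position distribution of the last generation, or None
    if the lineage dies out."""
    m = P.shape[0]
    positions = [0]                                          # one ancestor at state 0
    for _ in range(generations):
        nxt = []
        for x in positions:
            y = rng.choice(m, p=P[x])                        # the particle moves
            nxt.extend([y] * int(rng.poisson(offspring_mean)))   # then branches
        if not nxt:
            return None
        positions = nxt
    counts = Counter(positions)
    return np.array([counts.get(x, 0) for x in range(m)]) / len(positions)

P = np.array([[0.2, 0.8, 0.0],
              [0.3, 0.3, 0.4],
              [0.5, 0.0, 0.5]])
rng = np.random.default_rng(0)
emp = None
while emp is None:                                           # retry if the lineage dies out
    emp = branching_markov_chain(P, rng)
print(emp)    # approaches the stationary distribution of P as the number of generations grows
```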
We prove central limit theorems for certain geometrical characteristics of the convex polygons determined by a standard Poisson line process in the plane, such as the angles at the vertices of the polygons, the empirical mean of the number of vertices, and the empirical mean of the perimeter of the polygons.
Generalizing the classical Banach matchbox problem, we consider the process of removing two types of ‘items’ from a ‘pile’ with selection probabilities for the type of the next item to be removed depending on the current numbers of remaining items, and thus changing sequentially. Under various conditions on the probability pn1,n2 that the next removal will take away an item of type I, given that n1 and n2 are the current numbers of items of the two types, we derive asymptotic formulas (as the initial pile size tends to infinity) for the probability that the items of type I are completely removed first and for the number of items left. In some special cases we also obtain explicit results.
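A direct simulation of the removal process (the constant rule pn1,n2 ≡ 1/2, echoing the classical matchbox setting, is an illustrative choice; any selection rule can be passed in):

```python
import random

def removal_process(n1, n2, p, rng):
    """Remove items until one type is exhausted; the next removal is of type I with
    probability p(n1, n2) given the current counts.  Returns (first_exhausted, items_left)."""
    while n1 > 0 and n2 > 0:
        if rng.random() < p(n1, n2):
            n1 -= 1
        else:
            n2 -= 1
    return (1, n2) if n1 == 0 else (2, n1)

rng = random.Random(0)
runs = [removal_process(100, 100, lambda a, b: 0.5, rng) for _ in range(20000)]
first = [r for r in runs if r[0] == 1]
print(len(first) / len(runs))                        # P(type I exhausted first)
print(sum(left for _, left in first) / len(first))   # mean number of type II items left
```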
Is the Ewens distribution the only one-parameter family of partition structures where the total number of types sampled is a sufficient statistic? In general, the answer is no. It is shown that all counterexamples can be generated via an urn scheme. The urn scheme need only satisfy two general conditions. In fact, the conditions are both necessary and sufficient. However, in particular, for a large class of partition structures that naturally arise in the infinite alleles theory of population genetics, the Ewens distribution is the only one in this class where the total number of types is sufficient for estimating the mutation rate. Finally, asymptotic sufficiency for parametric families of partition structures is discussed.
In this paper we consider a Galton-Watson process whose particles move according to a Markov chain with discrete state space. The Markov chain is assumed to be positive recurrent. We prove a law of large numbers for the empirical position distribution and also discuss the large deviation aspects of this convergence.
The accuracy of compound Poisson approximation can be estimated using Stein's method in terms of quantities similar to those which must be calculated for Poisson approximation. However, the solutions of the relevant Stein equation may, in general, grow exponentially fast with the mean number of ‘clumps’, leading to many applications in which the bounds are of little use. In this paper, we introduce a method for circumventing this difficulty. We establish good bounds for those solutions of the Stein equation which are needed to measure the accuracy of approximation with respect to Kolmogorov distance, but only in a restricted range of the argument. The restriction on the range is then compensated by a truncation argument. Examples are given to show that the method clearly outperforms its competitors, as soon as the mean number of clumps is even moderately large.
We study a Markovian evolutionary process which encompasses the classical simple genetic algorithm. This process is obtained by randomly perturbing a very simple selection scheme. Using the Freidlin-Wentzell theory, we carry out a precise study of the asymptotic dynamics of the process as the perturbations disappear. We show how a delicate interaction between the perturbations and the selection pressure may force convergence toward the global maxima of the fitness function. We exhibit a critical population size, above which this kind of convergence can be achieved, and we compute upper bounds on this critical population size for several examples. We derive several conditions ensuring convergence in the homogeneous case; these provide the first mathematically well-founded convergence results for genetic algorithms.
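A toy version of the perturbed selection scheme (fitness-proportional selection perturbed by small bit-flip mutations; crossover is omitted, and the fitness function, population size and mutation rate are illustrative choices, not the paper's assumptions):

```python
import numpy as np

def perturbed_selection_ga(fitness, chrom_len=10, pop_size=60, p_mut=0.02,
                           n_gens=300, seed=0):
    """Fitness-proportional selection (the unperturbed dynamics) perturbed by
    independent bit-flip mutations with small probability p_mut per bit."""
    rng = np.random.default_rng(seed)
    pop = rng.integers(0, 2, size=(pop_size, chrom_len))
    for _ in range(n_gens):
        f = np.array([fitness(x) for x in pop], dtype=float)
        parents = pop[rng.choice(pop_size, size=pop_size, p=f / f.sum())]   # selection
        flips = (rng.random(parents.shape) < p_mut).astype(int)             # perturbation
        pop = np.bitwise_xor(parents, flips)
    return max(pop, key=fitness)

# illustrative fitness: number of ones plus 1, whose global maximum is the all-ones string
print(perturbed_selection_ga(lambda x: 1 + x.sum()))
```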
The shift method consists in computing the expectation of an integrable functional F defined on the probability space ((ℝd)N, B(ℝd)⊗N, μ⊗N) (μ is a probability measure on ℝd) using Birkhoff's Pointwise Ergodic Theorem, i.e. (1/n)∑k=0n−1F∘θk → Eμ⊗N(F) μ⊗N-a.s. as n → ∞, where θ denotes the canonical shift operator. When F lies in L2(FT, μ⊗N) for some sufficiently integrable stopping time T, several weak (CLT) or strong (Gál-Koksma Theorem or LIL) convergence rates hold. The method successfully competes with Monte Carlo. The aim of this paper is to extend these results to more general probability distributions P on ((ℝd)N, B(ℝd)⊗N), namely when the canonical process (Xn)n∊N is P-stationary, α-mixing and fulfils Ibragimov's assumption ∑n≥1αnδ/(2+δ) < ∞ for some δ > 0. One application is the computation of the expectation of functionals of an α-mixing Markov chain under its stationary distribution Pν. This may both provide better accuracy and save calls to the random number generator, compared with the usual Monte Carlo or shift methods based on independent innovations.
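A sketch comparing the shift estimator along a single stationary α-mixing path with crude Monte Carlo (the AR(1) chain, the window of 5 coordinates and the functional F below are illustrative stand-ins, not the paper's examples):

```python
import numpy as np

def ar1_path(length, rng, rho=0.5):
    """Stationary AR(1) chain X_{k+1} = rho * X_k + e_k, a standard alpha-mixing example."""
    x = np.empty(length)
    x[0] = rng.normal(0.0, 1.0 / np.sqrt(1.0 - rho ** 2))   # start from the stationary law
    for k in range(1, length):
        x[k] = rho * x[k - 1] + rng.normal()
    return x

def shift_estimator(F, path, window):
    """Shift-method estimate: average F over the shifts of one stationary path."""
    n = len(path) - window
    return float(np.mean([F(path[k:k + window]) for k in range(n)]))

def monte_carlo_estimator(F, window, n_paths, rng):
    """Crude Monte Carlo: independent stationary paths, one evaluation of F each."""
    return float(np.mean([F(ar1_path(window, rng)) for _ in range(n_paths)]))

F = lambda w: float(np.max(w))          # functional of 5 consecutive coordinates (illustrative)
rng = np.random.default_rng(0)
print(shift_estimator(F, ar1_path(10_000, rng), 5))   # one path, one stream of random numbers
print(monte_carlo_estimator(F, 5, 10_000, rng))       # 10 000 independent paths
```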
A variety of convergence results for genealogical and line-of-descent processes are known for exchangeable neutral population genetics models. A general convergence-to-the-coalescent theorem is presented, which applies not only to a larger class of exchangeable models but also to a large class of non-exchangeable population models. The coalescence probability, i.e. the probability that two genes, chosen randomly without replacement, have a common ancestor one generation backwards in time, is the central quantity in the analysis of the ancestral structure.
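As a concrete instance, the coalescence probability of the neutral Wright-Fisher model (an illustrative member of the exchangeable class) can be estimated directly; its exact value is 1/N.

```python
import numpy as np

def coalescence_probability(N, n_sims=200_000, seed=0):
    """Estimate the probability that two distinct children in a Wright-Fisher population
    of size N share a parent one generation back (exact value: 1/N)."""
    rng = np.random.default_rng(seed)
    parents = rng.integers(0, N, size=(n_sims, 2))   # each child picks its parent uniformly
    return float(np.mean(parents[:, 0] == parents[:, 1]))

print(coalescence_probability(50), 1 / 50)
```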