A large deviation principle (LDP) with an explicit rate function is proved for the estimation of the drift parameter of the Ornstein–Uhlenbeck process. We establish an LDP for two estimating functions, one of them being the score function. The first is derived by applying the Gärtner–Ellis theorem. This theorem, however, is not suitable for the LDP on the score function, and we circumvent this difficulty by using a parameter-dependent change of measure. We then state large deviation principles for the maximum likelihood estimator and for another consistent drift estimator.
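As an illustrative sketch of the estimation setting (not of the LDP itself), the drift of an Ornstein–Uhlenbeck process dX_t = −θX_t dt + σ dW_t can be recovered from a simulated path by the maximum likelihood estimator θ̂ = −∫X dX / ∫X² dt, the root of the score function. The parameter values below are arbitrary choices, and the path is generated by a simple Euler–Maruyama scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the paper): true drift theta = 2,
# diffusion sigma = 1, step dt, horizon T = n * dt = 200.
theta, sigma, dt, n = 2.0, 1.0, 1e-3, 200_000

# Euler-Maruyama simulation of dX = -theta * X dt + sigma dW.
x = np.empty(n + 1)
x[0] = 0.0
dw = rng.normal(0.0, np.sqrt(dt), n)
for k in range(n):
    x[k + 1] = x[k] - theta * x[k] * dt + sigma * dw[k]

# Discretized MLE: theta_hat = -int X dX / int X^2 dt.
num = -np.sum(x[:-1] * np.diff(x))
den = np.sum(x[:-1] ** 2) * dt
theta_hat = num / den
```

For a horizon T the estimator's asymptotic variance is of order 2θ/T, so with T = 200 the estimate lands close to the true value 2.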
We present a general recurrence model which provides a conceptual framework for well-known problems such as ascents, peaks, turning points, Bernstein's urn model, the Eggenberger–Pólya urn model and the hypergeometric distribution. Moreover, we show that the Frobenius–Harper technique, based on real roots of a generating function, can be applied to this general recurrence model (under simple conditions), and so a Berry–Esséen bound and local limit theorems can be found. This provides a simple and unified approach to asymptotic theory for diverse problems hitherto treated separately.
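One of the problems covered by this framework, the number of ascents of a random permutation, is easy to illustrate directly. The count follows the Eulerian distribution, whose mean (n−1)/2 and variance (n+1)/12 are standard facts; this Monte Carlo sketch (with arbitrary illustrative parameters) checks both:

```python
import random

random.seed(0)

def ascents(perm):
    # An ascent is a position i with perm[i] < perm[i+1].
    return sum(perm[i] < perm[i + 1] for i in range(len(perm) - 1))

n, trials = 50, 5000
counts = []
for _ in range(trials):
    p = list(range(n))
    random.shuffle(p)
    counts.append(ascents(p))

mean = sum(counts) / trials
var = sum((c - mean) ** 2 for c in counts) / trials
# Eulerian distribution: mean (n-1)/2 = 24.5, variance (n+1)/12 = 4.25.
```

The asymptotic normality that the Frobenius–Harper technique yields for such statistics is visible already at this modest n.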
We define a stochastic process {Xn} based on partial sums of a sequence of integer-valued random variables (K0,K1,…). The process can be represented as an urn model, which is a natural generalization of a gambling model used in the first published exposition of the criticality theorem of the classical branching process. A special case of the process is also of interest in the context of a self-annihilating branching process. Our main result is that when (K1,K2,…) are independent and identically distributed, with mean a ∊ (1,∞), there exist constants {cn} with cn+1/cn → a as n → ∞ such that Xn/cn converges almost surely to a finite random variable which is positive on the event {Xn ↛ 0}. The result is extended to the case of exchangeable summands.
A basic issue in extreme value theory is the characterization of the asymptotic distribution of the maximum of a number of random variables as the number tends to infinity. We address this issue in several settings. For independent identically distributed random variables where the distribution is a mixture, we show that the convergence of their maxima is determined by one of the distributions in the mixture that has a dominant tail. We use this result to characterize the asymptotic distribution of maxima associated with mixtures of convolutions of Erlang distributions and of normal distributions. Normalizing constants and bounds on the rates of convergence are also established. The next result is that the distribution of the maxima of independent random variables with phase-type distributions converges to the Gumbel extreme-value distribution. These results are applied to describe completion times for jobs consisting of the parallel-processing of tasks represented by Markovian PERT networks or task-graphs. In these contexts, which arise in manufacturing and computer systems, the job completion time is the maximum of the task times and the number of tasks is fairly large. We also consider maxima of dependent random variables for which distributions are selected by an ergodic random environment process that may depend on the variables. We show under certain conditions that their distributions may converge to one of the three classical extreme-value distributions. This applies to parallel-processing where the subtasks are selected by a Markov chain.
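The simplest instance of convergence to the Gumbel law, and a useful sanity check on the normalizing constants, is the exponential case: if M_n is the maximum of n i.i.d. Exp(1) variables, then M_n − log n converges to a standard Gumbel distribution, whose mean is the Euler–Mascheroni constant γ ≈ 0.5772 and variance π²/6. A minimal simulation sketch (illustrative parameters only):

```python
import numpy as np

rng = np.random.default_rng(0)

# Maxima of n i.i.d. Exp(1) variables, normalized by the constant log n;
# the limit law is standard Gumbel (mean ~0.5772, variance pi^2/6).
n, trials = 500, 10_000
samples = rng.exponential(1.0, size=(trials, n))
m = samples.max(axis=1) - np.log(n)
```

The same centering-and-scaling recipe, with distribution-specific constants, underlies the phase-type result stated above.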
We consider a classical population flow model in which individuals pass through n strata with certain state-dependent probabilities and at every time t = 0,1,2,…, there is a stochastic inflow of new individuals to every stratum. For a stationary inflow process we prove the convergence of the joint distribution of group sizes and derive the limiting Laplace transform.
In this paper, in work strongly related with that of Coffman et al. [5], Bruss and Robertson [2], and Rhee and Talagrand [15], we focus our interest on an asymptotic distributional comparison between numbers of ‘smallest’ i.i.d. random variables selected by either on-line or off-line policies. Let X1,X2,… be a sequence of i.i.d. random variables with distribution function F(x), and let X1,n,…,Xn,n be the sequence of order statistics of X1,…,Xn. For a sequence (cn)n≥1 of positive constants, the smallest fit off-line counting random variable is defined by Ne(cn) := max {j ≤ n : X1,n + … + Xj,n ≤ cn}. The asymptotic joint distributional comparison is given between the off-line count Ne(cn) and on-line counts Nnτ for ‘good’ sequential (on-line) policies τ satisfying the sum constraint ∑j≥1XτjI(τj≤n) ≤ cn. Specifically, for such policies τ, under appropriate conditions on the distribution function F(x) and the constants (cn)n≥1, we find sequences of positive constants (Bn)n≥1, (Δn)n≥1 and (Δ'n)n≥1 such that
for some non-degenerate random variables W and W'. The major tools used in the paper are convergence of point processes to Poisson random measure and continuous mapping theorems, strong approximation results of the normalized empirical process by Brownian bridges, and some renewal theory.
In this paper a central limit theorem is proved for wave-functionals defined as the sums of wave amplitudes observed in sample paths of stationary continuously differentiable Gaussian processes. Examples illustrating this theory are given.
We study a fluid flow queueing system with m ≥ 2 independent sources alternating between periods of silence and activity. The distribution function of the activity periods of one source is assumed to be intermediate regular varying. We show that the distribution of the net increment of the buffer during an aggregate activity period (i.e. when at least one source is active) is asymptotically tail-equivalent to the distribution of the net input during a single activity period with intermediate regular varying distribution function. In this way, we arrive at an asymptotic representation of the Palm-stationary tail-function of the buffer content at the beginning of aggregate activity periods. Our approach is probabilistic and extends recent results of Boxma (1996; 1997), who considered the special case of regular variation.
In this paper we derive asymptotically exact expressions for buffer overflow probabilities and cell loss probabilities for a finite buffer which is fed by a large number of independent and stationary sources. The technique is based on scaling, measure change and local limit theorems, and extends the recent results of Courcoubetis and Weber on buffer overflow asymptotics. We discuss the case when the buffers are of the same order as the transmission bandwidth, as well as the case of small buffers. Moreover, we show that the results hold for a wide variety of traffic sources, including ON/OFF sources with heavy-tailed ON-period distributions, which are typical candidates for so-called ‘self-similar’ inputs. The asymptotic cell loss probability thus behaves in much the same manner for such sources as for Markovian sources, which has important implications for statistical multiplexing. Numerical validation of the results against simulations is also reported.
A model of a stochastic froth is introduced in which the rate of random coalescence of a pair of bubbles depends on an inverse power law of their sizes. The main question of interest is whether froths with a large number of bubbles can grow in a stable fashion; that is, whether under some time-varying change of scale the distributions of rescaled bubble sizes become approximately stationary. It is shown by way of a law of large numbers for the froths that the question can be re-interpreted in terms of a measure flow solving a nonlinear Boltzmann equation that represents an idealized deterministic froth. Froths turn out to be stable in the sense that there are scalings in which the rescaled measure flow is tight and, for a particular case, stable in the stronger sense that the rescaled flow converges to an equilibrium measure. Precise estimates are also given for the degree of tightness of the rescaled measure flows.
In an attempt to investigate the adequacy of the normal approximation for the number of nuclei in certain growth/coverage models, we consider a Markov chain which has properties in common with related continuous-time Markov processes (as well as being of interest in its own right). We establish that the rate of convergence to normality for the number of ‘drops’ during times 1, 2, …, n is of the optimal ‘Berry–Esséen’ form, as n → ∞. We also establish a law of the iterated logarithm and a functional central limit theorem.
Consider N random intervals I1, …, IN chosen by selecting endpoints independently from the uniform distribution. A packing of I1, …, IN is a disjoint sub-collection of these intervals; its wasted space is the measure of the set of points not covered by the packing. We investigate the random variable WN equal to the smallest wasted space among all packings. Coffman, Poonen and Winkler proved that EWN is of order (log N)2/N. We provide a sharp estimate of log P(WN ≥ t (log N)2 / N) and log P(WN ≤ t (log N)2 / N) for all values of t.
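The minimum wasted space itself is computable exactly for a given sample: maximizing the total length of a disjoint sub-collection is an instance of weighted interval scheduling, solvable by a standard O(N log N) dynamic program. A sketch (assuming, as in Coffman, Poonen and Winkler's setting, endpoints uniform on [0, 1]):

```python
import bisect
import random

random.seed(0)

def wasted_space(intervals):
    """Smallest wasted space over all packings: 1 minus the maximum total
    length of a disjoint sub-collection (weighted interval scheduling DP)."""
    ivs = sorted(((min(a, b), max(a, b)) for a, b in intervals),
                 key=lambda iv: iv[1])          # sort by right endpoint
    rights = [iv[1] for iv in ivs]
    dp = [0.0] * (len(ivs) + 1)                 # dp[i] = best using first i
    for i, (l, r) in enumerate(ivs):
        # Last interval (among the first i) ending at or before l.
        j = bisect.bisect_right(rights, l, 0, i)
        dp[i + 1] = max(dp[i], dp[j] + (r - l))  # skip vs. take interval i
    return 1.0 - dp[-1]

N = 2000
intervals = [(random.random(), random.random()) for _ in range(N)]
w = wasted_space(intervals)
```

For N = 2000, (log N)²/N ≈ 0.029, and the computed wasted space is of that order.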
Suppose t1, t2,… are the arrival times of units into a system. The kth entering unit, whose magnitude is Xk and lifetime Lk, is said to be ‘active’ at time t if Ik,t := I(tk ≤ t < tk + Lk) = 1. The size of the active population at time t is thus given by At = ∑k≥1Ik,t. Let Vt denote the vector whose coordinates are the magnitudes of the active units at time t, in their order of appearance in the system. For n ≥ 1, suppose λn is a measurable function on n-dimensional Euclidean space. Of interest is the weak limiting behaviour of the process λ*(t) whose value is λm(Vt) or 0, according to whether At = m > 0 or At = 0.
Let {Yn | n = 1, 2,…} be a stochastic process and M a positive real number. Define the time of ruin by T = inf{n | Yn > M} (T = +∞ if Yn ≤ M for n = 1, 2,…). Using the techniques of large deviations theory we obtain rough exponential estimates for ruin probabilities for a general class of processes. Special attention is given to the probability that ruin occurs up to a certain time point. We also generalize the concept of the safety loading and consider its importance to ruin probabilities.
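A concrete special case makes the exponential estimates tangible: for a random walk with light-tailed, negative-drift increments X, the Cramér–Lundberg adjustment coefficient R solving E[e^{RX}] = 1 gives the rough bound P(T < ∞) ≤ e^{−RM}. For Gaussian increments N(−0.5, 1) one finds R = 1 (since e^{−0.5r + r²/2} = 1 at r = 1). A Monte Carlo sketch with these illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random walk Y_n with i.i.d. N(-0.5, 1) steps; ruin = Y_n exceeds M.
# Adjustment coefficient R solves E[exp(R * step)] = 1, here R = 1.
M, R = 3.0, 1.0
trials, horizon = 20_000, 200
steps = rng.normal(-0.5, 1.0, size=(trials, horizon))
running_max = np.cumsum(steps, axis=1).max(axis=1)

p_hat = (running_max > M).mean()   # Monte Carlo ruin probability
lundberg = np.exp(-R * M)          # rough exponential upper bound
```

The simulated ruin probability falls below the bound e^{−RM} ≈ 0.0498, the gap being the overshoot correction that rough large-deviation estimates ignore.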
This paper is devoted to the investigation of limit theorems for extremes with random sample size under general dependence-independence conditions for samples and random sample size indexes. Limit theorems of weak convergence type are obtained as well as functional limit theorems for extremal processes with random sample size indexes.
We prove a central limit theorem for conditionally centred random fields, under a moment condition and strict positivity of the empirical variance per observation. We use a random normalization, which fits non-stationary situations. The theorem applies directly to Markov random fields, including the cases of phase transition and lack of stationarity. One consequence is the asymptotic normality of the maximum pseudo-likelihood estimator for Markov fields in complete generality.
We show that the GEM process has strong ordering properties: the probability that one of the k largest elements in the GEM sequence is beyond the first ck elements (c > 1) decays superexponentially in k.
Let Mn denote the size of the largest amongst the first n generations of a simple branching process. It is shown for near critical processes with a finite offspring variance that the law of Mn/n, conditioned on no extinction at or before n, has a non-defective weak limit. The proof uses a conditioned functional limit theorem deriving from the Feller–Lindvall (CB) diffusion limit for branching processes descended from increasing numbers of ancestors. Subsidiary results are given about hitting time laws for CB diffusions and Bessel processes.
Let ζ be a Markov chain on a finite state space D, f a function from D to ℝd, and Sn = ∑k=1nf(ζk). We prove an invariance theorem for S and derive an explicit expression of the limit covariance matrix. We give its exact value for p-reinforced random walks on ℤ2 with p = 1, 2, 3.
In a real (n − 1)-dimensional affine space E, consider a tetrahedron T0, i.e. the convex hull of n points α1, α2, …, αn of E. Choose n independent points β1, β2, …, βn randomly and uniformly in T0, thus obtaining a new tetrahedron T1 contained in T0. Repeat the operation with T1 instead of T0, obtaining T2, and so on. The sequence of the Tk shrinks to a point Y of T0 and this note computes the distribution of the barycentric coordinates of Y with respect to (α1, α2, …, αn) (Corollary 2.3). We also obtain the explicit distribution of Y in more general cases. The technique used is to reduce the problem to the study of a random walk on the semigroup of stochastic (n, n) matrices, and this note is a geometrical application of a former result of Chamayou and Letac (1994).
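The iteration is easy to simulate in barycentric coordinates, where it becomes exactly the random walk on stochastic matrices mentioned above: a uniform point of a simplex has Dirichlet(1, …, 1) barycentric coordinates, so each step multiplies the vertex matrix by a random stochastic matrix with Dirichlet rows. A sketch for n = 3 (a triangle), showing the shrinkage to the point Y:

```python
import numpy as np

rng = np.random.default_rng(1)

# Vertices of T_k held as rows of barycentric coordinates w.r.t. T_0;
# T_0 itself is the identity matrix.
n = 3
V = np.eye(n)
for _ in range(60):
    # n independent uniform points in the current simplex: Dirichlet rows.
    D = rng.dirichlet(np.ones(n), size=n)
    V = D @ V   # random walk on the semigroup of stochastic matrices

# All vertices have collapsed onto (a numerical approximation of) Y.
spread = V.max(axis=0) - V.min(axis=0)
```

Each multiplication preserves row sums, so every iterate remains a stochastic matrix, and the rows contract onto the common limit point.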