This paper examines a problem of importance to the telecommunications industry. In the design of modern ATM switches, it is necessary to use simulation to estimate the probability that a queue within the switch exceeds a given large value. Since these are extremely small probabilities, importance sampling methods must be used. Here we obtain a change of measure for a broad class of models with direct applicability to ATM switches.
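The exponential change of measure behind such importance-sampling estimators can be illustrated on the simplest possible case, a ±1 random walk with negative drift standing in for the queue (this is a toy sketch, not the paper's Markov-additive change of measure; the parameters p and b below are illustrative):

```python
import math
import random

# Toy importance-sampling sketch: estimate P(random walk ever reaches
# level b) for a walk with steps +1 w.p. p and -1 w.p. 1-p, p < 1/2
# (negative drift, so overflow is a rare event).
# The exponential tilt theta* solving E[exp(theta*X)] = 1 swaps the
# step probabilities, making overflow typical; each hit at level b is
# reweighted by the likelihood ratio exp(-theta* * S_tau).

random.seed(1)

p = 0.3
b = 20
# unique positive root of p*e^t + (1-p)*e^(-t) = 1
theta_star = math.log((1 - p) / p)

def one_sample():
    s = 0
    while s < b:                      # under the tilt, up-probability is 1-p > 1/2
        s += 1 if random.random() < 1 - p else -1
    return math.exp(-theta_star * s)  # likelihood-ratio weight

est = sum(one_sample() for _ in range(2000)) / 2000
exact = (p / (1 - p)) ** b            # known closed form for +-1 steps
```

For ±1 steps the walk hits b exactly, so every sample carries the same weight and the estimator has zero variance; crude Monte Carlo with 2000 samples would almost never observe an event of probability roughly 4×10⁻⁸.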
We consider a model with A independent sources of cells where each source is modeled by a Markov renewal point process with batch arrivals. We do not assume the sources are necessarily identically distributed, nor that batch sizes are independent of the state of the Markov process. These arrivals join a queue served by multiple independent servers, each with service times also modeled as a Markov renewal process. We restrict attention to a time-slotted system. The queue is viewed as the additive component of a Markov additive chain subject to the constraint that the additive component remains non-negative. We apply the theory in McDonald (1999) to obtain the asymptotics of the tail of the distribution of the queue size in steady state, as well as the asymptotics of the mean time between large deviations of the queue size.
The number Yn of offspring of the most prolific individual in the nth generation of a Bienaymé–Galton–Watson process is studied. The asymptotic behaviour of Yn as n → ∞ may be viewed as an extreme value problem for i.i.d. random variables with random sample size. Limit theorems for both Yn and EYn, provided that the offspring mean is finite, are obtained using some convergence results for branching processes as well as a transfer limit lemma for maxima. Subcritical, critical and supercritical branching processes are considered separately.
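The quantity Yn is easy to simulate, which helps build intuition for its growth. A minimal sketch, assuming Poisson offspring (an illustrative choice; the setting above allows general offspring laws), tracks the population size Zn and the maximal offspring count Yn in each generation:

```python
import math
import random

random.seed(0)

def poisson(lam):
    # Knuth's inversion method, adequate for small lam.
    L, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= random.random()
        if prod <= L:
            return k
        k += 1

def max_offspring_path(m=1.5, gens=10):
    # One realisation of a Galton-Watson process with Poisson(m)
    # offspring. Returns a list of pairs (Z_n, Y_n): population size
    # and the maximum offspring count over individuals of generation n.
    z, path = 1, []
    for _ in range(gens):
        if z == 0:                       # extinction: no individuals, Y_n = 0
            path.append((0, 0))
            continue
        counts = [poisson(m) for _ in range(z)]
        path.append((z, max(counts)))
        z = sum(counts)
    return path
```

In the supercritical case m > 1 the sample size Zn grows geometrically on survival, so Yn behaves like the maximum of a geometrically growing number of i.i.d. draws, which is the extreme-value viewpoint described above.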
We establish the uniform almost sure convergence of the partitioning estimate, which is a histogram-like mean regression function estimate, under ergodic conditions for a stationary and unbounded process. The main application of our results concerns time series analysis and prediction in the Markov processes case.
In this paper, central limit theorems for multivariate semi-Markov sequences and processes are obtained, both as the number of jumps of the associated Markov chain tends to infinity and, if appropriate, as the time for which the process has been running tends to infinity. The theorems are widely applicable since many functions defined on Markov or semi-Markov processes can be analysed by exploiting appropriate embedded multivariate semi-Markov sequences. An application to a problem in ion channel modelling is described in detail. Other applications, including to multivariate stationary reward processes, counting processes associated with Markov renewal processes, the interpretation of Markov chain Monte Carlo runs and statistical inference on semi-Markov models are briefly outlined.
In this paper we consider limit theorems for a random walk in a random environment, (Xn). Known results (recurrence-transience criteria, law of large numbers) in the case of independent environments are naturally extended to the case where the environments are only supposed to be stationary and ergodic. Furthermore, if ‘the fluctuations of the random transition probabilities around are small’, we show that there exists an invariant probability measure for ‘the environments seen from the position of (Xn)’. In the case of uniquely ergodic (therefore non-independent) environments, this measure exists as soon as (Xn) is transient so that the ‘slow diffusion phenomenon’ does not appear as it does in the independent case. Thus, under regularity conditions, we prove that, in this case, the random walk satisfies a central limit theorem for any fixed environment.
The germ-grain model is defined as the union of independent identically distributed compact random sets (grains) shifted by points (germs) of a point process. The paper introduces a family of stationary random measures in ℝd generated by germ-grain models and defined by the sum of contributions of non-overlapping parts of the individual grains. The main result of the paper is the central limit theorem for these random measures, which holds for rather general independently marked germ-grain models, including those with non-Poisson distribution of germs and non-convex grains. It is shown that this construction of random measures includes those random measures obtained by positively extended intrinsic volumes. In the Poisson case it is possible to prove a central limit theorem under weaker assumptions by using approximations by m-dependent random fields. Applications to statistics of the Boolean model are also discussed. They include a standard way to derive limit theorems for estimators of the model parameters.
Well-known inequalities for the spectral gap of a discrete-time Markov chain, such as Poincaré's and Cheeger's inequalities, do not perform well if the transition graph of the Markov chain is strongly connected. For example, in the case of nearest-neighbour random walk on the n-dimensional cube, Poincaré's and Cheeger's inequalities are off by a factor of n. Using a coupling technique and a contraction principle, lower bounds on the spectral gap can be derived. Finally, we show that the contraction principle yields a sharp estimate for nearest-neighbour random walk on the n-dimensional cube.
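For small n the spectral gap of the nearest-neighbour walk on the n-cube can be computed directly and checked against the closed form: the eigenvalues of the transition matrix are (n − 2k)/n for k = 0, …, n, so the gap is 2/n. A short numerical check:

```python
import numpy as np

def hypercube_walk_gap(n):
    # Nearest-neighbour random walk on {0,1}^n: from each vertex,
    # flip one of the n coordinates chosen uniformly at random.
    N = 1 << n
    P = np.zeros((N, N))
    for x in range(N):
        for i in range(n):
            P[x, x ^ (1 << i)] = 1.0 / n
    eig = np.sort(np.linalg.eigvalsh(P))   # P is symmetric here
    return 1.0 - eig[-2]                   # gap = 1 - second-largest eigenvalue

# Known closed form: eigenvalues (n - 2k)/n, hence gap 2/n.
```

This is exactly the quantity that the generic Poincaré and Cheeger bounds underestimate by a factor of order n, while the contraction approach recovers the correct order 2/n.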
A general analytic scheme for Poisson approximation to discrete distributions is studied in which the asymptotic behaviours of the generalized total variation, Fortet-Mourier (or Wasserstein), Kolmogorov and Matusita (or Hellinger) distances are explicitly characterized. Applications of this result include many number-theoretic functions and combinatorial structures. Our approach differs from most of the existing ones in the literature and is easily amended for other discrete approximations; arithmetic and combinatorial examples for Bessel approximation are also presented. A unified approach is developed for deriving uniform estimates for probability generating functions of the number of components in general decomposable combinatorial structures, with or without analytic continuation outside their circles of convergence.
For a large class of neutral population models the asymptotics of the ancestral structure of a sample of n individuals (or genes) is studied, as the total population size becomes large. Under certain conditions and under a well-known time-scaling, which can be expressed in terms of the coalescence probabilities, weak convergence in DE([0,∞)) to the coalescent holds. Further, the convergence behaviour of the jump chain of the ancestral process is studied. The results are used to approximate probabilities of particular interest in applications, for example hitting probabilities.
Long-range dependence has been recently asserted to be an important characteristic in modeling telecommunications traffic. Inspired by the integral relationship between the fractional Brownian motion and the standard Brownian motion, we model a process with long-range dependence, Y, as a fractional integral of Riemann-Liouville type applied to a more standard process X—one that does not have long-range dependence. When X takes the form of a sample path process with bounded stationary increments, we provide a criterion for X to satisfy a moderate deviations principle (MDP). Based on the MDP of X, we then establish the MDP for Y. Furthermore, we characterize, in terms of the MDP, the transient behavior of queues when fed with the long-range dependent input process Y. In particular, we identify the most likely path that leads to a large queue, and demonstrate that unlike the case where the input has short-range dependence, the path here is nonlinear.
A large deviation principle (LDP) with an explicit rate function is proved for the estimation of the drift parameter of the Ornstein-Uhlenbeck process. We establish an LDP for two estimating functions, one of them being the score function. The first one is derived by applying the Gärtner–Ellis theorem. But this theorem is not suitable for the LDP on the score function and we circumvent this key point by using a parameter-dependent change of measure. We then state large deviation principles for the maximum likelihood estimator and another consistent drift estimator.
We present a general recurrence model which provides a conceptual framework for well-known problems such as ascents, peaks, turning points, Bernstein's urn model, the Eggenberger–Pólya urn model and the hypergeometric distribution. Moreover, we show that the Frobenius-Harper technique, based on real roots of a generating function, can be applied to this general recurrence model (under simple conditions), and so a Berry–Esséen bound and local limit theorems can be found. This provides a simple and unified approach to asymptotic theory for diverse problems hitherto treated separately.
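The real-rootedness at the heart of the Frobenius-Harper technique can be checked numerically in the classical case of ascents, whose generating polynomial has the Eulerian numbers as coefficients (a quick illustration, not part of the general recurrence model above):

```python
import numpy as np

def eulerian_row(n):
    # Eulerian numbers A(n, k): permutations of {1..n} with k ascents,
    # via the recurrence A(n,k) = (k+1)A(n-1,k) + (n-k)A(n-1,k-1).
    row = [1]                                    # n = 1
    for m in range(2, n + 1):
        row = [(k + 1) * (row[k] if k < len(row) else 0)
               + (m - k) * (row[k - 1] if k >= 1 else 0)
               for k in range(m)]
    return row

row = eulerian_row(5)        # [1, 26, 66, 26, 1]
roots = np.roots(row)
# All roots are real and negative, so the number of ascents is
# distributed as a sum of independent Bernoulli random variables,
# which is what yields Berry-Esseen bounds and local limit theorems.
```

Writing the generating polynomial as a product of linear factors with negative real roots is precisely the decomposition into independent Bernoullis that the Frobenius-Harper technique exploits.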
We define a stochastic process {Xn} based on partial sums of a sequence of integer-valued random variables (K0,K1,…). The process can be represented as an urn model, which is a natural generalization of a gambling model used in the first published exposition of the criticality theorem of the classical branching process. A special case of the process is also of interest in the context of a self-annihilating branching process. Our main result is that when (K1,K2,…) are independent and identically distributed, with mean a ∊ (1,∞), there exist constants {cn} with cn+1/cn → a as n → ∞ such that Xn/cn converges almost surely to a finite random variable which is positive on the event {Xn ↛ 0}. The result is extended to the case of exchangeable summands.
A basic issue in extreme value theory is the characterization of the asymptotic distribution of the maximum of a number of random variables as the number tends to infinity. We address this issue in several settings. For independent identically distributed random variables where the distribution is a mixture, we show that the convergence of their maxima is determined by one of the distributions in the mixture that has a dominant tail. We use this result to characterize the asymptotic distribution of maxima associated with mixtures of convolutions of Erlang distributions and of normal distributions. Normalizing constants and bounds on the rates of convergence are also established. The next result is that the distribution of the maxima of independent random variables with phase type distributions converges to the Gumbel extreme-value distribution. These results are applied to describe completion times for jobs consisting of the parallel-processing of tasks represented by Markovian PERT networks or task-graphs. In these contexts, which arise in manufacturing and computer systems, the job completion time is the maximum of the task times and the number of tasks is fairly large. We also consider maxima of dependent random variables for which distributions are selected by an ergodic random environment process that may depend on the variables. We show under certain conditions that their distributions may converge to one of the three classical extreme-value distributions. This applies to parallel-processing where the subtasks are selected by a Markov chain.
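The Gumbel limit for phase-type maxima can be seen in the simplest phase-type law, the exponential distribution: the maximum of n i.i.d. Exp(1) variables, centred by log n, is approximately Gumbel. A Monte Carlo sketch (sample sizes below are illustrative):

```python
import math
import random

random.seed(42)

# Centred maxima of i.i.d. Exp(1) samples; the limiting Gumbel law
# has mean equal to the Euler-Mascheroni constant, about 0.5772.
n, reps = 500, 4000
samples = [max(random.expovariate(1.0) for _ in range(n)) - math.log(n)
           for _ in range(reps)]
mean = sum(samples) / reps
```

For the exponential law the approximation is already excellent at moderate n, since P(Mₙ − log n ≤ x) = (1 − e⁻ˣ/n)ⁿ → exp(−e⁻ˣ) with error of order 1/n.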
We consider a classical population flow model in which individuals pass through n strata with certain state-dependent probabilities and at every time t = 0,1,2,…, there is a stochastic inflow of new individuals to every stratum. For a stationary inflow process we prove the convergence of the joint distribution of group sizes and derive the limiting Laplace transform.
In this paper, in work strongly related with that of Coffman et al. [5], Bruss and Robertson [2], and Rhee and Talagrand [15], we focus our interest on an asymptotic distributional comparison between numbers of ‘smallest’ i.i.d. random variables selected by either on-line or off-line policies. Let X1,X2,… be a sequence of i.i.d. random variables with distribution function F(x), and let X1,n,…,Xn,n be the sequence of order statistics of X1,…,Xn. For a sequence (cn)n≥1 of positive constants, the smallest fit off-line counting random variable is defined by Ne(cn) := max {j ≤ n : X1,n + … + Xj,n ≤ cn}. The asymptotic joint distributional comparison is given between the off-line count Ne(cn) and on-line counts Nnτ for ‘good’ sequential (on-line) policies τ satisfying the sum constraint ∑j≥1XτjI(τj≤n) ≤ cn. Specifically, for such policies τ, under appropriate conditions on the distribution function F(x) and the constants (cn)n≥1, we find sequences of positive constants (Bn)n≥1, (Δn)n≥1 and (Δ'n)n≥1 such that
for some non-degenerate random variables W and W'. The major tools used in the paper are convergence of point processes to Poisson random measure and continuous mapping theorems, strong approximation results of the normalized empirical process by Brownian bridges, and some renewal theory.
In this paper a central limit theorem is proved for wave-functionals defined as the sums of wave amplitudes observed in sample paths of stationary continuously differentiable Gaussian processes. Examples illustrating this theory are given.
We study a fluid flow queueing system with m independent sources alternating between periods of silence and activity; m ≥ 2. The distribution function of the activity periods of one source is supposed to be intermediate regular varying. We show that the distribution of the net increment of the buffer during an aggregate activity period (i.e. when at least one source is active) is asymptotically tail-equivalent to the distribution of the net input during a single activity period with intermediate regular varying distribution function. In this way, we arrive at an asymptotic representation of the Palm-stationary tail-function of the buffer content at the beginning of aggregate activity periods. Our approach is probabilistic and extends recent results of Boxma (1996; 1997) who considered the special case of regular variation.
In this paper we derive asymptotically exact expressions for buffer overflow probabilities and cell loss probabilities for a finite buffer which is fed by a large number of independent and stationary sources. The technique is based on scaling, measure change and local limit theorems and extends the recent results of Courcoubetis and Weber on buffer overflow asymptotics. We discuss the cases when the buffers are of the same order as the transmission bandwidth as well as the case of small buffers. Moreover we show that the results hold for a wide variety of traffic sources including ON/OFF sources with heavy-tailed distributed ON periods, which are typical candidates for so-called ‘self-similar’ inputs, showing that the asymptotic cell loss probability behaves in much the same manner for such sources as for the Markovian type of sources, which has important implications for statistical multiplexing. Numerical validation of the results against simulations is also reported.
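The cell-loss quantity being approximated can be estimated by direct simulation of a toy multiplexer. The sketch below uses i.i.d. ON/OFF slots, a deliberate oversimplification of the stationary (and possibly heavy-tailed) sources treated above, and all parameters are illustrative:

```python
import random

random.seed(7)

def cell_loss_fraction(n_src=50, p_on=0.3, capacity=16, buf=30, slots=20000):
    # Discrete-time multiplexer: each of n_src sources is ON in a slot
    # with probability p_on (independently), an ON source emits one
    # cell, the link serves `capacity` cells per slot, and any queue
    # content above `buf` is lost. Returns the fraction of cells lost.
    q, lost, offered = 0, 0, 0
    for _ in range(slots):
        arrivals = sum(random.random() < p_on for _ in range(n_src))
        offered += arrivals
        q = q + arrivals - capacity
        if q < 0:
            q = 0
        if q > buf:
            lost += q - buf
            q = buf
    return lost / offered if offered else 0.0
```

Such brute-force estimates become infeasible precisely when the loss probability is very small, which is why the asymptotically exact expressions derived in the paper (and importance-sampling methods more generally) are needed in practice.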