Conditions are derived for the components of the normed limit of a multi-type branching process with varying environments to be continuous on (0, ∞). The main tool is an inequality for the concentration function of sums of independent random variables, due originally to Petrov. Using this, we show that if a discontinuity is present, then a particular linear combination of the population types must converge to a non-random constant (equation (1)). Ensuring that this cannot happen provides the desired continuity conditions.
Let $\xi_0, \xi_1, \xi_2, \ldots$ be a homogeneous Markov process and let $S_n$ denote the partial sum $S_n = \theta(\xi_1) + \cdots + \theta(\xi_n)$, where $\theta(\xi)$ is a scalar nonlinearity. If $N$ is a stopping time with $\mathbb{E}N < \infty$ and the Markov process satisfies certain ergodicity properties, we show that $\mathbb{E}S_N = [\lim_{n\to\infty} \mathbb{E}\theta(\xi_n)]\,\mathbb{E}N + \mathbb{E}\omega(\xi_0) - \mathbb{E}\omega(\xi_N)$. The function $\omega(\xi)$ is a well-defined scalar nonlinearity directly related to $\theta(\xi)$ through a Poisson integral equation, with the property that $\omega(\xi)$ vanishes in the i.i.d. case. Consequently, our result constitutes an extension of Wald's first lemma to Markov processes. We also show that, as $\mathbb{E}N \to \infty$, the correction term is negligible compared to $\mathbb{E}N$, in the sense that $\mathbb{E}\omega(\xi_0) - \mathbb{E}\omega(\xi_N) = o(\mathbb{E}N)$.
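A quick numerical illustration of this identity is possible. The sketch below is a hypothetical two-state example (the transition matrix, the function θ, and the hitting-time stopping rule are all assumptions for illustration): it solves the Poisson equation in one common convention, (I − P)ω = Pθ − θ̄, and checks both sides of the identity by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])                  # transition matrix (assumed example)
theta = np.array([1.0, -2.0])               # scalar nonlinearity theta(xi)

# stationary distribution pi and the limiting mean of theta
A = np.vstack([P.T - np.eye(2), np.ones(2)])
pi = np.linalg.lstsq(A, np.array([0.0, 0.0, 1.0]), rcond=None)[0]
theta_bar = pi @ theta

# Poisson equation (one common convention): (I - P) w = P theta - theta_bar;
# w is defined up to an additive constant, but only differences of w matter
w = np.linalg.lstsq(np.eye(2) - P, P @ theta - theta_bar, rcond=None)[0]

def run(x0=1):
    """Run the chain from x0 until it first hits state 0; return (S_N, N)."""
    x, s, n = x0, 0.0, 0
    while True:
        x = rng.choice(2, p=P[x])
        s += theta[x]
        n += 1
        if x == 0:
            return s, n

sims = [run() for _ in range(100_000)]
ES = np.mean([s for s, n in sims])
EN = np.mean([n for s, n in sims])
print(ES, theta_bar * EN + (w[1] - w[0]))   # both sides should be close (about -2)
```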
Long-range dependence has recently been asserted to be an important characteristic in modeling telecommunications traffic. Inspired by the integral relationship between fractional Brownian motion and standard Brownian motion, we model a process with long-range dependence, Y, as a fractional integral of Riemann-Liouville type applied to a more standard process X, one that does not have long-range dependence. When X takes the form of a sample path process with bounded stationary increments, we provide a criterion for X to satisfy a moderate deviations principle (MDP). Based on the MDP of X, we then establish the MDP for Y. Furthermore, we characterize, in terms of the MDP, the transient behavior of queues fed with the long-range dependent input process Y. In particular, we identify the most likely path that leads to a large queue, and demonstrate that, unlike the case where the input has short-range dependence, the path here is nonlinear.
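To make the construction concrete, here is a minimal sketch (a naive discretisation with assumed parameters, not taken from the paper) that applies a Riemann-Liouville fractional integral to a simulated Brownian path, producing a long-range dependent path from a short-range dependent one.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(1)
n, H = 2_000, 0.8                  # H in (1/2, 1) gives long-range dependence
dt = 1.0 / n
dB = rng.normal(scale=np.sqrt(dt), size=n)   # increments of a Brownian path
s = np.arange(n) * dt                        # left endpoints of the increments

# Y(t) = (1 / Gamma(H + 1/2)) * int_0^t (t - u)^(H - 1/2) dB(u), discretised
Y = np.array([(((k + 1) * dt - s[:k + 1]) ** (H - 0.5) @ dB[:k + 1]) / gamma(H + 0.5)
              for k in range(n)])
print(Y[-5:])                                # a Riemann-Liouville fBm-type sample path
```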
In this paper we study the supremum distribution of a class of Gaussian processes having stationary increments and negative drift, using key results from extreme value theory. We focus on deriving an asymptotic upper bound on the tail of the supremum distribution of such processes. Our bound is valid for both discrete- and continuous-time processes. We discuss the importance of the bound and its applicability to queueing problems, and present numerical examples to illustrate its performance.
We define a stochastic process $\{X_n\}$ based on partial sums of a sequence of integer-valued random variables $(K_0, K_1, \ldots)$. The process can be represented as an urn model, which is a natural generalization of a gambling model used in the first published exposition of the criticality theorem of the classical branching process. A special case of the process is also of interest in the context of a self-annihilating branching process. Our main result is that when $(K_1, K_2, \ldots)$ are independent and identically distributed, with mean $a \in (1, \infty)$, there exist constants $\{c_n\}$ with $c_{n+1}/c_n \to a$ as $n \to \infty$ such that $X_n/c_n$ converges almost surely to a finite random variable which is positive on the event $\{X_n \not\to 0\}$. The result is extended to the case of exchangeable summands.
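A toy simulation conveys the flavour of this normalization. The sketch below is a Galton-Watson analogue with assumed Poisson offspring, rather than the paper's urn scheme, in which the deterministic growth $m^n$ plays the role of $c_n$: the normalized population settles to a random limit that vanishes exactly on extinction.

```python
import numpy as np

rng = np.random.default_rng(10)
m = 1.5                                      # offspring mean (Poisson offspring)

def normalised_limit(n=25):
    """Return Z_n / m^n for a Galton-Watson process started from one individual."""
    z = 1
    for _ in range(n):
        if z == 0:
            return 0.0
        z = int(rng.poisson(m, size=z).sum())
    return z / m ** n

print([round(normalised_limit(), 3) for _ in range(8)])   # zeros mark extinct lines
```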
The study of the distribution of the distance between words in a random sequence of letters is of interest in view of applications to genome sequence analysis. In this paper we give the exact probability distribution and the cumulative distribution function of the distance between two successive occurrences of a given word, and between the nth and the (n+m)th occurrences, under three models for the generation of the letters: i.i.d. with the same probability for each letter, i.i.d. with different probabilities, and a Markov process. The generating function and the first two moments are also given. The point of studying the distances instead of the counting process is that we learn not only about the frequency of a word but also about its longitudinal distribution along the sequence.
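The empirical counterpart of these distance distributions is straightforward to compute. A minimal sketch, assuming the simplest model (i.i.d. uniform letters over {A, C, G, T}) and a hypothetical word 'AT':

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(2)
seq = "".join(rng.choice(list("ACGT"), size=500_000))
word = "AT"

# start positions of (possibly overlapping) occurrences of the word
starts = [i for i in range(len(seq) - len(word) + 1)
          if seq[i:i + len(word)] == word]
gaps = np.diff(starts)                       # distances between successive occurrences
counts = Counter(gaps)
for d in sorted(counts)[:5]:
    print(d, counts[d] / len(gaps))          # empirical P(distance = d)
```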
A basic issue in extreme value theory is the characterization of the asymptotic distribution of the maximum of a number of random variables as the number tends to infinity. We address this issue in several settings. For independent identically distributed random variables where the distribution is a mixture, we show that the convergence of their maxima is determined by the distribution in the mixture that has the dominant tail. We use this result to characterize the asymptotic distribution of maxima associated with mixtures of convolutions of Erlang distributions and of normal distributions. Normalizing constants and bounds on the rates of convergence are also established. The next result is that the distribution of the maxima of independent random variables with phase-type distributions converges to the Gumbel extreme-value distribution. These results are applied to describe completion times for jobs consisting of the parallel processing of tasks represented by Markovian PERT networks or task graphs. In these contexts, which arise in manufacturing and computer systems, the job completion time is the maximum of the task times and the number of tasks is fairly large. We also consider maxima of dependent random variables whose distributions are selected by an ergodic random environment process that may depend on the variables. We show under certain conditions that their distributions may converge to one of the three classical extreme-value distributions. This applies to parallel processing where the subtasks are selected by a Markov chain.
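The Gumbel limit for light-tailed maxima can be checked numerically. The sketch below (an assumed Erlang(2,1) example, not the paper's construction) compares the exact distribution function of the maximum of n such variables, suitably centred, with the Gumbel distribution function.

```python
import numpy as np

n = 10 ** 8
# centring b solves n * exp(-b) * (1 + b) = 1, i.e. b = log n + log(1 + b)
b = np.log(n)
for _ in range(50):
    b = np.log(n) + np.log(1.0 + b)

t = np.linspace(-2.0, 4.0, 7)
x = b + t
exact = (1.0 - np.exp(-x) * (1.0 + x)) ** n   # exact cdf of the maximum of n Erlang(2,1)'s
gumbel = np.exp(-np.exp(-t))                  # Gumbel extreme-value cdf
print(np.c_[t, exact, gumbel])                # the two columns roughly agree
```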
We consider a random measure whose distribution is invariant under the action of a standard transformation group. The reduced moments are defined by applying classical theorems on the decomposition of invariant measures. We present a general method for constructing unbiased estimators of reduced moments. Several asymptotic results are established under an extension of the Brillinger mixing condition. Examples related to stochastic geometry are given.
In this paper, in work strongly related to that of Coffman et al. [5], Bruss and Robertson [2], and Rhee and Talagrand [15], we focus on an asymptotic distributional comparison between the numbers of 'smallest' i.i.d. random variables selected by either on-line or off-line policies. Let $X_1, X_2, \ldots$ be a sequence of i.i.d. random variables with distribution function $F(x)$, and let $X_{1,n}, \ldots, X_{n,n}$ be the order statistics of $X_1, \ldots, X_n$. For a sequence $(c_n)_{n\ge 1}$ of positive constants, the smallest-fit off-line counting random variable is defined by $N^e(c_n) := \max\{j \le n : X_{1,n} + \cdots + X_{j,n} \le c_n\}$. An asymptotic joint distributional comparison is given between the off-line count $N^e(c_n)$ and the on-line counts $N_n^\tau$ for 'good' sequential (on-line) policies $\tau$ satisfying the sum constraint $\sum_{j\ge 1} X_{\tau_j} I(\tau_j \le n) \le c_n$. Specifically, for such policies $\tau$, under appropriate conditions on the distribution function $F(x)$ and the constants $(c_n)_{n\ge 1}$, we find sequences of positive constants $(B_n)_{n\ge 1}$, $(\Delta_n)_{n\ge 1}$ and $(\Delta'_n)_{n\ge 1}$ such that
for some non-degenerate random variables $W$ and $W'$. The major tools used in the paper are the convergence of point processes to a Poisson random measure together with continuous mapping theorems, strong approximation of the normalized empirical process by Brownian bridges, and some renewal theory.
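For intuition about the on-line/off-line comparison, here is a toy sketch assuming uniform variables and a simple threshold policy (an illustrative policy, not one of the paper's): both counts grow like √(2cn) to first order, and results of the above kind concern the joint fluctuations around that common rate.

```python
import numpy as np

rng = np.random.default_rng(3)
n, c = 10_000, 50.0
x = rng.uniform(size=n)

# off-line smallest fit: take order statistics while the running sum stays within c
N_offline = int((np.cumsum(np.sort(x)) <= c).sum())

# a simple on-line threshold policy: accept an item if it is below sqrt(2c/n)
# and still fits in the remaining budget
thr = np.sqrt(2 * c / n)
accepted, total = 0, 0.0
for xi in x:
    if xi <= thr and total + xi <= c:
        accepted += 1
        total += xi

# both counts are near sqrt(2*c*n) to first order; the off-line count is optimal
print(N_offline, accepted, np.sqrt(2 * c * n))
```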
In this paper a central limit theorem is proved for wave-functionals, defined as sums of wave amplitudes observed in sample paths of stationary, continuously differentiable Gaussian processes. Examples illustrating the theory are given.
Recently Propp and Wilson [14] proposed an algorithm, called coupling from the past (CFTP), which allows not merely approximate but perfect (i.e. exact) simulation of the stationary distribution of certain finite state space Markov chains. Perfect sampling using CFTP has been successfully extended to the context of point processes by, among other authors, Häggström et al. [5]. In [5], Gibbs sampling is applied to a bivariate point process, the penetrable spheres mixture model [19]. However, in general the running time of CFTP, in terms of the number of transitions, is not independent of the state sampled. Thus an impatient user who aborts long runs may introduce a subtle bias, the user-impatience bias. Fill [3] introduced an exact sampling algorithm for finite state space Markov chains which, in contrast to CFTP, is unbiased with respect to user impatience. Fill's algorithm is a form of rejection sampling and, like CFTP, requires suitable monotonicity properties of the transition kernel used. We show how Fill's version of rejection sampling can be extended to an infinite state space context to produce an exact sample of the penetrable spheres mixture process and related models. Following [5], we use Gibbs sampling and exploit the partial order of the mixture model state space. Thus we construct an algorithm which protects against bias caused by user impatience and which delivers samples not only of the mixture model but also of the attractive area-interaction process and the continuum random-cluster process.
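For readers unfamiliar with CFTP itself, here is a minimal sketch of the monotone version on a small finite chain (a toy example; the paper's setting is an infinite state space of point configurations, and Fill's rejection-based variant is what removes the user-impatience bias).

```python
import numpy as np

rng = np.random.default_rng(4)
K = 10                                       # state space {0, ..., K}

def update(x, u):
    """Monotone update: for fixed u, x <= y implies update(x,u) <= update(y,u)."""
    if u < 0.4:
        return max(x - 1, 0)
    if u < 0.8:
        return min(x + 1, K)
    return x

def cftp():
    us, T = [], 1
    while True:
        us = list(rng.uniform(size=T - len(us))) + us   # fresh noise further in the past
        lo, hi = 0, K                        # extremal chains started at time -T
        for u in us:
            lo, hi = update(lo, u), update(hi, u)
        if lo == hi:                         # coalescence: an exact stationary draw
            return lo
        T *= 2

print([cftp() for _ in range(10)])           # i.i.d. exact samples (uniform on {0,...,K})
```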
We study a class of simulated annealing type algorithms for global minimization with general acceptance probabilities. This paper presents simple conditions, easy to verify in practice, which ensure the convergence of the algorithm to the global minimum with probability 1.
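A minimal sketch of such an algorithm, assuming the classical Metropolis acceptance rule and a logarithmic cooling schedule (illustrative choices; the paper treats general acceptance probabilities):

```python
import numpy as np

rng = np.random.default_rng(5)

def f(x):
    return x ** 2 + 10.0 * np.sin(3.0 * x)   # toy multimodal objective

x = rng.uniform(-5.0, 5.0)
for k in range(1, 100_000):
    T = 1.0 / np.log(1.0 + k)                # slow (logarithmic) cooling schedule
    y = x + rng.normal(scale=0.5)            # random proposal
    # Metropolis acceptance: accept with probability min(1, exp(-(f(y)-f(x))/T))
    if np.log(rng.uniform()) < (f(x) - f(y)) / T:
        x = y
print(x, f(x))                               # typically near the global minimum
```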
Suppose $t_1, t_2, \ldots$ are the arrival times of units into a system. The $k$th entering unit, whose magnitude is $X_k$ and lifetime $L_k$, is said to be 'active' at time $t$ if $I(t_k < t \le t_k + L_k) = I_{k,t} = 1$. The size of the active population at time $t$ is thus given by $A_t = \sum_{k\ge 1} I_{k,t}$. Let $V_t$ denote the vector whose coordinates are the magnitudes of the active units at time $t$, in their order of appearance in the system. For $n \ge 1$, suppose $\lambda_n$ is a measurable function on $n$-dimensional Euclidean space. Of interest is the weak limiting behaviour of the process $\lambda^*(t)$ whose value is $\lambda_m(V_t)$ or $0$, according to whether $A_t = m > 0$ or $A_t = 0$.
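A direct simulation of these objects is simple. The sketch below assumes Poisson arrivals, exponential lifetimes, and uniform magnitudes (all hypothetical choices) and evaluates A_t and V_t at a few time points.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 10_000
t_k = np.cumsum(rng.exponential(scale=1.0, size=n))   # arrival times (Poisson stream)
L_k = rng.exponential(scale=5.0, size=n)              # lifetimes
X_k = rng.uniform(size=n)                             # magnitudes

def V(t):
    """Magnitudes of the active units at time t, in order of appearance."""
    mask = (t_k < t) & (t <= t_k + L_k)
    return X_k[mask]

for t in (100.0, 1_000.0, 5_000.0):
    v = V(t)
    print(t, v.size, np.round(v[:3], 3))              # A_t and the first coordinates of V_t
```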
The filtering problem concerns the estimation of a stochastic process X from noisy partial observations Y. With the notable exception of the linear-Gaussian situation, general optimal filters have no finitely recursive solution. The aim of this work is the design of a Monte Carlo particle system approach to solving discrete-time, nonlinear filtering problems. The main result is a uniform convergence theorem. We introduce a concept of regularity and give a simple ergodic condition on the signal semigroup under which the Monte Carlo particle filter converges in law, uniformly with respect to time, to the optimal filter, yielding what appears to be the first uniform convergence result for a particle approximation of the nonlinear filtering equation.
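A minimal bootstrap particle filter illustrates the approach. The state-space model below (linear signal, sinusoidal observation, Gaussian noises) is an assumed example, not the paper's setting; the propagate/weight/resample structure is the generic particle system.

```python
import numpy as np

rng = np.random.default_rng(6)
N, T = 1_000, 50

# simulate an assumed nonlinear state-space model:
# X_t = 0.9 X_{t-1} + noise,  Y_t = sin(X_t) + observation noise
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.9 * x_true[t - 1] + rng.normal()
y = np.sin(x_true) + rng.normal(scale=0.5, size=T)

particles = rng.normal(size=N)
est = np.zeros(T)
for t in range(T):
    particles = 0.9 * particles + rng.normal(size=N)          # propagate (mutation)
    logw = -0.5 * ((y[t] - np.sin(particles)) / 0.5) ** 2     # Gaussian likelihood weights
    w = np.exp(logw - logw.max())
    w /= w.sum()
    est[t] = w @ particles                                    # filtered posterior mean
    particles = rng.choice(particles, size=N, p=w)            # resample (selection)
print(np.mean((est - x_true) ** 2))                           # rough tracking error
```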
An optimal repair/replacement problem for a single-unit repairable system with minimal repair and random repair cost is considered. The existence of the optimal policy is established using results from optimal stopping theory, and it is shown that the optimal policy is a 'repair-cost-limit' policy: there is a sequence of repair-cost-limit functions $g_n(t)$, $n = 1, 2, \ldots$, such that a unit of age $t$ is replaced at the $n$th failure if and only if the repair cost $C(n, t) \ge g_n(t)$; otherwise it is minimally repaired. If the repair cost does not depend on $n$, then there is a single repair-cost-limit function $g(t)$, which is uniquely determined by a first-order differential equation with a boundary condition.
Explicit formulas are found for the payoff and the optimal stopping strategy of the optimal stopping problem $\sup_\tau \mathbb{E}\left(\max_{0\le t\le\tau} X_t - c\tau\right)$, where $X = (X_t)_{t\ge 0}$ is geometric Brownian motion with drift $\mu$ and volatility $\sigma > 0$, and the supremum is taken over all stopping times for $X$. The payoff is shown to be finite if and only if $\mu < 0$. The optimal stopping time is given by $\tau_* = \inf\{t > 0 \mid X_t = g_*(\max_{0\le s\le t} X_s)\}$, where $s \mapsto g_*(s)$ is the maximal solution of the (nonlinear) differential equation
under the condition $0 < g(s) < s$, where $\Delta = 1 - 2\mu/\sigma^2$ and $K = \Delta\sigma^2/(2c)$. The estimate $g_*(s) \sim ((\Delta - 1)/(K\Delta))^{1/\Delta}\, s^{1 - 1/\Delta}$ as $s \to \infty$ is established. Applying these results we prove the following maximal inequality:
where $\tau$ may be any stopping time for $X$. This extends the well-known identity $\mathbb{E}(\sup_{t>0} X_t) = 1 - \sigma^2/(2\mu)$ and is shown to be sharp. The method of proof relies upon a smooth-pasting guess (for the Stefan problem with moving boundary) and the Itô-Tanaka formula (applied two-dimensionally). The key point and main novelty in our approach is the maximality principle for the moving boundary: the optimal stopping boundary is the maximal solution of the differential equation obtained by the smooth-pasting guess. We believe this principle is of theoretical and practical interest in its own right.
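The closing identity is easy to verify numerically. The sketch below (with assumed parameter values) uses the classical fact that $\sup_t(\sigma B_t + (\mu - \sigma^2/2)t)$ is exponentially distributed with rate $\Delta = 1 - 2\mu/\sigma^2$ when $\mu < 0$, so the all-time supremum of the geometric Brownian motion started at 1 can be sampled directly.

```python
import numpy as np

rng = np.random.default_rng(7)
mu, sigma = -1.0, 1.0                        # assumed values; mu < 0 for finiteness
Delta = 1.0 - 2.0 * mu / sigma ** 2          # = 3 here

# sup_t (sigma*B_t + (mu - sigma^2/2) t) ~ Exponential(rate Delta) for mu < 0,
# so sup_t X_t with X_0 = 1 is the exponential of an exponential variable
sup_log = rng.exponential(scale=1.0 / Delta, size=1_000_000)
print(np.exp(sup_log).mean(), 1.0 - sigma ** 2 / (2.0 * mu))  # both about 1.5
```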
The paper considers stability and instability properties of the Markov chain generated by the composition of an i.i.d. sequence of random transformations. The transformations are assumed to be either linear mappings or else mappings which can be well approximated near 0 by linear mappings. The main results concern the risk probabilities that the Markov chain enters or exits certain balls centered at 0. An application is given to the probability of extinction in a model from population dynamics.
Let $\{Y_n \mid n = 1, 2, \ldots\}$ be a stochastic process and $M$ a positive real number. Define the time of ruin by $T = \inf\{n \mid Y_n > M\}$ (with $T = +\infty$ if $Y_n \le M$ for all $n = 1, 2, \ldots$). Using techniques from large deviations theory, we obtain rough exponential estimates for ruin probabilities for a general class of processes. Special attention is given to the probability that ruin occurs up to a given time point. We also generalize the concept of the safety loading and consider its importance for ruin probabilities.
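A rough exponential estimate of this kind can be seen in simulation. The sketch below (an assumed Gaussian random walk example) compares Monte Carlo ruin probabilities with the Lundberg-type bound $e^{-wM}$, where $w$ solves $\mathbb{E}[e^{w\,\text{step}}] = 1$.

```python
import numpy as np

rng = np.random.default_rng(8)

def ruin_prob(M, drift=-0.5, reps=50_000, horizon=400):
    """Estimate P(sup_n Y_n > M) for a Gaussian random walk with negative drift."""
    steps = rng.normal(loc=drift, scale=1.0, size=(reps, horizon))
    return (np.cumsum(steps, axis=1).max(axis=1) > M).mean()

# for N(-0.5, 1) steps, E[exp(w*step)] = 1 gives the adjustment coefficient w = 1;
# large deviations predict decay like exp(-w*M)
for M in (2.0, 4.0, 6.0):
    print(M, ruin_prob(M), np.exp(-1.0 * M))
```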
A number of stationary stochastic processes are presented with properties pertinent to modelling time series from turbulence and finance. Specifically, the one-dimensional marginal distributions have log-linear tails and the autocorrelation may have two or more time scales. Discrete-time models with a given marginal distribution are constructed as sums of independent autoregressions. A similar construction is made in continuous time by considering sums of Ornstein-Uhlenbeck-type processes. To prepare for this, a new property of self-decomposable distributions is presented. Another, rather different, construction of stationary processes with generalized logistic marginal distributions, as an infinite sum of Gaussian processes, is also proposed; in this way processes with continuous sample paths can be constructed. Multivariate versions of the various constructions are also given.
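The discrete-time construction is easy to illustrate. A minimal sketch (with assumed parameters): summing two independent unit-variance AR(1) processes yields an autocorrelation $\rho(k) = (\phi_1^k + \phi_2^k)/2$ with two distinct time scales.

```python
import numpy as np

rng = np.random.default_rng(9)
n, phi1, phi2 = 200_000, 0.99, 0.7

def ar1(phi):
    """Stationary unit-variance AR(1) with parameter phi."""
    x = np.zeros(n)
    eps = rng.normal(size=n) * np.sqrt(1.0 - phi ** 2)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + eps[t]
    return x

y = ar1(phi1) + ar1(phi2)                    # sum of two independent autoregressions
for k in (1, 10, 100):
    emp = np.corrcoef(y[:-k], y[k:])[0, 1]
    theo = 0.5 * (phi1 ** k + phi2 ** k)     # two time scales in the autocorrelation
    print(k, round(emp, 3), round(theo, 3))
```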
We study one-dimensional continuous loss networks with length distribution $G$ and cable capacity $C$. We prove that the unique stationary distribution $\eta_L$ of the network, in which the restriction that the number of calls be less than $C$ is imposed only on the segment $[-L, L]$, is the same as the distribution of a stationary M/G/∞ queue conditioned to be less than $C$ in the time interval $[-L, L]$. For distributions $G$ of phase type (i.e. absorption times of finite state Markov processes), we show that the limit of $\eta_L$ as $L \to \infty$ exists and is unique. The limiting distribution turns out to be invariant for the infinite loss network. This was conjectured by Kelly (1991).