A basic issue in extreme value theory is the characterization of the asymptotic distribution of the maximum of a number of random variables as the number tends to infinity. We address this issue in several settings. For independent identically distributed random variables where the distribution is a mixture, we show that the convergence of their maxima is determined by one of the distributions in the mixture that has a dominant tail. We use this result to characterize the asymptotic distribution of maxima associated with mixtures of convolutions of Erlang distributions and of normal distributions. Normalizing constants and bounds on the rates of convergence are also established. The next result is that the distribution of the maxima of independent random variables with phase type distributions converges to the Gumbel extreme-value distribution. These results are applied to describe completion times for jobs consisting of the parallel-processing of tasks represented by Markovian PERT networks or task-graphs. In these contexts, which arise in manufacturing and computer systems, the job completion time is the maximum of the task times and the number of tasks is fairly large. We also consider maxima of dependent random variables for which distributions are selected by an ergodic random environment process that may depend on the variables. We show under certain conditions that their distributions may converge to one of the three classical extreme-value distributions. This applies to parallel-processing where the subtasks are selected by a Markov chain.
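As a quick numerical illustration of the Gumbel convergence described above (not taken from the paper), consider the simplest phase-type distribution, the exponential: the normalized maximum M_n − ln n of n i.i.d. Exp(1) variables converges in distribution to the Gumbel law with CDF exp(−e^{−x}). The maximum can be sampled exactly by inversion, avoiding the need to store n variables per replication.

```python
import numpy as np

rng = np.random.default_rng(0)

# The max of n i.i.d. Exp(1) variables has CDF (1 - e^{-x})^n, so it can
# be sampled exactly by inversion: M = -log(1 - U**(1/n)).
n, reps = 1_000_000, 100_000
u = rng.random(reps)
maxima = -np.log(1.0 - u ** (1.0 / n)) - np.log(n)   # normalized maxima

for x in (-1.0, 0.0, 1.0):
    print(f"x={x:+.0f}: empirical {(maxima <= x).mean():.4f}"
          f"  vs Gumbel {np.exp(-np.exp(-x)):.4f}")
```

The empirical CDF of the normalized maxima matches exp(−e^{−x}) to within Monte Carlo error; for heavier- or lighter-tailed phase-type distributions only the normalizing constants change.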
We consider a random measure whose distribution is invariant under the action of a standard transformation group. The reduced moments are defined by applying classical theorems on invariant measure decomposition. We present a general method for constructing unbiased estimators of reduced moments. Several asymptotic results are established under an extension of the Brillinger mixing condition. Examples related to stochastic geometry are given.
In this paper, in work strongly related to that of Coffman et al. [5], Bruss and Robertson [2], and Rhee and Talagrand [15], we focus our interest on an asymptotic distributional comparison between numbers of ‘smallest’ i.i.d. random variables selected by either on-line or off-line policies. Let X1,X2,… be a sequence of i.i.d. random variables with distribution function F(x), and let X1,n,…,Xn,n be the sequence of order statistics of X1,…,Xn. For a sequence (cn)n≥1 of positive constants, the smallest fit off-line counting random variable is defined by Ne(cn) := max {j ≤ n : X1,n + … + Xj,n ≤ cn}. The asymptotic joint distributional comparison is given between the off-line count Ne(cn) and on-line counts Nnτ for ‘good’ sequential (on-line) policies τ satisfying the sum constraint ∑j≥1XτjI(τj≤n) ≤ cn. Specifically, for such policies τ, under appropriate conditions on the distribution function F(x) and the constants (cn)n≥1, we find sequences of positive constants (Bn)n≥1, (Δn)n≥1 and (Δ'n)n≥1 such that
for some non-degenerate random variables W and W'. The major tools used in the paper are convergence of point processes to Poisson random measure and continuous mapping theorems, strong approximation results of the normalized empirical process by Brownian bridges, and some renewal theory.
In this paper a central limit theorem is proved for wave-functionals defined as the sums of wave amplitudes observed in sample paths of stationary continuously differentiable Gaussian processes. Examples illustrating this theory are given.
Recently Propp and Wilson [14] have proposed an algorithm, called coupling from the past (CFTP), which allows not merely approximate but perfect (i.e. exact) simulation of the stationary distribution of certain finite state space Markov chains. Perfect sampling using CFTP has been successfully extended to the context of point processes by, amongst other authors, Häggström et al. [5]. In [5] Gibbs sampling is applied to a bivariate point process, the penetrable spheres mixture model [19]. However, in general the running time of CFTP in terms of the number of transitions is not independent of the state sampled. Thus an impatient user who aborts long runs may introduce a subtle bias, the user-impatience bias. Fill [3] introduced an exact sampling algorithm for finite state space Markov chains which, in contrast to CFTP, is not subject to user-impatience bias. Fill's algorithm is a form of rejection sampling and, like CFTP, requires sufficient monotonicity properties of the transition kernel used. We show how Fill's version of rejection sampling can be extended to an infinite state space context to produce an exact sample of the penetrable spheres mixture process and related models. Following [5] we use Gibbs sampling and make use of the partial order of the mixture model state space. Thus we construct an algorithm which protects against bias caused by user impatience and which delivers samples not only of the mixture model but also of the attractive area-interaction and the continuum random-cluster process.
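For readers unfamiliar with CFTP, the following is a minimal sketch (not the algorithm of this paper, which works on an infinite state space) of monotone CFTP for a finite birth-death chain on {0,…,K}: bounding trajectories are started from the minimal and maximal states at time −T, driven by shared randomness, and T is doubled until the trajectories coalesce by time 0, at which point the common value is an exact draw from the stationary distribution.

```python
import random

def monotone_update(x, u, p, K):
    # Move up with probability p, otherwise down, reflecting at 0 and K.
    # Using a common uniform u for every start state keeps the coupling
    # monotone: x <= y implies update(x, u) <= update(y, u).
    if u < p:
        return min(x + 1, K)
    return max(x - 1, 0)

def cftp(p=0.4, K=10, seed=1):
    rng = random.Random(seed)
    us = []            # shared randomness, reused as we extend into the past
    T = 1
    while True:
        while len(us) < T:
            us.append(rng.random())
        lo, hi = 0, K  # minimal and maximal states sandwich every trajectory
        for t in range(T - 1, -1, -1):   # run from time -T up to time 0
            lo = monotone_update(lo, us[t], p, K)
            hi = monotone_update(hi, us[t], p, K)
        if lo == hi:   # coalescence: the common value is an exact sample
            return lo
        T *= 2         # not coalesced: restart from further in the past
```

Note that the uniforms us[t] are reused across restarts; regenerating them each time would bias the sample. By detailed balance the stationary law here is proportional to (p/(1−p))^x, which the samples reproduce. Aborting a long run of cftp before it returns is exactly the user-impatience bias the abstract refers to.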
We study a class of simulated annealing type algorithms for global minimization with general acceptance probabilities. This paper presents simple conditions, easy to verify in practice, which ensure the convergence of the algorithm to the global minimum with probability 1.
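A generic simulated annealing loop with Metropolis-type acceptance probabilities, applied to a toy multimodal landscape (all parameters below are illustrative choices, not the conditions of the paper), looks as follows:

```python
import math
import random

def anneal(f, x0, neighbor, temp, steps, seed=0):
    """Simulated annealing with Metropolis acceptance probability
    exp(-(f(y) - f(x)) / T_k) for uphill moves; downhill moves always accepted."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for k in range(steps):
        y = neighbor(x, rng)
        fy = f(y)
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / temp(k)):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest

# Toy multimodal landscape on the integers 0..100, global minimum near x = 60.
f = lambda x: (x - 60) ** 2 / 100.0 + math.cos(x)
neighbor = lambda x, rng: min(100, max(0, x + rng.choice((-1, 1))))
temp = lambda k: 2.0 / math.log(k + 2)     # logarithmic cooling schedule
best, fbest = anneal(f, 30, neighbor, temp, 20_000)
```

The logarithmic schedule reflects the classical sufficient condition for convergence with probability 1; conditions of the kind the paper verifies concern precisely how the acceptance probabilities and the cooling schedule must interact.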
Suppose t1, t2,… are the arrival times of units into a system. The kth entering unit, whose magnitude is Xk and lifetime Lk, is said to be ‘active’ at time t if tk ≤ t < tk + Lk; let Ik,t denote the indicator of this event. The size of the active population at time t is thus given by At = ∑k≥1Ik,t. Let Vt denote the vector whose coordinates are the magnitudes of the active units at time t, in their order of appearance in the system. For n ≥ 1, suppose λn is a measurable function on n-dimensional Euclidean space. Of interest is the weak limiting behaviour of the process λ*(t) whose value is λm(Vt) or 0, according to whether At = m > 0 or At = 0.
The filtering problem concerns the estimation of a stochastic process X from its noisy partial information Y. With the notable exception of the linear-Gaussian situation, general optimal filters have no finitely recursive solution. The aim of this work is the design of a Monte Carlo particle system approach to solve discrete time and nonlinear filtering problems. The main result is a uniform convergence theorem. We introduce a concept of regularity and we give a simple ergodic condition on the signal semigroup for the Monte Carlo particle filter to converge in law and uniformly with respect to time to the optimal filter, yielding what seems to be the first uniform convergence result for a particle approximation of the nonlinear filtering equation.
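A minimal bootstrap particle filter (propagate, reweight by the observation likelihood, resample) can be sketched on a linear-Gaussian model, where the exact optimal filter, the Kalman filter, is available for comparison. This is of course not the nonlinear setting of the paper; the linear model is chosen only so that the particle approximation can be checked against the exact filter.

```python
import numpy as np

rng = np.random.default_rng(42)

# Linear-Gaussian state space model (exact optimal filter = Kalman filter):
#   X_k = 0.8 X_{k-1} + V_k,   Y_k = X_k + W_k,   V_k, W_k ~ N(0, 1)
a, T, N = 0.8, 50, 5000

x, ys = 0.0, []
for _ in range(T):
    x = a * x + rng.normal()
    ys.append(x + rng.normal())

# Bootstrap particle filter: propagate, reweight, multinomial resample.
particles = rng.normal(size=N)            # prior X_0 ~ N(0, 1)
pf_means = []
for y in ys:
    particles = a * particles + rng.normal(size=N)      # propagate
    logw = -0.5 * (y - particles) ** 2                  # N(y; x, 1), up to const
    w = np.exp(logw - logw.max())
    w /= w.sum()
    pf_means.append(np.dot(w, particles))
    particles = rng.choice(particles, size=N, p=w)      # resample

# Exact Kalman filter for the same data and the same prior.
m, P, kf_means = 0.0, 1.0, []
for y in ys:
    m, P = a * m, a * a * P + 1.0         # predict
    K = P / (P + 1.0)                     # Kalman gain
    m, P = m + K * (y - m), (1.0 - K) * P # update
    kf_means.append(m)
```

The particle-filter means track the Kalman means to within O(N^{−1/2}) Monte Carlo error at every time step; whether such error bounds can be made uniform in time is exactly the question the paper's ergodicity condition addresses.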
An optimal repair/replacement problem for a single-unit repairable system with minimal repair and random repair cost is considered. The existence of the optimal policy is established using results from optimal stopping theory, and it is shown that the optimal policy is a ‘repair-cost-limit’ policy: there is a series of repair-cost-limit functions gn(t), n = 1, 2,…, such that a unit of age t is replaced at the nth failure if and only if the repair cost C(n, t) ≥ gn(t); otherwise it is minimally repaired. If the repair cost does not depend on n, then there is a single repair-cost-limit function g(t), which is uniquely determined by a first-order differential equation with a boundary condition.
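To make the policy structure concrete, here is a small renewal-reward simulation of a repair-cost-limit rule. Everything below is a hypothetical toy instance (linear hazard, uniform repair costs, a constant cost limit rather than the paper's optimal g(t)); under minimal repair, failures between replacements follow a nonhomogeneous Poisson process with the unit's hazard rate.

```python
import numpy as np

rng = np.random.default_rng(9)

beta, K = 0.5, 10.0          # hazard h(t) = beta * t; replacement cost K

def cycle(limit, rng):
    """One replacement cycle under a (hypothetical) constant cost limit:
    at a failure of age t with repair cost C ~ U(0, 2), replace iff C >= limit,
    otherwise minimally repair.  Returns (cycle cost, cycle length)."""
    t, cost = 0.0, 0.0
    while True:
        # Next failure of the NHPP: solve H(s) - H(t) = Exp(1),
        # where H(t) = beta * t**2 / 2 is the cumulative hazard.
        e = rng.exponential()
        t = np.sqrt(t * t + 2.0 * e / beta)
        c = rng.uniform(0.0, 2.0)
        if c >= limit:               # repair too expensive: replace
            return cost + K, t
        cost += c                    # otherwise pay for the minimal repair

def cost_rate(limit, n=20_000):
    pairs = [cycle(limit, rng) for _ in range(n)]
    # Renewal-reward: long-run cost rate = E[cycle cost] / E[cycle length].
    return sum(p[0] for p in pairs) / sum(p[1] for p in pairs)
```

With these toy numbers a very low limit is wasteful (almost every failure triggers an expensive replacement), which the simulation reproduces; the paper's result says the optimal limit is not a constant but an age-dependent function g(t) solving a first-order ODE.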
Explicit formulas are found for the payoff and the optimal stopping strategy of the optimal stopping problem supτE (max0≤t≤τXt − c τ), where X = (Xt)t≥0 is geometric Brownian motion with drift μ and volatility σ > 0, and the supremum is taken over all stopping times for X. The payoff is shown to be finite if and only if μ < 0. The optimal stopping time is given by τ* = inf {t > 0 | Xt = g* (max0≤s≤tXs)}, where s ↦ g*(s) is the maximal solution of a (nonlinear) first-order differential equation under the condition 0 < g(s) < s, in which Δ = 1 − 2μ / σ2 and K = Δ σ2 / 2c. The estimate g*(s) ∼ ((Δ − 1) / K Δ)1 / Δs1−1/Δ as s → ∞ is established. Applying these results we prove a maximal inequality valid for any stopping time τ for X. This extends the well-known identity E (supt>0Xt) = 1 − (σ 2 / 2 μ) and is shown to be sharp. The method of proof relies upon a smooth-pasting guess (for the Stefan problem with moving boundary) and the Itô–Tanaka formula (applied two-dimensionally). The key point and main novelty in our approach is the maximality principle for the moving boundary (the optimal stopping boundary is the maximal solution of the differential equation obtained by the smooth-pasting guess). We think that this principle is by itself of theoretical and practical interest.
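The identity E(sup X_t) = 1 − σ²/(2μ) can be checked numerically without discretizing paths, using the classical fact (not specific to this paper) that for X_t = exp(σW_t + (μ − σ²/2)t) with X_0 = 1 and μ < 0, the supremum of the driving drifted Brownian motion is exponentially distributed with rate Δ = 1 − 2μ/σ²:

```python
import numpy as np

rng = np.random.default_rng(7)

# sup_t X_t = exp(S), where S = sup_t (sigma W_t + (mu - sigma^2/2) t)
# is Exp(Delta)-distributed with Delta = 1 - 2 mu / sigma^2 (for mu < 0).
mu, sigma = -1.0, 1.0
Delta = 1.0 - 2.0 * mu / sigma**2           # = 3 here
S = rng.exponential(1.0 / Delta, size=1_000_000)
est = np.exp(S).mean()                      # Monte Carlo E[sup X]
exact = 1.0 - sigma**2 / (2.0 * mu)         # = Delta / (Delta - 1)
```

Since exp(S) is then Pareto with index Δ, E[exp(S)] = Δ/(Δ − 1), which is algebraically identical to 1 − σ²/(2μ); the simulation confirms the agreement.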
The paper considers stability and instability properties of the Markov chain generated by the composition of an i.i.d. sequence of random transformations. The transformations are assumed to be either linear mappings or else mappings which can be well approximated near 0 by linear mappings. The main results concern the risk probabilities that the Markov chain enters or exits certain balls centered at 0. An application is given to the probability of extinction in a model from population dynamics.
Let {Yn | n = 1, 2,…} be a stochastic process and M a positive real number. Define the time of ruin by T = inf{n | Yn > M} (T = +∞ if Yn ≤ M for n = 1, 2,…). Using techniques from large deviations theory we obtain rough exponential estimates for ruin probabilities for a general class of processes. Special attention is given to the probability that ruin occurs up to a given time point. We also generalize the concept of safety loading and consider its importance for ruin probabilities.
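For the special case of a random walk with i.i.d. light-tailed increments (a toy instance, not the general class treated in the paper), the exponential character of the estimates can be seen directly: the adjustment coefficient R solves E[e^{Rξ}] = 1 and gives the Lundberg-type bound P(T < ∞) ≤ e^{−RM}.

```python
import numpy as np

rng = np.random.default_rng(2)

# Y_n = xi_1 + ... + xi_n with xi ~ N(-0.5, 1): negative drift, i.e. a
# positive safety loading.  E[e^{R xi}] = exp(-0.5 R + R^2/2) = 1 gives R = 1,
# so the Lundberg-type bound is P(ruin) <= exp(-M).
M, N, reps = 3.0, 200, 50_000
steps = rng.normal(-0.5, 1.0, size=(reps, N))
ruined = steps.cumsum(axis=1).max(axis=1) > M
est = ruined.mean()                 # P(ruin at or before time N)
bound = np.exp(-1.0 * M)
```

The finite-horizon estimate sits below the infinite-horizon exponential bound, and redoing the experiment for several values of M would exhibit the rough exponential decay rate R in the exponent.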
A number of stationary stochastic processes are presented with properties pertinent to modelling time series from turbulence and finance. Specifically, the one-dimensional marginal distributions have log-linear tails and the autocorrelation may have two or more time scales. Discrete time models with a given marginal distribution are constructed as sums of independent autoregressions. A similar construction is made in continuous time by considering sums of Ornstein-Uhlenbeck-type processes. To prepare for this, a new property of self-decomposable distributions is presented. Also another, rather different, construction of stationary processes with generalized logistic marginal distributions as an infinite sum of Gaussian processes is proposed. In this way processes with continuous sample paths can be constructed. Multivariate versions of the various constructions are also given.
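The two-time-scale autocorrelation from a sum of independent autoregressions is easy to exhibit (a generic illustration with Gaussian innovations and arbitrary coefficients, not the marginal-preserving construction of the paper): for X = U + V with independent AR(1) components, the autocorrelation is a variance-weighted mixture of the two geometric decays.

```python
import numpy as np

rng = np.random.default_rng(3)

n = 200_000
phi1, phi2 = 0.9, 0.3            # slow and fast time scales
u = np.zeros(n)
v = np.zeros(n)
eu, ev = rng.normal(size=n), rng.normal(size=n)
for t in range(1, n):
    u[t] = phi1 * u[t - 1] + eu[t]
    v[t] = phi2 * v[t - 1] + ev[t]
x = u + v                        # sum of two independent autoregressions

def acf(x, k):
    xc = x - x.mean()
    return np.dot(xc[:-k], xc[k:]) / np.dot(xc, xc)

# Theoretical acf of the sum: mixture of phi1^k and phi2^k weighted by the
# stationary variances 1 / (1 - phi^2) of the components.
vu, vv = 1.0 / (1.0 - phi1**2), 1.0 / (1.0 - phi2**2)
theory = lambda k: (vu * phi1**k + vv * phi2**k) / (vu + vv)
```

The empirical autocorrelation of x matches the mixture formula, decaying fast at short lags (the phi2 component) and slowly at long lags (the phi1 component), which is the qualitative feature relevant to turbulence and finance data.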
We study one-dimensional continuous loss networks with length distribution G and cable capacity C. We prove that the unique stationary distribution ηL of the network, in which the restriction that the number of calls be less than C is imposed only on the segment [−L,L], is the same as the distribution of a stationary M/G/∞ queue conditioned to be less than C in the time interval [−L,L]. For distributions G which are of phase type (i.e. absorption times of finite-state Markov processes) we show that the limit as L → ∞ of ηL exists and is unique. The limiting distribution turns out to be invariant for the infinite loss network. This was conjectured by Kelly (1991).
We give formulae for different types of contact distribution functions for stationary (not necessarily Poisson) Voronoi tessellations in ℝd in terms of the Palm void probabilities of the generating point process. Moreover, using the well-known relationship between the linear contact distribution and the chord length distribution we derive a closed form expression for the mean chord length in terms of the two-point Palm distribution and the pair correlation function of the generating point process. The results obtained are specified for Voronoi tessellations generated by Poisson cluster and Gibbsian processes, respectively.
For fixed i let X(i) = (X1(i), …, Xd(i)) be a d-dimensional random vector with some known joint distribution. Here i should be considered a time variable. Let X(i), i = 1, …, n, be a sequence of n independent vectors, where n is the total horizon. In many examples Xj(i) can be thought of as the return to partner j, when there are d ≥ 2 partners, and one stops with the ith observation. If the jth partner alone could decide on a (random) stopping rule t, his goal would be to maximize EXj(t) over all possible stopping rules t ≤ n. In the present ‘multivariate’ setup the d partners must however cooperate and stop at the same stopping time t, so as to maximize some agreed function h(∙) of the individual expected returns. The goal is thus to find a stopping rule t* for which h(EX1(t), …, EXd(t)) = h(EX(t)) is maximized. For continuous and monotone h we describe the class of optimal stopping rules t*. With some additional symmetry assumptions we show that the optimal rule is one which (also) maximizes EZt, where Zi = X1(i) + … + Xd(i), and hence has a particularly simple structure. Examples are included, and the results are extended both to the infinite horizon case and to the case when X(1), …, X(n) are dependent. Asymptotic comparisons between the present problem of finding sup h(EX(t)) and the ‘classical’ problem of finding sup Eh(X(t)) are given. Comparisons between the optimal return to the statistician and to a ‘prophet’ are also included. In the present context a ‘prophet’ is someone who can base his (random) choice g on the full sequence X(1), …, X(n), with corresponding return sup h(EX(g)).
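Once the problem reduces to maximizing EZt for the scalar sums Zi, the optimal rule is the classical one-dimensional threshold rule obtained by backward induction. A sketch with toy distributions (d = 2 uniform coordinates; the i.i.d. finite-horizon case only):

```python
import numpy as np

rng = np.random.default_rng(11)

d, n = 2, 10                       # d partners, horizon n

def sample_Z(size):
    # Z_i = X_1(i) + ... + X_d(i) with i.i.d. U(0,1) coordinates.
    return rng.random((size, d)).sum(axis=1)

# Backward induction for max E[Z_t]: V[i] is the value with stages
# i, ..., n-1 remaining; the optimal rule stops at stage i iff Z_i >= V[i+1].
zs = sample_Z(1_000_000)
V = np.empty(n)
V[n - 1] = zs.mean()               # at the last stage one must stop
for i in range(n - 2, -1, -1):
    V[i] = np.maximum(zs, V[i + 1]).mean()

# Monte Carlo check: run the threshold rule on fresh data.
reps = 200_000
Z = rng.random((reps, n, d)).sum(axis=2)
thresh = np.append(V[1:], -np.inf)           # -inf forces a stop at stage n-1
stop = (Z >= thresh).argmax(axis=1)          # first stage meeting the threshold
reward = Z[np.arange(reps), stop].mean()
```

The simulated reward of the threshold rule agrees with the backward-induction value V[0], which illustrates the 'particularly simple structure' of the optimal cooperative rule under the symmetry assumptions.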
We prove a central limit theorem for conditionally centred random fields, under a moment condition and strict positivity of the empirical variance per observation. We use a random normalization, which fits non-stationary situations. The theorem applies directly to Markov random fields, including the cases of phase transition and lack of stationarity. One consequence is the asymptotic normality of the maximum pseudo-likelihood estimator for Markov fields in complete generality.
We study a point process model with stochastic intensities for a particular branching population of individuals of two types. Type-I individuals immigrate into the population at the times of a Poisson process. During their lives they generate type-II individuals according to a random age dependent birth rate, which themselves may multiply and die. Living type-II descendants increase the death intensity of their type-I ancestor, and conversely, the multiplication and dying intensities of type-II individuals may depend on the life situation of their type-I ancestor. We show that the probability generating function of the marginal distribution of a type-I individual's life process, conditioned on its individual infection and death risk, satisfies an initial value problem of a partial differential equation, and derive its solution. This allows for the determination of additional distributions of observable random variables as well as for describing the complete population process.
Analytic approximations are derived for the distribution of the first crossing time of a straight-line boundary by a d-dimensional Bessel process and its discrete time analogue. The main ingredient for the approximations is the conditional probability that the process crossed the boundary before time m, given its location beneath the boundary at time m. The boundary crossing probability is of interest as the significance level and power of a sequential test comparing d+1 treatments using an O'Brien-Fleming (1979) stopping boundary (see Betensky 1996). Also, it is shown by DeLong (1980) to be the limiting distribution of a nonparametric test statistic for multiple regression. The approximations are compared with exact values from the literature and with values from a Monte Carlo simulation.
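The discrete-time analogue of the boundary crossing probability can be estimated directly by Monte Carlo (with arbitrary illustrative parameters; the paper's contribution is an analytic approximation that avoids such simulation): the radius of a d-dimensional Gaussian random walk plays the role of the discrete Bessel process.

```python
import numpy as np

rng = np.random.default_rng(5)

d, N, reps = 3, 100, 20_000
steps = rng.normal(size=(reps, N, d))
S = steps.cumsum(axis=1)              # d-dimensional Gaussian random walk
R = np.linalg.norm(S, axis=2)         # discrete-time Bessel-type radius
m = np.arange(1, N + 1)

def crossing_prob(a, b):
    # P(R_m >= a + b m for some m <= N), estimated over all paths at once.
    return (R >= a + b * m).any(axis=1).mean()
```

Reusing the same simulated paths for different boundaries (common random numbers) makes the estimates pathwise monotone in the intercept a, a convenient sanity check when comparing against the analytic approximations.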
We derive formulas for the first- and higher-order derivatives of the steady state performance measures for changes in transition matrices of irreducible and aperiodic Markov chains. Using these formulas, we obtain a Maclaurin series for the performance measures of such Markov chains. The convergence range of the Maclaurin series can be determined. We show that the derivatives and the coefficients of the Maclaurin series can be easily estimated by analysing a single sample path of the Markov chain. Algorithms for estimating these quantities are provided. Markov chains consisting of transient states and multiple chains are also studied. The results can be easily extended to Markov processes. The derivation of the results is closely related to some fundamental concepts, such as group inverse, potentials, and realization factors in perturbation analysis. Simulation results are provided to illustrate the accuracy of the single sample path based estimation. Possible applications to engineering problems are discussed.
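The first-order formula can be sketched with the standard fundamental-matrix identity (a textbook special case, using toy matrices; the paper's contribution includes the higher-order terms, convergence range, and single-sample-path estimators): for P(θ) = P + θQ with Q rows summing to zero, the stationary vector satisfies π'(0) = πQZ with Z = (I − P + 1π)^{-1}, so the derivative of the performance η = πf is πQZf.

```python
import numpy as np

def stationary(P):
    # Solve pi (I - P) = 0, pi 1 = 1 as a least-squares system.
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4]])          # irreducible, aperiodic toy chain
Q = np.array([[ 0.1, -0.1,  0.0],        # perturbation direction, rows sum to 0
              [ 0.0,  0.2, -0.2],
              [-0.1,  0.0,  0.1]])
f = np.array([1.0, 2.0, 5.0])            # per-state performance (cost/reward)

pi = stationary(P)
n = len(pi)
Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))  # fundamental matrix
deriv = pi @ Q @ Z @ f                   # first Maclaurin coefficient of eta(theta)

# Finite-difference check of the analytic derivative.
h = 1e-5
num = (stationary(P + h * Q) @ f - stationary(P - h * Q) @ f) / (2 * h)
```

The analytic coefficient matches the finite-difference value to high accuracy; Z here is the fundamental matrix, closely related to the group inverse and the potentials mentioned in the abstract.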