Let S0 := 0 and Sk := ξ1 + ··· + ξk for k ∈ ℕ := {1, 2, …}, where {ξk : k ∈ ℕ} are independent copies of a random variable ξ with values in ℕ and distribution pk := P{ξ = k}, k ∈ ℕ. We interpret the random walk {Sk : k = 0, 1, 2, …} as a particle jumping to the right through integer positions. Fix n ∈ ℕ and modify the process by requiring that the particle is bumped back to its current state each time a jump would bring it to a state larger than or equal to n. This constraint defines an increasing Markov chain {Rk(n) : k = 0, 1, 2, …} which never reaches the state n. We call this process a random walk with barrier n. Let Mn denote the number of jumps of the random walk with barrier n. This paper focuses on the asymptotics of Mn as n tends to ∞. A key observation is that, provided p1 > 0, {Mn : n ∈ ℕ} satisfies the distributional recursion M1 = 0 and Mn = Mn−In + 1 (in distribution) for n = 2, 3, …, where In is independent of M2, …, Mn−1 with distribution P{In = k} = pk / (p1 + ··· + pn−1), k ∈ {1, …, n − 1}. Depending on the tail behavior of the distribution of ξ, several scalings for Mn and corresponding limiting distributions come into play, including stable distributions and distributions of exponential integrals of subordinators. The methods used in this paper are mainly probabilistic. The key tool is to compare (couple) the number of jumps, Mn, with the first time, Nn, when the unrestricted random walk {Sk : k = 0, 1, …} reaches a state larger than or equal to n. The results are applied to derive the asymptotics of the number of collision events (that take place until there is just a single block) for β(a, b)-coalescent processes with parameters 0 < a < 2 and b = 1.
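As a concrete illustration of the coupling idea, the following sketch simulates both Mn, the number of jumps of the walk with barrier n, and Nn, the first-passage time of the unrestricted walk, so the two can be compared numerically. The jump law pk ∝ k^(-2.5) on {1, …, 50} is an assumption made only for this sketch; the paper fixes no particular distribution.

    import random

    # Illustrative jump law (an assumption of this sketch, not taken from the paper):
    # p_k proportional to k^(-2.5) on {1, ..., 50}.
    support = list(range(1, 51))
    raw = [k ** -2.5 for k in support]
    p = {k: w / sum(raw) for k, w in zip(support, raw)}

    def jumps_with_barrier(n):
        """M_n: number of (state-changing) jumps of the random walk with barrier n."""
        state, jumps = 0, 0
        while n - state >= 2:                            # from n-1 every proposed jump is rejected
            ks = [k for k in range(1, n - state) if k in p]
            ws = [p[k] for k in ks]
            state += random.choices(ks, weights=ws)[0]   # jump conditioned to land below n
            jumps += 1
        return jumps

    def first_passage(n):
        """N_n: first time the unrestricted walk S_k reaches a state >= n."""
        s, k = 0, 0
        while s < n:
            s += random.choices(support, weights=list(p.values()))[0]
            k += 1
        return k

    random.seed(0)
    n, reps = 200, 200
    print(sum(jumps_with_barrier(n) for _ in range(reps)) / reps,
          sum(first_passage(n) for _ in range(reps)) / reps)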
The reduced Markov branching process is a stochastic model for the genealogy of an unstructured biological population. Its limit behavior in the critical case is well studied for the Zolotarev-Slack regularity parameter α ∈ (0, 1]. We turn to the case of a very heavy-tailed reproduction distribution, α = 0, assuming that Zubkov's regularity condition holds with parameter β ∈ (0, ∞). Our main result gives a new asymptotic pattern for the reduced branching process conditioned on nonextinction during a long time interval.
Gabetta and Regazzini (2006b) have shown that finiteness of the initial energy (second moment) is necessary and sufficient for the solution of Kac's model of the Boltzmann equation to converge weakly (Cb-convergence) to a probability measure on R. Here we complement this result with a detailed analysis of what actually happens when the initial energy is infinite. In particular, we prove that such a solution converges vaguely (C0-convergence) to the zero measure (which is identically 0 on the Borel sets of R). More precisely, we prove that the total mass of the limiting distribution splits into two equal masses (of value ½ each), and we provide quantitative estimates on the rate at which this phenomenon takes place. The methods employed in the proofs also apply to sums of weighted independent and identically distributed random variables x̃1, x̃2, …, with infinite second moment and zero mean. With Tn := λ1,n x̃1 + ··· + ληn,n x̃ηn, where max1≤j≤ηn λj,n → 0 (as n → +∞) and λ1,n² + ··· + ληn,n² = 1, n = 1, 2, …, the classical central limit theorem suggests that Tn should in some sense converge to a ‘normal random variable of infinite variance’. Again, in this setting we prove quantitative estimates on the rate at which the mass splits into masses adherent to −∞ and +∞, or to ∞, analogous to those obtained for the Kac equation. Although the setting in this case is quite classical, we have not uncovered any previous results of a similar type.
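A small simulation sketch of the weighted-sum setting: every distributional choice here (a symmetric Pareto-type law with tail index 1.5 for the x̃j, and the simplest admissible weights λj,n = n^(-1/2)) is an assumption made for illustration only. It tracks how the mass of Tn drifts away from a fixed bounded interval as n grows.

    import random

    def x_tilde():
        # symmetric heavy-tailed variable: zero mean, infinite variance (tail index 1.5)
        u = 1.0 - random.random()                      # uniform in (0, 1]
        x = u ** (-1 / 1.5)                            # Pareto(1.5) on [1, infinity)
        return x if random.random() < 0.5 else -x

    def T(n):
        # T_n with weights lambda_{j,n} = n^(-1/2): the squares sum to 1 and the
        # maximal weight tends to 0, as required
        return sum(x_tilde() for _ in range(n)) / n ** 0.5

    random.seed(1)
    for n in (10, 100, 1000, 5000):
        sample = [T(n) for _ in range(500)]
        escaped = sum(abs(t) > 10 for t in sample) / len(sample)
        print(n, round(escaped, 3))   # fraction of mass outside [-10, 10] increases with n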
In this paper we investigate the ‘local’ properties of a random mapping model, TnD̂, which maps the set {1, 2, …, n} into itself. The random mapping TnD̂, which was introduced in a companion paper (Hansen and Jaworski (2008)), is constructed using a collection of exchangeable random variables D̂1, …, D̂n which satisfy D̂1 + D̂2 + ··· + D̂n = n. In the random digraph GnD̂, which represents the mapping TnD̂, the in-degree sequence for the vertices is given by the variables D̂1, D̂2, …, D̂n, and, in some sense, GnD̂ can be viewed as an analogue of the general independent degree models from random graph theory. By local properties we mean the distributions of random mapping characteristics related to a given vertex v of GnD̂; for example, the numbers of predecessors and successors of v in GnD̂. We show that the distributions of several variables associated with the local structure of GnD̂ can be expressed in terms of expectations of simple functions of D̂1, D̂2, …, D̂n. We also consider two special examples of TnD̂ which correspond to random mappings with preferential and anti-preferential attachment, and determine, for these examples, exact and asymptotic distributions for the local structure variables considered in this paper. These distributions are also of independent interest.
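The following sketch illustrates one natural reading of the construction. Two assumptions are made for the sketch only: given the in-degrees, the mapping is drawn uniformly among all mappings of {1, …, n} with that in-degree sequence, and the in-degrees themselves come from a plain multinomial allocation (which recovers the uniform random mapping). It then computes two of the local quantities mentioned above, the predecessors and successors of a vertex v.

    import random

    def random_mapping_with_indegrees(d):
        # d[i] = in-degree of vertex i+1, with sum(d) == n; given the in-degrees the
        # mapping is drawn uniformly among mappings with that in-degree sequence
        n = len(d)
        targets = [i + 1 for i in range(n) for _ in range(d[i])]
        random.shuffle(targets)
        return {j + 1: targets[j] for j in range(n)}    # T(j) = image of vertex j

    def successors(T, v):
        # vertices reachable from v by iterating T
        seen, x = set(), T[v]
        while x not in seen:
            seen.add(x)
            x = T[x]
        return seen

    def predecessors(T, v):
        # vertices from which some iterate of T reaches v
        rev = {}
        for w, t in T.items():
            rev.setdefault(t, []).append(w)
        stack, seen = list(rev.get(v, [])), set()
        while stack:
            w = stack.pop()
            if w not in seen:
                seen.add(w)
                stack.extend(rev.get(w, []))
        return seen

    random.seed(2)
    n = 12
    d = [0] * n
    for _ in range(n):                                  # exchangeable in-degrees summing to n
        d[random.randrange(n)] += 1
    T = random_mapping_with_indegrees(d)
    v = 1
    print(sorted(predecessors(T, v)), sorted(successors(T, v)))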
This article proves that the on-off renewal process with Weibull sojourn times satisfies the large deviation principle on a nonlinear scale. Unusually, its rate function is not convex. Outside a compact set the rate function is infinite, and this enables us to construct natural processes that satisfy the large deviation principle with nontrivial rate functions on more than one time scale.
We consider a sequential rule under which an item (for example, a candidate for a university faculty position) is admitted to the group only if its score is better than the average score of those already belonging to the group. We study four variables: the average score of the members of the group after k items have been selected; the time it takes (in terms of the number of observed items) to assemble a group of k items; the average score of the group after n items have been observed; and the number of items kept after the first n items have been observed. We develop the relationships between these variables and obtain their asymptotic behavior as k (respectively, n) tends to ∞. Throughout we assume that the items are independent and identically distributed with a continuous distribution. Although knowledge of this distribution is not needed to implement the selection rule, the asymptotic behavior does depend on the distribution. We study in some detail the exponential, Pareto, and beta distributions. We also consider generalizations of the ‘better than average’ rule to ‘β better than average’ rules, under which an item is admitted to the group only if its score is better than β times the present average of the group, where β > 0.
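A minimal simulation of the selection rule, including the β generalization. The exponential score distribution and the convention that the first observed item starts the group are assumptions of this sketch only.

    import random

    def select_until_group_size(k, beta=1.0, sample=lambda: random.expovariate(1.0)):
        # 'beta better than average' rule: an item is admitted iff its score exceeds
        # beta times the current group average (beta = 1 is the plain rule);
        # the first observed item is taken to start the group.
        total, size, observed = 0.0, 0, 0
        while size < k:
            x = sample()
            observed += 1
            if size == 0 or x > beta * total / size:
                total += x
                size += 1
        return observed, total / size

    random.seed(4)
    for k in (10, 100, 1000):
        observed, avg = select_until_group_size(k)
        print(k, observed, round(avg, 3))   # items observed and group average after k selections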
The copula of a multivariate distribution is the distribution obtained by transforming it so that each one-dimensional marginal distribution is uniform. We review a different transformation of a multivariate distribution, which yields standard Pareto marginal distributions, and we call the resulting distribution the Pareto copula. Use of the Pareto copula has a certain claim to naturalness when considering asymptotic limit distributions for sums, maxima, and empirical processes. We discuss implications for aggregation of risk and offer some examples.
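In concrete terms, a coordinate x with continuous marginal distribution function F is sent through x ↦ 1/(1 − F(x)), which has the standard Pareto law P{Y > y} = 1/y, y ≥ 1. The short sketch below applies this transform; the bivariate example with exponential and normal margins is an arbitrary illustration, not taken from the paper.

    import math, random

    def to_standard_pareto(x, cdf):
        # marginal transform x -> 1 / (1 - F(x)); for continuous F the image is
        # standard Pareto: P{Y > y} = 1/y for y >= 1
        return 1.0 / (1.0 - cdf(x))

    def exp_cdf(x):
        return 1.0 - math.exp(-x)

    def norm_cdf(x):
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    random.seed(6)
    sample = [(random.expovariate(1.0), random.gauss(0.0, 1.0)) for _ in range(5)]
    pareto_scale = [(to_standard_pareto(u, exp_cdf), to_standard_pareto(v, norm_cdf))
                    for u, v in sample]
    print(pareto_scale)   # these pairs are a sample from the corresponding Pareto copula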
We make a correction to an important result by Cline [D. B. H. Cline, ‘Convolutions of distributions with exponential tails’, J. Austral. Math. Soc. (Series A) 43 (1987), 347–365; D. B. H. Cline, ‘Convolutions of distributions with exponential tails: corrigendum’, J. Austral. Math. Soc. (Series A) 48 (1990), 152–153] on the closure of the exponential class under convolution power mixtures (random summation).
Assume that there are k types of insurance contracts in an insurance company, and denote the claims of the ith type by {Xij, j ≥ 1}, i = 1, …, k. In this paper we investigate large deviations both for the partial sums S(k; n1, …, nk) = ∑i=1,…,k ∑j=1,…,ni Xij and for the random sums S(k; t) = ∑i=1,…,k ∑j=1,…,Ni(t) Xij, where Ni(t), i = 1, …, k, are counting processes for the claim numbers. The results obtained extend some related classical results.
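A small sketch of the two aggregate quantities. All distributional choices below (Pareto claim sizes, Poisson claim-number processes) are assumptions for illustration; the paper works under more general conditions.

    import math, random

    def pareto_claim(alpha=1.5):
        # heavy-tailed claim size, P{X > x} = x^(-alpha) for x >= 1
        return (1.0 - random.random()) ** (-1.0 / alpha)

    def poisson(mean):
        # Knuth's method: count uniforms until their product drops below exp(-mean)
        L, k, prod = math.exp(-mean), 0, 1.0
        while prod > L:
            k += 1
            prod *= random.random()
        return k - 1

    def partial_sum(n_list, alpha=1.5):
        # S(k; n_1, ..., n_k): n_i claims of type i
        return sum(pareto_claim(alpha) for n_i in n_list for _ in range(n_i))

    def random_sum(t, rates, alpha=1.5):
        # S(k; t): N_i(t) claims of type i, with N_i(t) ~ Poisson(rate_i * t) in this sketch
        return sum(pareto_claim(alpha) for lam in rates for _ in range(poisson(lam * t)))

    random.seed(8)
    print(partial_sum([50, 30, 20]))
    print(random_sum(t=10.0, rates=[5.0, 3.0, 2.0]))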
Starting from a sequence of independent Wright-Fisher diffusion processes on [0, 1], we construct a class of reversible infinite-dimensional diffusion processes on Δ∞ := {x ∈ [0, 1]^ℕ : ∑i≥1 xi = 1} with the GEM distribution as reversible measure. Log-Sobolev inequalities are established for these diffusions, which lead to exponential convergence to the corresponding reversible measures in entropy. Extensions are made to a class of measure-valued processes over an abstract space S. This provides a reasonable alternative to the Fleming-Viot process, which does not satisfy the log-Sobolev inequality when S is infinite, as observed by Stannat (2000).
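For orientation, the one-parameter GEM(θ) distribution admits the familiar stick-breaking representation with independent Beta(1, θ) fractions; the sketch below samples from it, truncating the infinite sequence once the unbroken remainder is negligible. (Whether the paper uses the one- or two-parameter GEM family is not stated above; the one-parameter case is chosen here purely for illustration.)

    import random

    def gem_sample(theta, eps=1e-12):
        # stick-breaking: V_i i.i.d. Beta(1, theta), P_i = V_i * prod_{j<i} (1 - V_j);
        # the weights are positive and sum to 1, so (P_1, P_2, ...) lies in Delta_infinity
        weights, remaining = [], 1.0
        while remaining > eps:
            v = random.betavariate(1.0, theta)
            weights.append(remaining * v)
            remaining *= 1.0 - v
        return weights          # truncated once the remaining stick length is below eps

    random.seed(9)
    w = gem_sample(theta=2.0)
    print(len(w), round(sum(w), 8))    # sums to 1 up to the truncation error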
We consider the tail behavior of the product of two independent nonnegative random variables X and Y. Breiman (1965) considered this problem, assuming that X is regularly varying with index α and that E{Y^(α+ε)} < ∞ for some ε > 0. We investigate when the condition on Y can be weakened, and apply our findings to analyze a class of random difference equations.
Large deviation estimates are derived for sums of random variables with certain dependence structures, including finite population statistics and random graphs. The argument is based on Stein's method, but with a novel modification of Stein's equation inspired by the Cramér transform.
A stack is a structural unit in an RNA structure that is formed by pairs of hydrogen-bonded nucleotides. Paired nucleotides are scored according to their ability to hydrogen bond. We consider stack/hairpin-loop structures for a sequence of independent and identically distributed random variables with values in a finite alphabet, and we show how to obtain an asymptotic Poisson distribution for the number of stack/hairpin-loop structures with a score exceeding a high threshold, provided that we count in a proper, declumped way. From this result we obtain an asymptotic Gumbel distribution for the maximal stack score. We also provide examples focusing on the computation of the constants that enter the asymptotic distributions. Finally, we discuss the close relation to existing results for local alignment.
The transmission control protocol (TCP) is a transport protocol used in the Internet. In Ott (2005), a more general class of candidate transport protocols called ‘protocols in the TCP paradigm’ was introduced. The long-term objective of studying this class is to find protocols with promising performance characteristics. In this paper we study Markov chain models derived from protocols in the TCP paradigm. Protocols in the TCP paradigm, like TCP itself, protect the network from congestion by decreasing the ‘congestion window’ (i.e. the amount of data allowed to be sent but not yet acknowledged) when there is packet loss or packet marking, and by increasing it when there is no loss. When losses of different packets are assumed to be independent events and the probability p of loss is assumed to be constant, the protocol gives rise to a Markov chain {Wn}, where Wn is the size of the congestion window after the transmission of the nth packet. For a wide class of such Markov chains, we prove weak convergence results, after appropriate rescaling of time and space, as p → 0. The limiting processes are defined by stochastic differential equations. Depending on certain parameter values, the stochastic differential equation can define an Ornstein-Uhlenbeck process or can be driven by a Poisson process.
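A minimal sketch of one such chain, using the classical TCP additive-increase / multiplicative-decrease rule, which is a single member of the paradigm; the general increase and decrease functions of an arbitrary protocol in the paradigm are not modelled here.

    import random

    def simulate_window(p, steps, w0=1.0):
        # {W_n}: after each packet, a loss/marking event occurs with probability p
        # (independently across packets); the window is halved on loss and grows by
        # 1/W otherwise -- the classical TCP rule.
        w, total = w0, 0.0
        for _ in range(steps):
            if random.random() < p:
                w /= 2.0            # multiplicative decrease on loss
            else:
                w += 1.0 / w        # additive increase, spread over a window of packets
            total += w
        return total / steps        # time-average window size

    random.seed(10)
    for p in (0.01, 0.001):
        mean_w = simulate_window(p, steps=200_000)
        print(p, round(mean_w, 2), round(mean_w * p ** 0.5, 2))
        # the product mean_w * sqrt(p) stays roughly constant as p shrinks, the
        # well-known inverse-square-root dependence on the loss probability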
In this paper we prove a conditional limit theorem for a critical Galton-Watson branching process {Zn; n ≥ 0} with offspring generating function s + (1 − s)L(1/(1 − s)), where L(x) is slowly varying. In contrast to a well-known theorem of Slack (1968), (1972), we use a functional normalization, which gives an exponential limit. We also give an alternative proof of Sze's (1976) result on the asymptotic behavior of the nonextinction probability.
A multitype urn scheme with random replacements is considered. Each time a ball is picked, another ball is added, and its type is chosen according to the transition probabilities of a reducible Markov chain. The vector of frequencies is shown to converge almost surely to a random element of the set of stationary measures of the Markov chain. Its probability distribution is characterized as the solution to a fixed point problem. It is proved to be Dirichlet in the particular case of a single transient state to which no return is possible. This is no longer the case, however, as soon as returns to transient states are allowed.
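A simulation sketch of the urn. The particular 3-type reducible transition matrix below, with two absorbing types and a single transient type to which no return is possible, is an assumption chosen to match the Dirichlet case mentioned in the abstract.

    import random

    # reducible chain on types {0, 1, 2}: types 0 and 1 are absorbing, type 2 is a
    # single transient type to which no return is possible
    P = [
        [1.0, 0.0, 0.0],
        [0.0, 1.0, 0.0],
        [0.5, 0.5, 0.0],
    ]

    def simulate_urn(steps, initial=(1, 1, 1)):
        counts = list(initial)
        for _ in range(steps):
            i = random.choices(range(3), weights=counts)[0]   # type of the picked ball
            j = random.choices(range(3), weights=P[i])[0]     # type of the added ball
            counts[j] += 1
        total = sum(counts)
        return [c / total for c in counts]

    random.seed(11)
    print([round(f, 3) for f in simulate_urn(50_000)])
    # the frequency vector settles on a random stationary measure of P: all mass
    # ends up on types 0 and 1, with the limit law being Dirichlet in this case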
We derive an asymptotic expansion for the distribution of a compound sum of independent random variables, all having the same rapidly varying subexponential distribution. The cases of a Poisson and a geometric number of summands serve as an illustration of the main result. Complete calculations are carried out for the Weibull distribution, for which we derive, as an example and without difficulty, seven-term expansions.
In this paper we propose a class of sequential urn designs based on generalized Pólya urn (GPU) models for balancing the allocations of two treatments in sequential clinical trials. In particular, we consider a GPU model characterized by a 2 × 2 random addition matrix with null balance (i.e. null row sums) and a replacement rule depending on the urn composition. Under this scheme, the urn process has a Markovian structure and can be regarded as a random extension of the classical Ehrenfest model. We establish almost sure convergence and asymptotic normality for the frequency of treatment allocations, and show that in some particular cases the asymptotic variance of the design admits a natural representation based on the set of orthogonal polynomials associated with the corresponding Markov process.
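The sketch below gives one deliberately simple instance of such a design: the total number of balls stays fixed, and each draw triggers a null-balance addition in which the drawn ball is swapped for one of the opposite colour with a probability that depends on the current composition. This particular replacement rule is an illustration in the spirit of the abstract, not the paper's general scheme.

    import random

    def ehrenfest_like_design(steps, total=100):
        # two treatment colours A and B; 'a' = current number of A balls
        a, n_a_assigned = total // 2, 0
        for _ in range(steps):
            draw_a = random.random() < a / total        # treatment assigned at this step
            n_a_assigned += draw_a
            # null-balance random addition: swap the drawn ball for the opposite colour
            # with a composition-dependent probability (pushes the urn back to balance)
            swap_prob = a / total if draw_a else (total - a) / total
            if random.random() < swap_prob:
                a += -1 if draw_a else 1
        return n_a_assigned / steps, a

    random.seed(12)
    freq_a, final_a = ehrenfest_like_design(20_000)
    print(round(freq_a, 3), final_a)   # allocation frequency of treatment A is close to 1/2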
Given two sequences of length n over a finite alphabet A of size |A| = d, the D2 statistic is the number of k-letter word matches between the two sequences. This statistic is used in bioinformatics for EST sequence database searches. Under the assumption of independent and identically distributed letters in the sequences, Lippert, Huang and Waterman (2002) raised questions about the asymptotic behavior of D2 when the alphabet is uniformly distributed. They expressed a concern that the commonly assumed normality may create errors in estimating significance. In this paper we answer those questions. Using Stein's method, we show that, for large enough k, the D2 statistic is approximately normal as n gets large. When k = 1, we prove that, for large enough d, the D2 statistic is approximately normal as n gets large. We also give a formula for the variance of D2 in the uniform case.
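For reference, D2 can be computed directly by counting, over all pairs of starting positions in the two sequences, how often the corresponding k-letter words agree. The short sketch below does this for uniform i.i.d. letters on a 4-letter alphabet; the sequence length, k, and alphabet are arbitrary illustrative choices.

    import random
    from collections import Counter

    def d2(seq1, seq2, k):
        # D2 = number of pairs (i, j) such that the k-words starting at i in seq1
        # and at j in seq2 are identical
        words1 = Counter(seq1[i:i + k] for i in range(len(seq1) - k + 1))
        words2 = Counter(seq2[j:j + k] for j in range(len(seq2) - k + 1))
        return sum(words1[w] * words2[w] for w in words1 if w in words2)

    random.seed(13)
    alphabet, n, k = 'ACGT', 2000, 6
    s1 = ''.join(random.choice(alphabet) for _ in range(n))
    s2 = ''.join(random.choice(alphabet) for _ in range(n))
    print(d2(s1, s2, k))
    # under uniform i.i.d. letters the mean is (n - k + 1)^2 / 4^k, about 971 here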
We investigate the large-scale behaviour of a Lévy process whose jump magnitude follows a stable law with spherically inhomogeneous scaling coefficients. Furthermore, the jumps are dragged in the spherical direction by a dynamical system which has an attractor.