In this paper we consider a Galton-Watson process in which particles move according to a positive recurrent Markov chain on a general state space. We prove a law of large numbers for the empirical position distribution and also discuss the rate of this convergence.
We prove central limit theorems for certain geometrical characteristics of the convex polygons determined by a standard Poisson line process in the plane, such as the angles at the vertices of the polygons, the empirical mean of the number of vertices, and the empirical mean of the perimeter of the polygons.
Generalizing the classical Banach matchbox problem, we consider the process of removing two types of ‘items’ from a ‘pile’ with selection probabilities for the type of the next item to be removed depending on the current numbers of remaining items, and thus changing sequentially. Under various conditions on the probability p_{n₁,n₂} that the next removal will take away an item of type I, given that n₁ and n₂ are the current numbers of items of the two types, we derive asymptotic formulas (as the initial pile size tends to infinity) for the probability that the items of type I are completely removed first and for the number of items left. In some special cases we also obtain explicit results.
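The removal process described above is straightforward to simulate. Below is a minimal Python sketch, not taken from the paper, which uses the illustrative selection rule p_{n₁,n₂} = n₁/(n₁ + n₂); the paper's conditions on p_{n₁,n₂} are far more general.

```python
import random

def simulate_removals(n1, n2, p=lambda a, b: a / (a + b), rng=random):
    """Remove items of two types until one type is exhausted.

    p(a, b) is the probability that the next removal is of type I,
    given that a items of type I and b items of type II remain.
    Returns (type exhausted first, number of items of the other type left).
    """
    while n1 > 0 and n2 > 0:
        if rng.random() < p(n1, n2):
            n1 -= 1
        else:
            n2 -= 1
    return ("I", n2) if n1 == 0 else ("II", n1)

# Estimate P(type I is exhausted first) and the mean number of items left
# for a moderately large initial pile.
trials = 2000
results = [simulate_removals(500, 500) for _ in range(trials)]
p_first = sum(1 for t, _ in results if t == "I") / trials
mean_left = sum(left for _, left in results) / trials
print(p_first, mean_left)
```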
Is the Ewens distribution the only one-parameter family of partition structures where the total number of types sampled is a sufficient statistic? In general, the answer is no. It is shown that all counterexamples can be generated via an urn scheme. The urn scheme need only satisfy two general conditions. In fact, the conditions are both necessary and sufficient. However, in particular, for a large class of partition structures that naturally arise in the infinite alleles theory of population genetics, the Ewens distribution is the only one in this class where the total number of types is sufficient for estimating the mutation rate. Finally, asymptotic sufficiency for parametric families of partition structures is discussed.
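For context, the Ewens distribution itself arises from the classical Hoppe urn, in which a ‘mutation’ ball of weight θ founds each new type. The following Python sketch (a standard construction, not the more general urn scheme of the paper) samples an Ewens partition and reports the total number of types, the statistic discussed above.

```python
import random
from collections import Counter

def hoppe_urn_partition(n, theta, rng=random):
    """Sample a partition of n genes under the Ewens distribution via Hoppe's urn.

    With k genes already sampled, the next gene founds a new type with
    probability theta / (theta + k); otherwise it copies a uniformly chosen
    previously sampled gene.  Returns the block sizes, largest first.
    """
    types = []                               # types[i] = type label of gene i
    next_label = 0
    for k in range(n):
        if rng.random() < theta / (theta + k):
            types.append(next_label)         # mutation: a new type is founded
            next_label += 1
        else:
            types.append(rng.choice(types))  # copy an existing gene's type
    return sorted(Counter(types).values(), reverse=True)

partition = hoppe_urn_partition(100, theta=2.0)
print(partition, "number of types:", len(partition))
```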
In this paper we consider a Galton-Watson process whose particles move according to a Markov chain with discrete state space. The Markov chain is assumed to be positive recurrent. We prove a law of large numbers for the empirical position distribution and also discuss the large deviation aspects of this convergence.
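The model just described is easy to simulate. The sketch below, which is only illustrative and not taken from the paper, runs a branching process on a small three-state positive recurrent chain with a supercritical offspring law and prints the empirical position distribution of the current generation.

```python
import random
from collections import Counter

# Illustrative ingredients (not from the paper): a positive recurrent
# three-state chain and an offspring law with mean 1.5.
P = [[0.5, 0.3, 0.2],
     [0.2, 0.5, 0.3],
     [0.3, 0.2, 0.5]]

def offspring(rng):
    return rng.choices([0, 1, 2, 3], weights=[0.2, 0.3, 0.3, 0.2])[0]

def next_generation(positions, rng=random):
    """Each particle produces offspring; every child takes one step of the chain."""
    children = []
    for x in positions:
        for _ in range(offspring(rng)):
            children.append(rng.choices(range(3), weights=P[x])[0])
    return children

positions = [0]
for _ in range(12):
    positions = next_generation(positions)
    if not positions:
        break

# Empirical position distribution of the surviving generation.
if positions:
    n = len(positions)
    print({s: c / n for s, c in sorted(Counter(positions).items())})
else:
    print("extinct")
```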
The accuracy of compound Poisson approximation can be estimated using Stein's method in terms of quantities similar to those which must be calculated for Poisson approximation. However, the solutions of the relevant Stein equation may, in general, grow exponentially fast with the mean number of ‘clumps’, leading to many applications in which the bounds are of little use. In this paper, we introduce a method for circumventing this difficulty. We establish good bounds for those solutions of the Stein equation which are needed to measure the accuracy of approximation with respect to Kolmogorov distance, but only in a restricted range of the argument. The restriction on the range is then compensated by a truncation argument. Examples are given to show that the method clearly outperforms its competitors, as soon as the mean number of clumps is even moderately large.
We study a Markovian evolutionary process which encompasses the classical simple genetic algorithm. This process is obtained by randomly perturbing a very simple selection scheme. Using the Freidlin-Wentzell theory, we carry out a precise study of the asymptotic dynamics of the process as the perturbations disappear. We show how a delicate interaction between the perturbations and the selection pressure may force convergence toward the global maxima of the fitness function. We establish the existence of a critical population size, above which this kind of convergence can be achieved, and we compute upper bounds on this critical population size for several examples. We derive several conditions to ensure convergence in the homogeneous case; these provide the first mathematically well-founded convergence results for genetic algorithms.
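A minimal sketch of a randomly perturbed selection scheme in this spirit is given below; the bit-string encoding, the fitness function, and the use of independent bit flips as the perturbation are illustrative choices, not the precise process analysed in the paper.

```python
import random

def fitness(x):
    return sum(x)                     # illustrative fitness: number of ones

def evolve(pop_size=50, length=20, p_mut=0.01, generations=200, rng=random):
    """Fitness-proportional selection perturbed by small independent bit flips."""
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        weights = [fitness(x) + 1e-9 for x in pop]          # selection pressure
        parents = rng.choices(pop, weights=weights, k=pop_size)
        # the random perturbation: each bit is flipped independently with prob p_mut
        pop = [[b ^ (rng.random() < p_mut) for b in x] for x in parents]
    return max(pop, key=fitness)

best = evolve()
print("best fitness found:", fitness(best))
```

As the perturbation probability p_mut tends to zero, the behaviour of such a chain is exactly the regime studied via Freidlin-Wentzell theory.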
The shift method consists of computing the expectation of an integrable functional F defined on the probability space ((ℝd)ℕ, B(ℝd)⊗ℕ, μ⊗ℕ) (μ is a probability measure on ℝd) using Birkhoff's Pointwise Ergodic Theorem, i.e.
(1/n) ∑_{k=0}^{n−1} F∘θ^k → E_{μ⊗ℕ}[F]   μ⊗ℕ-a.s.
as n → ∞, where θ denotes the canonical shift operator. When F lies in L²(F_T, μ⊗ℕ) for some integrable enough stopping time T, several weak (CLT) or strong (Gál-Koksma Theorem or LIL) convergence rates hold. The method successfully competes with Monte Carlo. The aim of this paper is to extend these results to more general probability distributions P on ((ℝd)ℕ, B(ℝd)⊗ℕ), namely when the canonical process (Xn)n∈ℕ is P-stationary, α-mixing and fulfils Ibragimov's assumption
for some δ > 0. One application is the computation of the expectation of functionals of an α-mixing Markov chain under its stationary distribution Pν. Compared with the usual Monte Carlo or shift methods based on independent innovations, it may both provide better accuracy and economize on calls to the random number generator.
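A minimal sketch of the shift method in the i.i.d. setting discussed above, with an illustrative functional F depending on the first three coordinates: one long trajectory is averaged over its shifts, whereas plain Monte Carlo draws independent blocks. The function names and the choice of F are ours, not the paper's.

```python
import random

def F(x):
    """Illustrative functional of the first three coordinates of a sequence."""
    return max(x[0], x[1], x[2])

def shift_estimate(n, d=3, rng=random):
    """Shift method: average F over the first n shifts of a single trajectory.

    Birkhoff's theorem gives (1/n) sum_{k<n} F(theta^k omega) -> E[F] a.s.;
    consecutive windows reuse the same random numbers (n + d - 1 draws in total).
    """
    traj = [rng.gauss(0.0, 1.0) for _ in range(n + d - 1)]
    return sum(F(traj[k:k + d]) for k in range(n)) / n

def monte_carlo_estimate(n, d=3, rng=random):
    """Plain Monte Carlo: n independent windows, hence n * d draws."""
    return sum(F([rng.gauss(0.0, 1.0) for _ in range(d)]) for _ in range(n)) / n

print(shift_estimate(100_000), monte_carlo_estimate(100_000))
```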
A variety of convergence results for genealogical and line-of-descent processes are known for exchangeable neutral population genetics models. A general convergence-to-the-coalescent theorem is presented, which works not only for a larger class of exchangeable models but also for a large class of non-exchangeable population models. The coalescence probability, i.e. the probability that two genes, chosen randomly without replacement, have a common ancestor one generation backwards in time, is the central quantity in the analysis of the ancestral structure.
A simple convergence theorem for sequences of Markov chains is presented in order to derive new ‘convergence-to-the-coalescent’ results for diploid neutral population models.
For the so-called diploid Wright-Fisher model with selfing probability s and mutation rate θ, it is shown that the ancestral structure of n sampled genes can be treated in the framework of an n-coalescent with mutation rate θ̃ := θ(1 − s/2), if the population size N is large and if the time is measured in units of (2 − s)N generations.
‘Convergence-to-the-coalescent’ theorems for two-sex neutral population models are presented. For the two-sex Wright-Fisher model the ancestry of n sampled genes behaves like the usual n-coalescent, if the population size N is large and if the time is measured in units of 4N generations. Generalisations to a larger class of two-sex models are discussed.
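The limiting object in all of these results is the standard n-coalescent. Below is a minimal simulation sketch (the standard Kingman coalescent dynamics, not code from these papers) on the rescaled time axis.

```python
import random

def kingman_coalescent(n, rng=random):
    """Simulate the n-coalescent on the coalescent (rescaled) time axis.

    While k lineages remain, the next merger occurs after an Exp(k(k-1)/2)
    waiting time, and a uniformly chosen pair of lineages coalesces.
    Returns the list of (time of merger, lineages remaining afterwards).
    """
    lineages = list(range(n))
    t, history = 0.0, []
    while len(lineages) > 1:
        k = len(lineages)
        t += rng.expovariate(k * (k - 1) / 2)
        i, j = sorted(rng.sample(range(k), 2))
        lineages[i] = (lineages[i], lineages[j])   # merge lineage j into lineage i
        lineages.pop(j)
        history.append((t, len(lineages)))
    return history

print(kingman_coalescent(5))
```

Under the diploid and two-sex models above, one coalescent time unit corresponds to (2 − s)N and 4N generations respectively.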
If (Fn)n∈ℕ is a sequence of independent and identically distributed random mappings from a second countable locally compact state space 𝕏 to 𝕏 which is itself independent of the 𝕏-valued initial variable X0, the discrete-time stochastic process (Xn)n≥0, defined by the recursion equation Xn = Fn(Xn−1) for n∈ℕ, has the Markov property. Since 𝕏 is in particular Polish, a complete metric d exists. The random mappings (Fn)n∈ℕ are assumed to satisfy a Lipschitz-type condition ℙ-a.s., with random Lipschitz constant l(Fn). Conditions on the distribution of l(Fn) are given for the existence of an invariant distribution of X0 making the process (Xn)n≥0 stationary and ergodic. Our main result corrects a central limit theorem by Łoskot and Rudnicki (1995) and removes an error in its proof. Instead of comparing the sequence (φ(Xn))n≥0, for some φ : 𝕏 → ℝ, with a triangular scheme of independent random variables, our proof is based on an approximation by a martingale difference scheme.
Dynamic asset allocation strategies that are continuously rebalanced so as to always keep a fixed constant proportion of wealth invested in the various assets at each point in time play a fundamental role in the theory of optimal portfolio strategies. In this paper we study the rate of return on investment, defined here as the net gain in wealth divided by the cumulative investment, for such investment strategies in continuous time. Among other results, we prove that the limiting distribution of this measure of return is a gamma distribution. This limit theorem allows for comparisons of different strategies. For example, the mean return on investment is maximized by the same strategy that maximizes logarithmic utility, which is also known to maximize the exponential rate at which wealth grows. The return from this policy turns out to have other stochastic dominance properties as well. We also study the return on the risky investment alone, defined here as the present value of the gain from investment divided by the present value of the cumulative investment in the risky asset needed to achieve the gain. We show that for the log-optimal, or optimal growth policy, this return tends to an exponential distribution. We compare the return from the optimal growth policy with the return from a policy that invests a constant amount in the risky stock. We show that for the case of a single risky investment, the constant investor's expected return is twice that of the optimal growth policy. This difference can be considered the cost for insuring that the proportional investor does not go bankrupt.
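A discrete-time sketch of the growth-rate comparison mentioned above: wealth is rebalanced at every step so that a fixed proportion stays in the risky asset, and the long-run exponential growth rate is estimated for several proportions, including the log-optimal (Kelly) one. The market parameters, the Euler discretization, and the focus on growth rates rather than the paper's return-on-investment measure are all illustrative assumptions.

```python
import math
import random

# Illustrative market parameters (not from the paper).
mu, r, sigma = 0.08, 0.02, 0.2
dt, steps, paths = 1 / 252, 252 * 30, 40

def growth_rate(pi, rng=random):
    """Estimate the long-run exponential growth rate of wealth when a fixed
    proportion pi of wealth is held in the risky asset (Euler discretization)."""
    total = 0.0
    for _ in range(paths):
        log_w = 0.0
        for _ in range(steps):
            risky = mu * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            # rebalancing is implicit: the proportion pi is re-applied each step
            log_w += math.log(1.0 + pi * risky + (1.0 - pi) * r * dt)
        total += log_w / (steps * dt)
    return total / paths

kelly = (mu - r) / sigma ** 2        # log-optimal (optimal growth) proportion
for pi in (0.5, kelly, 2.5):
    print(f"pi = {pi:.2f}  estimated growth rate = {growth_rate(pi):.4f}")
```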
In this paper we obtain the large deviation principle for scaled queue lengths at a multi-buffered resource, and simplify the corresponding variational problem in the case where the inputs are assumed to be independent.
Let T_r be the first time at which a random walk S_n escapes from the strip [−r, r], and let |S_{T_r}| − r be the overshoot of the boundary of the strip. We investigate the order of magnitude of the overshoot, as r → ∞, by providing necessary and sufficient conditions for the ‘stability’ of |S_{T_r}|, by which we mean that |S_{T_r}|/r converges to 1, either in probability (weakly) or almost surely (strongly), as r → ∞. These conditions also turn out to be equivalent to requiring only the boundedness of |S_{T_r}|/r, rather than its convergence to 1, in the weak or strong sense, as r → ∞. The almost sure characterisation turns out to be extremely simple to state and to apply: |S_{T_r}|/r → 1 a.s. if and only if either EX² < ∞ and EX = 0, or 0 < |EX| ≤ E|X| < ∞. Proving this requires establishing the equivalence of the stability of S_{T_r} with certain dominance properties of the maximum partial sum S_n* = max{|S_j|: 1 ≤ j ≤ n} over its maximal increment.
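A small simulation sketch of the almost sure characterisation: with a centred step distribution having a finite second moment (standard normal steps here, an illustrative choice), the ratio |S_{T_r}|/r should concentrate near 1 as r grows.

```python
import random

def exit_ratio(r, rng=random):
    """Run the walk until it first leaves [-r, r] and return |S_{T_r}| / r."""
    s = 0.0
    while abs(s) <= r:
        s += rng.gauss(0.0, 1.0)     # illustrative step law: EX = 0, EX^2 = 1
    return abs(s) / r

for r in (10, 50, 200):
    ratios = [exit_ratio(r) for _ in range(50)]
    print(r, max(ratios))            # the worst case should approach 1 as r grows
```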
The Brownian density process is a Gaussian distribution-valued process. It can be defined either as a limit of a functional over a Poisson system of independent Brownian particles or as a solution of a stochastic partial differential equation driven by a Gaussian martingale measure. We show that, with an appropriate change in the initial distribution of the infinite particle system, the limiting density process is non-Gaussian and solves a stochastic partial differential equation in which the initial measure and the driving measure are non-Gaussian, possibly having infinite second moment.
This paper provides a detailed stochastic analysis of leucocyte cell movement based on the dynamics of a rigid body. The cell's behavior is studied in two relevant anisotropic environments displaying adhesion mediated movement (haptotaxis) and stimulus mediated movement (chemotaxis). This behavior is modeled by diffusion processes on three successively longer time scales, termed locomotion, translocation, and migration.
An important model in communications is the stochastic FM signal s_t = A cos(·), where the message process {m_t} is a stochastic process. In this paper, we investigate the linear models and limit distributions of FM signals. Firstly, we show that this non-linear model in the frequency domain can be converted to an ARMA(2, q + 1) model in the time domain when {m_t} is a Gaussian MA(q) sequence. The spectral density of {s_t} can then be obtained easily for MA message processes. Also, an error bound is given for an ARMA approximation for more general message processes. Secondly, we show that {s_t} is asymptotically strictly stationary if {m_t} is a Markov chain satisfying a certain condition on its transition kernel. We also find the limit distribution of s_t for some message processes {m_t}. These results show that combining methods from probability theory with linear and non-linear time series analysis can yield fruitful results; they also have significance for FM modulation and demodulation in communications.
Let ξ(t), t ≥ 0, be a normalized continuous mean square differentiable stationary normal process with covariance function r(t). Further, let … and set …. We give bounds, roughly of order T^(−δ), for the rate of convergence of the distribution of the maximum and of the number of upcrossings of a high level by ξ(t) in the interval [0, T]. The results assume that r(t) and r′(t) decay polynomially at infinity and that r″(t) is suitably bounded. For the number of upcrossings it is in addition assumed that r(t) is non-negative.
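For a concrete illustration of the quantity being approximated, the sketch below simulates an approximately normal stationary process with covariance r(t) = exp(−t²/2) by a random-phase sum of cosines and compares the observed number of upcrossings of a high level u with Rice's formula (T/2π)√λ₂ e^(−u²/2), where λ₂ = −r″(0) = 1. The covariance, the spectral simulation method, and all numerical parameters are our illustrative choices, not those of the paper.

```python
import math
import random

def sample_path(T, dt, n_waves=150, rng=random):
    """Random-phase spectral approximation of a stationary normal process with
    covariance exp(-t^2/2); it is only asymptotically Gaussian in n_waves."""
    freqs = [rng.gauss(0.0, 1.0) for _ in range(n_waves)]         # spectral law N(0,1)
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n_waves)]
    amp = math.sqrt(2.0 / n_waves)
    grid = [k * dt for k in range(int(T / dt) + 1)]
    return [amp * sum(math.cos(w * t + p) for w, p in zip(freqs, phases)) for t in grid]

def count_upcrossings(path, u):
    return sum(1 for a, b in zip(path, path[1:]) if a <= u < b)

T, u = 100.0, 2.0
counts = [count_upcrossings(sample_path(T, dt=0.1), u) for _ in range(20)]
rice = (T / (2.0 * math.pi)) * math.exp(-u * u / 2.0)             # lambda_2 = 1
print("simulated mean:", sum(counts) / len(counts), " Rice's formula:", rice)
```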