We consider a Markov-modulated Brownian motion reflected to stay in a strip [0, B]. The stationary distribution of this process is known to have a simple form under some assumptions. We provide a short probabilistic argument leading to this result and explain its simplicity. Moreover, this argument allows for generalizations including the distribution of the reflected process at an independent, exponentially distributed epoch. Our second contribution concerns transient behavior of the model. We identify the joint law of the processes defining the model at inverse local times.
Kipnis and Varadhan (1986) showed that, for an additive functional, Sn say, of a reversible Markov chain, the condition E[Sn²]/n → κ ∈ (0, ∞) implies the convergence of the conditional distribution of Sn/√E[Sn²], given the starting point, to the standard normal distribution. We revisit this question under the weaker condition, E[Sn²] = nl(n), where l is a slowly varying function. It is shown by example that the conditional distributions of Sn/√E[Sn²] need not converge to the standard normal distribution in this case; and sufficient conditions for convergence to a (possibly nonstandard) normal distribution are developed.
A dynamic model for a random network evolving in continuous time is defined, where new vertices are born and existing vertices may die. The fitness of a vertex is defined as the accumulated in-degree of the vertex, and a new vertex is connected to an existing vertex with probability proportional to a function b of the fitness of the existing vertex. Furthermore, a vertex dies at a rate given by a function d of its fitness. Using results from the theory of general branching processes, an expression for the asymptotic empirical fitness distribution {pk} is derived and analyzed for a number of specific choices of b and d. When b(i) = i + α and d(i) = β, that is, linear preferential attachment for the newborn and random deaths, then pk ∼ k^−(2+α). When b(i) = i + 1 and d(i) = β(i + 1), with β < 1, then pk ∼ (1 + β)^−k; that is, if the death rate is also proportional to the fitness, then the power-law distribution is lost. Furthermore, when b(i) = i + 1 and d(i) = β(i + 1)^γ, with β, γ < 1, then log pk ∼ −k^γ, a stretched exponential distribution. The momentaneous in-degrees are also studied and simulations suggest that their behaviour is qualitatively similar to that of the fitnesses.
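The birth/death mechanism above is concrete enough to simulate directly. The sketch below runs the embedded jump chain of the model in the case b(i) = i + α, d(i) = β (preferential attachment, random deaths); the unit birth rate of new vertices and the parameter values are illustrative assumptions, not taken from the abstract.

```python
import random
from collections import Counter

def simulate_network(steps=4000, alpha=1.0, beta=0.1, birth_rate=1.0):
    """Sketch of the embedded jump chain of the model above, with
    b(i) = i + alpha and d(i) = beta. The unit birth rate of new
    vertices and all parameter values are illustrative assumptions."""
    fitness = [0]                 # in-degree ("fitness") of each living vertex
    for _ in range(steps):
        if not fitness:           # the network died out
            break
        death_total = beta * len(fitness)      # d(i) = beta for every vertex
        if random.random() < birth_rate / (birth_rate + death_total):
            # birth: the newcomer attaches with weight b(i) = i + alpha
            weights = [f + alpha for f in fitness]
            target = random.choices(range(len(fitness)), weights=weights)[0]
            fitness[target] += 1
            fitness.append(0)
        else:
            # death: uniform over living vertices, since d is constant
            fitness.pop(random.randrange(len(fitness)))
    return Counter(fitness)       # empirical fitness distribution

random.seed(0)
dist = simulate_network()
```

With these parameters the population fluctuates around birth_rate/β vertices, and the empirical fitness counts can be compared against the predicted power-law tail.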
Quasi-stationary distributions, as discussed in Darroch and Seneta (1965), have been used in biology to describe the steady state behaviour of population models which, while eventually certain to become extinct, nevertheless maintain an apparent stochastic equilibrium for long periods. These distributions have some drawbacks: they need not exist, nor be unique, and their calculation can present problems. In this paper, we give biologically plausible conditions under which the quasi-stationary distribution is unique, and can be closely approximated by distributions that are simple to compute.
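As an illustration of the kind of calculation involved, the sketch below approximates the quasi-stationary distribution of a toy logistic birth-death chain (absorbed at 0) by power iteration on the transition matrix restricted to the transient states: the QSD is the normalised left Perron eigenvector of that sub-stochastic matrix. The specific rates are illustrative assumptions, not the models treated in the paper.

```python
def quasi_stationary(P, iters=5000):
    """Power-iteration sketch: renormalise v*P repeatedly, so v converges
    to the normalised left Perron eigenvector of the sub-stochastic
    matrix P (the quasi-stationary distribution)."""
    n = len(P)
    v = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(v[i] * P[i][j] for i in range(n)) for j in range(n)]
        s = sum(w)                # < 1: mass lost to absorption each step
        v = [x / s for x in w]
    return v

# Toy logistic birth-death chain on {1, ..., N}, absorbed at 0
# (rates chosen for illustration only)
N = 5
birth = lambda i: 1.0 * i * (1.0 - i / N)   # birth(N) = 0
death = lambda i: 0.4 * i
rate = max(birth(i) + death(i) for i in range(1, N + 1))
P = [[0.0] * N for _ in range(N)]
for i in range(1, N + 1):
    if i < N:
        P[i - 1][i] = birth(i) / rate        # i -> i + 1
    if i > 1:
        P[i - 1][i - 2] = death(i) / rate    # i -> i - 1
    P[i - 1][i - 1] = 1.0 - (birth(i) + death(i)) / rate
    # the probability death(1)/rate from state 1 is omitted: absorption

qsd = quasi_stationary(P)
```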
In this paper we generalize a bounded Markov process, described by Stoyanov and Pacheco-González for a class of transition probability functions. A recursive integral equation for the probability density of these bounded Markov processes is derived and the stationary probability density is obtained by solving an equivalent differential equation. Examples of stationary densities for different transition probability functions are given and an application for designing a robotic coverage algorithm with specific emphasis on particular regions is discussed.
In the present paper an almost-sure renewal theorem for branching random walks (BRWs) on the real line is formulated and established. The theorem constitutes a generalization of Nerman's theorem on the almost-sure convergence of Malthus normed supercritical Crump-Mode-Jagers branching processes counted with general characteristic and Gatzouras' almost-sure renewal theorem for BRWs on a lattice.
In this paper we establish limit theorems for a class of stochastic hybrid systems (continuous deterministic dynamics coupled with jump Markov processes) in the fluid limit (small jumps at high frequency), thus extending known results for jump Markov processes. We prove a functional law of large numbers with exponential convergence speed, derive a diffusion approximation, and establish a functional central limit theorem. We apply these results to neuron models with stochastic ion channels, as the number of channels goes to infinity, estimating the convergence to the deterministic model. In terms of neural coding, we apply our central limit theorems to numerically estimate the impact of channel noise both on frequency and spike timing coding.
In this paper we consider the problem of identifiability for the two-state Markovian arrival process (MAP2). In particular, we show that the MAP2 is not identifiable, providing the conditions under which two different sets of parameters induce identical stationary laws for the observable process.
File-sharing networks are distributed systems used to disseminate files among nodes of a communication network. The general simple principle of these systems is that once a node has retrieved a file, it may become a server for this file. In this paper, the capacity of these networks is analyzed with a stochastic model when there is a constant flow of incoming requests for a given file. It is shown that the problem can be solved by analyzing the asymptotic behavior of a class of interacting branching processes. Several results of independent interest concerning these branching processes are derived and then used to study the file-sharing systems.
In this paper, a finite-state mean-reverting model for the short rate, based on the continuous-time Ehrenfest process, will be examined. Two explicit pricing formulae for zero-coupon bonds will be derived in the general and special symmetric cases. The model's limiting relationship to the Vasicek model will be explored with some numerical results.
We look forwards and backwards in the multi-allelic neutral exchangeable Cannings model with fixed population size and nonoverlapping generations. The Markov chain X is studied which describes the allelic composition of the population forward in time. A duality relation (inversion formula) between the transition matrix of X and an appropriate backward matrix is discussed. The probabilities of the backward matrix are explicitly expressed in terms of the offspring distribution, complementing the work of Gladstien (1978). The results are applied to fundamental multi-allelic Cannings models, among them the Moran model, the Wright-Fisher model, the Kimura model, and the Karlin and McGregor model. As a side effect, number theoretical sieve formulae occur in these examples.
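The forward chain X is easy to simulate for the simplest of these models. The sketch below runs the neutral multi-allelic Wright-Fisher chain (a special Cannings model in which each of the N offspring picks its parent uniformly at random); the population size and initial allelic composition are illustrative assumptions.

```python
import random
from collections import Counter

def wright_fisher_step(x, N):
    """One forward generation of the neutral multi-allelic Wright-Fisher
    model: each of the N offspring chooses its parent uniformly at
    random from the current generation."""
    parents = list(x.elements())            # one entry per individual
    return Counter(random.choice(parents) for _ in range(N))

random.seed(1)
N = 10
x = Counter({"A": 5, "B": 3, "C": 2})       # allelic composition X_0
for _ in range(50):
    x = wright_fisher_step(x, N)
# alleles are lost over time; eventually a single allele fixes
```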
In this paper we study the asymptotic normality of discrete-time Markov control processes in Borel spaces, with possibly unbounded cost. Under suitable hypotheses, we show that the cost sequence is asymptotically normal. As a special case, we obtain a central limit theorem for (noncontrolled) Markov chains.
In this paper we present an application of the read-once coupling from the past algorithm to problems in Bayesian inference for latent statistical models. We describe a method for perfect simulation from the posterior distribution of the unknown mixture weights in a mixture model. Our method is extended to a more general mixture problem, where unknown parameters exist for the mixture components, and to a hidden Markov model.
We identify the Poisson boundary of the dual of the universal compact quantum group Au(F) with a measurable field of ITPFI (infinite tensor product of finite type I) factors.
The conditional least-squares estimators of the variances are studied for a critical branching process with immigration that allows the offspring distributions to have infinite fourth moments. We derive different forms of limiting distributions for these estimators when the offspring distributions have regularly varying tails with index α. In particular, in the case in which 2 < α < 8/3, the normalizing factor of the estimator for the offspring variance is smaller than √n, which is different from that of Winnicki (1991).
We consider a Markov additive process (MAP) with phase-type jumps, starting at 0. Given a positive level u, we determine the joint distribution of the undershoot and overshoot of the first jump over the level u, the maximal level before this jump, the time of attaining this maximum, and the time between the maximum and the jump. The analysis is based on first passage times and time reversion of MAPs. A marginal of the derived distribution is the Gerber-Shiu function, which is of interest to insurance risk. Several examples serve to compare the present result with the literature.
Motivated by a problem arising in the mining industry, we present a first study of the energy required to reduce a unit mass fragment by consecutively using several devices. Two devices are considered, which we represent as different stochastic fragmentation processes. Following the self-similar energy model introduced in Bertoin and Martínez (2005), we compute the average energy required to attain a size η0 with this two-device procedure. We then asymptotically compare, as η0 goes to 0 or 1, its energy requirement with that of individual fragmentation processes. In particular, we show that, for a certain range of parameters of the fragmentation processes and of their energy cost functions, the consecutive use of two devices can be asymptotically more efficient than using each of them separately, or vice versa.
We study a stochastic differential game between two insurance companies who employ reinsurance to reduce the risk of exposure. Under the assumption that the companies have large insurance portfolios compared to any individual claim size, their surplus processes can be approximated by stochastic differential equations. We formulate competition between the two companies as a game with a single payoff function which depends on the surplus processes. One company chooses a dynamic reinsurance strategy in order to maximize this expected payoff, while the other company simultaneously chooses a dynamic reinsurance strategy so as to minimize the same quantity. We describe the Nash equilibrium of this stochastic differential game and solve it explicitly for the case of maximizing/minimizing the exit probability.
Consider a family of random ordered graph trees (Tn)n≥1, where Tn has n vertices. It has previously been established that if the associated search-depth processes converge to the normalised Brownian excursion when rescaled appropriately as n → ∞, then the simple random walks on the graph trees have the Brownian motion on the Brownian continuum random tree as their scaling limit. Here, this result is extended to demonstrate the existence of a diffusion scaling limit whenever the volume measure on the limiting real tree is nonatomic, supported on the leaves of the limiting tree, and satisfies a polynomial lower bound for the volume of balls. Furthermore, as an application of this generalisation, it is established that the simple random walks on a family of Galton-Watson trees with a critical infinite variance offspring distribution, conditioned on the total number of offspring, can be rescaled to converge to the Brownian motion on a related α-stable tree.
This paper gives easy proofs of conditional limit laws for the population size Zt of a critical Markov branching process whose offspring law is attracted to a stable law with index 1 + α, where 0 ≤ α ≤ 1. Conditioning events subsume the usual ones, and more general initial laws are considered. The case α = 0 is related to extreme value theory for the Gumbel law.
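The finite-variance boundary case of such conditional limits (Yaglom's classical result, corresponding to α = 1) can be probed by simulation in a discrete-time analogue. The sketch below conditions a critical Galton-Watson process with geometric offspring on survival to generation n; the offspring law, horizon, and sample sizes are illustrative assumptions.

```python
import random

def geometric_offspring():
    """Offspring draw with P(k) = 2^-(k+1): mean 1 (critical), variance 2."""
    k = 0
    while random.random() < 0.5:
        k += 1
    return k

def gw_generation(z):
    """One generation of the Galton-Watson process from z individuals."""
    return sum(geometric_offspring() for _ in range(z))

random.seed(2)
n, trials = 200, 2000
survivors = []
for _ in range(trials):
    z = 1
    for _ in range(n):
        z = gw_generation(z)
        if z == 0:
            break
    if z > 0:
        survivors.append(z / n)   # Yaglom: Z_n/n | Z_n > 0 is approx. exponential
mean_cond = sum(survivors) / len(survivors) if survivors else 0.0
```

Survival up to generation n is rare (probability of order 1/n), so most of the computational effort goes into trials that die out quickly; the surviving paths exhibit the rescaled exponential limit.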