Famously, a d-dimensional, spatially homogeneous random walk whose increments are nondegenerate, have finite second moments, and have zero mean is recurrent if d∈{1,2}, but transient if d≥3. Once spatial homogeneity is relaxed, this is no longer true. We study a family of zero-drift spatially nonhomogeneous random walks (Markov processes) whose increment covariance matrix is asymptotically constant along rays from the origin, and which, in any ambient dimension d≥2, can be adjusted so that the walk is either transient or recurrent. Natural examples are provided by random walks whose increments are supported on ellipsoids that are symmetric about the ray from the origin through the walk's current position; these elliptic random walks generalize the classical homogeneous Pearson–Rayleigh walk (the spherical case). Our proof of the recurrence classification is based on fundamental work of Lamperti.
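The two-dimensional case of the walks described above can be sketched in a few lines. The sketch below is illustrative only: the uniform-angle parametrization of the ellipse and the semi-axes a, b are assumptions, not the paper's construction. By symmetry each increment has zero mean, and the radial-to-transverse variance ratio (governed by a versus b) is the kind of quantity the recurrence classification depends on.

```python
import math
import random

def pearson_rayleigh_step(x, y):
    # Classical (spherical) Pearson-Rayleigh walk: unit step, uniform direction.
    theta = random.uniform(0, 2 * math.pi)
    return x + math.cos(theta), y + math.sin(theta)

def elliptic_step(x, y, a=1.0, b=0.5):
    # Increment supported on an ellipse with semi-axes (a, b), the a-axis
    # aligned with the ray from the origin through the current position.
    # The values of a and b here are illustrative choices.
    r = math.hypot(x, y)
    ux, uy = (x / r, y / r) if r > 0 else (1.0, 0.0)  # radial unit vector
    t = random.uniform(0, 2 * math.pi)
    dr, dt = a * math.cos(t), b * math.sin(t)  # (radial, transverse) step
    # rotate the (radial, transverse) frame back to Cartesian coordinates
    return x + dr * ux - dt * uy, y + dr * uy + dt * ux
```

Iterating `elliptic_step` and tracking the distance from the origin over long runs gives an empirical feel for the transient versus recurrent regimes as a/b varies.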
This is a case study concerning the rate at which probabilistic coupling occurs for nilpotent diffusions. We focus on the simplest case of Kolmogorov diffusion (Brownian motion together with its time integral or, more generally, together with a finite number of iterated time integrals). We show that in this case there can be no Markovian maximal coupling. Indeed, there can be no efficient Markovian coupling strategy (efficient for all pairs of distinct starting values), where the notion of efficiency extends the terminology of Burdzy and Kendall (2000). Finally, at least in the classical case of a single time integral, it is not possible to choose a Markovian coupling that is optimal in the sense of simultaneously minimizing the probability of failing to couple by time t for all positive t. In recompense for all these negative results, we exhibit a simple efficient non-Markovian coupling strategy.
Answering a question by Angel, Holroyd, Martin, Wilson and Winkler [1], we show that the maximal number of non-colliding coupled simple random walks on the complete graph KN, which take turns, moving one at a time, is monotone in N. We use this fact to couple ⌊N/4⌋ such walks on KN, improving the previous Ω(N/log N) lower bound of Angel et al. We also introduce a new generalization of simple avoidance coupling which we call partially ordered simple avoidance coupling, and provide a monotonicity result for this extension as well.
We propose a class of models of random walks in a random environment for which an exact stationary distribution can be given. The environment is cast in terms of a Jackson/Gordon–Newell network, although alternative interpretations are possible. The main tool is the set of detailed balance equations. The difference compared to earlier works is that the position of the random walk influences the transition intensities of the network environment and vice versa, creating strong correlations. The form of the stationary distribution is closely related to the well-known product formula.
This paper is devoted to probabilistic cellular automata (PCAs) on ℕ, ℤ, or ℤ/nℤ, depending on two neighbors, with a general alphabet E (finite or infinite, discrete or not). We study the following question: under which conditions does a PCA possess a Markov chain as an invariant distribution? Previous results in the literature give some conditions on the transition matrix (for positive rate PCAs) when the alphabet E is finite. Here we obtain conditions on the transition kernel of a PCA with a general alphabet E. In particular, we show that the existence of an invariant Markov chain is equivalent to the existence of a solution to a cubic integral equation. One of the difficulties in passing from a finite alphabet to a general alphabet comes from the problem of measurability, and a large part of this work is devoted to clarifying these issues.
Let I be a finite set and S be a nonempty strict subset of I which is partitioned into classes, and let C(s) be the class containing s ∈ S. Let (Ps: s ∈ S) be a family of distributions on I^ℕ, where each Ps applies to sequences starting with the symbol s. To this family, we associate a class of distributions P(π) on I^ℕ which depends on a probability vector π. Our main results assume that, for each s ∈ S, Ps regenerates with distribution Ps' when it encounters s' ∈ S ∖ C(s). From semiregenerative theory, we determine a simple condition on π for P(π) to be time stationary. We give a similar result for the following more complex model. Once a symbol s' ∈ S ∖ C(s) has been encountered, there is a decision to be made: either a new region of type C(s') governed by Ps' starts or the region continues to be a C(s) region. This decision is modeled as a random event and its probability depends on s and s'. The aim in studying these kinds of models is to attain a deeper statistical understanding of bacterial DNA sequences. Here I is the set of codons and the classes (C(s): s ∈ S) identify codons that initiate similar genomic regions. In particular, there are two classes corresponding to the start and stop codons which delimit coding and noncoding regions in bacterial DNA sequences. In addition, the random decision to continue the current region or begin a new region of a different class reflects the well-known fact that not every appearance of a start codon marks the beginning of a new coding region.
We connect known results about diffusion limits of Markov chain Monte Carlo (MCMC) algorithms to the computer science notion of algorithm complexity. Our main result states that any weak limit of a Markov process implies a corresponding complexity bound (in an appropriate metric). We then combine this result with previously-known MCMC diffusion limit results to prove that under appropriate assumptions, the random-walk Metropolis algorithm in d dimensions takes O(d) iterations to converge to stationarity, while the Metropolis-adjusted Langevin algorithm takes O(d^{1/3}) iterations to converge to stationarity.
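The random-walk Metropolis algorithm referred to above is easy to state concretely. The following sketch targets a d-dimensional standard normal with the classical 2.38/√d step scaling from the optimal-scaling literature that underlies the diffusion-limit results; the target and tuning are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rwm_standard_normal(d, n_iters, step=None, seed=0):
    # Random-walk Metropolis with Gaussian proposals, targeting N(0, I_d).
    # step ~ 2.38/sqrt(d) is the classical optimal-scaling heuristic.
    rng = np.random.default_rng(seed)
    if step is None:
        step = 2.38 / np.sqrt(d)
    log_pi = lambda z: -0.5 * z @ z  # log target density, up to a constant
    x = np.zeros(d)
    accepts = 0
    for _ in range(n_iters):
        y = x + step * rng.standard_normal(d)       # symmetric proposal
        if np.log(rng.random()) < log_pi(y) - log_pi(x):
            x, accepts = y, accepts + 1             # accept the move
    return x, accepts / n_iters
```

Under this scaling the empirical acceptance rate settles near the well-known 0.234 value as d grows, which is the regime in which the O(d) convergence bound applies.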
In this paper we connect Archimedean survival processes (ASPs) with the theory of Markov copulas. ASPs were introduced by Hoyle and Mengütürk (2013) to model the realized variance of two assets. We present some new properties of ASPs related to their dependency structure. We study weak and strong Markovian consistency properties of ASPs. An ASP is weakly Markovian consistent, but generally not strongly Markovian consistent. Our results contain necessary and sufficient conditions for an ASP to be strongly Markovian consistent. These properties are closely related to the concept of Markov copulas, which is very useful in modelling different dependence phenomena. Finally, we present possible applications.
A useful result about leftmost and rightmost paths in two-dimensional bond percolation is proved. This result was introduced without proof in Gray (1991) in the context of the contact process in continuous time. As discussed here, it also holds for several related models, including the discrete-time contact process and two-dimensional site percolation. Among the consequences are a natural monotonicity in the probability of percolation between different sites and a somewhat counter-intuitive correlation inequality.
In this paper we study a special class of size-dependent branching processes. We assume that, for some positive integer K, as long as the population size does not exceed level K, the process evolves as a discrete-time supercritical branching process, and when the population size exceeds level K, it evolves as a subcritical or critical branching process. It is shown that this process dies out in finite time T. The question of when the mean value E(T) is finite or infinite is also addressed.
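A single trajectory of such a process is simple to simulate. In the sketch below the offspring laws are Poisson with supercritical mean 1.5 below the threshold and subcritical mean 0.5 above it; these concrete choices are illustrative assumptions, not the paper's setup.

```python
import math
import random

def poisson(rng, lam):
    # Knuth's method for a Poisson(lam) variate; adequate for small lam.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def extinction_time(z0, K, m_low=1.5, m_high=0.5, max_gens=10_000, seed=0):
    # Size-dependent Galton-Watson process: Poisson offspring with mean
    # m_low (supercritical) while the population is at most K, and mean
    # m_high (subcritical) above K. Returns the extinction generation T,
    # or None if the population is still alive after max_gens generations.
    rng = random.Random(seed)
    z = z0
    for gen in range(1, max_gens + 1):
        mean = m_low if z <= K else m_high
        z = sum(poisson(rng, mean) for _ in range(z))
        if z == 0:
            return gen
    return None
```

Averaging the returned extinction times over many seeds gives an empirical handle on E(T), whose finiteness is exactly the question the paper addresses.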
Affine processes possess the property that expectations of exponential affine transformations are given by a set of Riccati differential equations, which is the main feature of this popular class of processes. In this paper we generalise these results to expectations of more general transformations. This is of interest in, e.g., doubly stochastic Markov models, in particular in life insurance. When affine processes are used to model the transition rates and the interest rate, the results presented allow for easy calculation of transition probabilities and expected present values.
We consider three different schemes for signal routeing on a tree. The vertices of the tree represent transceivers that can transmit and receive signals, and are equipped with independent and identically distributed weights representing the strength of the transceivers. The edges of the tree are also equipped with independent and identically distributed weights, representing the costs for passing the edges. For each one of our schemes, we derive sharp conditions on the distributions of the vertex weights and the edge weights that determine when the root can transmit a signal over arbitrarily large distances.
This note is motivated by Blom's work in 1989. We consider a generalized Ehrenfest urn model in which a randomly-chosen ball has a positive probability of moving from one urn to the other urn. We use recursion relations between the mean transition times to derive formulas in terms of finite sums, which are shown to be equivalent to the definite integrals obtained by Blom.
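One concrete reading of such a model: N balls in two urns, and at each step a uniformly chosen ball switches urns with probability p. The mean upcrossing times then satisfy the standard birth-death first-passage recursion t(i) = (1 + d(i)·t(i−1)) / b(i), where b(i) and d(i) are the up and down probabilities from state i. The sketch below computes these recursively; the specific model and recursion are a plausible illustration of the approach, not a quotation of Blom's formulas.

```python
from fractions import Fraction

def mean_upcrossing_times(N, p):
    # t[i] = expected number of steps to go from state i (balls in urn 1)
    # to state i+1, in a lazy Ehrenfest urn: a uniformly chosen ball
    # switches urns with probability p, otherwise nothing moves.
    p = Fraction(p)
    t = [Fraction(0)] * N
    for i in range(N):
        up = p * (N - i) / N      # birth probability from state i
        down = p * i / N          # death probability from state i
        prev = t[i - 1] if i > 0 else Fraction(0)
        t[i] = (1 + down * prev) / up
    return t
```

Summing the entries gives the exact mean transition time from the empty urn to the full one; for p = 1 this recovers the classical Ehrenfest values (e.g. 1 + 3 = 4 steps for N = 2).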
In this paper we study a generalized coupon collector problem, which consists of analyzing the time needed to collect a given number of distinct coupons that are drawn from a set of coupons with an arbitrary probability distribution. We suppose that a special coupon called the null coupon can be drawn but never belongs to any collection. In this context, we prove that the almost uniform distribution, for which all the nonnull coupons have the same drawing probability, is the distribution which stochastically minimizes the time needed to collect a fixed number of distinct coupons. Moreover, we show that in a given closed subset of probability distributions, the distribution with all of its entries but one equal to the smallest possible value is the one which stochastically maximizes the time needed to collect a fixed number of distinct coupons.
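The collection time in this generalized model is straightforward to simulate, which makes the stochastic ordering easy to observe empirically. The sketch below is illustrative, not part of the paper's proof: coupons 0..n−1 carry the given probabilities and the leftover mass plays the role of the null coupon.

```python
import random

def time_to_collect(probs, c, rng):
    # Number of draws needed to collect c distinct non-null coupons.
    # probs lists the drawing probabilities of the real coupons; the
    # remaining mass 1 - sum(probs) is the null coupon, which can be
    # drawn but never joins the collection.
    null_mass = 1.0 - sum(probs)
    population = list(range(len(probs))) + [None]   # None = null coupon
    weights = list(probs) + [null_mass]
    seen, draws = set(), 0
    while len(seen) < c:
        draws += 1
        coupon = rng.choices(population, weights)[0]
        if coupon is not None:
            seen.add(coupon)
    return draws
```

Averaging over many runs, a near-uniform `probs` should yield the smallest mean collection time and a maximally skewed one the largest, matching the stochastic minimization and maximization results.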
In this paper the optimal dividend (subject to transaction costs) and reinsurance (with two reinsurers) problem is studied in the limit diffusion setting. It is assumed that transaction costs and taxes are required when dividends occur, and that the premiums charged by two reinsurers are calculated according to the exponential premium principle with different parameters, which makes the stochastic control problem nonlinear. The objective of the insurer is to determine the optimal reinsurance and dividend policy so as to maximize the expected discounted dividends until ruin. The problem is formulated as a mixed classical-impulse stochastic control problem. Explicit expressions for the value function and the corresponding optimal strategy are obtained. Finally, a numerical example is presented to illustrate the impact of the parameters associated with the two reinsurers' premium principle on the optimal reinsurance strategy.
In this paper we consider linear functions constructed on two different weighted branching processes and provide explicit bounds for their Kantorovich–Rubinstein distance in terms of couplings of their corresponding generic branching vectors. Motivated by applications to the analysis of random graphs, we also consider a variation of the weighted branching process where the generic branching vector has a different dependence structure from the usual one. By applying the bounds to sequences of weighted branching processes, we derive sufficient conditions for the convergence in the Kantorovich–Rubinstein distance of linear functions. We focus on the case where the limits are endogenous fixed points of suitable smoothing transformations.
During the course of a day an individual typically mixes with different groups of individuals. Epidemic models incorporating population structure with individuals being able to infect different groups of individuals have received extensive attention in the literature. However, almost exclusively the models assume that individuals are able to simultaneously infect members of all groups, whereas in reality individuals will typically only be able to infect members of any group they currently reside in. In this paper we develop a model where individuals move between a community and their household during the course of the day, only infecting within their current group. By defining a novel branching process approximation with an explicit expression for the probability generating function of the offspring distribution, we are able to derive the probability of a major epidemic outbreak.
In this paper we investigate the functional central limit theorem (CLT) for stochastic processes associated to partial sums of additive functionals of reversible Markov chains with general state space, normalized by the standard deviation of the partial sums. In this setting, we show that the functional CLT is equivalent to the fact that the variance of the partial sums is regularly varying with exponent 1 and the partial sums satisfy the CLT. It is also equivalent to the conditional CLT.
The two-sided nonlinear boundary crossing probabilities for one-dimensional Brownian motion and related processes were studied in Fu and Wu (2010) based on the finite Markov chain imbedding technique, which provides an efficient numerical method for computing boundary crossing probabilities. In this paper we extend these results to high-dimensional Brownian motion. In particular, we obtain the rate of convergence for high-dimensional boundary crossing probabilities. Numerical results are also provided to illustrate our results.
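As a point of comparison for such numerical methods, a crude Monte Carlo baseline for the one-dimensional, symmetric two-sided case can be written in a few lines. This is an illustrative sketch only: it discretizes time, treats a symmetric boundary |W_t| ≥ g(t), and is not the Markov chain imbedding method of the papers above.

```python
import numpy as np

def crossing_prob_mc(boundary, T=1.0, n_steps=1000, n_paths=20000, seed=0):
    # Monte Carlo estimate of P(sup_{t <= T} |W_t| >= boundary(t)) for a
    # one-dimensional Brownian motion W, checked on a regular time grid.
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    t = np.linspace(dt, T, n_steps)                      # grid times
    incs = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
    paths = np.cumsum(incs, axis=1)                      # W at grid times
    crossed = (np.abs(paths) >= boundary(t)).any(axis=1)
    return crossed.mean()
```

The discretization only detects crossings at grid points, so the estimate is biased low for coarse grids; quantifying how fast such approximations converge is precisely the kind of question the paper's rate-of-convergence results address.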
Inspired by the works of Landriault et al. (2011), (2014), we study the Gerber–Shiu distribution at Parisian ruin with exponential implementation delays for a spectrally negative Lévy insurance risk process. To be more specific, we study the so-called Gerber–Shiu distribution for a ruin model where at each time the surplus process goes negative, an independent exponential clock is started. If the clock rings before the surplus becomes positive again then the insurance company is ruined. Our methodology uses excursion theory for spectrally negative Lévy processes and relies on the theory of so-called scale functions. In particular, we extend the recent results of Landriault et al. (2011), (2014).