We investigate properties of random mappings whose core is composed of derangements as opposed to permutations. Such mappings arise as the natural framework for studying the Screaming Toes game described, for example, by Peter Cameron. These mappings differ from the classical case primarily in the behaviour of their small components, and a number of explicit results are provided to illustrate these differences.
We consider a collection of Markov chains that model the evolution of multitype biological populations. The state space of the chains is the positive orthant; the boundary of the orthant is absorbing for the Markov chain and represents the extinction states of the different population types. We are interested in the long-term behavior of the Markov chain away from extinction, under a small noise scaling. Under this scaling, the trajectory of the Markov process over any compact interval converges in distribution to the solution of an ordinary differential equation (ODE) evolving in the positive orthant. We study the asymptotic behavior of the quasi-stationary distributions (QSD) in this scaling regime. Our main result shows that, under suitable conditions, the limit points of the QSD are supported on the union of interior attractors of the flow determined by the ODE. We also give lower bounds on expected extinction times which scale exponentially with the system size. Results of this type have been studied by Faure and Schreiber (2014) in the case where the deterministic dynamical system obtained in the scaling limit is given by a discrete-time evolution equation and the dynamics are essentially confined to a compact space (namely, the one-step map is a bounded function). Our results extend these to a setting of an unbounded state space and continuous-time dynamics. The proofs rely on uniform large deviation results for small noise stochastic dynamical systems and methods from the theory of continuous-time dynamical systems.
In general, QSDs for Markov chains with absorbing states and unbounded state spaces may not exist. We study one basic family of binomial-Poisson models in the positive orthant where one can use Lyapunov function methods to establish the existence of QSDs and the tightness of the QSDs of the scaled sequence of Markov chains. The results from the first part are then used to characterize the support of limit points of this sequence of QSDs.
We consider fragmentation processes with values in the space of marked partitions of $\mathbb{N}$, i.e. partitions where each block is decorated with a nonnegative real number. Assuming that the marks on distinct blocks evolve as independent positive self-similar Markov processes and determine the speed at which their blocks fragment, we get a natural generalization of the self-similar fragmentations of Bertoin (Ann. Inst. H. Poincaré Prob. Statist. 38, 2002). Our main result is the characterization of these generalized fragmentation processes: a Lévy–Khinchin representation is obtained, using techniques from positive self-similar Markov processes and from classical fragmentation processes. We then give sufficient conditions for their absorption in finite time to a frozen state, and for the genealogical tree of the process to have finite total length.
We revisit the forward algorithm, developed by Irle, to characterize both the value function and the stopping set for a large class of optimal stopping problems on continuous-time Markov chains. Our objective is to renew interest in this constructive method by showing its usefulness in solving some constrained optimal stopping problems that have emerged recently.
Consider a Lamperti–Kiu Markov additive process $(J, \xi)$ on $\{+, -\}\times\mathbb R\cup \{-\infty\}$, where J is the modulating Markov chain component. First we study the finiteness of the exponential functional and then consider its moments and tail asymptotics under Cramér’s condition. In the strong subexponential case we determine the subexponential tails of the exponential functional under some further assumptions.
A classical result for the simple symmetric random walk with 2n steps is that the number of steps above the origin, the time of the last visit to the origin, and the time of the maximum height all have exactly the same distribution and, when suitably scaled, converge to the arcsine law. Motivated by applications in genomics, we study the distributions of these statistics for the non-Markovian random walk generated from the ascents and descents of a uniform random permutation and of a Mallows(q) permutation, and show that they have the same asymptotic distributions as for the simple random walk. We also give an unexpected conjecture, along with numerical evidence and a partial proof in special cases, that the number of steps above the origin by step 2n for the uniform-permutation-generated walk has exactly the same discrete arcsine distribution as for the simple random walk, even though the other statistics for these walks have very different laws. We also give explicit error bounds for the limit theorems using Stein’s method for the arcsine distribution, as well as functional central limit theorems and a strong embedding of the Mallows(q) permutation, which is of independent interest.
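As a quick empirical illustration of the conjecture above, the following minimal Monte Carlo sketch (Python) compares the empirical law of the number of positive steps for the simple symmetric walk and for the walk built from the ascents and descents of a uniform permutation. The convention used for "above the origin" (a step counts when either of its endpoints is positive) is an assumption, and the sketch is an illustration only, not the paper's numerical study.

```python
import numpy as np

rng = np.random.default_rng(0)

def steps_above(increments):
    """Number of steps spent above the origin, counting a step as 'above'
    when either of its endpoints is positive (one common convention; the
    paper's exact convention is not spelled out in the abstract)."""
    s = np.concatenate(([0], np.cumsum(increments)))
    return int(np.sum((s[:-1] > 0) | (s[1:] > 0)))

def simple_walk(two_n, rng):
    """+/-1 increments of the simple symmetric random walk."""
    return rng.choice([-1, 1], size=two_n)

def permutation_walk(two_n, rng):
    """Increments from the ascents (+1) and descents (-1) of a uniform
    random permutation of two_n + 1 items."""
    perm = rng.permutation(two_n + 1)
    return np.where(np.diff(perm) > 0, 1, -1)

two_n, reps = 20, 50_000
srw = [steps_above(simple_walk(two_n, rng)) for _ in range(reps)]
prw = [steps_above(permutation_walk(two_n, rng)) for _ in range(reps)]
print(np.bincount(srw, minlength=two_n + 1) / reps)  # empirical discrete arcsine law
print(np.bincount(prw, minlength=two_n + 1) / reps)  # should look very similar
```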
In this paper an exact rejection algorithm for simulating paths of the coupled Wright–Fisher diffusion is introduced. The coupled Wright–Fisher diffusion is a family of multivariate Wright–Fisher diffusions that have drifts depending on each other through a coupling term and that find applications in the study of networks of interacting genes. The proposed rejection algorithm uses independent neutral Wright–Fisher diffusions as candidate proposals, which are only needed at a finite number of points. Once a candidate is accepted, the remainder of the path can be recovered by sampling from neutral multivariate Wright–Fisher bridges, for which an exact sampling strategy is also provided. Finally, the algorithm’s complexity is derived and its performance demonstrated in a simulation study.
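The overall structure of such an exact rejection sampler can be sketched generically as follows (Python). The three callables are hypothetical placeholders standing in for the neutral Wright–Fisher skeleton proposal, the skeleton-based acceptance probability, and the bridge fill-in specified in the paper; none of these ingredients are reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def rejection_sample_path(propose_skeleton, accept_prob, fill_bridge,
                          rng=rng, max_tries=100_000):
    """Generic accept/reject structure mirroring the abstract: draw a candidate
    skeleton at finitely many time points from the (neutral) proposal law,
    accept it with a probability computable from the skeleton alone, and only
    then fill in the rest of the path by bridge sampling.  The three callables
    are hypothetical placeholders; the actual proposal, acceptance probability,
    and Wright-Fisher bridge sampler are specified in the paper."""
    for _ in range(max_tries):
        skeleton = propose_skeleton(rng)
        if rng.uniform() < accept_prob(skeleton):
            return fill_bridge(skeleton, rng)
    raise RuntimeError("no proposal accepted within max_tries")
```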
We consider a class of phase-type distributions (PH-distributions), to be called the MMPP class of PH-distributions, and find bounds on their mean and squared coefficient of variation (SCV). As an application, we show that the SCV of the event-stationary inter-event time for Markov-modulated Poisson processes (MMPPs) is greater than or equal to unity, which answers an open problem for MMPPs. The results are useful for selecting proper PH-distributions and counting processes in stochastic modeling.
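The inequality SCV >= 1 is easy to probe numerically. Below is a minimal simulation sketch (Python) for a two-state MMPP with arbitrary illustrative rates; over a long run the empirical inter-event times approximate the event-stationary distribution, and their SCV should come out at or above one.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two-state MMPP: Poisson rate lam[i] while the modulating chain is in state i,
# exponential sojourns with switching rate q[i].  The numbers are arbitrary
# illustration values, not taken from the paper.
lam = np.array([0.5, 5.0])
q = np.array([1.0, 2.0])

def interevent_times(n_events, rng):
    """Simulate inter-event times of the MMPP by racing the next-arrival and
    next-switch exponential clocks (valid by memorylessness)."""
    state, t, last, gaps = 0, 0.0, 0.0, []
    while len(gaps) < n_events:
        to_arrival = rng.exponential(1.0 / lam[state])
        to_switch = rng.exponential(1.0 / q[state])
        if to_arrival < to_switch:
            t += to_arrival
            gaps.append(t - last)
            last = t
        else:
            t += to_switch
            state = 1 - state
    return np.array(gaps)

gaps = interevent_times(200_000, rng)[1_000:]      # drop a burn-in
scv = gaps.var() / gaps.mean() ** 2
print(f"empirical SCV = {scv:.3f}  (the paper proves SCV >= 1 for MMPPs)")
```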
In a multitype branching process, it is assumed that immigrants arrive according to a non-homogeneous Poisson or a contagious Poisson process (both processes are formulated as a non-homogeneous birth process with an appropriate choice of transition intensities). We show that the normalized numbers of objects of the various types alive at time t for supercritical, critical, and subcritical cases jointly converge in distribution under those two different arrival processes. Furthermore, we provide some transient expectation results when there are only two types of particles.
Motivated by applications to a wide range of areas, including assemble-to-order systems, operations scheduling, healthcare systems, and the collaborative economy, we study a stochastic matching model on hypergraphs, extending the model of Mairesse and Moyal (J. Appl. Prob. 53, 2016) to the case of hypergraphical (rather than graphical) matching structures. We address a discrete-event system under a random input of single items, which are held in the system until they can be matched in groups of two or more. We primarily study the stability of this model, for various hypergraph geometries.
In addition to the features of the two-parameter Chinese restaurant process (CRP), the restaurant under consideration has a cocktail bar and hence allows for a wider range of (bar and table) occupancy mechanisms. The model depends on three real parameters, $\alpha$, $\theta_1$, and $\theta_2$, fulfilling certain conditions. Results known for the two-parameter CRP are carried over to this model. We study the number of customers at the cocktail bar, the number of customers at each table, and the number of occupied tables after n customers have entered the restaurant. For $\alpha>0$ the number of occupied tables, properly scaled, is asymptotically three-parameter Mittag–Leffler distributed as n tends to infinity. We provide representations for the two- and three-parameter Mittag–Leffler distributions leading to efficient random number generators for these distributions. The proofs draw heavily on methods known for exchangeable random partitions, martingale methods known for generalized Pólya urns, and results known for the two-parameter CRP.
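For context, the classical one-parameter Mittag–Leffler variable (the case $\theta=0$ in the two-parameter CRP) can already be generated from Kanter's representation of a positive stable variable, as in the Python sketch below; the tilted two- and three-parameter variants require the representations developed in the paper, which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

def positive_stable(alpha, size, rng):
    """Kanter's representation of a positive alpha-stable variable S with
    Laplace transform E[exp(-t*S)] = exp(-t**alpha), for 0 < alpha < 1."""
    u = rng.uniform(0.0, 1.0, size)
    w = rng.exponential(1.0, size)
    num = np.sin(alpha * np.pi * u) * np.sin((1 - alpha) * np.pi * u) ** ((1 - alpha) / alpha)
    den = np.sin(np.pi * u) ** (1 / alpha)
    return (num / den) * w ** (-(1 - alpha) / alpha)

def mittag_leffler(alpha, size, rng):
    """Classical (one-parameter) Mittag-Leffler variable, distributed as
    S**(-alpha); its p-th moment equals p! / Gamma(p*alpha + 1)."""
    return positive_stable(alpha, size, rng) ** (-alpha)

sample = mittag_leffler(0.5, 100_000, rng)
print(sample.mean())   # should be close to 1 / Gamma(1.5), roughly 1.128
```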
A common tool in the practice of Markov chain Monte Carlo (MCMC) is to use approximating transition kernels to speed up computation when the desired kernel is slow to evaluate or is intractable. A limited set of quantitative tools exists to assess the relative accuracy and efficiency of such approximations. We derive a set of tools for such analysis based on the Hilbert space generated by the stationary distribution we intend to sample, $L_2(\pi)$. Our results apply to approximations of reversible chains which are geometrically ergodic, as is typically the case for applications to MCMC. The focus of our work is on determining whether the approximating kernel will preserve the geometric ergodicity of the exact chain, and whether the approximating stationary distribution will be close to the original stationary distribution. For reversible chains, our results extend the results of Johndrow et al. (2015) from the uniformly ergodic case to the geometrically ergodic case, under some additional regularity conditions. We then apply our results to a number of approximate MCMC algorithms.
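As a toy illustration of what an "approximating kernel" means in this setting, the Python sketch below runs a plain random-walk Metropolis chain on an exact Gaussian target and on a slightly perturbed surrogate target; the perturbed density is a hypothetical stand-in for a cheap approximation of an expensive or intractable model, not one of the paper's examples.

```python
import numpy as np

rng = np.random.default_rng(4)

def mh_chain(log_target, n_steps, step=1.0, x0=0.0):
    """Plain random-walk Metropolis chain targeting exp(log_target)."""
    x, out = x0, np.empty(n_steps)
    for i in range(n_steps):
        y = x + step * rng.normal()
        if np.log(rng.uniform()) < log_target(y) - log_target(x):
            x = y
        out[i] = x
    return out

# Exact kernel: standard normal target.
exact_log_target = lambda x: -0.5 * x ** 2

# Hypothetical approximating kernel: the same sampler driven by a slightly
# perturbed log-density, standing in for a cheap surrogate of the exact model.
eps = 0.05
approx_log_target = lambda x: -0.5 * (1.0 + eps) * x ** 2

exact = mh_chain(exact_log_target, 50_000)
approx = mh_chain(approx_log_target, 50_000)
# Stationary variances are 1 and 1/(1+eps): the approximate chain stays close
# to, but is not exactly at, the intended target distribution.
print(exact.var(), approx.var())
```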
Under a fourth-order moment condition on the branching and a second-order moment condition on the immigration mechanisms, we show that an appropriately scaled projection of a supercritical and irreducible continuous-state and continuous-time branching process with immigration on certain left non-Perron eigenvectors of the branching mean matrix is asymptotically mixed normal. With an appropriate random scaling, under some conditional probability measure, we prove asymptotic normality as well. In the case of a non-trivial process, under a first-order moment condition on the immigration mechanism, we also prove the convergence of the relative frequencies of distinct types of individuals on a suitable event; for instance, if the immigration mechanism does not vanish, then this convergence holds almost surely.
We prove that projective spaces of Lorentzian and real stable polynomials are homeomorphic to Euclidean balls. This solves a conjecture of June Huh and the author. The proof utilises and refines a connection between the symmetric exclusion process in interacting particle systems and the geometry of polynomials.
For a $\psi$-mixing process $\xi_0,\xi_1,\xi_2,\ldots$ we consider the number $\mathcal{N}_N$ of multiple returns $\{\xi_{q_{i,N}(n)}\in\Gamma_N,\ i=1,\ldots,\ell\}$ to a set $\Gamma_N$ for $n$ until either a fixed number $N$ or the moment $\tau_N$ when another multiple return $\{\xi_{q_{i,N}(n)}\in\Delta_N,\ i=1,\ldots,\ell\}$ takes place for the first time, where $\Gamma_N\cap\Delta_N=\emptyset$ and $q_{i,N}$, $i=1,\ldots,\ell$, are certain functions of $n$ taking on non-negative integer values when $n$ runs from 0 to $N$. The dependence of $q_{i,N}(n)$ on both $n$ and $N$ is the main novelty of the paper. Under some restrictions on the functions $q_{i,N}$ we obtain Poisson distribution limits of $\mathcal{N}_N$ when counting is until $N$ as $N\to\infty$, and geometric distribution limits when counting is until $\tau_N$ as $N\to\infty$. We also obtain similar results in the dynamical systems setup, considering a $\psi$-mixing shift $T$ on a sequence space $\Omega$ and studying the number of multiple returns $\{T^{q_{i,N}(n)}\omega\in A^a_n,\ i=1,\ldots,\ell\}$ until the first occurrence of another multiple return $\{T^{q_{i,N}(n)}\omega\in A^b_m,\ i=1,\ldots,\ell\}$, where $A^a_n$ and $A^b_m$ are cylinder sets of length $n$ and $m$ constructed from sequences $a,b\in\Omega$, respectively, and chosen so that their probabilities have the same order.
We prove the existence and asymptotic behaviour of the transition density for a large class of subordinators whose Laplace exponents satisfy a lower scaling condition at infinity. Furthermore, we present lower and upper bounds for the density. Sharp estimates are provided if an additional upper scaling condition on the Laplace exponent is imposed. In particular, we cover the case when the (minus) second derivative of the Laplace exponent is a function regularly varying at infinity with regularity index bigger than $-2$.
We study the static maximization of long-term average profit when optimal preset thresholds are chosen to define a pairs trading strategy in a general one-dimensional ergodic diffusion model of the stochastic spread process. An explicit formula for the expected value of a certain first passage time is given and used to derive a simple equation determining the optimal thresholds. We also observe that the threshold strategy yields asymptotic arbitrage in the long run.
In this paper we consider the one-dimensional, biased, randomly trapped random walk with infinite-variance trapping times. We prove sufficient conditions for the suitably scaled walk to converge to a transformation of a stable Lévy process. As our main motivation, we apply subsequential versions of our results to biased walks on subcritical Galton–Watson trees conditioned to survive. This confirms the correct order of the fluctuations of the walk around its speed for values of the bias that yield a non-Gaussian regime.
It is well known that stationary geometrically ergodic Markov chains are $\beta$-mixing (absolutely regular) with geometrically decaying mixing coefficients. Furthermore, for initial distributions other than the stationary one, geometric ergodicity implies $\beta$-mixing under suitable moment assumptions. In this note we show that similar results hold also for subgeometrically ergodic Markov chains. In particular, for both stationary and other initial distributions, subgeometric ergodicity implies $\beta$-mixing with subgeometrically decaying mixing coefficients. Although this result is simple, it should prove very useful in obtaining rates of mixing in situations where geometric ergodicity cannot be established. To illustrate our results we derive new subgeometric ergodicity and $\beta$-mixing results for the self-exciting threshold autoregressive model.