A moment constraint that limits the number of dividends in an optimal dividend problem is suggested. This leads to a new type of time-inconsistent stochastic impulse control problem. First, the optimal solution in the precommitment sense is derived. Second, the problem is formulated as an intrapersonal sequential dynamic game in line with Strotz’s consistent planning. In particular, the notions of pure dividend strategies and a (strong) subgame-perfect Nash equilibrium are adapted. An equilibrium is derived using a smooth fit condition. The equilibrium is shown to be strong. The uncontrolled state process is a fairly general diffusion.
We provide constructions of age-structured branching processes without or with immigration as pathwise-unique solutions to stochastic integral equations. A necessary and sufficient condition for the ergodicity of the model with immigration is also given.
We take the first steps towards generalising the theory of stochastic block models, in the sparse regime, to a model in which the discrete community structure is replaced by an underlying geometry. We consider a geometric random graph over a homogeneous metric space where the probability that two vertices are connected is an arbitrary function of the distance. We give sufficient conditions under which the locations can be recovered (up to an isomorphism of the space) in the sparse regime. Moreover, we define a geometric counterpart of the model of flow of information on trees, due to Mossel and Peres, in which one considers a branching random walk on a sphere and the goal is to recover the location of the root based on the locations of the leaves. We give some sufficient conditions for percolation and for non-percolation of information in this model.
The random-cluster model is a unifying framework for studying random graphs, spin systems and electrical networks that plays a fundamental role in designing efficient Markov Chain Monte Carlo (MCMC) sampling algorithms for the classical ferromagnetic Ising and Potts models. In this paper, we study a natural non-local Markov chain known as the Chayes–Machta (CM) dynamics for the mean-field case of the random-cluster model, where the underlying graph is the complete graph on n vertices. The random-cluster model is parametrised by an edge probability p and a cluster weight q. Our focus is on the critical regime: $p = p_c(q)$ and $q \in (1,2)$, where $p_c(q)$ is the threshold corresponding to the order–disorder phase transition of the model. We show that the mixing time of the CM dynamics is $O(\log n \cdot \log\log n)$ in this parameter regime, which reveals that the dynamics does not undergo an exponential slowdown at criticality, a surprising fact that had been predicted (but not proved) by statistical physicists. This also provides a nearly optimal bound (up to the $\log\log n$ factor) for the mixing time of the mean-field CM dynamics in the only regime of parameters where no non-trivial bound was previously known. Our proof consists of a multi-phased coupling argument that combines several key ingredients, including a new local limit theorem, a precise bound on the maximum of symmetric random walks with varying step sizes and tailored estimates for critical random graphs. In addition, we derive an improved comparison inequality between the mixing time of the CM dynamics and that of the local Glauber dynamics on general graphs; this results in better mixing time bounds for the local dynamics in the mean-field setting.
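As a concrete illustration of the dynamics studied above, the following is a minimal sketch of one standard formulation of a single Chayes–Machta update on the complete graph: each cluster is activated independently with probability 1/q, and all edges inside the activated vertex set are resampled as independent Bernoulli(p) trials. The edge-set encoding and the helper `connected_components` are our own illustrative choices, not taken from the paper.

```python
import random
from itertools import combinations

def connected_components(n, edges):
    """Union-find over vertices 0..n-1 given a set of edges."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    comps = {}
    for v in range(n):
        comps.setdefault(find(v), []).append(v)
    return list(comps.values())

def cm_step(n, edges, p, q):
    """One Chayes-Machta update for the mean-field random-cluster model
    (one common formulation): activate each cluster independently with
    probability 1/q, then resample every edge inside the activated vertex
    set as i.i.d. Bernoulli(p); edges not contained in the activated set
    are kept.  Edges are represented as sorted pairs (u, v) with u < v."""
    comps = connected_components(n, edges)
    active = set()
    for comp in comps:
        if random.random() < 1.0 / q:
            active.update(comp)
    new_edges = {e for e in edges if not (e[0] in active and e[1] in active)}
    for u, v in combinations(sorted(active), 2):
        if random.random() < p:
            new_edges.add((u, v))
    return new_edges
```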
A diffusion approximation to a risk process under dynamic proportional reinsurance is considered. The goal is to minimise the discounted time in drawdown, that is, the time during which the distance of the present surplus from the running maximum is larger than a given level $d > 0$. We calculate the value function and determine the optimal reinsurance strategy. We conclude that the drawdown measure stabilises process paths but has a drawback: it also prevents surpassing the initial maximum. That is, under the optimal strategy the insurer is not interested in any further profits. We therefore suggest using optimisation criteria that do not avoid future profits.
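To make the optimisation criterion concrete, the sketch below estimates the expected discounted time in drawdown by Monte Carlo for one simple choice of surplus dynamics under a fixed proportional retention level b, namely $dX_t = b\mu\,dt + b\sigma\,dW_t$; the dynamics, drift, volatility and all parameter values are illustrative assumptions, not the calibration used in the paper.

```python
import numpy as np

def discounted_time_in_drawdown(b, mu=2.0, sigma=1.0, d=1.0, delta=0.05,
                                T=50.0, dt=0.01, n_paths=2000, seed=1):
    """Monte Carlo estimate of E[ int_0^T e^{-delta t} 1{M_t - X_t > d} dt ]
    for dX_t = b*mu*dt + b*sigma*dW_t (fixed retention level b), with M the
    running maximum of X.  All parameter values are illustrative only."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    t = np.arange(1, n + 1) * dt
    dX = rng.normal(b * mu * dt, b * sigma * np.sqrt(dt), size=(n_paths, n))
    X = np.cumsum(dX, axis=1)                 # surplus paths started at 0
    M = np.maximum.accumulate(X, axis=1)      # running maxima
    in_drawdown = (M - X > d)                 # drawdown indicator
    disc = np.exp(-delta * t)                 # discount factor on the grid
    return float(np.mean(np.sum(in_drawdown * disc, axis=1) * dt))

# e.g. compare two retention levels:
# print(discounted_time_in_drawdown(0.5), discounted_time_in_drawdown(1.0))
```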
Consider the following iterated process on a hypergraph H. Each vertex v starts with some initial weight $x_v$. At each step, uniformly at random select an edge e in H, and for each vertex v in e replace the weight of v by the average value of the vertex weights over all vertices in e. This is a generalization of an iterative process on graphs which was first introduced by Aldous and Lanoue. In this paper we use the eigenvalues of a Laplacian for hypergraphs to bound the rate of convergence for this iterated averaging process.
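The averaging step just described is straightforward to simulate; the sketch below uses a toy hypergraph and initial weights of our own choosing, purely for illustration.

```python
import random

def average_step(weights, hyperedges):
    """One step of the iterated averaging process: pick a hyperedge uniformly
    at random and replace every weight inside it by the edge average."""
    e = random.choice(hyperedges)
    avg = sum(weights[v] for v in e) / len(e)
    for v in e:
        weights[v] = avg
    return weights

# Toy example: 5 vertices, three hyperedges, a few hundred steps.
weights = [float(v) for v in range(5)]         # initial weights x_v
H = [[0, 1, 2], [1, 2, 3], [2, 3, 4]]          # hyperedges as vertex lists
for _ in range(300):
    average_step(weights, H)
print(weights)   # on a connected hypergraph the weights approach a common value
```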
Given a branching random walk $(Z_n)_{n\geq0}$ on $\mathbb{R}$, let $Z_n(A)$ be the number of particles located in the interval A at generation n. It is well known that, under some mild conditions, $Z_n(\sqrt nA)/Z_n(\mathbb{R})$ converges almost surely to $\nu(A)$ as $n\rightarrow\infty$, where $\nu$ is the standard Gaussian measure. We investigate the associated large-deviation probabilities under the condition that the step size or offspring law has a heavy tail, i.e. the decay rate of $\mathbb{P}(Z_n(\sqrt nA)/Z_n(\mathbb{R})>p)$ as $n\rightarrow\infty$, where $p\in(\nu(A),1)$. Our results complete those in Chen and He (2019) and Louidor and Perkins (2015).
Across a wide variety of applications, the self-exciting Hawkes process has been used to model phenomena in which the history of events influences future occurrences. However, there may be many situations in which the past events only influence the future as long as they remain active. For example, a person spreads a contagious disease only as long as they are contagious. In this paper, we define a novel generalization of the Hawkes process that we call the ephemerally self-exciting process. In this new stochastic process, the excitement from one arrival lasts for a randomly drawn activity duration, hence the ephemerality. Our study includes exploration of the process itself as well as connections to well-known stochastic models such as branching processes, random walks, epidemics, preferential attachment, and Bayesian mixture models. Furthermore, we prove a batch scaling construction of general, marked Hawkes processes from a general ephemerally self-exciting model, and this novel limit theorem both provides insight into the Hawkes process and motivates the model contained herein as an attractive self-exciting process in its own right.
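As an illustrative special case (our assumptions, not a definition from the paper), take a baseline rate mu, a constant excitement alpha contributed by each arrival while it remains active, and exponentially distributed activity durations; the intensity is then piecewise constant and, by memorylessness, the process can be simulated exactly with competing exponential clocks.

```python
import random

def simulate_esep(mu, alpha, theta, T, seed=None):
    """Exact simulation of an ephemerally self-exciting process on [0, T]
    in the special case of Exp(theta) activity durations: the intensity is
    mu + alpha*k while k past arrivals are still active, and (by
    memorylessness) the next expiration occurs at total rate k*theta."""
    rng = random.Random(seed)
    t, k = 0.0, 0
    arrivals = []
    while True:
        rate_arrival = mu + alpha * k
        rate_expiry = k * theta
        total = rate_arrival + rate_expiry
        t += rng.expovariate(total)
        if t > T:
            return arrivals
        if rng.random() < rate_arrival / total:
            arrivals.append(t)   # new arrival opens an activity period
            k += 1
        else:
            k -= 1               # one active arrival expires

# Illustrative run: events = simulate_esep(mu=1.0, alpha=0.5, theta=1.0, T=100.0)
```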
We study a general class of interacting particle systems called kinetically constrained models (KCM) in two dimensions tightly linked to the monotone cellular automata called bootstrap percolation. There are three classes of such models, the most studied being the critical one. In a recent series of works by Martinelli, Morris, Toninelli and the authors, it was shown that the KCM counterparts of critical bootstrap percolation models with the same properties split into two classes with different behaviour. Together with the companion paper by the first author, our work determines the logarithm of the infection time up to a constant factor for all critical KCM, which were previously known only up to logarithmic corrections. This improves all previous results except for the Duarte-KCM, for which we give a new proof of the best result known. We establish that on this level of precision critical KCM have to be classified into seven categories instead of the two in bootstrap percolation. In the present work, we establish lower bounds for critical KCM in a unified way, also recovering the universality result of Toninelli and the authors and the Duarte model result of Martinelli, Toninelli and the second author.
Let $(Z_n)_{n\geq 0}$ be a critical branching process in a random environment defined by a Markov chain $(X_n)_{n\geq 0}$ with values in a finite state space $\mathbb{X}$. Let $S_n = \sum_{k=1}^n \ln f_{X_k}^{\prime}(1)$ be the Markov walk associated to $(X_n)_{n\geq 0}$, where $f_i$ is the offspring generating function when the environment is $i \in \mathbb{X}$. Conditioned on the event $\{ Z_n>0\}$, we show the nondegeneracy of the limit law of the normalized number of particles ${Z_n}/{e^{S_n}}$ and determine the limit of the law of ${S_n}/{\sqrt{n}}$ jointly with $X_n$. Based on these results we establish a Yaglom-type theorem which specifies the limit of the joint law of $\log Z_n$ and $X_n$ given $Z_n>0$.
Let $B^{H}$ be a fractional Brownian motion in $\mathbb{R}^{d}$ of Hurst index $H\in\left(0,1\right)$, $f\;:\;\left[0,1\right]\longrightarrow\mathbb{R}^{d}$ a Borel function, and $A\subset\left[0,1\right]$ a Borel set. We provide sufficient conditions for the image $(B^{H}+f)(A)$ to have positive Lebesgue measure or non-empty interior. This is done through the study of the properties of the density of the occupation measure of $(B^{H}+f)$. Precisely, we prove that if the parabolic Hausdorff dimension of the graph of $f$ is greater than $Hd$, then the density is a square-integrable function. If, on the other hand, the Hausdorff dimension of A is greater than $Hd$, then the density even admits a continuous version. This allows us to establish the results stated above.
We revisit the so-called cat-and-mouse Markov chain, studied earlier by Litvak and Robert (2012). This is a two-dimensional Markov chain on the lattice $\mathbb{Z}^2$, where the first component (the cat) is a simple random walk and the second component (the mouse) changes when the components meet. We obtain new results for two generalisations of the model. First, in the two-dimensional case we consider far more general jump distributions for the components and obtain a scaling limit for the second component. When we let the first component be a simple random walk again, we further generalise the jump distribution of the second component. Secondly, we consider chains of three and more dimensions, where we investigate structural properties of the model and find a limiting law for the last component.
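For intuition, here is a minimal simulation of one common formulation of the original two-dimensional chain with nearest-neighbour jumps, rather than the general jump distributions considered in the paper; the update rule below (the mouse jumps when the cat lands on it) is our illustrative reading of the model.

```python
import random

def cat_and_mouse(n_steps, seed=None):
    """One common formulation of the cat-and-mouse chain on the integers:
    the cat moves as a simple random walk; whenever the cat lands on the
    mouse, the mouse jumps to a uniformly chosen neighbouring site."""
    rng = random.Random(seed)
    cat, mouse = 0, 0
    path = [(cat, mouse)]
    for _ in range(n_steps):
        cat += rng.choice((-1, 1))          # cat: simple random walk step
        if cat == mouse:                    # a meeting triggers a mouse move
            mouse += rng.choice((-1, 1))
        path.append((cat, mouse))
    return path

# e.g. trajectory = cat_and_mouse(10_000, seed=42)
```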
We consider two classes of irreducible Markovian arrival processes specified by the matrices C and D: the Markov-modulated Poisson process (MMPP) and the Markov-switched Poisson process (MSPP). The former exhibits a diagonal matrix D while the latter exhibits a diagonal matrix C. For these two classes we consider the following four statements: (I) the counting process is overdispersed; (II) the hazard rate of the event-stationary interarrival time is nonincreasing; (III) the squared coefficient of variation of the event-stationary process is greater than or equal to one; (IV) there is a stochastic order showing that the time-stationary interarrival time dominates the event-stationary interarrival time. For general MSPPs and order two MMPPs, we show that (I)–(IV) hold. Then for general MMPPs, it is easy to establish (I), while (II) is shown to be false by a counter-example. For general simple point processes, (III) follows from (IV). For MMPPs, we conjecture that (IV) and thus (III) hold. We also carry out some numerical experiments that fail to disprove this conjecture. Importantly, modelling folklore has often treated MMPPs as “bursty”, and implicitly assumed that (III) holds. However, to the best of our knowledge, proving this is still an open problem.
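Statement (I) is easy to probe numerically: the sketch below simulates an MMPP by running the modulating chain and generating Poisson arrivals at the state-dependent rate during each sojourn, then reports the empirical variance-to-mean ratio of the counts, which exceeds 1 under overdispersion. The two-state generator, the rates, and the fixed initial state are illustrative assumptions only.

```python
import numpy as np

def simulate_mmpp_counts(Q, lam, T, n_runs=2000, seed=0):
    """Simulate counts N(T) of a Markov-modulated Poisson process:
    Q is the generator of the modulating chain, lam[i] is the Poisson
    arrival rate while the chain is in state i.  Illustrative sketch,
    started from a fixed state rather than stationarity."""
    rng = np.random.default_rng(seed)
    m = len(lam)
    counts = np.zeros(n_runs)
    for r in range(n_runs):
        state, t, n = 0, 0.0, 0
        while t < T:
            hold = rng.exponential(1.0 / -Q[state, state])
            dwell = min(hold, T - t)
            n += rng.poisson(lam[state] * dwell)      # arrivals during sojourn
            t += hold
            if t < T:                                 # jump to a new state
                probs = Q[state].copy()
                probs[state] = 0.0
                probs /= probs.sum()
                state = rng.choice(m, p=probs)
        counts[r] = n
    return counts

# Two-state example: the index of dispersion Var/Mean should exceed 1.
Q = np.array([[-1.0, 1.0], [1.0, -1.0]])
counts = simulate_mmpp_counts(Q, lam=[0.5, 5.0], T=50.0)
print(counts.var() / counts.mean())    # overdispersion: ratio > 1
```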
An iterated perturbed random walk is a sequence of point processes defined by the birth times of individuals in subsequent generations of a general branching process, provided that the birth times of the first-generation individuals are given by a perturbed random walk. We prove counterparts of the classical renewal-theoretic results (the elementary renewal theorem, Blackwell’s theorem, and the key renewal theorem) for the number of jth-generation individuals with birth times $\leq t$, when $j,t\to\infty$ and $j(t)={\textrm{o}}\big(t^{2/3}\big)$. According to our terminology, such generations form a subset of the set of intermediate generations.
This paper is devoted to the study of regime-switching jump diffusion processes with countable regimes. It aims to establish Foster–Lyapunov-type criteria for exponential ergodicity of such processes. After recalling results concerning the petiteness of compact sets, this paper presents sufficient conditions for the existence of a Foster–Lyapunov function; this, in turn, helps to establish sufficient conditions for the desired exponential ergodicity for regime-switching jump diffusion processes. Finally, an application to feedback control problems is presented.
Let T be the regular tree in which every vertex has exactly $d\ge 3$ neighbours. Run a branching random walk on T, in which at each time step every particle gives birth to a random number of children with mean d and finite variance, and each of these children moves independently to a uniformly chosen neighbour of its parent. We show that, starting with one particle at some vertex 0 and conditionally on survival of the process, the time it takes for every vertex within distance r of 0 to be hit by a particle of the branching random walk is $r + ({2}/{\log(3/2)})\log\log r + {\mathrm{o}}(\log\log r)$.
Consider two-type linear-fractional branching processes in varying environments with asymptotically constant mean matrices. Let $\nu$ be the extinction time. Under certain conditions, we show that both $\mathbb{P}(\nu=n)$ and $\mathbb{P}(\nu>n)$ are asymptotically the same as some functions of the products of spectral radii of the mean matrices. We also give an example for which $\mathbb{P}(\nu=n)$ decays with various speeds, such as ${c}/{(n^{1/2}\log n)^{2}}$ and ${c}/{n^\beta}$, $\beta >1$, which are very different from those of homogeneous multitype Galton–Watson processes.
We give a setting of the Diaconis–Freedman chain in a multi-dimensional simplex and consider its asymptotic behavior. By using techniques from random iterated function theory and quasi-compact operator theory, we first give some sufficient conditions which ensure the existence and uniqueness of an invariant probability measure and, in particular cases, explicit formulas for the invariant probability density. Moreover, we completely classify all behaviors of this chain in dimension two. Some other settings of the chain are also discussed.
Latouche and Nguyen (2015b) constructed a sequence of stochastic fluid processes and showed that it converges weakly to a Markov-modulated Brownian motion (MMBM). Here, we construct a different sequence of stochastic fluid processes and show that it converges strongly to an MMBM. To the best of our knowledge, this is the first result on strong convergence to a Markov-modulated Brownian motion. Besides implying weak convergence, such a strong approximation constitutes a powerful tool for developing deep results for sophisticated models. Additionally, we prove that the rate of this almost sure convergence is $o(n^{-1/2} \log n)$. When reduced to the special case of standard Brownian motion, our convergence rate is an improvement over that obtained by a different approximation in Gorostiza and Griego (1980), which is $o(n^{-1/2}(\log n)^{5/2})$.
We investigate properties of random mappings whose core is composed of derangements as opposed to permutations. Such mappings arise as the natural framework for studying the Screaming Toes game described, for example, by Peter Cameron. This mapping differs from the classical case primarily in the behaviour of the small components, and a number of explicit results are provided to illustrate these differences.