We study propagation of avalanches in a certain excitable network. The model is a particular case of the one introduced by Larremore et al. (Phys. Rev. E, 2012) and is mathematically equivalent to an endemic variation of the Reed–Frost epidemic model introduced by Longini (Math. Biosci., 1980). Two types of heuristic approximation are frequently used for models of this type in applications: a branching process for avalanches of a small size at the beginning of the process and a deterministic dynamical system once the avalanche spreads to a significant fraction of a large network. In this paper we prove several results concerning the exact relation between the avalanche model and these limits, including rates of convergence and rigorous bounds for common characteristics of the model.
Oscillatory systems of interacting Hawkes processes with Erlang memory kernels were introduced by Ditlevsen and Löcherbach (Stoch. Process. Appl., 2017). They are piecewise deterministic Markov processes (PDMP) and can be approximated by a stochastic diffusion. In this paper, first, a strong error bound between the PDMP and the diffusion is proved. Second, moment bounds for the resulting diffusion are derived. Third, approximation schemes for the diffusion, based on the numerical splitting approach, are proposed. These schemes are proved to converge with mean-square order 1 and to preserve the properties of the diffusion, in particular the hypoellipticity, the ergodicity, and the moment bounds. Finally, the PDMP and the diffusion are compared through numerical experiments, where the PDMP is simulated with an adapted thinning procedure.
We consider a birth–death process with killing where transitions from state i may go to either state $i-1$ or state $i+1$ or an absorbing state (killing). Stochastic ordering results on the killing time are derived. In particular, if the killing rate in state i is monotone in i, then the distribution of the killing time with initial state i is stochastically monotone in i. This result is a consequence of the following one for a non-negative tridiagonal matrix M: if the row sums of M are monotone, so are the row sums of $M^n$ for all $n\ge 2$.
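The matrix lemma is easy to probe numerically. The sketch below (my own construction, not taken from the paper) generates random non-negative tridiagonal matrices with non-decreasing row sums and checks that the row sums of $M^n$ stay non-decreasing:

```python
import numpy as np

# A numerical sanity check (not a proof) of the tridiagonal-matrix lemma;
# the matrices and the monotonicity direction are my own choices.
rng = np.random.default_rng(0)

def random_tridiagonal(k):
    """Non-negative k x k tridiagonal matrix with non-decreasing row sums."""
    M = np.zeros((k, k))
    for i in range(k):
        for j in (i - 1, i, i + 1):
            if 0 <= j < k:
                M[i, j] = rng.uniform(0.0, 1.0)
    # pad the diagonal so that the row sums become non-decreasing
    sums = M.sum(axis=1)
    M[np.arange(k), np.arange(k)] += np.maximum.accumulate(sums) - sums
    return M

def row_sums_monotone(A, tol=1e-9):
    return bool(np.all(np.diff(A.sum(axis=1)) >= -tol))

ok = True
for _ in range(200):
    M = random_tridiagonal(6)
    P = M.copy()
    for n in range(2, 8):
        P = P @ M                      # P = M^n
        ok = ok and row_sums_monotone(P)
```

Every trial should report monotone row sums for each power, consistent with the lemma.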
Detailed balance of a chemical reaction network can be defined in several different ways. Here we investigate the relationship among four types of detailed balance conditions: deterministic, stochastic, local, and zero-order local detailed balance. We show that the four types of detailed balance are equivalent when different reactions lead to different species changes and are not equivalent when some different reactions lead to the same species change. Under the condition of local detailed balance, we further show that the system has a global potential defined over the whole space, which plays a central role in the large deviation theory and the Freidlin–Wentzell-type metastability theory of chemical reaction networks. Finally, we provide a new sufficient condition for stochastic detailed balance, which is applied to construct a class of high-dimensional chemical reaction networks that both satisfy stochastic detailed balance and display multistability.
In the classical simple random walk the steps are independent, that is, the walker has no memory. In contrast, in the elephant random walk, which was introduced by Schütz and Trimper [19] in 2004, the next step always depends on the whole path so far. Our main aim is to prove analogous results when the elephant has only a restricted memory, for example remembering only the most remote step(s), the most recent step(s), or both. We also extend the models to cover more general step sizes.
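A small simulation makes the memory mechanism concrete. In the original Schütz–Trimper dynamics the elephant recalls a uniformly random earlier step and repeats it with probability p (or reverses it otherwise); the restricted memories in the abstract are mimicked by the `memory` switch below (a hedged sketch, not the paper's exact models):

```python
import random

def elephant_walk(n, p, memory="full", seed=None):
    """Sketch of an elephant random walk with selectable memory.

    At each step the elephant recalls one earlier step and repeats it with
    probability p, or reverses it with probability 1 - p.  `memory` chooses
    which steps can be recalled: "full" (the whole path, as in Schütz and
    Trimper), "first" (only the most remote step), or "last" (only the most
    recent step).
    """
    rng = random.Random(seed)
    steps = [rng.choice((-1, 1))]          # first step: fair coin
    for _ in range(n - 1):
        if memory == "full":
            recalled = rng.choice(steps)
        elif memory == "first":
            recalled = steps[0]
        else:                               # "last"
            recalled = steps[-1]
        steps.append(recalled if rng.random() < p else -recalled)
    return steps

walk = elephant_walk(1000, p=0.75, seed=1)
position = sum(walk)                        # position = cumulative sum of steps
```

With `memory="first"` and `p=1` every step copies the first one, so the walk is ballistic, which is the degenerate end of the restricted-memory regime.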
A class of controlled branching processes with continuous time is introduced, and some limiting distributions are obtained in the critical case. An extension of this class, regenerative controlled branching processes with continuous time, is proposed, and some asymptotic properties are considered.
It is well known that the height profile of a critical conditioned Galton–Watson tree with finite offspring variance converges, after a suitable normalisation, to the local time of a standard Brownian excursion. In this work, we study the distance profile, defined as the profile of all distances between pairs of vertices. We show that after a proper rescaling the distance profile converges to a continuous random function that can be described as the density of distances between random points in the Brownian continuum random tree. We show that this limiting function is a.s. Hölder continuous of any order $\alpha<1$, and that it is a.e. differentiable. We note that it cannot be differentiable at 0, but leave as open questions whether it is Lipschitz, and whether it is continuously differentiable on the half-line $(0,\infty)$. The distance profile is naturally defined also for unrooted trees, in contrast to the height profile, which is designed for rooted trees. This is used in our proof, and we prove the corresponding convergence result for the distance profile of random unrooted simply generated trees. As a minor purpose of the present work, we also formalize the notion of unrooted simply generated trees and include some simple results relating them to rooted simply generated trees, which might be of independent interest.
We consider backward filtrations generated by processes coming from deterministic and probabilistic cellular automata. We prove that these filtrations are standard in the classical sense of Vershik’s theory, but we also study them from another point of view that takes into account the measure-preserving action of the shift map, for which each sigma-algebra in the filtrations is invariant. This initiates what we call the dynamical classification of factor filtrations, and the examples we study show that this classification leads to different results.
Mixing rates, relaxation rates, and decay of correlations for dynamics defined by potentials with summable variations are well understood, but little is known for non-summable variations. This paper exhibits upper bounds for these quantities for dynamics defined by potentials with square-summable variations. We obtain these bounds as corollaries of a new block coupling inequality between pairs of dynamics starting with different histories. As applications of our results, we prove a new weak invariance principle and a Hoeffding-type inequality.
We show that fractal percolation sets in $\mathbb{R}^{d}$ almost surely intersect every hyperplane absolutely winning (HAW) set with full Hausdorff dimension. In particular, if $E\subset\mathbb{R}^{d}$ is a realisation of a fractal percolation process, then almost surely (conditioned on $E\neq\emptyset$), for every countable collection $\left(f_{i}\right)_{i\in\mathbb{N}}$ of $C^{1}$ diffeomorphisms of $\mathbb{R}^{d}$, $\dim_{H}\left(E\cap\left(\bigcap_{i\in\mathbb{N}}f_{i}\left(\text{BA}_{d}\right)\right)\right)=\dim_{H}\left(E\right)$, where $\text{BA}_{d}$ is the set of badly approximable vectors in $\mathbb{R}^{d}$. We show this by proving that E almost surely contains hyperplane diffuse subsets which are Ahlfors-regular with dimensions arbitrarily close to $\dim_{H}\left(E\right)$.
We achieve this by analysing Galton–Watson trees and showing that they almost surely contain appropriate subtrees whose projections to $\mathbb{R}^{d}$ yield the aforementioned subsets of E. This method allows us to obtain a more general result by projecting the Galton–Watson trees against any similarity IFS whose attractor is not contained in a single affine hyperplane. Thus our general result relates to a broader class of random fractals than fractal percolation.
Distinguishing between continuous and first-order phase transitions is a major challenge in random discrete systems. We study the topic for events with recursive structure on Galton–Watson trees. For example, let $\mathcal{T}_1$ be the event that a Galton–Watson tree is infinite and let $\mathcal{T}_2$ be the event that it contains an infinite binary tree starting from its root. These events satisfy similar recursive properties: $\mathcal{T}_1$ holds if and only if $\mathcal{T}_1$ holds for at least one of the trees initiated by children of the root, and $\mathcal{T}_2$ holds if and only if $\mathcal{T}_2$ holds for at least two of these trees. The probability of $\mathcal{T}_1$ has a continuous phase transition, increasing from 0 when the mean of the child distribution increases above 1. On the other hand, the probability of $\mathcal{T}_2$ has a first-order phase transition, jumping discontinuously to a non-zero value at criticality. Given the recursive property satisfied by the event, we describe the critical child distributions where a continuous phase transition takes place. In many cases, we also characterise the event undergoing the phase transition.
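The contrast between the two transitions can be seen by iterating the recursive fixed-point equations numerically. Assuming a Poisson($m$) offspring law (my choice for illustration, not taken from the paper), the number of children whose subtrees satisfy the event is Poisson($mp$) by thinning, which gives the equations in the sketch below:

```python
import math

# Fixed-point equations under an assumed Poisson(m) offspring law:
#   T1 (survival):              s = 1 - exp(-m*s)
#   T2 (infinite binary tree):  p = 1 - exp(-m*p) * (1 + m*p)
# Iterating down from 1 selects the largest root, i.e. the event probability.

def largest_fixed_point(update, iters=20_000):
    p = 1.0
    for _ in range(iters):
        p = update(p)
    return p

def survival(m):
    return largest_fixed_point(lambda s: 1.0 - math.exp(-m * s))

def binary_tree(m):
    return largest_fixed_point(
        lambda p: 1.0 - math.exp(-m * p) * (1.0 + m * p))

# T1: continuous transition; the probability creeps up from 0 past m = 1
p_a, p_b = survival(1.01), survival(1.2)       # ~0.02 and ~0.31
# T2: first-order transition; a macroscopic root appears near m ~ 3.35
q_a, q_b = binary_tree(3.0), binary_tree(3.5)  # ~0 and ~0.71
```

Below the critical mean the only fixed point for $\mathcal{T}_2$ is 0, while just above it the iteration lands on a value bounded away from 0, which is exactly the discontinuous jump described above.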
In this paper we show how ideas, methods and results from optimal transportation can be used to study various aspects of the stationary measures of Iterated Function Systems equipped with a probability distribution. We recover a classical existence and uniqueness result under a contraction-on-average assumption, prove generalised moment bounds from which tail estimates can be deduced, consider the convergence of the empirical measure of an associated Markov chain, and prove in many cases the Lipschitz continuity of the stationary measure when the system is perturbed, which yields a “linear response formula” at almost every parameter of the perturbation.
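The objects involved are easy to simulate. A minimal sketch, using a toy IFS of my own choosing (not from the paper): the two maps $x/2$ and $(x+1)/2$ with equal weights contract on average, and their unique stationary measure is the uniform law on $[0,1]$, which the empirical measure of the associated Markov chain approximates:

```python
import random

# Toy contracting-on-average IFS {x/2, (x+1)/2} with equal weights; its
# stationary measure is uniform on [0,1] (binary-expansion argument).
# We sample the Markov chain x_{n+1} = f_I(x_n) and check the empirical
# mean and variance against 1/2 and 1/12.
def ifs_chain(n, seed=0):
    rng = random.Random(seed)
    x, samples = rng.random(), []
    for _ in range(n):
        x = x / 2.0 if rng.random() < 0.5 else (x + 1.0) / 2.0
        samples.append(x)
    return samples

samples = ifs_chain(200_000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# mean ~ 1/2, var ~ 1/12
```

This is the empirical-measure convergence mentioned in the abstract, observed through its first two moments.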
This paper considers risk-sensitive average optimization for denumerable continuous-time Markov decision processes (CTMDPs), in which the transition and cost rates are allowed to be unbounded, and the policies can be randomized history dependent. We first derive the multiplicative dynamic programming principle and some new facts for risk-sensitive finite-horizon CTMDPs. Then, we establish the existence and uniqueness of a solution to the risk-sensitive average optimality equation (RS-AOE) through the results for risk-sensitive finite-horizon CTMDPs developed here, and also prove the existence of an optimal stationary policy via the RS-AOE. Furthermore, for the case of finite actions available at each state, we construct a sequence of models of finite-state CTMDPs with optimal stationary policies which can be obtained by a policy iteration algorithm in a finite number of iterations, and prove that an average optimal policy for the case of infinitely countable states can be approximated by those of the finite-state models. Finally, we illustrate the conditions and the iteration algorithm with an example.
We consider a gradual-impulse control problem of continuous-time Markov decision processes, where the system performance is measured by the expectation of the exponential utility of the total cost. We show, under natural conditions on the system primitives, the existence of a deterministic stationary optimal policy out of a more general class of policies that allow multiple simultaneous impulses, randomized selection of impulses with random effects, and accumulation of jumps. After characterizing the value function using the optimality equation, we reduce the gradual-impulse control problem to an equivalent simple discrete-time Markov decision process, whose action space is the union of the sets of gradual and impulsive actions.
We consider a stochastic matching model with a general compatibility graph, as introduced by Mairesse and Moyal (2016). We show that the natural necessary condition of stability of the system is also sufficient for the natural ‘first-come, first-matched’ matching policy. To do so, we derive the stationary distribution under a remarkable product form, by using an original dynamic reversibility property related to that of Adan, Bušić, Mairesse, and Weiss (2018) for the bipartite matching model.
The paper discusses the risk of ruin in insurance coverage of an epidemic in a closed population. The model studied is an extended susceptible–infective–removed (SIR) epidemic model built by Lefèvre and Simon (Methodology Comput. Appl. Prob. 22, 2020) as a block-structured Markov process. A fluid component is then introduced to describe the premium amounts received and the care costs reimbursed by the insurance. Our interest is in the risk of collapse of the corresponding reserves of the company. The use of matrix-analytic methods allows us to determine the distribution of ruin time, the probability of ruin, and the final amount of reserves. The case where the reserves are subjected to a Brownian noise is also studied. Finally, some of the results obtained are illustrated for two particular standard SIR epidemic models.
We focus on the population dynamics driven by two classes of truncated $\alpha$-stable processes with Markovian switching. Almost necessary and sufficient conditions for the ergodicity of the proposed models are provided. These results also illustrate the impact on the ergodicity and extinction conditions as the parameter $\alpha$ tends to 2.
We develop a continuous-time Markov chain (CTMC) approximation of one-dimensional diffusions with sticky boundary or interior points. Approximate solutions to the action of the Feynman–Kac operator associated with a sticky diffusion and first passage probabilities are obtained using matrix exponentials. We show how to compute matrix exponentials efficiently and prove that a carefully designed scheme achieves second-order convergence. We also propose a scheme based on CTMC approximation for the simulation of sticky diffusions, for which the Euler scheme may completely fail. The efficiency of our method and its advantages over alternative approaches are illustrated in the context of bond pricing in a sticky short-rate model for a low-interest environment and option pricing under a geometric Brownian motion price model with a sticky interior point.
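The core computation is the matrix exponential of a CTMC generator. A toy sketch (the three-state generator is my own, not the paper's sticky-diffusion chain): the transition matrix at time $t$ is $e^{tQ}$, from which first-passage probabilities into an absorbing state can be read off. A short truncated Taylor series suffices here; production code would use a scaling-and-squaring routine such as `scipy.linalg.expm`:

```python
import numpy as np

def expm_series(A, terms=60):
    """exp(A) via truncated Taylor series (adequate for small matrices)."""
    P = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        P = P + term
    return P

# Toy generator: rows sum to zero; state 2 is absorbing ("exit" state).
Q = np.array([
    [-2.0,  2.0,  0.0],
    [ 1.0, -3.0,  2.0],
    [ 0.0,  0.0,  0.0],
])

t = 1.5
P = expm_series(t * Q)     # P[i, j] = P(X_t = j | X_0 = i)
hit = P[0, 2]              # probability of absorption in state 2 by time t
```

Because each row of $Q$ sums to zero, each row of $e^{tQ}$ sums to one, a quick correctness check for any such scheme.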
We analyze average-based distributed algorithms relying on simple pairwise random interactions among a large and unknown number of anonymous agents. This allows the characterization of global properties emerging from these local interactions. Agents start with an initial integer value, and at each interaction adopt the integer part of the average of the two values as their new value. Convergence occurs when, with high probability, all the agents possess the same value, which means that they all know a property of the global system. Using a well-chosen stochastic coupling, we improve upon existing results by providing explicit and tight bounds on the convergence time. We apply these general results to both the proportion problem and the system size problem.
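A hedged sketch of such a protocol (the paper's exact update rule may differ): here each interacting pair replaces its two values by the floor and the ceiling of their average, which keeps the global sum invariant and drives the population toward consensus:

```python
import random

# Pairwise integer-averaging sketch: a uniformly random pair replaces its
# values by floor and ceiling of their average, so the sum never changes.
# We stop once all values are within one of each other.
def averaging(values, seed=0, max_steps=10**6):
    rng = random.Random(seed)
    v = list(values)
    steps = 0
    while max(v) - min(v) > 1 and steps < max_steps:
        i, j = rng.sample(range(len(v)), 2)      # uniform random pair
        s = v[i] + v[j]
        v[i], v[j] = s // 2, s - s // 2          # floor and ceiling
        steps += 1
    return v, steps

final, steps = averaging([0] * 10 + [10] * 10, seed=1)
```

Since the sum is conserved, the values settle to within one of each other around the global average (here 5), which is the consensus behaviour whose convergence time the abstract bounds.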
We investigate the impact of Knightian uncertainty on the optimal timing policy of an ambiguity-averse decision-maker in the case where the underlying factor dynamics follow a multidimensional Brownian motion and the exercise payoff depends on either a linear combination of the factors or the radial part of the driving factor dynamics. We present a general characterization of the value of the optimal timing policy and the worst-case measure in terms of a family of explicitly identified excessive functions generating an appropriate class of supermartingales. In line with previous findings based on linear diffusions, we find that ambiguity accelerates timing in comparison with the unambiguous setting. Somewhat surprisingly, we find that ambiguity may lead to stationarity in models which typically do not possess stationary behavior. In this way, our results indicate that ambiguity may act as a stabilizing mechanism.