We study an N-player game where a pure action of each player is to select a nonnegative function on a Polish space supporting a finite diffuse measure, subject to a finite constraint on the integral of the function. This function is used to define the intensity of a Poisson point process on the Polish space. The processes are independent over the players, and the value to a player is the measure of the union of her open Voronoi cells in the superposition point process. Under randomized strategies, the process of points of a player is thus a Cox process, and the nature of competition between the players is akin to that in Hotelling competition games. We characterize when such a game admits Nash equilibria and prove that when a Nash equilibrium exists, it is unique and consists of pure strategies that are mutually proportional, in the same proportions as the players' total intensities. We give examples of such games where Nash equilibria do not exist. A better understanding of the criterion for the existence of Nash equilibria remains an intriguing open problem.
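To make the payoff concrete: if the diffuse measure is the uniform distribution on the unit square, the measure of a player's union of open Voronoi cells equals the probability that a uniformly chosen location is closer to one of her points than to any opponent point. The following Monte Carlo sketch estimates this for two players spreading constant intensities; the space, the measure, and the intensity budgets are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_poisson_points(total_intensity):
    """Homogeneous Poisson point process on the unit square with the given total intensity."""
    n = rng.poisson(total_intensity)
    return rng.random((n, 2))

def voronoi_share(points_a, points_b, n_test=20_000):
    """Monte Carlo estimate of the area of the union of player A's Voronoi cells
    in the superposition of the two point sets."""
    if len(points_a) == 0:
        return 0.0
    if len(points_b) == 0:
        return 1.0
    test = rng.random((n_test, 2))
    dist_a = np.linalg.norm(test[:, None, :] - points_a[None, :, :], axis=2).min(axis=1)
    dist_b = np.linalg.norm(test[:, None, :] - points_b[None, :, :], axis=2).min(axis=1)
    return float(np.mean(dist_a < dist_b))

# One realization of the game when both players spread constant intensities,
# with total budgets 30 and 60 (arbitrary illustrative values).
payoff_a = voronoi_share(sample_poisson_points(30.0), sample_poisson_points(60.0))
print(f"player A's share of the square in this realization: {payoff_a:.3f}")
```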
The Chow–Robbins game is a classical, still partly unsolved, stopping problem introduced by Chow and Robbins in 1965. You repeatedly toss a fair coin, and after each toss you decide whether to take the current fraction of heads as your payoff or to continue tossing. As a more general stopping problem this reads $V(n,x) = \sup_{\tau }\mathbb{E} \left [ \frac{x + S_\tau}{n+\tau}\right]$, where S is a random walk. We give a tight upper bound for V when S has sub-Gaussian increments by using the analogous continuous-time problem with a standard Brownian motion as the driving process. For the Chow–Robbins game we also give a tight lower bound and use these bounds to calculate, on the integers, the complete continuation and stopping sets of the problem for $n\leq 489\,241$.
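Because the value function satisfies the one-step recursion $V(n,x) = \max\{x/n,\ \tfrac12 V(n+1,x) + \tfrac12 V(n+1,x+1)\}$ (with x the number of heads among the first n tosses), a crude picture of the continuation and stopping sets can be obtained by finite-horizon backward induction. The sketch below truncates the horizon and replaces the tail value by 1/2, so it only approximates the tight bounds derived in the paper.

```python
import numpy as np

def chow_robbins_boundary(horizon=2000, report_up_to=25):
    """Finite-horizon backward induction for V(n, x) = sup_tau E[(x + S_tau)/(n + tau)],
    where x is the number of heads among the first n fair tosses.  The continuation value
    at the truncation horizon is replaced by 1/2 (the long-run fraction of heads), so the
    output is only a rough approximation of the true continuation/stopping sets."""
    V_next = np.maximum(np.arange(horizon + 1) / horizon, 0.5)   # layer n = horizon
    boundary = {}
    for n in range(horizon - 1, 0, -1):
        x = np.arange(n + 1)
        stop = x / n
        cont = 0.5 * (V_next[x] + V_next[x + 1])   # tails -> (n+1, x), heads -> (n+1, x+1)
        if n <= report_up_to:
            first = np.nonzero(stop >= cont)[0]
            boundary[n] = int(first[0]) if first.size else None
        V_next = np.maximum(stop, cont)
    return boundary   # smallest head count at which stopping looks optimal, for small n

print(chow_robbins_boundary())
```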
A diffusion approximation to a risk process under dynamic proportional reinsurance is considered. The goal is to minimise the discounted time in drawdown; that is, the time where the distance of the present surplus to the running maximum is larger than a given level $d > 0$. We calculate the value function and determine the optimal reinsurance strategy. We conclude that the drawdown measure stabilises process paths but has a drawback as it also prevents surpassing the initial maximum. That is, the insurer is, under the optimal strategy, not interested in any more profits. We therefore suggest using optimisation criteria that do not avoid future profits.
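For intuition only (this is not the paper's calibrated model), the drawdown criterion is easy to evaluate by simulation. The sketch below uses a drifted Brownian surplus whose drift and volatility are scaled by a proportional retention level, both arbitrary choices, and computes the discounted time spent more than d below the running maximum on a single path.

```python
import numpy as np

def discounted_time_in_drawdown(mu, sigma, d, delta, T=200.0, dt=0.001, seed=0):
    """One-path estimate of int_0^T e^{-delta t} 1{M_t - X_t > d} dt for a drifted Brownian
    surplus dX_t = mu dt + sigma dW_t with running maximum M; parameters are illustrative."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    increments = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
    X = np.concatenate(([0.0], np.cumsum(increments)))
    M = np.maximum.accumulate(X)
    t = np.arange(n + 1) * dt
    return float(np.sum(np.exp(-delta * t) * (M - X > d)) * dt)

# Larger retention means larger fluctuations and, typically, more time spent in drawdown.
for retention in (0.4, 0.7, 1.0):
    value = discounted_time_in_drawdown(mu=0.5 * retention, sigma=retention, d=1.0, delta=0.05)
    print(retention, round(value, 3))
```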
Given a branching random walk $(Z_n)_{n\geq0}$ on $\mathbb{R}$, let $Z_n(A)$ be the number of particles located in the interval A at generation n. It is well known that, under some mild conditions, $Z_n(\sqrt nA)/Z_n(\mathbb{R})$ converges almost surely to $\nu(A)$ as $n\rightarrow\infty$, where $\nu$ is the standard Gaussian measure. We investigate the associated large-deviation probabilities, i.e. the decay rate of $\mathbb{P}(Z_n(\sqrt nA)/Z_n(\mathbb{R})>p)$ as $n\rightarrow\infty$ for $p\in(\nu(A),1)$, under the condition that the step size or offspring law has a heavy tail. Our results complete those in Chen and He (2019) and Louidor and Perkins (2015).
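To make the quantity $Z_n(\sqrt n A)/Z_n(\mathbb{R})$ concrete, the following small simulation uses binary branching and standard normal displacements (illustrative choices only); for $A=(0,\infty)$ the proportion should be close to $\nu(A)=1/2$.

```python
import numpy as np

def brw_proportion(n=14, a=0.0, seed=0):
    """Z_n((sqrt(n) a, infinity)) / Z_n(R) for a binary branching random walk with
    standard normal displacements; offspring and step laws are illustrative choices."""
    rng = np.random.default_rng(seed)
    positions = np.zeros(1)
    for _ in range(n):
        positions = np.repeat(positions, 2)                      # two offspring per particle
        positions = positions + rng.standard_normal(positions.size)
    return float(np.mean(positions > np.sqrt(n) * a))

print(brw_proportion())   # close to nu((0, infinity)) = 1/2
```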
Across a wide variety of applications, the self-exciting Hawkes process has been used to model phenomena in which the history of events influences future occurrences. However, there may be many situations in which the past events only influence the future as long as they remain active. For example, a person spreads a contagious disease only as long as they are contagious. In this paper, we define a novel generalization of the Hawkes process that we call the ephemerally self-exciting process. In this new stochastic process, the excitement from one arrival lasts for a randomly drawn activity duration, hence the ephemerality. Our study includes exploration of the process itself as well as connections to well-known stochastic models such as branching processes, random walks, epidemics, preferential attachment, and Bayesian mixture models. Furthermore, we prove a batch scaling construction of general, marked Hawkes processes from a general ephemerally self-exciting model, and this novel limit theorem both provides insight into the Hawkes process and motivates the model contained herein as an attractive self-exciting process in its own right.
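As one concrete special case (an assumption made here for illustration, not the paper's general setup), suppose each arrival raises the intensity by a fixed amount for an independent exponentially distributed activity duration. The intensity is then piecewise constant, and the process can be simulated exactly by competing exponential clocks.

```python
import heapq
import random

def simulate_esep(baseline=0.5, jump=0.8, mean_activity=1.0, t_max=50.0, seed=1):
    """Exact simulation sketch of an ephemerally self-exciting process in which each arrival
    raises the intensity by `jump` for an independent Exp(1/mean_activity) activity duration,
    after which its excitement vanishes.  Constant excitation and exponential activity
    durations are illustrative assumptions."""
    rng = random.Random(seed)
    t, arrivals, expirations = 0.0, [], []    # expirations: min-heap of activity end times
    while True:
        rate = baseline + jump * len(expirations)        # piecewise-constant intensity
        candidate = t + rng.expovariate(rate)            # next potential arrival
        if expirations and expirations[0] < candidate:   # an excitement expires first
            t = heapq.heappop(expirations)               # memorylessness makes resampling exact
            continue
        t = candidate
        if t > t_max:
            return arrivals
        arrivals.append(t)
        heapq.heappush(expirations, t + rng.expovariate(1.0 / mean_activity))

print(len(simulate_esep()), "arrivals on [0, 50]")
```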
For the gambler’s ruin problem with two players starting with the same amount of money, we show the playing time is stochastically maximized when the games are fair.
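A quick numerical companion to this statement (illustrating only the expected playing time; the paper proves the stronger stochastic ordering): solving the standard first-step equations shows the expectation peaks at $p=1/2$. The starting stake of 20 units below is an arbitrary choice.

```python
import numpy as np

def expected_duration(k, p):
    """Expected length of a gambler's-ruin game in which each player starts with k units
    and player 1 wins each round with probability p.  Solves the first-step system
    E[T_i] = 1 + p E[T_{i+1}] + (1-p) E[T_{i-1}] with boundary values T_0 = T_{2k} = 0."""
    n = 2 * k
    A = np.zeros((n - 1, n - 1))
    b = np.ones(n - 1)
    for i in range(1, n):
        A[i - 1, i - 1] = 1.0
        if i + 1 < n:
            A[i - 1, i] = -p
        if i - 1 > 0:
            A[i - 1, i - 2] = -(1.0 - p)
    return float(np.linalg.solve(A, b)[k - 1])

# The expected playing time is largest when the game is fair (p = 1/2).
for p in (0.3, 0.4, 0.5, 0.6, 0.7):
    print(p, round(expected_duration(20, p), 1))
```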
This paper discusses a general class of replicator–mutator equations on a multidimensional fitness space. We establish a novel probabilistic representation of weak solutions of the equation by using the theory of Fokker–Planck–Kolmogorov (FPK) equations and a martingale extraction approach. We provide examples with closed-form probabilistic solutions for different fitness functions considered in the existing literature. We also construct a particle system and prove a general convergence result to the unique solution of the FPK equation associated with the extended replicator–mutator equation with respect to a Wasserstein-like metric adapted to our probabilistic framework.
Signal-to-interference-plus-noise ratio (SINR) percolation is an infinite-range dependent variant of continuum percolation modeling connections in a telecommunication network. Unlike in earlier works, in the present paper the transmitted signal powers of the devices of the network are assumed random, independent and identically distributed, and possibly unbounded. Additionally, we assume that the devices form a stationary Cox point process, i.e., a Poisson point process with stationary random intensity measure, in two or more dimensions. We present the following main results. First, under suitable moment conditions on the signal powers and the intensity measure, there is percolation in the SINR graph given that the device density is high and interferences are sufficiently reduced, but not vanishing. Second, if the interference cancellation factor $\gamma$ and the SINR threshold $\tau$ satisfy $\gamma \geq 1/(2\tau)$, then there is no percolation for any intensity parameter. Third, in the case of a Poisson point process with constant powers, for any intensity parameter that is supercritical for the underlying Gilbert graph, the SINR graph also percolates with some small but positive interference cancellation factor.
This paper studies the joint tail asymptotics of extrema of the multi-dimensional Gaussian process over random intervals defined as $P(u)\;:\!=\; \mathbb{P}\{\cap_{i=1}^n (\sup_{t\in[0,\mathcal{T}_i]} ( X_{i}(t) +c_i t )>a_i u )\}$, $u\rightarrow\infty$, where $X_i(t)$, $t\ge0$, $i=1,2,\ldots,n$, are independent centered Gaussian processes with stationary increments, $\boldsymbol{\mathcal{T}}=(\mathcal{T}_1, \ldots, \mathcal{T}_n)$ is a regularly varying random vector with positive components, which is independent of the Gaussian processes, and $c_i\in \mathbb{R}$, $a_i>0$, $i=1,2,\ldots,n$. Our result shows that the structure of the asymptotics of P(u) is determined by the signs of the drifts $c_i$. We also discuss a relevant multi-dimensional regenerative model and derive the corresponding ruin probability.
We establish a normal approximation for the limiting distribution of partial sums of random Rademacher multiplicative functions over function fields, provided the number of irreducible factors of the polynomials is small enough. This parallels work of Harper for random Rademacher multiplicative functions over the integers.
For a one-locus haploid infinite population with discrete generations, the celebrated model of Kingman describes the evolution of fitness distributions under the competition of selection and mutation, with a constant mutation probability. This paper generalises Kingman’s model by using independent and identically distributed random mutation probabilities, to reflect the influence of a random environment. The weak convergence of fitness distributions to the globally stable equilibrium is proved. Condensation occurs when almost surely a positive proportion of the population travels to and condenses at the largest fitness value. Condensation may occur when selection is favoured over mutation. A criterion for the occurrence of condensation is given.
We consider two classes of irreducible Markovian arrival processes specified by the matrices C and D: the Markov-modulated Poisson process (MMPP) and the Markov-switched Poisson process (MSPP). The former exhibits a diagonal matrix D while the latter exhibits a diagonal matrix C. For these two classes we consider the following four statements: (I) the counting process is overdispersed; (II) the hazard rate of the event-stationary interarrival time is nonincreasing; (III) the squared coefficient of variation of the event-stationary process is greater than or equal to one; (IV) there is a stochastic order showing that the time-stationary interarrival time dominates the event-stationary interarrival time. For general MSPPs and order two MMPPs, we show that (I)–(IV) hold. Then for general MMPPs, it is easy to establish (I), while (II) is shown to be false by a counter-example. For general simple point processes, (III) follows from (IV). For MMPPs, we conjecture that (IV) and thus (III) hold. We also carry out some numerical experiments that fail to disprove this conjecture. Importantly, modelling folklore has often treated MMPPs as “bursty”, and implicitly assumed that (III) holds. However, to the best of our knowledge, proving this is still an open problem.
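The numerical experiments mentioned above are easy to reproduce in spirit. The sketch below builds a two-state MMPP with arbitrary illustrative rates and evaluates the squared coefficient of variation of the event-stationary interarrival time via the standard phase-type representation of Markovian arrival processes; the output is consistent with statement (III).

```python
import numpy as np

# Two-state MMPP: D = diag(arrival rates), C = Q - D so that C + D = Q is a proper generator.
# The particular rates and generator are arbitrary illustrative values.
rates = np.array([1.0, 8.0])               # Poisson rates in the two phases
Q = np.array([[-0.2, 0.2],
              [ 0.5, -0.5]])               # phase-process generator
D = np.diag(rates)
C = Q - D

# The event-stationary interarrival time is phase-type (pi_e, C), where pi_e is the
# stationary vector of the embedded phase chain P = (-C)^{-1} D.
P = np.linalg.solve(-C, D)
eigvals, eigvecs = np.linalg.eig(P.T)
pi_e = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])
pi_e /= pi_e.sum()

ones = np.ones(2)
m1 = pi_e @ np.linalg.solve(-C, ones)                              # E[X]
m2 = 2 * pi_e @ np.linalg.solve(-C, np.linalg.solve(-C, ones))     # E[X^2]
scv = m2 / m1**2 - 1
print(f"squared coefficient of variation: {scv:.3f}  (>= 1, as in statement (III))")
```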
An iterated perturbed random walk is a sequence of point processes defined by the birth times of individuals in subsequent generations of a general branching process, provided that the birth times of the first-generation individuals are given by a perturbed random walk. We prove counterparts of the classical renewal-theoretic results (the elementary renewal theorem, Blackwell’s theorem, and the key renewal theorem) for the number of jth-generation individuals with birth times $\leq t$, when $j,t\to\infty$ and $j=j(t)={\textrm{o}}\big(t^{2/3}\big)$. According to our terminology, such generations form a subset of the set of intermediate generations.
We present a new and straightforward algorithm that simulates exact sample paths for a generalized stress-release process. The computation of the exact law of the joint inter-arrival times is detailed and used to derive this algorithm. Furthermore, the martingale generator of the process is derived; it yields theoretical moments that generalize some results of [3] and are used to validate our simulation algorithm.
We consider the near-critical Erdős–Rényi random graph G(n, p) and provide a new probabilistic proof of the fact that, when p is of the form $p=p(n)=1/n+\lambda/n^{4/3}$ and A is large,
where $\mathcal{C}_{\max}$ is the largest connected component of the graph. Our result allows A and $\lambda$ to depend on n. While this result is already known, our proof relies only on conceptual and adaptable tools such as ballot theorems, whereas the existing proof relies on a combinatorial formula specific to Erdős–Rényi graphs, together with analytic estimates.
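For readers who want an empirical look at this tail, the sketch below samples near-critical graphs and estimates the probability that $|\mathcal{C}_{\max}|$ exceeds $An^{2/3}$; the values of n, $\lambda$, A, and the sample size are arbitrary, and networkx is used for the graph routines.

```python
import networkx as nx

# Rough Monte Carlo look at the tail of |C_max| / n^{2/3} in the critical window.
n, lam, A, samples = 20_000, 1.0, 1.5, 200
p = 1 / n + lam / n ** (4 / 3)
threshold = A * n ** (2 / 3)

count = 0
for seed in range(samples):
    G = nx.fast_gnp_random_graph(n, p, seed=seed)
    largest = max(len(c) for c in nx.connected_components(G))
    count += largest > threshold
print(f"empirical P(|C_max| > A n^(2/3)) ~ {count / samples:.3f}")
```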
We consider a continuous Gaussian random field living on a compact set $T\subset \mathbb{R}^{d}$. We are interested in designing an asymptotically efficient estimator of the probability that the integral of the exponential of the Gaussian process over T exceeds a large threshold u. We propose an Asmussen–Kroese conditional Monte Carlo type estimator and discuss its asymptotic properties according to the assumptions on the first and second moments of the Gaussian random field. We also provide a simulation study to illustrate its effectiveness and compare its performance with the importance sampling type estimator of Liu and Xu (2014a).
Log-concavity of a joint survival function is proposed as a model for bivariate increasing failure rate (BIFR) distributions. Its connections with or distinctness from other notions of BIFR are discussed. A necessary and sufficient condition for a bivariate survival function to be log-concave (BIFR-LCC) is given that elucidates the impact of dependence between lifetimes on ageing. Illustrative examples are provided to explain BIFR-LCC for both positive and negative dependence.
Asymptotic deviation probabilities of the sum $S_n=X_1+\dots+X_n$ of independent and identically distributed real-valued random variables have been extensively investigated, in particular when $X_1$ is not exponentially integrable. For instance, Nagaev (1969a, 1969b) formulated exact asymptotic results for $\mathbb{P}(S_n>x_n)$ with $x_n\to \infty$ when $X_1$ has a semiexponential distribution. In the same setting, Brosset et al. (2020) derived deviation results at logarithmic scale with shorter proofs relying on classical tools of large-deviation theory and making the rate function at the transition explicit. In this paper we exhibit the same asymptotic behavior for triangular arrays of semiexponentially distributed random variables.
We study an open discrete-time queueing network. We assume data is generated at nodes of the network as a discrete-time Bernoulli process. All nodes in the network maintain a queue and relay data, which is to be finally collected by a designated sink. We prove that the resulting multidimensional Markov chain representing the queue size of nodes has two behavior regimes depending on the value of the rate of data generation. In particular, we show that there is a nontrivial critical value of the data rate below which the chain is ergodic and converges to a stationary distribution and above which it is non-ergodic, i.e., the queues at the nodes grow in an unbounded manner. We show that the rate of convergence to stationarity is geometric in the subcritical regime.
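For intuition, the subcritical/supercritical dichotomy can be seen in a toy simulation of a line of relaying nodes; the forwarding discipline below (each node sends at most one packet per slot toward the sink) is an illustrative assumption, not the protocol analysed in the paper.

```python
import numpy as np

def total_queue_trajectory(m=5, lam=0.15, slots=20_000, seed=0):
    """Toy line network: m nodes relay toward a sink beyond the last node.  In each slot
    every node generates a packet with probability lam (Bernoulli arrivals) and forwards
    at most one queued packet one hop downstream."""
    rng = np.random.default_rng(seed)
    q = np.zeros(m, dtype=int)
    totals = np.empty(slots, dtype=int)
    for t in range(slots):
        q += rng.random(m) < lam              # exogenous Bernoulli arrivals
        sending = q > 0
        q[sending] -= 1                       # each busy node forwards one packet
        q[1:] += sending[:-1]                 # the packet arrives at the next node downstream
        totals[t] = q.sum()
    return totals

# With 5 relaying nodes the last hop carries roughly 5*lam packets per slot, so queues stay
# bounded for lam below 0.2 in this toy model and grow without bound above it.
for lam in (0.15, 0.25):
    print(lam, total_queue_trajectory(lam=lam)[-1])
```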
Latouche and Nguyen (2015b) constructed a sequence of stochastic fluid processes and showed that it converges weakly to a Markov-modulated Brownian motion (MMBM). Here, we construct a different sequence of stochastic fluid processes and show that it converges strongly to an MMBM. To the best of our knowledge, this is the first result on strong convergence to a Markov-modulated Brownian motion. Besides implying weak convergence, such a strong approximation constitutes a powerful tool for developing deep results for sophisticated models. Additionally, we prove that the rate of this almost sure convergence is $o(n^{-1/2} \log n)$. When reduced to the special case of standard Brownian motion, our convergence rate is an improvement over that obtained by a different approximation in Gorostiza and Griego (1980), which is $o(n^{-1/2}(\log n)^{5/2})$.