This article describes the limiting distribution of the extremes of observations that arrive in clusters. We start by studying the tail behaviour of an individual cluster, and then we apply the developed theory to determine the limiting distribution of $\max\{X_j\,:\, j=0,\ldots, K(t)\}$, where K(t) is the number of independent and identically distributed observations $(X_j)$ arriving up to time t according to a general marked renewal cluster process. The results are illustrated in the context of some commonly used Poisson cluster models, such as the marked Hawkes process.
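As an illustrative sketch of this setup (not taken from the paper), the following Python snippet simulates one concrete marked renewal cluster process, with exponential renewal waits, Poisson cluster sizes, exponential displacement delays, and Pareto marks all chosen purely for illustration, and records the maximum mark observed up to time t.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_cluster_max(t, mean_wait=1.0, cluster_mean=2.0,
                         delay_rate=1.0, pareto_alpha=2.5):
    """Maximum mark among the points of a renewal cluster process on [0, t].

    Immigrants arrive by a renewal process with exponential(mean_wait) waiting
    times; each immigrant spawns a Poisson(cluster_mean) number of offspring at
    exponential(delay_rate) delays; every point carries an i.i.d. Pareto mark.
    All of these distributional choices are illustrative, not the paper's.
    """
    times = []
    s = rng.exponential(mean_wait)
    while s <= t:
        times.append(s)                          # immigrant point
        n_off = rng.poisson(cluster_mean)        # cluster size
        times.extend(s + rng.exponential(1.0 / delay_rate, size=n_off))
        s += rng.exponential(mean_wait)          # next renewal epoch
    times = [u for u in times if u <= t]
    k_t = len(times)                             # K(t): points observed by t
    if k_t == 0:
        return 0, -np.inf
    marks = rng.pareto(pareto_alpha, size=k_t) + 1.0   # i.i.d. marks X_j
    return k_t, marks.max()

# crude look at how the sample maximum grows with the horizon t
for horizon in (10.0, 100.0, 1000.0):
    maxima = [simulate_cluster_max(horizon)[1] for _ in range(1000)]
    print(horizon, float(np.median(maxima)))
```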
We consider the random splitting and aggregating of Hawkes processes. We present the random splitting schemes using both the direct approach for counting processes and the immigration–birth branching representation of Hawkes processes. From the second scheme, it follows that randomly split Hawkes processes are again Hawkes. We discuss functional central limit theorems (FCLTs) for the scaled split processes obtained from the different schemes. On the other hand, the aggregation of a multivariate Hawkes process is not necessarily a Hawkes process. We identify a necessary and sufficient condition for the aggregated process to be Hawkes. We prove an FCLT for a multivariate Hawkes process under a random splitting and then aggregating scheme, which, under certain conditions, transforms it into a Hawkes process of a different dimension.
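The immigration–birth (cluster) representation lends itself to a simple simulation. The sketch below, with an exponential kernel and illustrative parameter values (none of which are prescribed by the paper), generates a univariate Hawkes process from its branching representation and then splits its points independently with probability p; the branching structure is what makes each sub-process Hawkes again.

```python
import numpy as np

rng = np.random.default_rng(1)

def hawkes_cluster(T, mu=0.5, branching=0.7, beta=1.0):
    """Univariate Hawkes process on [0, T] via its immigration-birth (cluster)
    representation: Poisson(mu) immigrants per unit time, and each point has a
    Poisson(branching) number of children at exponential(beta) delays.
    The exponential kernel and parameter values are illustrative assumptions."""
    generation = list(rng.uniform(0.0, T, size=rng.poisson(mu * T)))
    points = []
    while generation:
        points.extend(generation)
        children = []
        for s in generation:
            delays = rng.exponential(1.0 / beta, size=rng.poisson(branching))
            children.extend(u for u in s + delays if u <= T)
        generation = children
    return np.sort(np.array(points))

def random_split(points, p=0.3):
    """Route each point independently to sub-process A with probability p,
    otherwise to B; by the branching representation, each is again Hawkes."""
    keep = rng.random(points.size) < p
    return points[keep], points[~keep]

pts = hawkes_cluster(T=200.0)
A, B = random_split(pts)
print(len(pts), len(A), len(B))   # counts should split roughly as p : 1 - p
```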
We establish exponential ergodicity for a class of Markov processes with interactions, including two-factor type processes and Gruschin type processes. The proof is elementary and direct via the Markov coupling technique.
We prove that the hitting measure is singular with respect to the Lebesgue measure for random walks driven by finitely supported measures on cocompact, hyperelliptic Fuchsian groups. Moreover, the Hausdorff dimension of the hitting measure is strictly less than one. Equivalently, the inequality between entropy and drift is strict. A similar statement is proven for Coxeter groups.
The existence of moments of first downward passage times of a spectrally negative Lévy process is governed by the general dynamics of the Lévy process, i.e. whether it is drifting to $+\infty$, $-\infty$, or oscillating. Whenever the Lévy process drifts to $+\infty$, we prove that the $\kappa$th moment of the first passage time (conditioned to be finite) exists if and only if the $(\kappa+1)$th moment of the Lévy jump measure exists. This generalizes a result shown earlier by Delbaen for Cramér–Lundberg risk processes. Whenever the Lévy process drifts to $-\infty$, we prove that all moments of the first passage time exist, while for an oscillating Lévy process we derive conditions for non-existence of the moments, and in particular we show that no integer moments exist.
We consider a Lévy process Y(t) that is not continuously observed, but rather inspected at Poisson($\omega$) moments only, over an exponentially distributed time $T_\beta$ with parameter $\beta$. We focus on the distribution of the running maximum at such inspection moments up to $T_\beta$, denoted by $Y_{\beta,\omega}$. Our main result is a decomposition: we derive a remarkable distributional equality that relates $Y_{\beta,\omega}$ to the running maximum process $\overline{Y}(t)$ evaluated at the exponentially distributed times $T_\beta$ and $T_{\beta+\omega}$. Concretely, $\overline{Y}(T_\beta)$ can be written as the sum of two independent random variables that are distributed as $Y_{\beta,\omega}$ and $\overline{Y}(T_{\beta+\omega})$, respectively. The distribution of $Y_{\beta,\omega}$ can be identified more explicitly in the two special cases of a spectrally positive and a spectrally negative Lévy process. As an illustrative example of the potential of our results, we show how to determine the asymptotic behavior of the bankruptcy probability in the Cramér–Lundberg insurance risk model.
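As a rough numerical illustration of the stated decomposition (a discretized Monte Carlo sanity check, not a proof), the sketch below uses Brownian motion with negative drift as a concrete Lévy process; the grid step, the parameter values, and the convention that the maximum over an empty set of inspections equals zero are assumptions made here for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, drift, sigma = 0.001, -0.2, 1.0   # Brownian motion with negative drift
beta, omega = 0.5, 1.0                # killing rate and inspection rate

def path_max(T):
    """Running maximum of Y over [0, T], approximated on a grid of step dt."""
    n = max(int(T / dt), 1)
    y = np.cumsum(drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(n))
    return max(0.0, float(y.max()))

def inspected_max():
    """Maximum of Y at Poisson(omega) inspection epochs before T_beta;
    the maximum over an empty set of inspections is taken to be 0."""
    T = rng.exponential(1.0 / beta)
    n = max(int(T / dt), 1)
    y = np.cumsum(drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(n))
    n_insp = rng.poisson(omega * T)
    if n_insp == 0:
        return 0.0
    idx = rng.integers(0, n, size=n_insp)   # inspection epochs ~ uniform on [0, T]
    return max(0.0, float(y[idx].max()))

N = 5000
lhs = [path_max(rng.exponential(1.0 / beta)) for _ in range(N)]
rhs = [inspected_max() + path_max(rng.exponential(1.0 / (beta + omega)))
       for _ in range(N)]
print(np.mean(lhs), np.mean(rhs))                    # should roughly agree
print(np.quantile(lhs, [0.5, 0.9]), np.quantile(rhs, [0.5, 0.9]))
```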
In this article we introduce a simple tool to derive polynomial upper bounds for the probability of observing unusually large maximal components in some models of random graphs when considered at criticality. Specifically, we apply our method to a model of a random intersection graph, a random graph obtained through p-bond percolation on a general d-regular graph, and a model of an inhomogeneous random graph.
In this article we provide new results for the asymptotic behavior of a time-fractional birth and death process $N_{\alpha}(t)$, whose transition probabilities $\mathbb{P}[N_{\alpha}(t)=j\mid N_{\alpha}(0)=i]$ are governed by a time-fractional system of differential equations, under the condition that it is not killed. More specifically, we prove that the concepts of quasi-limiting distribution and quasi-stationary distribution do not coincide, which is a consequence of the long-memory nature of the process. In addition, exact formulas for the quasi-limiting distribution and its rate of convergence are presented. In the first sections, we revisit two equivalent characterizations of this process: the first is a time-changed classical birth and death process, whereas the second is a Markov renewal process. Finally, we apply our main theorems to the linear model originally introduced by Orsingher and Polito [23].
Growth-fragmentation processes describe the evolution of systems in which cells grow slowly and fragment suddenly. Despite originating as a way to describe biological phenomena, they have recently been found to describe the lengths of certain curves in statistical physics models. In this note, we describe a new growth-fragmentation process connected to random planar maps with faces of large degree, having as a key ingredient the ricocheted stable process recently discovered by Budd. The process has applications to the excursions of planar Brownian motion and Liouville quantum gravity.
We study Bose gases in $d \ge 2$ dimensions with short-range repulsive pair interactions at positive temperature, in the canonical ensemble and in the thermodynamic limit. We assume the presence of hard Poissonian obstacles and focus on the non-percolation regime. For sufficiently strong interparticle interactions, we show that almost surely there cannot be Bose–Einstein condensation into a sufficiently localized, normalized one-particle state. The results apply to the canonical eigenstates of the underlying one-particle Hamiltonian.
We prove that for a discrete determinantal process the BK inequality holds for increasing events generated by simple points. We also give some elementary but nonetheless appealing relationships between a discrete determinantal process and the well-known CS decomposition.
Let X be a continuous-time strongly mixing or weakly dependent process and let T be a renewal process independent of X. We show general conditions under which the sampled process $(X_{T_i},T_i-T_{i-1})^{\top}$ is strongly mixing or weakly dependent. Moreover, we explicitly compute the strong mixing or weak dependence coefficients of the renewal sampled process and show that exponential or power decay of the coefficients of X is preserved (at least asymptotically). Our results imply that essentially all central limit theorems available in the literature for strongly mixing or weakly dependent processes can be applied when renewal sampled observations of the process X are at our disposal.
We study large-deviation probabilities of Telecom processes appearing as limits in a critical regime of the infinite-source Poisson model elaborated by I. Kaj and M. Taqqu. We examine three different regimes of large deviations (LD) depending on the deviation level. A Telecom process $(Y_t)_{t \ge 0}$ scales as $t^{1/\gamma}$, where t denotes time and $\gamma\in(1,2)$ is the key parameter of Y. We must distinguish moderate LD ${\mathbb P}(Y_t\ge y_t)$ with $t^{1/\gamma} \ll y_t \ll t$, intermediate LD with $ y_t \approx t$, and ultralarge LD with $ y_t \gg t$. The results we obtain depend essentially on another parameter of Y, namely the resource distribution. We solve the cases of moderate and intermediate LD completely (the latter being the most technical one), whereas the ultralarge-deviation asymptotics is found for the case of regularly varying distribution tails. In all the cases considered, the large-deviation level is essentially reached by the minimal necessary number of ‘service processes’.
This article derives quantitative limit theorems for multivariate Poisson and Poisson process approximations. Employing the solution of the Stein equation for Poisson random variables, we obtain an explicit bound for the multivariate Poisson approximation of random vectors in the Wasserstein distance. The bound is then utilized in the context of point processes to provide a Poisson process approximation result in terms of a new metric called $d_\pi$, stronger than the total variation distance, defined as the supremum over all Wasserstein distances between random vectors obtained by evaluating the point processes on arbitrary collections of disjoint sets. As applications, the multivariate Poisson approximation of the sum of m-dependent Bernoulli random vectors, the Poisson process approximation of point processes of U-statistic structure, and the Poisson process approximation of point processes with Papangelou intensity are considered. Our bounds in $d_\pi$ are as good as those already available in the literature.
It was recently proven that the correlation function of the stationary version of a reflected Lévy process is nonnegative, nonincreasing, and convex. In another branch of the literature it was established that the mean value of the reflected process starting from zero is nondecreasing and concave. In the present paper it is shown, by putting them in a common framework, that these results extend to substantially more general settings. Indeed, instead of reflected Lévy processes, we consider a class of more general stochastically monotone Markov processes. In this setup we show monotonicity results associated with a supermodular function of two coordinates of our Markov process, from which the above-mentioned monotonicity and convexity/concavity results directly follow, but now for the class of Markov processes considered rather than just reflected Lévy processes. In addition, various results for the transient case (when the Markov process is not in stationarity) are provided. The conditions imposed are natural, in that they are satisfied by various frequently used Markovian models, as illustrated by a series of examples.
We study the geometric and topological features of U-statistics of order k when the k-tuples satisfying geometric and topological constraints do not occur frequently. Using appropriate scaling, we establish the convergence of U-statistics in the vague topology, and the structure of a non-degenerate limit measure is also revealed. Our general result yields various limit theorems for geometric and topological statistics, including persistent Betti numbers of Čech complexes, the volume of simplices, a functional of the Morse critical points, and values of the min-type distance function. The required vague convergence can be obtained as a consequence of a limit theorem for point processes induced by U-statistics. The latter convergence holds, in particular, in the $\mathcal M_0$-topology.
In this paper we study a class of optimal stopping problems under g-expectation, that is, the cost function is described by the solution of backward stochastic differential equations (BSDEs). Primarily, we assume that the reward process is $L\exp\bigl(\mu\sqrt{2\log\!(1+L)}\bigr)$-integrable with $\mu>\mu_0$ for some critical value $\mu_0$. This integrability is weaker than $L^p$-integrability for any $p>1$, so it covers a comparatively wide class of optimal stopping problems. To reach our goal, we introduce a class of reflected backward stochastic differential equations (RBSDEs) with $L\exp\bigl(\mu\sqrt{2\log\!(1+L)}\bigr)$-integrable parameters. We prove the existence, uniqueness, and comparison theorem for these RBSDEs under Lipschitz-type assumptions on the coefficients. This allows us to characterize the value function of our optimal stopping problem as the unique solution of such RBSDEs.
We consider a sequence of Poisson cluster point processes on $\mathbb{R}^d$: at step $n\in\mathbb{N}_0$ of the construction, the cluster centers have intensity $c/(n+1)$ for some $c>0$, and each cluster consists of the particles of a branching random walk up to generation n, generated by a point process with mean 1. We show that this ‘critical cluster cascade’ converges weakly, and that either the limit point process equals the void process (extinction), or it has the same intensity c as the critical cluster cascade (persistence). We obtain persistence if and only if the Palm version of the outgrown critical branching random walk is locally almost surely finite. This result allows us to give numerous examples of persistent critical cluster cascades.
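A minimal simulation sketch of one step of this construction is given below, with Poisson(1) offspring and Gaussian displacements chosen purely for illustration (the construction only requires a mean-1 offspring point process); since a critical branching random walk run for n generations has mean total size $n+1$, the empirical intensity should stay near c, in line with the persistence alternative.

```python
import numpy as np

rng = np.random.default_rng(3)

def brw_positions(n, step_sd=1.0):
    """Particle positions of a critical branching random walk up to generation n.
    Poisson(1) offspring (mean 1) and Gaussian steps are illustrative choices."""
    positions, current = [0.0], [0.0]
    for _ in range(n):
        nxt = []
        for x in current:
            nxt.extend(x + step_sd * rng.standard_normal(rng.poisson(1.0)))
        positions.extend(nxt)
        current = nxt
        if not current:
            break
    return np.array(positions)

def cascade_step(n, c=2.0, L=50.0):
    """Step n on the window [-L, L]: cluster centres form a Poisson process
    with intensity c/(n+1), each carrying a branching random walk up to
    generation n."""
    centres = rng.uniform(-L, L, size=rng.poisson(c / (n + 1) * 2 * L))
    if centres.size == 0:
        return np.array([])
    return np.concatenate([x + brw_positions(n) for x in centres])

for n in (0, 2, 8):
    pts = cascade_step(n)
    inside = int(np.sum(np.abs(pts) <= 50.0)) if pts.size else 0
    # mean cluster size is n + 1, so the empirical intensity should stay
    # near c = 2 (boundary effects of the finite window are ignored)
    print(n, inside / 100.0)
```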
In this paper, we study the optimal multiple stopping problem under filtration-consistent nonlinear expectations. The reward is given by a family of random variables satisfying some appropriate assumptions, rather than by a process that is right-continuous with left limits. We first construct the optimal stopping time for the single stopping problem, which is no longer given by a first hitting time of a process. We then prove by induction that the value function of the multiple stopping problem can be interpreted as the value function of a single stopping problem associated with a new reward family, which allows us to construct the optimal multiple stopping times. If the reward family satisfies some strong regularity conditions, we show that the reward family and the value functions can be aggregated by some progressive processes. Hence, the optimal stopping times can be represented as hitting times.