We introduce and study the class of branching-stable point measures, which can be seen as an analog of stable random variables when the branching mechanism for point measures replaces the usual addition. In contrast with the classical theory of stable (Lévy) processes, there exists a rich family of branching-stable point measures with a negative scaling exponent, which can be described as certain Crump‒Mode‒Jagers branching processes. We investigate the asymptotic behavior of their cumulative distribution functions, that is, the number of atoms in (-∞, x] as x→∞, and further depict the genealogical lineage of typical atoms. For both results, we rely crucially on the work of Biggins (1977), (1992).
This paper deals with a non-self-adjoint differential operator which is associated with a diffusion process with random jumps from the boundary. Our main result is that the algebraic multiplicity of an eigenvalue is equal to its order as a zero of the characteristic function $\Delta(\lambda)$. This is a new criterion for determining the multiplicities of eigenvalues for concrete operators.
In this paper we study the Assouad dimension of graphs of certain Lévy processes and of functions defined by stochastic integrals. We do this by introducing a convenient condition which guarantees that a graph has full Assouad dimension, and then showing that the graphs of the processes we study satisfy this condition.
We give the first polynomial upper bound on the mixing time of the edge-flip Markov chain for unbiased dyadic tilings, resolving an open problem originally posed by Janson, Randall and Spencer in 2002 [14]. A dyadic tiling of size n is a tiling of the unit square by n non-overlapping dyadic rectangles, each of area 1/n, where a dyadic rectangle is any rectangle that can be written in the form $[a2^{-s},(a+1)2^{-s}]\times[b2^{-t},(b+1)2^{-t}]$ for $a,b,s,t\in\mathbb{Z}_{\geqslant 0}$. The edge-flip Markov chain selects a random edge of the tiling and replaces it with its perpendicular bisector if doing so yields a valid dyadic tiling. Specifically, we show that the relaxation time of the edge-flip Markov chain for dyadic tilings is at most $O(n^{4.09})$, which implies that the mixing time is at most $O(n^{5.09})$. We complement this by showing that the relaxation time is at least $\Omega(n^{1.38})$, improving upon the previously best lower bound of $\Omega(n\log n)$ coming from the diameter of the chain.
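As an aside, the state space itself is easy to count: every dyadic tiling with $k\geq 1$ levels begins with a horizontal or a vertical bisection, and tilings admitting both bisections (which are exactly those refining the 2×2 grid of quarter squares) are counted twice. A minimal inclusion–exclusion sketch of this classical recurrence (function name ours):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def num_dyadic_tilings(k):
    """Number of dyadic tilings of the unit square into 2**k rectangles.

    Inclusion-exclusion over the first bisection: each half of a split is
    itself a dyadic tiling one level down, and tilings admitting both a
    horizontal and a vertical first split consist of four independent
    quarter-square tilings two levels down.
    """
    if k <= 1:
        return k + 1  # one tiling for k = 0 (the square itself), two for k = 1
    return 2 * num_dyadic_tilings(k - 1) ** 2 - num_dyadic_tilings(k - 2) ** 4
```

For example, `num_dyadic_tilings(2)` returns 7: four tilings starting with a horizontal bisection, four starting with a vertical one, minus the 2×2 grid counted twice.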
Consider a uniform random rooted labelled tree on n vertices. We imagine that each node of the tree has space for a single car to park. A number m ≤ n of cars arrive one by one, each at a node chosen independently and uniformly at random. If a car arrives at a space which is already occupied, it follows the unique path towards the root until it encounters an empty space, in which case it parks there; if there is no empty space, it leaves the tree. Consider m = ⌊α n⌋ and let An,α denote the event that all ⌊α n⌋ cars find spaces in the tree. Lackner and Panholzer proved (via analytic combinatorics methods) that there is a phase transition in this model: if α ≤ 1/2 then $\mathbb{P}(A_{n,\alpha}) \to \sqrt{1-2\alpha}/(1-\alpha)$, whereas if α > 1/2 then $\mathbb{P}(A_{n,\alpha}) \to 0$. We give a probabilistic explanation for this phenomenon, and an alternative proof via the objective method. Along the way, we consider the following variant of the problem: take the tree to be the family tree of a Galton–Watson branching process with Poisson(1) offspring distribution, and let an independent Poisson(α) number of cars arrive at each vertex. Let X be the number of cars which visit the root of the tree. We show that $\mathbb{E}[X]$ undergoes a discontinuous phase transition, which turns out to be a generic phenomenon for arbitrary offspring distributions of mean at least 1 for the tree and arbitrary arrival distributions.
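The parking dynamics described above are easy to simulate directly. A minimal sketch (all names ours), representing the rooted tree by a parent array:

```python
def parked_cars(parent, arrivals):
    """Simulate tree parking: parent[v] is v's parent (None at the root).

    Each car arrives at its chosen vertex and drives towards the root
    until it finds an empty spot; a car that reaches the root without
    finding one leaves the tree.  Returns how many cars park.
    """
    occupied = [False] * len(parent)
    parked = 0
    for v in arrivals:
        while v is not None and occupied[v]:
            v = parent[v]  # move one step towards the root
        if v is not None:
            occupied[v] = True
            parked += 1
    return parked
```

Averaging the indicator that all m cars park, over independently sampled uniform random trees and arrival sequences, gives a Monte Carlo estimate of ℙ(An,α).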
Given a one-dimensional downwards transient diffusion process $X$, we consider a random time $\rho$, the last exit time when $X$ exits a certain level $\ell$, and detect the optimal stopping time for it. In particular, for this random time $\rho$, we solve the optimisation problem $\inf_{\tau}\mathbb{E}[\lambda(\tau-\rho)_{+}+(1-\lambda)(\rho-\tau)_{+}]$ over all stopping times $\tau$. We show that the process should stop optimally when it runs below some fixed level $\kappa_{\ell}$ for the first time, where $\kappa_{\ell}$ is the unique solution in the interval $(0,\lambda\ell)$ of an explicitly defined equation.
This paper studies the friendship paradox for weighted and directed networks from a probabilistic perspective. We consolidate and extend recent results of Cao and Ross, and of Kramer, Cutler and Radcliffe, to weighted networks. Friendship paradox results for directed networks are given, and connections to detailed balance are considered.
Let X be the constrained random walk on $\mathbb{Z}_+^2$ having increments (1,0), (-1,1), and (0,-1) with respective probabilities $\lambda$, $\mu_1$, and $\mu_2$, representing the lengths of two tandem queues. We assume that X is stable and $\mu_1\neq\mu_2$. Let $\tau_n$ be the first time when the sum of the components of X equals n. Let Y be the constrained random walk on $\mathbb{Z}\times\mathbb{Z}_+$ having increments (-1,0), (1,1), and (0,-1) with probabilities $\lambda$, $\mu_1$, and $\mu_2$. Let $\tau$ be the first time that the components of Y are equal to each other. We prove that $\mathbb{P}_{(n-x_n(1),x_n(2))}(\tau<\infty)$ approximates $p_n(x_n)$ with relative error exponentially decaying in n, for $x_n=\lfloor nx\rfloor$, $x\in\mathbb{R}_+^2$, $0<x(1)+x(2)<1$, $x(1)>0$. An affine transformation moving the origin to the point (n,0) and letting $n\to\infty$ connects the X and Y processes. We use a linear combination of basis functions constructed from single and conjugate points on a characteristic surface associated with X to derive a simple expression for $\mathbb{P}_y(\tau<\infty)$ in terms of the utilization rates of the nodes. The proof that the relative error decays exponentially in n uses a sequence of subsolutions of a related Hamilton–Jacobi–Bellman equation on a manifold consisting of three copies of $\mathbb{R}_+^2$ glued to each other along the constraining boundaries. We indicate how the ideas of the paper can be generalized to more general processes and other exit boundaries.
We consider a Markov chain of point processes such that each state is a superposition of an independent cluster process with the previous state as its centre process, together with some independent noise process and a thinned version of the previous state. The model extends earlier work by Felsenstein (1975) and Shimatani (2010) describing a reproducing population. We discuss when closed-form expressions for the first- and second-order moments are available for a given state. In a special case it is known that the pair correlation function for this type of point process converges as the Markov chain progresses, but it has not been shown whether the Markov chain has an equilibrium distribution with this particular pair correlation function, nor how such a distribution may be constructed. Assuming the same reproducing system, we construct an equilibrium distribution by a coupling argument.
We consider the stationary solution Z of the Markov chain {Zn}n∈ℕ defined by Zn+1=ψn+1(Zn), where {ψn}n∈ℕ is a sequence of independent and identically distributed random Lipschitz functions. We estimate the probability of the event {Z>x} when x is large, and develop a state-dependent importance sampling estimator under a set of assumptions on ψn such that, for large x, the event {Z>x} is governed by a single large jump. Under natural conditions, we show that our estimator is strongly efficient. Special attention is paid to a class of perpetuities with heavy tails.
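For intuition, the simplest instance of such a chain is the affine perpetuity Zn+1 = An+1Zn + Bn+1. A crude forward-iteration sketch (a naive Monte Carlo baseline, not the paper's state-dependent importance sampling estimator; all names ours):

```python
import random

def iterate_perpetuity(n_steps, draw_a, draw_b, rng, z0=0.0):
    """Iterate Z_{k+1} = A_{k+1} * Z_k + B_{k+1}, an affine special case
    of the i.i.d. random Lipschitz maps psi_n."""
    z = z0
    for _ in range(n_steps):
        z = draw_a(rng) * z + draw_b(rng)
    return z

def tail_probability(x, n_samples, n_steps, draw_a, draw_b, seed=0):
    """Naive Monte Carlo estimate of P(Z > x).

    This estimator is inefficient precisely in the regime the paper
    targets: when {Z > x} is a rare event driven by a single large jump,
    plain sampling almost never hits it.
    """
    rng = random.Random(seed)
    hits = sum(iterate_perpetuity(n_steps, draw_a, draw_b, rng) > x
               for _ in range(n_samples))
    return hits / n_samples
```

With constant maps A = 1/2, B = 1 the iteration contracts to the fixed point Z = 2, a quick sanity check before plugging in heavy-tailed laws.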
Under mild nondegeneracy assumptions on branching rates in each generation, we provide a criterion for almost sure extinction of a multi-type branching process with time-dependent branching rates. We also provide a criterion for the total number of particles (conditioned on survival and divided by the expectation of the resulting random variable) to approach an exponential random variable as time goes to ∞.
We consider upper‒lower (UL) (and lower‒upper (LU)) factorizations of the one-step transition probability matrix of a random walk on the nonnegative integers, with the condition that both the upper and lower triangular matrices in the factorization are also stochastic matrices. We provide conditions on the free parameter of the UL factorization, in terms of certain continued fractions, such that this stochastic factorization is possible. By inverting the order of the factors (also known as a Darboux transformation) we obtain a new family of random walks for which it is possible to state the spectral measures in terms of a Geronimus transformation. We repeat this for the LU factorization, which has no free parameter. Finally, we apply our results in two examples: the random walk with constant transition probabilities, and the random walk generated by the Jacobi orthogonal polynomials. In both situations we obtain urn models associated with all the random walks in question.
In this paper we study the number of customers in infinite-server queues with a self-exciting (Hawkes) arrival process. Initially we assume that service requirements are exponentially distributed and that the Hawkes arrival process is of a Markovian nature. We obtain a system of differential equations that characterizes the joint distribution of the arrival intensity and the number of customers. Moreover, we provide a recursive procedure that explicitly identifies (transient and stationary) moments. Subsequently, we allow for non-Markovian Hawkes arrival processes and nonexponential service times. By viewing the Hawkes process as a branching process, we find that the probability generating function of the number of customers in the system can be expressed in terms of the solution of a fixed-point equation. We also include various asymptotic results: we derive the tail of the distribution of the number of customers for the case that the intensity jumps of the Hawkes process are heavy tailed, and we consider a heavy-traffic regime. We conclude by discussing how our results can be used computationally and by verifying the numerical results via simulations.
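The Markovian case (exponential decay kernel) is straightforward to simulate by Ogata-style thinning, which provides a numerical check on moment recursions of the kind described above. A sketch under an assumed parametrization of ours (baseline rate `lam0`, excitation jump `alpha`, decay rate `beta`, service rate `mu`):

```python
import math
import random

def hawkes_infinite_server(T, lam0, alpha, beta, mu, rng):
    """Number of customers at time T in an infinite-server queue fed by a
    Markovian Hawkes process, simulated by Ogata-style thinning.

    Assumed intensity: lambda(t) = lam0 + sum_{t_i < t} alpha * exp(-beta*(t - t_i)).
    Each customer's service time is Exp(mu), started on arrival.
    """
    t, excess, in_system = 0.0, 0.0, 0
    while True:
        lam_bar = lam0 + excess            # dominates the decaying intensity
        w = rng.expovariate(lam_bar)
        t += w
        if t > T:
            break
        excess *= math.exp(-beta * w)      # intensity decays over the gap
        if rng.random() * lam_bar <= lam0 + excess:  # thinning: accept arrival
            excess += alpha                # self-excitation jump
            if t + rng.expovariate(mu) > T:
                in_system += 1             # customer still in service at T
    return in_system
```

With `alpha = 0` this reduces to an M/M/∞ queue, whose mean number of customers in the system tends to `lam0 / mu`, an easy sanity check.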
In this paper we study a finite-fuel two-dimensional degenerate singular stochastic control problem under regime switching motivated by the optimal irreversible extraction problem of an exhaustible commodity. A company extracts a natural resource from a reserve with finite capacity and sells it in the market at a spot price that evolves according to a Brownian motion with volatility modulated by a two-state Markov chain. In this setting, the company aims at finding the extraction rule that maximizes its expected discounted cash flow, net of the costs of extraction and maintenance of the reserve. We provide expressions for both the value function and the optimal control. On the one hand, if the running cost for the maintenance of the reserve is a convex function of the reserve level, the optimal extraction rule prescribes a Skorokhod reflection of the (optimally) controlled state process at a certain state and price-dependent threshold. On the other hand, in the presence of a concave running cost function, it is optimal to instantaneously deplete the reserve at the time at which the commodity's price exceeds an endogenously determined critical level. In both cases, the threshold triggering the optimal control is given in terms of the optimal stopping boundary of an auxiliary family of perpetual optimal selling problems with regime switching.
Motivated by a common mathematical finance topic, we discuss the reciprocal of the exit time from a cone of planar Brownian motion, which also corresponds to an exponential functional of Brownian motion. We prove a conjecture of Vakeroudis and Yor (2012) concerning infinite divisibility properties of this random variable, and present a novel simple proof of the results of DeBlassie (1987), (1988) concerning the asymptotic behavior, as t→∞, of the distribution of the Bessel clock appearing in the skew-product representation of planar Brownian motion. We use the results of the windings approach in order to obtain results for quantities associated with the pricing of Asian options.
We consider positive zero-sum stochastic games with countable state and action spaces. For each player, we provide a characterization of those strategies that are optimal in every subgame. These characterizations are used to prove two simplification results. We show that if player 2 has an optimal strategy then he/she also has a stationary optimal strategy, and prove the same for player 1 under the assumption that the state space and player 2's action space are finite.
Suppose that a mobile sensor describes a Markovian trajectory in the ambient space, and at each time the sensor measures an attribute of interest, e.g. the temperature. Using only the location history of the sensor and the associated measurements, we estimate the average value of the attribute over the space. In contrast to classical probabilistic integration methods, e.g. Monte Carlo, the proposed approach does not require any knowledge of the distribution of the sensor trajectory. We establish probabilistic bounds on the convergence rates of the estimator. These rates are better than the traditional 'root-n' rate, where n is the sample size, associated with other probabilistic integration methods. For finite sample sizes, we demonstrate the favorable behavior of the procedure through simulations and consider an application to the evaluation of the average temperature of oceans.
We consider a supercritical branching process (Zn, n ≥ 0) with offspring distribution (pk, k ≥ 0) satisfying p0 = 0 and p1 > 0. By applying the self-normalized large deviation result of Shao (1997) for independent and identically distributed random variables, we obtain a self-normalized large deviation result for supercritical branching processes, which is the self-normalized version of the result obtained by Athreya (1994). The self-normalized large deviation result can also be generalized to supercritical multitype branching processes.
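The quantities entering such large deviation statements are easy to simulate. A minimal Galton–Watson sketch (names ours), which can be used to inspect empirically how the successive ratios Zn+1/Zn concentrate around the offspring mean:

```python
import random

def gw_trajectory(n, offspring, rng, z0=1):
    """Return the generation sizes Z_0, ..., Z_n of a Galton-Watson process.

    offspring(rng) draws one offspring count; under the standing
    assumption p0 = 0 every individual has at least one child, so the
    process never dies out and generation sizes are nondecreasing.
    """
    traj = [z0]
    for _ in range(n):
        traj.append(sum(offspring(rng) for _ in range(traj[-1])))
    return traj
```

For example, with offspring counts uniform on {1, 2} (mean 3/2), the ratios Zn+1/Zn computed from a long trajectory cluster around 3/2 as n grows.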