We study optimal stopping problems related to the pricing of perpetual American options in an extension of the Black-Merton-Scholes model in which the dividend and volatility rates of the underlying risky asset depend on the running values of its maximum and maximum drawdown. The optimal exercise times are shown to be the first times at which the price of the underlying asset exits some regions restricted by certain boundaries depending on the running values of the associated maximum and maximum drawdown processes. We obtain closed-form solutions to the equivalent free-boundary problems for the value functions with smooth fit at the optimal stopping boundaries and normal reflection at the edges of the state space of the resulting three-dimensional Markov process. We derive first-order nonlinear ordinary differential equations for the optimal exercise boundaries of the perpetual American standard options.
We extend the local non-homogeneous Tb theorem of Nazarov, Treil and Volberg to the setting of singular integrals with operator-valued kernel that act on vector-valued functions. Here, ‘vector-valued’ means ‘taking values in a function lattice with the UMD (unconditional martingale differences) property’. A similar extension (but for general UMD spaces rather than UMD lattices) of Nazarov-Treil-Volberg's global non-homogeneous Tb theorem was achieved earlier by the first author, and it has found applications in the work of Mayboroda and Volberg on square-functions and rectifiability. Our local version requires several elaborations of the previous techniques, and raises new questions about the limits of the vector-valued theory.
We propose a two-urn model of Pólya type as follows. There are two urns, urn A and urn B. At the beginning, urn A contains r_A red and w_A white balls and urn B contains r_B red and w_B white balls. We first draw m balls from urn A and note their colors, say i red and m - i white balls. The balls are returned to urn A and b·i red and b·(m - i) white balls are added to urn B. Next, we draw ℓ balls from urn B and note their colors, say j red and ℓ - j white balls. The balls are returned to urn B and a·j red and a·(ℓ - j) white balls are added to urn A. Repeat the above action n times and let X_n be the fraction of red balls in urn A and Y_n the fraction of red balls in urn B. We first show that the expectations of X_n and Y_n have the same limit, and then use martingale theory to show that X_n and Y_n converge almost surely to the same limit.
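The drawing-and-reinforcement scheme above is fully specified and easy to simulate. A minimal sketch (the parameter values and initial urn contents are our own illustrative choices, not taken from the paper):

```python
import random

def draw(red, white, k, rng):
    """Draw k balls without replacement from an urn; return the number of red ones."""
    got = 0
    for _ in range(k):
        if rng.random() < red / (red + white):
            got += 1
            red -= 1
        else:
            white -= 1
    return got

def simulate(n, m=2, ell=3, a=1, b=1, rA=2, wA=2, rB=2, wB=2, seed=0):
    """Run n rounds of the two-urn scheme; return the final fractions (X_n, Y_n)."""
    rng = random.Random(seed)
    for _ in range(n):
        i = draw(rA, wA, m, rng)     # colours noted, balls returned to urn A
        rB += b * i                  # add b*i red balls to urn B
        wB += b * (m - i)            # add b*(m-i) white balls to urn B
        j = draw(rB, wB, ell, rng)   # colours noted, balls returned to urn B
        rA += a * j                  # add a*j red balls to urn A
        wA += a * (ell - j)          # add a*(ell-j) white balls to urn A
    return rA / (rA + wA), rB / (rB + wB)

x, y = simulate(20000)
print(x, y)  # the two fractions settle near a common (random) limit
```

Running the simulation for a large n illustrates the almost-sure convergence of X_n and Y_n to the same limit; the limit itself is random and varies with the seed.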
This paper deals with Poisson processes on an arbitrary measurable space. Using a direct approach, we derive formulae for moments and cumulants of a vector of multiple Wiener-Itô integrals with respect to the compensated Poisson process. Also, we present a multivariate central limit theorem for a vector whose components admit a finite chaos expansion of the type of a Poisson U-statistic. The approach is based on recent results of Peccati et al. (2010), combining Malliavin calculus and Stein's method; it also yields Berry-Esseen-type bounds. As applications, we discuss moment formulae and central limit theorems for general geometric functionals of intersection processes associated with a stationary Poisson process of k-dimensional flats in ℝ^d.
Given a Poisson process on a d-dimensional torus, its random geometric simplicial complex is the complex whose vertices are the points of the Poisson process and whose simplices are given by the Čech complex associated to the coverage of each point. By means of Malliavin calculus, we compute explicitly the first three moments of the number of k-simplices, and provide a way to compute higher-order moments. Then we derive the mean and the variance of the Euler characteristic. Using Stein's method, we estimate the speed of convergence of the number of occurrences of any connected subcomplex towards the Gaussian law as the intensity of the Poisson point process tends to infinity. We use a concentration inequality for Poisson processes to find bounds for the tail distribution of the first Betti number and the Euler characteristic in such simplicial complexes.
Consider the classic infinite-horizon problem of stopping a one-dimensional diffusion to optimise between running and terminal rewards, and suppose that we are given a parametrised family of such problems. We provide a general theory of parameter dependence in infinite-horizon stopping problems for which threshold strategies are optimal. The crux of the approach is a supermodularity condition which guarantees that the family of problems is indexable by a set-valued map which we call the indifference map. This map is a natural generalisation of the allocation (Gittins) index, a classical quantity in the theory of dynamic allocation. Importantly, the notion of indexability leads to a framework for inverse optimal stopping problems.
We analyze the optimal policy for the sequential selection of an alternating subsequence from a sequence of n independent observations from a continuous distribution F, and we prove a central limit theorem for the number of selections made by that policy. The proof exploits the backward recursion of dynamic programming and assembles a detailed understanding of the associated value functions and selection rules.
We show that the total number of collisions in the exchangeable coalescent process driven by the beta (1, b) measure converges in distribution to a 1-stable law, as the initial number of particles goes to ∞. The stable limit law is also shown for the total branch length of the coalescent tree. These results were known previously for the instance b = 1, which corresponds to the Bolthausen-Sznitman coalescent. The approach we take is based on estimating the quality of a renewal approximation to the coalescent in terms of a suitable Wasserstein distance. Application of the method to beta (a, b)-coalescents with 0 < a < 1 leads to a simplified derivation of the known (2 - a)-stable limit. We furthermore derive asymptotic expansions for the moments of the number of collisions and of the total branch length for the beta (1, b)-coalescent by exploiting the method of sequential approximations.
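Since the total number of collisions depends only on the embedded block-counting jump chain (the exponential waiting times play no role in the count), it can be simulated directly from the collision rates λ_{n,k} = ∫₀¹ x^{k-2}(1-x)^{n-k} Λ(dx) = b·B(k-1, n-k+b) for the Beta(1, b) measure Λ. A rough sketch (our own code, not the renewal approximation used in the paper):

```python
import math
import random

def beta_fn(p, q):
    """Euler beta function via log-gamma, to avoid overflow for large arguments."""
    return math.exp(math.lgamma(p) + math.lgamma(q) - math.lgamma(p + q))

def total_collisions(n, b=1.0, seed=0):
    """Simulate the block-counting jump chain of the beta(1, b)-coalescent
    started from n blocks; return the total number of collisions."""
    rng = random.Random(seed)
    collisions = 0
    while n > 1:
        # From n blocks, a collision merging exactly k blocks occurs with
        # probability proportional to C(n, k) * lambda_{n,k}.
        ks = range(2, n + 1)
        weights = [math.comb(n, k) * b * beta_fn(k - 1, n - k + b) for k in ks]
        k = rng.choices(ks, weights=weights)[0]
        n -= k - 1        # k blocks merge into one
        collisions += 1
    return collisions

print(total_collisions(100, b=1.0))  # b = 1 is the Bolthausen-Sznitman coalescent
```

For large initial n, repeating the simulation gives an empirical view of the 1-stable fluctuations established in the paper.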
In this paper we study a reinsurance game between two insurers whose surplus processes are modeled by arithmetic Brownian motions. We assume a minimax criterion in the game. One insurer tries to maximize the probability of absolute dominance while the other tries to minimize it through reinsurance control. Here absolute dominance is defined as the event that the liminf of the difference of the surplus levels is -∞. Under suitable parameter conditions, the game is solved with the value function and the Nash equilibrium strategy given in explicit form.
Let T be a stopping time associated with a sequence X_1, X_2, … of independent and identically distributed or exchangeable random variables taking values in {0, 1, 2, …, m}, and let S_{T,i} be the stopped sum denoting the number of appearances of outcome i in X_1, …, X_T, 0 ≤ i ≤ m. In this paper we present results revealing that, if the distribution of T is known, then we can also derive the joint distribution of (T, S_{T,0}, S_{T,1}, …, S_{T,m}). Two applications, which have independent interest, are offered to illustrate the applicability and the usefulness of the main results.
In this paper a method based on a Markov chain Monte Carlo (MCMC) algorithm is proposed to compute the probability of a rare event. The conditional distribution of the underlying process given that the rare event occurs has the probability of the rare event as its normalizing constant. Using the MCMC methodology, a Markov chain is simulated, with the aforementioned conditional distribution as its invariant distribution, and information about the normalizing constant is extracted from its trajectory. The algorithm is described in full generality and applied to the problem of computing the probability that a heavy-tailed random walk exceeds a high threshold. An unbiased estimator of the reciprocal probability is constructed whose normalized variance vanishes asymptotically. The algorithm is extended to random sums and its performance is illustrated numerically and compared to existing importance sampling algorithms.
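The normalizing-constant idea can be illustrated in a toy setting (our own construction: a Gaussian tail event stands in for the paper's heavy-tailed random walk, and all names are ours). The conditional density of X ~ N(0,1) given the rare event {X > 3} is φ(x)/p on (3, ∞), with the rare-event probability p as its normalizing constant. Running a Metropolis chain that targets this conditional law and averaging v(X_t)/φ(X_t), for any fully known density v supported on (3, ∞), gives an estimator of 1/p, since E[v(X)/φ(X)] under the conditional law equals (1/p)∫v = 1/p:

```python
import math
import random

def phi(x):
    """Standard normal density."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def estimate_reciprocal(iters=200_000, seed=1):
    """Metropolis chain on (3, inf) with stationary density phi(x)/p;
    averaging v(x)/phi(x) for a known density v on (3, inf) estimates 1/p."""
    rng = random.Random(seed)
    # v(x) = exp(-x^2)/Z on (3, inf); its light tail keeps the ratio bounded.
    Z = math.sqrt(math.pi) / 2 * math.erfc(3)
    x, total = 3.2, 0.0
    for _ in range(iters):
        y = x + rng.gauss(0.0, 0.3)                # symmetric random-walk proposal
        if y > 3 and rng.random() < phi(y) / phi(x):
            x = y                                  # accept; otherwise stay at x
        total += math.exp(-x * x) / Z / phi(x)     # accumulate v(x)/phi(x)
    return total / iters

est = estimate_reciprocal()
print(est)  # close to 1/P(X > 3), which is about 741
```

The choice of v matters: here v decays faster than √φ, so the ratio v/φ is bounded on (3, ∞) and the estimator has finite variance; the paper's analysis addresses the analogous efficiency question for heavy-tailed random walks.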
We consider a two-dimensional reflecting random walk on the nonnegative integer quadrant. This random walk is assumed to be skip free in the directions toward the boundary of the quadrant, but may have unbounded jumps in the opposite directions, which are referred to as upward jumps. We are interested in the tail asymptotic behavior of its stationary distribution, provided it exists. Assuming that the upward jump size distributions have light tails, we find the rough tail asymptotics of the marginal stationary distributions in all directions. This generalizes the corresponding results for the skip-free reflecting random walk in Miyazawa (2009). We exemplify these results for a two-node queueing network with exogenous batch arrivals.
The yeast Saccharomyces cerevisiae has emerged as an ideal model system to study the dynamics of prion proteins which are responsible for a number of fatal neurodegenerative diseases in humans. Within an infected cell, prion proteins aggregate in complexes which may increase in size or be fragmented and are transmitted upon cell division. Recent work in yeast suggests that only aggregates below a critical size are transmitted efficiently. We formulate a continuous-time branching process model of a yeast colony under conditions of prion curing. We generalize previous approaches by providing an explicit formula approximating prion loss as influenced by both aggregate growth and size-dependent transmission.
We construct a flow of continuous-time and discrete-state branching processes. Some scaling limit theorems for the flow are proved, which lead to the path-valued branching processes and nonlocal branching superprocesses, over the positive half line, studied in Li (2014).
In this paper we provide the basis for new methods of inference for max-stable processes ξ on general spaces that admit a certain incremental representation, which, in important cases, has a much simpler structure than the max-stable process itself. A corresponding peaks-over-threshold approach will incorporate all single events that are extreme in some sense and will therefore rely on a substantially larger amount of data in comparison to estimation procedures based on block maxima. Conditioning a process η in the max-domain of attraction of ξ on being extremal, several convergence results for the increments of η are proved. In a similar way, the shape functions of mixed moving maxima (M3) processes can be extracted from suitably conditioned single events η. Connecting the two approaches, transformation formulae for processes that admit both an incremental and an M3 representation are identified.
In Chaudhuri and Dasgupta's 2006 paper a certain stochastic model for ‘replicating character strings’ (such as in DNA sequences) was studied. In their model, a random ‘input’ sequence was subjected to random mutations, insertions, and deletions, resulting in a random ‘output’ sequence. In this paper their model will be set up in a slightly different way, in an effort to facilitate further development of the theory for their model. In their 2006 paper, Chaudhuri and Dasgupta showed that, under certain conditions, strict stationarity of the ‘input’ sequence would be preserved by the ‘output’ sequence, and they proved a similar ‘preservation’ result for the property of strong mixing with exponential mixing rate. In our setup, we slightly extend their ‘preservation of stationarity’ result in spirit, and we also prove a ‘preservation’ result for the property of absolute regularity with summable mixing rate.
As the name suggests, the family of general error distributions has been used to model nonnormal errors in a variety of situations. In this article we show that the asymptotic distribution of linearly normalized partial maxima of random observations from the general error distributions is Gumbel when the parameter of these distributions lies in the interval (0, 1). Our result fills a gap in the literature. We also establish the corresponding density convergence, obtain an asymptotic distribution of the partial maxima under power normalization, and state and prove a strong law. We also study the asymptotic behaviour of observations near the partial maxima and the sum of such observations.
The paper considers a statistical concept of causality in continuous time between filtered probability spaces, based on Granger’s definition of causality. This causality concept is connected with the preservation of the martingale representation property when the filtration is reduced. We also give conditions, in terms of causality, for every martingale to be a continuous semimartingale, and we consider the equivalence between the concept of causality and the preservation of the martingale representation property under change of measure. In addition, we apply these results to weak solutions of stochastic differential equations. The results can be applied to the economics of securities trading.
There is much interest within the mathematical biology and statistical physics communities in converting stochastic agent-based models for random walkers into a partial differential equation description for the average agent density. Here a collection of noninteracting biased random walkers on a one-dimensional lattice is considered. The usual master equation approach requires two continuum limits, in which three parameters, namely the step length, the time step, and the random walk bias, approach zero in a specific way. We are interested in the case where the two limits are not consistent. New results are obtained using a Fokker–Planck equation, and the results are highly dependent on the simulation update schemes. The theoretical results are confirmed with examples. These findings provide insight into the importance of updating schemes for an accurate macroscopic description of stochastic local movement rules in agent-based models when the lattice spacing represents a physical object such as a cell diameter.
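In the standard consistent-limit case, the agent-level rules and the continuum drift can be compared directly. A minimal sketch (our own symbols and parameter choices): a walker steps right with probability (1 + ρ)/2 and left otherwise, with step length Δ and time step τ, and the master-equation limit predicts mean displacement vt with v = ρΔ/τ:

```python
import random

def mean_displacement(walkers=5000, steps=1000, rho=0.1, dx=1.0, seed=0):
    """Average final position of independent biased walkers on a 1D lattice,
    each stepping right with probability (1 + rho)/2."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(walkers):
        pos = 0
        for _ in range(steps):
            pos += 1 if rng.random() < (1 + rho) / 2 else -1
        total += pos * dx
    return total / walkers

m = mean_displacement()
print(m)  # close to the advective prediction rho * steps * dx = 100
```

The paper's point is that when the limits are not taken consistently, the agreement between such simulations and the continuum description depends on the update scheme; this sketch only illustrates the baseline consistent case.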
Consider a one-dimensional diffusion process on the diffusion interval I started at x_0 ∈ I. Let a(t) and b(t) be two continuous functions of t, t > t_0, with bounded derivatives, a(t) < b(t), and a(t), b(t) ∈ I, for all t > t_0. We study the joint distribution of the two random variables T_a and T_b, the first hitting times of the diffusion process through the two boundaries a(t) and b(t), respectively. We express the joint distribution of T_a and T_b in terms of ℙ(T_a < t, T_a < T_b) and ℙ(T_b < t, T_a > T_b), and we determine a system of integral equations satisfied by these last probabilities. We propose a numerical algorithm to solve this system and we prove its convergence properties. Examples and modeling motivation for this study are also discussed.
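A crude Monte Carlo check of such two-boundary hitting probabilities is straightforward. The sketch below is our own illustration, with a standard Brownian motion and linear boundaries a(t) = -1 - t, b(t) = 1 + t in place of a general diffusion, using a simple Euler discretisation (not the paper's integral-equation algorithm):

```python
import random

def hit_probs(t_max=1.0, dt=1e-3, reps=5000, seed=0):
    """Estimate P(T_a < t_max, T_a < T_b) and P(T_b < t_max, T_a > T_b)
    for standard Brownian motion started at 0 between the boundaries
    a(t) = -1 - t and b(t) = 1 + t."""
    rng = random.Random(seed)
    hit_a = hit_b = 0
    steps = int(t_max / dt)
    for _ in range(reps):
        x, t = 0.0, 0.0
        for _ in range(steps):
            x += rng.gauss(0.0, dt ** 0.5)   # Brownian increment
            t += dt
            if x <= -1.0 - t:                # lower boundary a(t) reached first
                hit_a += 1
                break
            if x >= 1.0 + t:                 # upper boundary b(t) reached first
                hit_b += 1
                break
    return hit_a / reps, hit_b / reps

pa, pb = hit_probs()
print(pa, pb)  # by symmetry of this example, the two estimates should be close
```

The discretisation systematically underestimates hitting probabilities (the path may cross and return between grid points), which is one motivation for the integral-equation approach developed in the paper.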