The present paper is devoted to complete convergence and the strong law of large numbers under moment conditions near those of the law of the single logarithm (LSL) for independent and identically distributed arrays. More precisely, we investigate limit theorems under moment conditions stronger than the existence of a moment of order $2p$ for any $p<2$, in which case it is known that there is almost sure convergence to 0, and weaker than $E\,X^{4}/(\log ^{+}|X|)^{2}<\infty$, in which case the LSL holds.
Using a new approach, for spectrally negative Lévy processes we find joint Laplace transforms involving the last exit time (from a semi-infinite interval), the value of the process at the last exit time, and the associated occupation time, which generalize some previous results.
By adding a vorticity matrix to the reversible transition probability matrix, we show that the commute time and average hitting time are smaller than those of the original reversible chain. In particular, we give an affirmative answer to a conjecture of Aldous and Fill (2002). Further quantitative properties are also studied for nonreversible finite Markov chains.
We study a stochastic differential equation driven by a Poisson point process, which models the continuous change in a population's environment, as well as the stochastic fixation of beneficial mutations that might compensate for this change. The fixation probability of a given mutation increases as the phenotypic lag X_t between the population and the optimum grows larger, and successful mutations are assumed to fix instantaneously (leading to an adaptive jump). Our main result is that the process is transient (i.e. converges to −∞, so that continued adaptation is impossible) if the rate of environmental change v exceeds a parameter m, which can be interpreted as the rate of adaptation in the case where every beneficial mutation becomes fixed with probability 1. If v < m, the process is Harris recurrent and possesses a unique invariant probability measure, while in the limiting case v = m, Harris recurrence with an infinite invariant measure or transience depends upon additional technical conditions. We show how our results can be extended to a class of time-varying rates of environmental change.
A critical branching process {Z_k, k = 0, 1, 2, ...} in a random environment is considered. A conditional functional limit theorem for the properly scaled process {log Z_{pu}, 0 ≤ u < ∞} is established under the assumptions that Z_n > 0 and p ≪ n. It is shown that the limiting process is a Lévy process conditioned to stay nonnegative. The proof of this result is based on a limit theorem describing the distribution of the initial part of the trajectories of a driftless random walk conditioned to stay nonnegative.
Modern processing networks often consist of heterogeneous servers with widely varying capabilities, and process job flows with complex structure and requirements. A major challenge in designing efficient scheduling policies in these networks is the lack of reliable estimates of system parameters, and an attractive approach for addressing this challenge is to design robust policies, i.e. policies that do not use system parameters such as arrival and/or service rates for making scheduling decisions. In this paper we propose a general framework for the design of robust policies. The main technical novelty is the use of a stochastic gradient projection method that reacts to queue-length changes in order to find a balanced allocation of service resources to incoming tasks. We illustrate our approach on two broad classes of processing systems, namely the flexible fork-join networks and the flexible queueing networks, and prove the rate stability of our proposed policies for these networks under nonrestrictive assumptions.
We present solutions to nonzero-sum games of optimal stopping for Brownian motion in [0, 1] absorbed at either 0 or 1. The approach used is based on the double partial superharmonic characterisation of the value functions derived in Attard (2015). In this setting the characterisation of the value functions has a transparent geometrical interpretation of 'pulling two ropes' above 'two obstacles' which must, however, be constrained to pass through certain regions. This is an extension of the analogous result derived by Peskir (2009), (2012) (semiharmonic characterisation) for the value function in zero-sum games of optimal stopping. To derive the value functions we transform the game into a free-boundary problem. The latter is then solved by making use of the double smooth fit principle which was also observed in Attard (2015). Martingale arguments based on the Itô–Tanaka formula will then be used to verify that the solution to the free-boundary problem coincides with the value functions of the game and this will establish the Nash equilibrium.
We prove a second-order limit law for additive functionals of a d-dimensional fractional Brownian motion with Hurst index H = 1 / d, using the method of moments and extending the Kallianpur–Robbins law, and then give a functional version of this result. That is, we generalize it to the convergence of the finite-dimensional distributions for corresponding stochastic processes.
Drawdown (respectively, drawup) of a stochastic process, also referred to as the reflected process at its supremum (respectively, infimum), has wide applications in many areas including financial risk management, actuarial mathematics, and statistics. In this paper, for general time-homogeneous Markov processes, we study the joint law of the first passage time of the drawdown (respectively, drawup) process, its overshoot, and the maximum of the underlying process at this first passage time. By using short-time pathwise analysis, under some mild regularity conditions, the joint law of the three drawdown quantities is shown to be the unique solution to an integral equation which is expressed in terms of fundamental two-sided exit quantities of the underlying process. Explicit forms for this joint law are found when the Markov process has only one-sided jumps or is a Lévy process (possibly with two-sided jumps). The proposed methodology provides a unified approach to study various drawdown quantities for the general class of time-homogeneous Markov processes.
The vertices of the kth power of a directed path with n vertices are exposed one by one to a selector in some random order. At any time the selector can see the graph induced by the vertices that have already appeared. The selector's aim is to choose online the maximal vertex (i.e. the vertex with no outgoing edges). We give upper and lower bounds for the asymptotic behaviour of p_{n,k} n^{1/(k+1)}, where p_{n,k} is the probability of success under the optimal algorithm. In order to derive the upper bound, we consider a model in which the selector obtains some extra information about the edges that have already appeared. We give the exact asymptotics of the probability of success under the optimal algorithm in this case. In order to derive the lower bound, we analyse a site percolation process on a sequence of the kth powers of a directed path with n vertices.
We consider two players, starting with m and n units, respectively. In each round, the winner is decided with probability proportional to each player's fortune, and the opponent loses one unit. We prove an explicit formula for the probability p(m, n) that the first player wins. When m ~ Nx₀, n ~ Ny₀, we prove the fluid limit as N → ∞. When x₀ = y₀, the map z ↦ p(N, N + z√N) converges to the standard normal cumulative distribution function and the difference in fortunes scales diffusively. The exact limit of the time of ruin τ_N is established as (T − τ_N) ~ N^{-β}W^{1/β}, where β = ¼ and T = x₀ + y₀. Modulo a constant, W ~ χ²₁(z₀² / T²).
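The round dynamics described above are simple enough to simulate directly. The sketch below is ours, not the paper's method (the paper derives an explicit formula for p(m, n)); it estimates the first player's win probability by Monte Carlo, assuming that only the loser of each round gives up a unit and that ruin means reaching fortune 0.

```python
import random

def simulate_win_prob(m, n, trials=100_000, seed=0):
    """Monte Carlo estimate of p(m, n) (illustrative sketch only).

    Each round, the winner is drawn with probability proportional to
    the players' current fortunes; the loser of the round pays one
    unit, and the game ends when either fortune hits zero.
    """
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        a, b = m, n
        while a > 0 and b > 0:
            if rng.random() < a / (a + b):
                b -= 1  # player 1 wins the round, player 2 pays one unit
            else:
                a -= 1  # player 2 wins the round, player 1 pays one unit
        wins += (b == 0)
    return wins / trials
```

By symmetry p(m, m) = 1/2, which the estimate reproduces up to Monte Carlo error, and the estimate grows quickly with the first player's head start.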
In this paper we present a novel construction of Marshall–Olkin (MO) multivariate exponential distributions of failure times as distributions of the first-passage times of the coordinates of multidimensional Lévy subordinator processes above independent unit-mean exponential random variables. A time-inhomogeneous version is also given that replaces Lévy subordinators with additive subordinators. An attractive feature of MO distributions for applications, such as portfolio credit risk, is their singular component, which yields positive probabilities of simultaneous defaults of multiple obligors, capturing the default clustering phenomenon. The drawback of the original MO fatal shock construction of MO distributions is that it requires one to simulate 2^n − 1 independent exponential random variables. In practice, the dimensionality is typically on the order of hundreds or thousands of obligors in a large credit portfolio, rendering the MO fatal shock construction infeasible to simulate. The subordinator construction reduces the problem of simulating a rich subclass of MO distributions to simulating an n-dimensional subordinator. When one works with the class of subordinators constructed from independent one-dimensional subordinators with known transition distributions, such as gamma and inverse Gaussian, or their Sato versions in the additive case, the simulation effort is linear in n. To illustrate, we present a simulation of 100,000 samples of a credit portfolio with 1,000 obligors that takes less than 18 seconds on a PC.
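As a rough illustration of the subordinator construction (a hedged sketch with our own parametrisation and discretisation, not the paper's implementation), one can simulate n dependent gamma subordinators on a time grid, each a mix of a common and an idiosyncratic component, and record the first grid time at which each coordinate exceeds an independent unit-mean exponential barrier:

```python
import random

def mo_sample(n, dt=0.01, t_max=50.0, shape=1.0, rate=1.0, rho=0.5, seed=None):
    """Hedged sketch of the first-passage construction of MO failure times.

    Each coordinate's subordinator is rho * (common gamma subordinator)
    + (1 - rho) * (own gamma subordinator), discretised with step dt.
    The failure time of obligor i is the first grid time at which its
    subordinator level exceeds an independent Exp(1) barrier E_i.
    Returns a list of n failure times (None if not reached by t_max).
    """
    rng = random.Random(seed)
    barriers = [rng.expovariate(1.0) for _ in range(n)]
    levels = [0.0] * n
    times = [None] * n
    t = 0.0
    while t < t_max and any(x is None for x in times):
        # gamma-process increment over dt: shape parameter scales with dt
        common = rng.gammavariate(shape * dt, 1.0 / rate)
        for i in range(n):
            idio = rng.gammavariate(shape * dt, 1.0 / rate)
            levels[i] += rho * common + (1 - rho) * idio
            if times[i] is None and levels[i] >= barriers[i]:
                times[i] = t + dt
        t += dt
    return times
```

The shared jumps of the common component are what can push several coordinates across their barriers in the same step, mimicking the simultaneous-default (singular) feature; the cost per sample is linear in n, in line with the abstract's claim for independent one-dimensional subordinators.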
An rth-order extremal process Δ^{(r)} = (Δ_t^{(r)})_{t≥0} is a continuous-time analogue of the rth partial maximum sequence of a sequence of independent and identically distributed random variables. Studying maxima in continuous time gives rise to the notion of limiting properties of Δ_t^{(r)} as t ↓ 0. Here we describe aspects of the small-time behaviour of Δ^{(r)} by characterising its upper and lower classes relative to a nonstochastic nondecreasing function b_t > 0 with lim_{t↓0} b_t = 0. We are then able to give an integral criterion for the almost sure relative stability of Δ_t^{(r)} as t ↓ 0, r = 1, 2, ..., or, equivalently, as it turns out, for the almost sure relative stability of Δ_t^{(1)} as t ↓ 0.
In the present work, some new maximal inequalities for nonnegative N-demi(super)martingales are first developed. As an application, new bounds for the cumulative distribution function of the waiting time for the first occurrence of a scan statistic in a sequence of independent and identically distributed (i.i.d.) binary trials are obtained. A numerical study is also carried out for investigating the behavior of the new bounds.
Lester Dubins and Leonard Savage posed the question as to what extent the optimal reward function U of a leavable gambling problem varies continuously in the gambling house Γ, which specifies the stochastic processes available to a player, and the utility function u, which determines the payoff for each process. Here a distance is defined for measurable houses with a Borel state space and a bounded Borel measurable utility. A trivial example shows that the mapping Γ ↦ U is not always continuous for fixed u. However, it is lower semicontinuous in the sense that, if Γn converges to Γ, then lim inf Un ≥ U. The mapping u ↦ U is continuous in the sup-norm topology for fixed Γ, but is not always continuous in the topology of uniform convergence on compact sets. Dubins and Savage observed that a failure of continuity occurs when a sequence of superfair casinos converges to a fair casino, and queried whether this is the only source of discontinuity for the special gambling problems called casinos. For the distance used here, an example shows that there can be discontinuity even when all the casinos are subfair.
Iterative Filtering (IF) is an alternative technique to the Empirical Mode Decomposition (EMD) algorithm for the decomposition of non-stationary and non-linear signals. Recently, IF was proved in [3] to be convergent for any L² signal, and its stability was also demonstrated through examples. Furthermore, the so-called Fokker–Planck (FP) filters were introduced in [3]; they are smooth at every point and have compact support. Based on those results, in this paper we introduce the Multidimensional Iterative Filtering (MIF) technique for the decomposition and time-frequency analysis of non-stationary high-dimensional signals. We present the extension of FP filters to higher dimensions and prove convergence results under general sufficient conditions on the filter shape. Finally, we illustrate the promising performance of the MIF algorithm, equipped with high-dimensional FP filters, when applied to the decomposition of two-dimensional signals.
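To fix ideas, the inner loop of one-dimensional IF can be sketched as repeated subtraction of a local average from the signal, so that the remainder captures the fastest oscillation. This toy version uses a plain uniform moving average as the filter (the FP filters of [3] are smooth and compactly supported, which this substitute is not), so it is an illustration of the sifting idea rather than the paper's algorithm:

```python
def moving_average(x, w):
    """Uniform moving average with half-window w, shrinking at the edges."""
    n = len(x)
    return [sum(x[max(0, i - w): min(n, i + w + 1)]) /
            (min(n, i + w + 1) - max(0, i - w)) for i in range(n)]

def extract_imf(x, w, n_iter=10):
    """Toy 1-D IF sifting loop: repeatedly subtract the moving average.

    After n_iter passes the remainder is taken as the first intrinsic
    mode function (IMF); the residual x - imf carries the slower scales
    and can be sifted again with a larger window.
    """
    s = list(x)
    for _ in range(n_iter):
        s = [a - b for a, b in zip(s, moving_average(s, w))]
    return s
```

By construction the signal splits exactly into the extracted component plus a residual, and the full decomposition is obtained by iterating on the residual with progressively wider filters.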
We study certain classes of local sets of the two-dimensional Gaussian free field (GFF) in a simply connected domain, and their relation to the conformal loop ensemble $\text{CLE}_{4}$ and its variants. More specifically, we consider bounded-type thin local sets (BTLS), where thin means that the local set is small in size, and bounded type means that the harmonic function describing the mean value of the field away from the local set is bounded by some deterministic constant. We show that a local set is a BTLS if and only if it is contained in some nested version of the $\text{CLE}_{4}$ carpet, and prove that all BTLS are necessarily connected to the boundary of the domain. We also construct all possible BTLS for which the corresponding harmonic function takes only two prescribed values and show that all these sets (and this includes the case of $\text{CLE}_{4}$) are in fact measurable functions of the GFF.
We discuss discrete stochastic processes with two independent variables: one is the standard symmetric random walk, and the other is the Poisson process. We analyse the convergence of these discrete processes in the regime where the symmetric random walk tends to the standard Brownian motion. We show that a discrete analogue of Itô's formula converges to the corresponding continuous formula.
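One common discrete analogue of Itô's formula for a ±1 random walk uses the symmetric first difference and the second difference, and it holds exactly along every path; the sketch below (our illustrative form, which need not match the paper's exact statement) verifies the identity on a simulated trajectory:

```python
import random

def discrete_ito_check(f, steps=1000, seed=1):
    """Check the discrete Itô identity for a ±1 random walk, pathwise:

        f(S_{n+1}) - f(S_n) = Df(S_n) * ΔS_n + (1/2) * D²f(S_n),

    where Df(s) = (f(s+1) - f(s-1)) / 2 is the symmetric difference and
    D²f(s) = f(s+1) - 2 f(s) + f(s-1) the second difference. Because
    ΔS_n = ±1, this holds exactly, not just in the scaling limit.
    """
    rng = random.Random(seed)
    s = 0
    for _ in range(steps):
        ds = rng.choice((-1, 1))
        drift = (f(s + 1) - f(s - 1)) / 2 * ds
        second = (f(s + 1) - 2 * f(s) + f(s - 1)) / 2
        assert f(s + ds) - f(s) == drift + second
        s += ds
    return True
```

Under diffusive rescaling, the first term mimics the stochastic integral against Brownian motion and the second-difference term mimics the (1/2) f'' dt correction in the continuous Itô formula.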
We consider a class of impulse control problems for general underlying strong Markov processes on the real line, which allows for an explicit solution. The optimal impulse times are shown to be of a threshold type and the optimal threshold is characterised as a solution of a (typically nonlinear) equation. The main ingredient we use is a representation result for excessive functions in terms of expected suprema.
In this paper we study the expected rank problem under full information. Our approach uses the planar Poisson approach from Gnedin (2007) to derive the expected rank of a stopping rule that is one of the simplest nontrivial examples combining rank-dependent rules with threshold rules. This rule attains an expected rank lower than the best upper bounds obtained in the literature so far; in particular, we obtain an expected rank of 2.32614.