It is well known that the Kelly system of proportional betting, which maximizes the long-term geometric rate of growth of the gambler's fortune, minimizes the expected time required to reach a specified goal. Less well known is the fact that it maximizes the median of the gambler's fortune. This was pointed out by the author in a 1988 paper, but only under asymptotic assumptions that might cause one to question its applicability. Here we show that the result is true more generally, and argue that this is a desirable property of the Kelly system.
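As a numerical illustration of the median claim (not taken from the paper), the following sketch considers repeated even-money bets with win probability p, for which the Kelly fraction is f* = 2p − 1, and checks that among fixed betting fractions the median terminal fortune is largest near f*; all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

p = 0.6                # win probability of an even-money bet (assumed)
n_bets = 200           # length of the betting horizon (assumed)
n_paths = 20000        # Monte Carlo sample paths
kelly = 2 * p - 1      # Kelly fraction for an even-money bet

fractions = np.linspace(0.05, 0.75, 15)
medians = []
for f in fractions:
    wins = rng.random((n_paths, n_bets)) < p
    # multiplicative fortune update: factor (1+f) on a win, (1-f) on a loss
    factors = np.where(wins, 1.0 + f, 1.0 - f)
    fortunes = factors.prod(axis=1)          # terminal fortune, starting from 1
    medians.append(np.median(fortunes))

best = fractions[int(np.argmax(medians))]
print(f"Kelly fraction f* = {kelly:.2f}, empirical median-maximising fraction ≈ {best:.2f}")
```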
We consider a failure-prone system operating in continuous time. Condition monitoring is conducted at discrete time epochs. The state of the system is assumed to evolve as a continuous-time Markov process with a finite state space. The observation process with continuous-range values is stochastically related to the state process, which, except for the failure state, is unobservable. Combining the failure information and the condition monitoring information, we derive a general recursive filter, and, as special cases, we obtain recursive formulae for the state estimation and other quantities of interest. Updated parameter estimates are obtained using the expectation-maximization (EM) algorithm. Some practical prediction problems are discussed and finally an illustrative example is given using a real dataset.
Let Xn,…,X1 be independent, identically distributed (i.i.d.) random variables with distribution function F. A statistician, knowing F, observes the X values sequentially and is given two chances to choose Xs using stopping rules. The statistician's goal is to stop at a value of X as small as possible. Denote by the two-choice value the expectation of the smaller of the two values chosen by the statistician when proceeding optimally. We obtain the asymptotic behaviour of this value sequence for a large class of Fs belonging to the domain of attraction (for the minimum) 𝒟(Gα), where Gα(x) = [1 - exp(-x^α)]1(x ≥ 0) (with 1(·) the indicator function). The results are compared with those for the asymptotic behaviour of the classical one-choice value sequence, as well as with the ‘prophet value’ sequence.
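For orientation, here is a small sketch of the classical one-choice value recursion referred to above, v_1 = E[X] and v_k = E[min(X, v_{k-1})], evaluated by Monte Carlo for an assumed exponential F; the two-choice value studied in the paper requires a further layer of optimisation not attempted here.

```python
import numpy as np

rng = np.random.default_rng(1)

# One-choice optimal stopping for the minimum: with k observations still to come,
# accept the current X iff X <= v_{k-1}; the value satisfies v_k = E[min(X, v_{k-1})].
samples = rng.exponential(scale=1.0, size=1_000_000)   # assumed F: Exp(1)

def one_choice_values(n, samples):
    values = [samples.mean()]                 # v_1 = E[X]
    for _ in range(n - 1):
        values.append(np.minimum(samples, values[-1]).mean())
    return values                             # v_1, ..., v_n

v = one_choice_values(50, samples)
print("one-choice value with n = 10, 20, 50:", v[9], v[19], v[49])
```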
In the characterization of multivariate extremal indices of multivariate stationary processes, multivariate maxima of moving maxima processes, or M4 processes for short, have been introduced by Smith and Weissman. Central to the introduction of M4 processes is that the extreme observations of multivariate stationary processes may be characterized in terms of a limiting max-stable process under quite general conditions, and that a max-stable process can be arbitrarily closely approximated by an M4 process. In this paper, we derive some additional basic probabilistic properties for a finite class of M4 processes, each of which contains finite-range clustered moving patterns, called signature patterns, when extreme events occur. We use these properties to construct statistical estimation schemes for model parameters.
We continue the study of the asymptotic behaviour of a random walk when it exits from a symmetric region of the form {(x, n): |x| ≤ rn^b} as r → ∞, a study begun in Part I of this work. In contrast to that paper, we are interested in the case where the probability of exiting at the upper boundary tends to 1. In this scenario we treat the case where the power b lies in the interval [0, 1), and we establish necessary and sufficient conditions for the overshoot to be relatively stable in probability (except in one case), and for the pth moment of the overshoot to be O(r^q) as r → ∞.
Let (Sn)n≥0 be a correlated random walk on the integers, let M0 ≥ S0 be an arbitrary integer, and let Mn = max{M0, S1,…, Sn}. An optimal stopping rule is derived for the sequence Mn - nc, where c > 0 is a fixed cost. The optimal rule is shown to be of threshold type: stop at the first time that Mn - Sn ≥ Δ, where Δ is a certain nonnegative integer. An explicit expression for this optimal threshold is given.
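A Monte Carlo sketch of the threshold rule just described, for a correlated random walk whose steps repeat the previous direction with probability p; all numerical values are illustrative assumptions, and the paper's explicit formula for the optimal Δ is not reproduced here, so the candidate thresholds are simply compared numerically.

```python
import numpy as np

rng = np.random.default_rng(2)

p = 0.7         # probability that a step repeats the previous direction (assumed)
c = 0.2         # cost per observation (assumed)
n_max = 500     # truncation horizon for the simulation
n_paths = 2000

def reward(delta):
    total = 0.0
    for _ in range(n_paths):
        s, m = 0, 0
        step = 1 if rng.random() < 0.5 else -1
        for n in range(1, n_max + 1):
            if rng.random() >= p:        # with probability 1-p the walk reverses direction
                step = -step
            s += step
            m = max(m, s)
            if m - s >= delta:           # threshold stopping rule
                total += m - n * c
                break
        else:
            total += m - n_max * c       # forced stop at the horizon
    return total / n_paths

for delta in range(0, 6):
    print(delta, round(reward(delta), 3))
```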
Suppose that μ is the branching measure on the boundary of a supercritical Galton–Watson tree with offspring distribution N such that E[N log N] < ∞ and P{N = 1} > 0. We determine the multifractal spectrum of μ using a method different from that proposed by Shieh and Taylor, which is flawed.
We introduce a new model for the infection of one or more subjects by a single agent, and calculate the probability of infection after a fixed length of time. We model the agent and subjects as random walkers on a complete graph of N sites, jumping with equal rates from site to site. When one of the walkers is at the same site as the agent for a length of time τ, we assume that the infection probability is given by an exponential law with parameter γ, i.e. q(τ) = 1 - e^(-γτ). We introduce the boundary condition that all walkers return to their initial site (‘home’) at the end of a fixed period T. We also assume that the incubation period is longer than T, so that there is no immediate propagation of the infection. In this model, we find that for short periods T, i.e. such that γT ≪ 1 and T ≪ 1, the infection probability is remarkably small and behaves like T^3. On the other hand, for large T, the probability tends to 1 (as might be expected) exponentially. However, the dominant exponential rate is given approximately by 2γ/[(2+γ)N] and is therefore small for large N.
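A simplified Monte Carlo sketch of the model with one subject: both the agent and the subject jump at rate 1 between the N sites of a complete graph, the infection hazard is γ while they share a site, and the infection probability over [0, T] is 1 − exp(−γ × co-location time). For simplicity the sketch ignores the return-home (bridge) boundary condition of the paper, and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

N, gamma, T = 50, 1.0, 2.0     # number of sites, infection rate, period length (assumed)
n_runs = 20000

def colocation_time():
    """Total time the agent and the subject spend on the same site during [0, T]."""
    pos = np.array([0, 1])               # start at distinct 'home' sites
    t, shared = 0.0, 0.0
    while t < T:
        dt = rng.exponential(1.0 / 2.0)  # next jump of either walker (two rate-1 clocks)
        dt = min(dt, T - t)
        if pos[0] == pos[1]:
            shared += dt
        t += dt
        if t < T:
            who = rng.integers(2)        # which walker jumps
            # jump to a uniformly chosen *different* site
            pos[who] = (pos[who] + rng.integers(1, N)) % N
    return shared

probs = [1.0 - np.exp(-gamma * colocation_time()) for _ in range(n_runs)]
print("estimated infection probability:", np.mean(probs))
```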
We study a family of locally self-similar stochastic processes Y = {Y(t)}t∈ℝ with α-stable distributions, called linear multifractional stable motions. They have infinite variance and may possess skewed distributions. The linear multifractional stable motion processes include, in particular, the classical linear fractional stable motion processes, which have stationary increments and are self-similar with self-similarity parameter H. The linear multifractional stable motion process Y is obtained by replacing the self-similarity parameter H in the integral representation of the linear fractional stable motion process by a deterministic function H(t). Whereas the linear fractional stable motion is always continuous in probability, this is not in general the case for Y. We obtain necessary and sufficient conditions for the continuity in probability of the process Y. We also examine the effect of the regularity of the function H(t) on the local structure of the process. We show that under certain Hölder regularity conditions on the function H(t), the process Y is locally equivalent to a linear fractional stable motion process, in the sense of finite-dimensional distributions. We study Y by using a related α-stable random field and its partial derivatives.
In this paper, we show that the mean comparison theorem, which is valid for Brownian motion, cannot be extended to Poisson processes; we provide a counterexample in the Poisson case for which the theorem fails.
The ‘rendezvous time’ of two stochastic processes is the first time at which they cross or hit each other. We consider such times for a Brownian motion with drift, starting at some positive level, and a compound Poisson process or a process with one random jump at some random time. We also ask whether a rendezvous takes place before the Brownian motion hits zero and, if so, at what time. These questions are answered in terms of Laplace transforms for the underlying distributions. The analogous problem for reflected Brownian motion is also studied.
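A crude Euler-discretisation sketch of the compound-Poisson variant, with illustrative parameters: the Brownian motion with drift starts at a positive level, the compound Poisson process starts at zero with exponential upward jumps, and we estimate the probability that they meet before the Brownian motion hits zero. The paper answers these questions exactly via Laplace transforms; this is only a Monte Carlo approximation.

```python
import numpy as np

rng = np.random.default_rng(4)

b, mu, sigma = 2.0, -0.1, 1.0      # BM start level, drift, volatility (assumed)
lam, jump_mean = 0.5, 0.4          # compound Poisson rate and mean exponential jump (assumed)
dt, T, n_paths = 0.01, 20.0, 2000  # time step, horizon, sample paths (assumed)

met_before_zero = 0
for _ in range(n_paths):
    w, y, t = b, 0.0, 0.0          # Brownian level and compound Poisson level
    while t < T:
        w += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        if rng.random() < lam * dt:            # a Poisson jump in (t, t + dt]
            y += rng.exponential(jump_mean)
        t += dt
        if y >= w:                 # rendezvous: the jump process reaches the Brownian path
            met_before_zero += w > 0
            break
        if w <= 0:                 # Brownian motion hits zero first, no rendezvous
            break

print("P(rendezvous before BM hits zero) ≈", met_before_zero / n_paths)
```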
We analyse several aspects of a class of simple counting processes that can emerge in some fields of applications where a change point occurs. In particular, under simple conditions we prove a significant inequality for the stochastic intensity.
We use a discrete-time analysis, giving necessary and sufficient conditions for the almost-sure convergence of ARCH(1) and GARCH(1,1) discrete-time models, to suggest an extension of the ARCH and GARCH concepts to continuous-time processes. Our ‘COGARCH’ (continuous-time GARCH) model, based on a single background driving Lévy process, is different from, though related to, other continuous-time stochastic volatility models that have been proposed. The model generalises the essential features of discrete-time GARCH processes, and is amenable to further analysis, possessing useful Markovian and stationarity properties.
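A sketch of the discrete-time GARCH(1,1) recursion that the COGARCH construction generalises, together with an empirical check of the familiar almost-sure (strict stationarity) condition E[log(αε² + β)] < 0 for assumed parameter values; the continuous-time model in the paper replaces the i.i.d. Gaussian innovations used here by increments of a driving Lévy process.

```python
import numpy as np

rng = np.random.default_rng(5)

omega, alpha, beta = 0.05, 0.10, 0.85     # GARCH(1,1) parameters (assumed)
n = 10000

eps = rng.standard_normal(n)              # i.i.d. innovations
# Empirical check of the strict-stationarity condition E[log(alpha*eps^2 + beta)] < 0
print("E[log(alpha*eps^2 + beta)] ≈", np.mean(np.log(alpha * eps**2 + beta)))

sigma2 = np.empty(n)
x = np.empty(n)
sigma2[0] = omega / (1 - alpha - beta)    # start at the stationary variance
x[0] = np.sqrt(sigma2[0]) * eps[0]
for t in range(1, n):
    sigma2[t] = omega + alpha * x[t - 1]**2 + beta * sigma2[t - 1]
    x[t] = np.sqrt(sigma2[t]) * eps[t]

print("sample variance of X:", x.var(), " theoretical:", omega / (1 - alpha - beta))
```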
A continuous-time random walk, used in physics to model anomalous diffusion, is a simple random walk subordinated to a renewal process. In this paper we show that, when the time between renewals has infinite mean, the scaling limit is an operator Lévy motion subordinated to the hitting time process of a classical stable subordinator. Density functions for the limit process solve a fractional Cauchy problem, the generalization of a fractional partial differential equation for Hamiltonian chaos. We also establish a functional limit theorem for random walks with jumps in the strict generalized domain of attraction of a full operator stable law, which is of some independent interest.
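A sketch of such a walk with Pareto waiting times of infinite mean (tail index β = 0.7, an assumed value): ±1 jumps occur at the renewal epochs, and the position at time t is the partial sum of jumps evaluated at the renewal counting process N(t).

```python
import numpy as np

rng = np.random.default_rng(6)

beta = 0.7          # waiting-time tail index in (0, 1): infinite mean (assumed)
n_jumps = 100_000

# Pareto(beta) waiting times via inverse transform: W = U**(-1/beta) has tail w**(-beta)
waits = rng.random(n_jumps) ** (-1.0 / beta)
renewal_times = np.cumsum(waits)                       # epochs of the renewal process
steps = rng.choice([-1, 1], size=n_jumps)              # simple random walk jumps
walk = np.cumsum(steps)

def position(t):
    """CTRW position at time t: S_{N(t)}, where N(t) counts renewals up to time t."""
    n_t = np.searchsorted(renewal_times, t, side="right")
    return 0 if n_t == 0 else walk[n_t - 1]

for t in [1e2, 1e4, 1e6]:
    print(f"position at t = {t:g}: {position(t)}")
```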
Within reliability theory, identifiability problems arise through competing risks. If we have a series system of several components, and if that system is replaced or repaired to as good as new on failure, then the different component failures represent competing risks for the system. It is well known that the underlying component failure distributions cannot be estimated from the observable data (failure time and identity of failed component) without nontestable assumptions such as independence. In practice many systems are not subject to the ‘as good as new’ repair regime. Hence, the objective of this paper is to contrast the identifiability issues arising for different repair regimes. We consider the problem of identifying a model within a given class of probabilistic models for the system. Different models corresponding to different repair strategies are considered: a partial-repair model, where only the failing component is repaired; perfect repair, where all components are as good as new after a failure; and minimal repair, where components are only minimally repaired at failures. We show that on the basis of observing a single socket, the partial-repair model is identifiable, while the perfect- and minimal-repair models are not.
We study the suprema over compact time intervals of stationary locally bounded α-stable processes. The behaviour of these suprema as the length of the time interval increases turns out to depend significantly on the ergodic-theoretical properties of a flow generating the stationary process.
Let C1, C2,…,Cm be independent subordinators with finite expectations and denote their sum by C. Consider the classical risk process X(t) = x + ct - C(t). The ruin probability is given by the well-known Pollaczek–Khinchin formula. If ruin occurs, however, it will be caused by a jump of one of the subordinators Ci. Formulae for the probability that ruin is caused by Ci are derived. These formulae can be extended to perturbed risk processes of the type X(t) = x + ct - C(t) + Z(t), where Z is a Lévy process with mean 0 and no positive jumps.
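A finite-horizon Monte Carlo sketch with m = 2 compound Poisson subordinators having exponential jumps (all parameter values assumed): it estimates the probability that ruin of X(t) = x + ct - C(t) occurs before a horizon T and is caused by a jump of C1 or of C2, a crude numerical counterpart of the exact formulae derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

x, c, T = 5.0, 1.5, 200.0                 # initial capital, premium rate, horizon (assumed)
lam = np.array([0.5, 0.3])                # Poisson jump rates of C1, C2 (assumed)
mean_jump = np.array([1.0, 2.0])          # exponential jump means of C1, C2 (assumed)
n_runs = 20000

caused_by = np.zeros(2)
for _ in range(n_runs):
    # merge the two Poisson streams and attribute each jump to C1 or C2
    n_jumps = rng.poisson(lam.sum() * T)
    times = np.sort(rng.uniform(0, T, n_jumps))
    which = rng.choice(2, size=n_jumps, p=lam / lam.sum())
    sizes = rng.exponential(mean_jump[which])
    level = x + c * times - np.cumsum(sizes)          # X(t) just after each jump
    ruined = np.flatnonzero(level < 0)                # ruin can only occur at jump epochs
    if ruined.size:
        caused_by[which[ruined[0]]] += 1

print("P(ruin before T caused by C1, C2) ≈", caused_by / n_runs)
```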
In Bhatt and Roy's minimal directed spanning tree construction for n random points in the unit square, all edges must be in a south-westerly direction and there must be a directed path from each vertex to the root placed at the origin. We identify the limiting distributions (for large n) for the total length of rooted edges, and also for the maximal length of all edges in the tree. These limit distributions have been seen previously in analysis of the Poisson–Dirichlet distribution and elsewhere; they are expressed in terms of Dickman's function, and their properties are discussed in some detail.
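A sketch of the construction under the usual formulation, in which each point is joined to its nearest point lying to its south-west, or directly to the root at the origin if it dominates no other point; it reports the two functionals whose limit distributions are identified above. The uniform sample stands in for the paper's random points.

```python
import numpy as np

rng = np.random.default_rng(8)

n = 2000
pts = rng.random((n, 2))                  # uniform points in the unit square

rooted_length, max_edge = 0.0, 0.0
for i in range(n):
    p = pts[i]
    # candidate targets: points lying south-west of p (coordinatewise <=, excluding p itself)
    dominated = np.all(pts <= p, axis=1)
    dominated[i] = False
    if dominated.any():
        d = np.linalg.norm(pts[dominated] - p, axis=1).min()
    else:
        d = np.linalg.norm(p)             # rooted edge: connect directly to the origin
        rooted_length += d
    max_edge = max(max_edge, d)

print("total length of rooted edges:", rooted_length, " maximal edge length:", max_edge)
```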
This paper studies the subexponential properties of the stationary workload, actual waiting time and sojourn time distributions in work-conserving single-server queues when the equilibrium residual service time distribution is subexponential. This kind of problem has been previously investigated in various queueing and insurance risk settings. For example, it has been shown that, when the queue has a Markovian arrival stream (MAS) input governed by a finite-state Markov chain, it has such subexponential properties. However, though MASs can approximate any stationary marked point process, it is known that the corresponding subexponential results fail in the general stationary framework. In this paper, we consider the model with a general stationary input and show the subexponential properties under some additional assumptions. Our assumptions are so general that the MAS governed by a finite-state Markov chain inherently possesses them. The approach used here is the Palm-martingale calculus, that is, the connection between the notion of Palm probability and that of stochastic intensity. The proof is essentially an extension of the M/GI/1 case to ‘Poisson-like’ arrival processes, such as Markovian ones, that admit a stochastic intensity.
We examine the joint finite structure of extremes of the ARCH process and find an unexpected phenomenon: when assessing probabilities of failure during some finite time interval in the future, the extremal index seems not to be the object to look at. Two possible ramifications of this phenomenon are put forward.