Let θ(a) be the first time at which the range (R_n; n ≥ 0) equals a, where R_n is the difference between the maximum and the minimum of a simple random walk on ℤ up to time n. We compute the generating function of θ(a); this allows us to compute the distributions of θ(a) and R_n. We also investigate the asymptotic behaviour of θ(n) as n → ∞.
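As a rough illustration of the quantities involved (my own sketch, not taken from the paper), the following snippet simulates a simple random walk and estimates θ(a) directly; the paper's generating-function computation is exact and does not rely on simulation.

```python
# Minimal Monte Carlo sketch: estimate theta(a), the first time the range
# (max - min) of a simple random walk on Z reaches a.
import random

def first_time_range_hits(a, rng=random):
    """Simulate a simple random walk and return the first n with R_n = a."""
    pos = 0
    lo = hi = 0          # running minimum and maximum of the walk
    n = 0
    while hi - lo < a:   # R_n = max - min observed so far
        pos += 1 if rng.random() < 0.5 else -1
        lo = min(lo, pos)
        hi = max(hi, pos)
        n += 1
    return n

if __name__ == "__main__":
    random.seed(0)
    a = 5
    samples = [first_time_range_hits(a) for _ in range(20_000)]
    print(f"estimated E[theta({a})] ~ {sum(samples) / len(samples):.2f}")
```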
We consider the mathematical properties of a discrete-time stochastic process describing explosive proliferation of DNA repeats in human genetic diseases. The process is constructed using a cascade of Galton–Watson branching processes. The main results concern the probability of absorption and the supergeometric growth of the process in the supercritical case. Examples of simulations are provided.
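By way of illustration only, the sketch below simulates a single Galton–Watson process with an assumed Geometric(p) offspring law and estimates the absorption probability in a supercritical case; the paper's cascade construction and its particular offspring law are not reproduced here.

```python
# Illustrative sketch (assumed Geometric(p) offspring law, not the paper's model):
# estimate the absorption (extinction) probability of a supercritical
# Galton-Watson process by simulation.
import numpy as np

rng = np.random.default_rng(0)

def extinct_by(generations, p, z0=1):
    """Run one trajectory; return True if the population hits 0 within `generations` steps."""
    z = z0
    for _ in range(generations):
        # The sum of z i.i.d. Geometric(p) offspring counts (on {0, 1, 2, ...})
        # is NegativeBinomial(z, p).
        z = rng.negative_binomial(z, p)
        if z == 0:
            return True
    return False

if __name__ == "__main__":
    p = 0.4                 # mean offspring (1 - p) / p = 1.5 > 1: supercritical
    runs = 5_000
    est = sum(extinct_by(30, p) for _ in range(runs)) / runs
    print(f"estimated absorption probability ~ {est:.3f}")
```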
The distribution of the sample quantiles of random processes is important for the pricing of some of the so-called financial ‘look-back’ options. In this paper a representation of the distribution of the α-quantile of an additive renewal reward process is obtained as the sum of the supremum and the infimum of two rescaled independent copies of the process. This representation has already been proved for processes with stationary and independent increments. As an example, the distribution of the α-quantile of a randomly observed Brownian motion is obtained.
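A quick Monte Carlo sanity check of the representation in the Brownian case (my own sketch; the grid and sample sizes are arbitrary assumptions, not from the paper): the α-quantile of a Brownian path over [0, 1] should match in law the supremum over [0, α] of one Brownian motion plus the infimum over [0, 1 − α] of an independent copy.

```python
# Compare the empirical law of the alpha-quantile of Brownian motion on [0, 1]
# with sup over [0, alpha] of one path plus inf over [0, 1 - alpha] of an
# independent path.
import numpy as np

rng = np.random.default_rng(1)
N, runs, alpha = 1_000, 5_000, 0.3
dt = 1.0 / N

def brownian_paths(n_paths, n_steps):
    """Sample discretized Brownian paths with step variance dt."""
    increments = rng.normal(scale=np.sqrt(dt), size=(n_paths, n_steps))
    return np.cumsum(increments, axis=1)

# Left-hand side: empirical alpha-quantile of the occupation measure of a path.
paths = brownian_paths(runs, N)
lhs = np.sort(paths, axis=1)[:, int(alpha * N)]

# Right-hand side: sup over [0, alpha] plus inf over [0, 1 - alpha] of two
# independent paths (both suprema/infima include the starting value 0).
sup_part = np.maximum(brownian_paths(runs, int(alpha * N)).max(axis=1), 0.0)
inf_part = np.minimum(brownian_paths(runs, N - int(alpha * N)).min(axis=1), 0.0)
rhs = sup_part + inf_part

for q in (0.25, 0.50, 0.75):
    print(f"q={q:.2f}:  lhs {np.quantile(lhs, q):+.3f}   rhs {np.quantile(rhs, q):+.3f}")
```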
We derive optimal gambling and investment policies for cases in which the underlying stochastic process has parameter values that are unobserved random variables. For the objective of maximizing logarithmic utility when the underlying stochastic process is a simple random walk in a random environment, we show that a state-dependent control is optimal, which is a generalization of the celebrated Kelly strategy: the optimal strategy is to bet a fraction of current wealth equal to a linear function of the posterior mean increment. To approximate more general stochastic processes, we consider a continuous-time analog involving Brownian motion. To analyze the continuous-time problem, we study the diffusion limit of random walks in a random environment. We prove that they converge weakly to a Kiefer process, or tied-down Brownian sheet. We then find conditions under which the discrete-time process converges to a diffusion, and analyze the resulting process. We analyze in detail the case of the natural conjugate prior, where the success probability has a beta distribution, and show that the resulting limit diffusion can be viewed as a rescaled Brownian motion. These results allow explicit computation of the optimal control policies for the continuous-time gambling and investment problems without resorting to continuous-time stochastic-control procedures. Moreover they also allow an explicit quantitative evaluation of the financial value of randomness, the financial gain of perfect information and the financial cost of learning in the Bayesian problem.
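As an informal illustration (not the paper's derivation), the sketch below bets in each round a fraction of wealth equal to the posterior mean increment 2p̂ − 1 under a Beta prior, clipped at zero; the exact linear rule and its optimality are established in the paper.

```python
# Bayesian Kelly-style betting sketch: even-money bets on a coin with unknown
# success probability p, Beta(a, b) prior, fraction bet = posterior mean
# increment 2*p_hat - 1 (clipped at 0). Parameters are illustrative assumptions.
import random

def bayesian_kelly(p_true, rounds, a=1.0, b=1.0, wealth=1.0):
    for _ in range(rounds):
        p_hat = a / (a + b)                  # posterior mean of p
        frac = max(2.0 * p_hat - 1.0, 0.0)   # fraction of current wealth to bet
        win = random.random() < p_true
        wealth *= 1.0 + frac if win else 1.0 - frac
        # Beta-Bernoulli conjugate update of the posterior.
        if win:
            a += 1.0
        else:
            b += 1.0
    return wealth

if __name__ == "__main__":
    random.seed(0)
    results = sorted(bayesian_kelly(p_true=0.6, rounds=1_000) for _ in range(200))
    print("median terminal wealth:", results[len(results) // 2])
```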
Individuals in communities in which different strains of pathogen are circulating can acquire resistance by accumulating immunity to each strain. Susceptibility is considered first, and models of infection and immunity are then defined for vector-borne diseases such as malaria and trypanosomiasis. For these models the prevalence of infection, the number of infections per individual, and the mean duration of infection increase rapidly in young individuals, but decrease in older individuals as immunity is acquired to the various strains of pathogen; the mean interval between successive infections lengthens with age. The bivariate Poisson distribution is shown to be a close approximation to some of these stochastic processes. The models explain observed cross-sectional patterns of age prevalence, and longitudinal patterns in which individuals typically continue to become infected as they age, albeit with decreasing frequency. In these models the time spent infected depends on parasite diversity, as well as the inoculation and recovery rates. It is shown that control measures can cause an increase in the number of infections and the prevalence of infection in older individuals, and in the average prevalence in the community, even when strain-specific immunity is life-long.
An aggregated Markov chain is a Markov chain for which some states cannot be distinguished from each other by the observer. In this paper we consider the identifiability problem for such processes in continuous time, i.e. the problem of determining whether two parameters induce identical laws for the observable process or not. We also study the order of a continuous-time aggregated Markov chain, which is the minimum number of states needed to represent it. In particular, we give a lower bound on the order. As a by-product, we obtain results of this kind also for Markov-modulated Poisson processes, i.e. doubly stochastic Poisson processes whose intensities are directed by continuous-time Markov chains, and phase-type distributions, which are hitting times in finite-state Markov chains.
A threshold AR(1) process with boundary width 2δ > 0 was defined by Brockwell and Hyndman [5] in terms of the unique strong solution of a stochastic differential equation whose coefficients are piecewise linear and Lipschitz. The positive boundary-width is a convenient mathematical device to smooth out the coefficient changes at the boundary and hence to ensure the existence and uniqueness of the strong solution of the stochastic differential equation from which the process is derived. In this paper we give a direct definition of a threshold AR(1) process with δ = 0 in terms of the weak solution of a certain stochastic differential equation. Two characterizations of the distributions of the process are investigated. Both express the characteristic function of the transition probability distribution as an explicit functional of standard Brownian motion. It is shown that the joint distributions of this solution with δ = 0 are the weak limits as δ ↓ 0 of the distributions of the solution with δ > 0. The sense in which an approximating sequence of processes used by Brockwell and Hyndman [5] converges to this weak solution is also investigated. Some numerical examples illustrate the value of the latter approximation in comparison with the more direct representation of the process obtained from the Cameron–Martin–Girsanov formula and results of Engelbert and Schmidt [9]. We also derive the stationary distribution (under appropriate assumptions) and investigate stability of these processes.
The Stein–Chen method for Poisson approximation is adapted to the setting of the geometric distribution. This yields a convenient method for assessing the accuracy of the geometric approximation to the distribution of the number of failures preceding the first success in dependent trials. The results are applied to approximating waiting time distributions for patterns in coin tossing, and to approximating the distribution of the time when a stationary Markov chain first visits a rare set of states. The error bounds obtained are sharper than those obtainable using related Poisson approximations.
A generalization of the Bernoulli–Laplace diffusion model is proposed in which the number of balls exchanged at each step may be greater than one. We show that the stationary distribution is the same as in the classical scheme, and we give the mean and the variance of the process. In a second stage, we study the asymptotic diffusion approximation of the process; an expression for the transition density is given in terms of Legendre polynomials.
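A minimal simulation sketch of the generalized scheme (my own; the urn size N and the number m of balls exchanged are assumptions): two urns each hold N balls, with N white and N black balls in total, and m balls drawn from each urn are swapped at every step. The empirical occupation frequencies can be compared with the classical (m = 1) stationary distribution.

```python
# Simulate a Bernoulli-Laplace urn in which m >= 1 balls are exchanged per step.
# State = number of white balls in urn I.
import numpy as np

rng = np.random.default_rng(2)

def simulate(N=20, m=3, steps=100_000, burn_in=1_000):
    x = N                              # start with all white balls in urn I
    counts = np.zeros(N + 1, dtype=int)
    for t in range(steps):
        # Whites among the m balls drawn from each urn are hypergeometric.
        out_white = rng.hypergeometric(x, N - x, m)    # whites leaving urn I
        in_white = rng.hypergeometric(N - x, x, m)     # whites entering urn I
        x += in_white - out_white
        if t >= burn_in:
            counts[x] += 1
    return counts / counts[burn_in:].sum() if False else counts / counts.sum()

if __name__ == "__main__":
    emp = simulate()
    mean_state = (np.arange(emp.size) * emp).sum()
    print(f"empirical stationary mean number of white balls in urn I: {mean_state:.2f}")
```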
The quasi-stationary distribution of the closed stochastic SIS model changes drastically as the basic reproduction ratio R0 passes the deterministic threshold value 1. Approximations are derived that describe these changes. The quasi-stationary distribution is approximated by a geometric distribution (discrete!) for R0 distinctly below 1 and by a normal distribution (continuous!) for R0 distinctly above 1. Uniformity of the approximation with respect to R0 allows one to study the transition between these two extreme distributions. We also study the time to extinction and the invasion and persistence thresholds of the model.
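The following rough sketch (not from the paper) estimates the quasi-stationary distribution of the closed stochastic SIS model by simulation, restarting from a state drawn from the empirical history whenever the chain is absorbed at zero; it can be used to see the mode move from I = 1 when R0 is below 1 towards a value near N(1 − 1/R0) when R0 is above 1.

```python
# Estimate the quasi-stationary distribution of the closed stochastic SIS model
# with a resampling-on-absorption heuristic. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(3)

def sis_qsd(N=100, R0=1.5, gamma=1.0, steps=300_000):
    beta = R0 * gamma
    i = 1
    history = [1]
    occupation = np.zeros(N + 1)
    for _ in range(steps):
        up = beta * i * (N - i) / N       # infection rate
        down = gamma * i                  # recovery rate
        total = up + down
        occupation[i] += 1.0 / total      # expected holding time in state i
        i = i + 1 if rng.random() < up / total else i - 1
        if i == 0:
            i = history[rng.integers(len(history))]   # resample on absorption
        history.append(i)
    return occupation[1:] / occupation[1:].sum()

if __name__ == "__main__":
    for R0 in (0.8, 1.5):
        qsd = sis_qsd(R0=R0)
        mode = int(np.argmax(qsd)) + 1
        print(f"R0={R0}: mode of estimated quasi-stationary distribution at I={mode}")
```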
Consider the optimal control problem of leaving an interval (−a, a) in a limited playing time. In the discrete-time problem, a is a positive integer and the player's position is given by a simple random walk on the integers with initial position x. At each time instant, the player chooses a coin from a control set where the probability of returning heads depends on the current position and the remaining amount of playing time, and the player bets a unit value on the toss of the coin: heads returning +1 and tails −1. We discuss the optimal strategy for this discrete-time game. In the continuous-time problem the player chooses infinitesimal mean and infinitesimal variance parameters from a control set which may depend upon the player's position. The problem is to find optimal mean and variance parameters that maximize the probability of leaving the interval [−a, a] within a finite time T > 0.
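For the discrete-time game, the value function can be computed by backward induction. The sketch below is generic (the two-sided control set {0.4, 0.5, 0.6} and the boundary value are assumptions, not the paper's model): it returns the maximal escape probability and the coin achieving it at a given state, illustrating how the optimal choice depends on position and remaining time.

```python
# Backward induction for the maximal probability of leaving (-A, A) within a
# limited number of steps, when the heads probability can be chosen from a
# finite control set at each step.
from functools import lru_cache

A = 5                               # boundary: leave (-A, A)
CONTROL_SET = (0.4, 0.5, 0.6)       # assumed available coins

@lru_cache(maxsize=None)
def escape_prob(x, t):
    """Maximal probability of reaching |x| >= A within t remaining steps."""
    if abs(x) >= A:
        return 1.0
    if t == 0:
        return 0.0
    return max(p * escape_prob(x + 1, t - 1) + (1 - p) * escape_prob(x - 1, t - 1)
               for p in CONTROL_SET)

def best_coin(x, t):
    """A coin achieving the maximum at state (x, t)."""
    return max(CONTROL_SET,
               key=lambda p: p * escape_prob(x + 1, t - 1)
                             + (1 - p) * escape_prob(x - 1, t - 1))

if __name__ == "__main__":
    print("P(escape) from x=0 with T=20 steps:", round(escape_prob(0, 20), 4))
    print("optimal coin at x=+2:", best_coin(2, 20), " at x=-2:", best_coin(-2, 20))
```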
We examine the existence of limiting behavior, or stability, for storage models with shot noise input and general release rules. The shot noise feature of the input process allows the individual inputs to gradually enter the store.
We first show that a store under the unit release rule is stable if and only if the traffic intensity is less than one; this extends the classic result of Prabhu (1980) to the case of shot noise input. The stability of the unit release rule store and various stochastic orderings are then used to derive a sufficient condition for a store with a general release rule to be stable. Finally, we show that when restricted to a compact state space, our storage model is always stable.
An important component of the paper is the methodology employed: coupling and stochastic monotonicity play a key role in analyzing the non-Markov processes encountered.
The paper is concerned with the distribution of the level N at which a counting process trajectory first crosses a lower boundary. Compound and simple Poisson or binomial processes, gamma renewal processes, and finally birth processes are considered. In the simple Poisson case, expressing the exact distribution of N requires the use of a classical family of Abel–Gontcharoff polynomials. For the other cases, convenient extensions of these polynomials into pseudopolynomials with a similar structure are necessary. Since such extensions are applicable to other fields of applied probability, the central part of the paper is devoted to constructing these pseudopolynomials in a rather general framework.
We establish a necessary condition for any importance sampling scheme to give bounded relative error when estimating a performance measure of a highly reliable Markovian system. Also, a class of importance sampling methods is defined for which we prove a necessary and sufficient condition for bounded relative error for the performance measure estimator. This class of probability measures includes all of the currently existing failure biasing methods in the literature. Similar conditions for derivative estimators are established.
The paper is first concerned with a comparison of the partial sums associated with two sequences of n exchangeable Bernoulli random variables. It then considers a situation where such partial sums are obtained through an iterative procedure of branching type stopped at the time of first passage through a linearly decreasing upper barrier. These comparison results are illustrated with applications to certain urn models, sampling schemes and epidemic processes. A key tool is a non-standard hierarchical class of stochastic orderings between discrete random variables valued in {0, 1, …, n}.
The distributions of nearest neighbour random walks on hypercubes in continuous time t ≥ 0 can be expressed in terms of binomial distributions; their limit behaviour for t, N → ∞ is well known. We study here these random walks in discrete time and derive explicit bounds for the deviation of their distribution from their counterparts in continuous time with respect to the total variation norm. Our results lead to a recent asymptotic result of Diaconis, Graham and Morrison for the deviation from uniformity for N → ∞. Our proofs use Krawtchouk polynomials and a version of the Diaconis–Shahshahani upper bound lemma. We also apply our methods to certain birth-and-death random walks associated with Krawtchouk polynomials.
We derive the limit behaviour of the distribution tail of the global maximum of a critical Galton–Watson process and also of the expectations of partial maxima of the process, when the offspring law belongs to the domain of attraction of a stable law. Thus the Lindvall (1976) and Athreya (1988) results are extended to the infinite variance case. It is shown that in the general case these two asymptotics are closely related to each other, and the latter follows readily from the former. We also discuss a related problem from the theory of general branching processes.
In this paper, transient characteristics related to excursions of the occupation process of M/M/∞ queues are studied, when the excursion level is large and close to the mean offered load. We show that the classical diffusion approximation by an Ornstein–Uhlenbeck (OU) process captures well the average values of the transient variables considered, while the asymptotic distributions of these variables depart from those corresponding to the OU process. They exhibit, however, equivalent tail behaviour at infinity and numerical evidence shows that they are amazingly close to each other over the whole half-line.
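To illustrate the diffusion approximation mentioned above (a sketch under assumed parameters, not the paper's analysis), one can simulate the M/M/∞ occupation process and check that the centred and scaled state (X − ρ)/√ρ has mean and variance close to those of the standard normal stationary law of the approximating OU process.

```python
# Gillespie-style simulation of the M/M/infinity occupation process; compare the
# centred and scaled occupation distribution with the OU stationary limit.
import numpy as np

rng = np.random.default_rng(4)

def simulate_mm_inf(lam=50.0, mu=1.0, horizon=2_000.0):
    """Return time-weighted occupation of each state and the offered load rho."""
    rho = lam / mu
    x = int(rho)
    t = 0.0
    times = {}
    while t < horizon:
        rate = lam + mu * x
        dt = rng.exponential(1.0 / rate)
        times[x] = times.get(x, 0.0) + dt
        t += dt
        x += 1 if rng.random() < lam / rate else -1
    return times, rho

if __name__ == "__main__":
    times, rho = simulate_mm_inf()
    states = np.array(sorted(times))
    weights = np.array([times[s] for s in states])
    weights /= weights.sum()
    z = (states - rho) / np.sqrt(rho)
    mean = float((z * weights).sum())
    var = float(((z - mean) ** 2 * weights).sum())
    print(f"scaled occupation: mean {mean:+.3f}, variance {var:.3f} (OU limit: 0, 1)")
```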
Consider a queueing network with batch services at each node. The service time of a batch is exponential and the batch size at each node is arbitrarily distributed. At a service completion the entire batch coalesces into a single unit, and it either leaves the system or goes to another node according to given routing probabilities. When all batch sizes are equal to one, the network reduces to a classical Jackson network. Our main result is that this network possesses a product form solution with a special type of traffic equations that depend on the batch size distribution at each node. The product form solution satisfies a particular type of partial balance equation. The result is further generalized to the non-ergodic case. For this case the bottleneck nodes and the maximal subnetwork that achieves steady state are determined. The existence of a unique solution is shown and stability conditions are established. Our results can be used, for example, in the analysis of production systems with assembly and subassembly processes.
We explore a dynamic approach to the problems of call admission and resource allocation for communication networks with connections that are differentiated by their quality of service requirements. In a dynamic approach, the amount of spare resources is estimated on-line from feedback provided by the network's quality of service monitoring mechanism. The schemes we propose remove the dependence on accurate traffic models and thus simplify the task, required of network users, of supplying traffic statistics. In this paper we present two dynamic algorithms. The objective of these algorithms is to find the minimum bandwidth necessary to satisfy a cell loss probability constraint at an asynchronous transfer mode (ATM) switch. We show that in both schemes the bandwidth chosen by the algorithm approaches the optimal value almost surely. Furthermore, in the second scheme, which determines the point closest to the optimal bandwidth from a finite number of choices, the expected learning time is finite.