This paper aims to show how certain known martingales for epidemic models may be derived using general techniques from the theory of stochastic integration, and hence to extend the allowable infection and removal rate functions of the model as far as possible. Denoting by x and y the numbers of susceptible and infective individuals in the population, we assume that new infections occur at rate β_{xy}xy and infectives are removed at rate γ_{xy}y, where the ratio β_{xy}/γ_{xy} can be written in the form q(x+y)/(xp(x)) for appropriate functions p and q. Under this condition, we find equations giving the distribution of the number of susceptibles remaining in the population at appropriately defined stopping times. Using results on Abel–Gontcharoff pseudopolynomials we also derive an expression for the expectation of any function of the number of susceptibles at these times, as well as considering certain integrals over the course of the epidemic. Finally, some simple examples are given to illustrate our results.
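As a minimal illustration of the model dynamics (not taken from the paper), the following Gillespie-style simulation runs the special case of constant rate parameters, β_{xy} ≡ β and γ_{xy} ≡ γ (the classical general epidemic); the parameter values are arbitrary.

```python
import random

def simulate_epidemic(x, y, beta=0.002, gamma=0.5, seed=1):
    """Gillespie simulation of a general stochastic epidemic:
    infections at rate beta*x*y, removals at rate gamma*y.
    Returns the number of susceptibles left when the epidemic ends."""
    rng = random.Random(seed)
    while y > 0:
        infection_rate = beta * x * y   # special case beta_{xy} = beta, constant
        removal_rate = gamma * y        # special case gamma_{xy} = gamma, constant
        total = infection_rate + removal_rate
        if rng.random() < infection_rate / total:
            x, y = x - 1, y + 1         # a susceptible becomes infective
        else:
            y -= 1                      # an infective is removed
    return x

print(simulate_epidemic(x=100, y=5))
```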
We study the records and related variables for sequences with linear trends. We discuss the properties of the asymptotic rate function and relationships between the distribution of the long-term maxima in the sequence and that of a particular observation, including two characterization-type results. We also consider certain Markov chains related to the process of records and prove limit theorems for them, including the ergodicity theorem in the regular case (convergence rates are given under additional assumptions), and derive the limiting distributions for the inter-record times and increments of records.
Repetitive Markov processes form a class of processes where the generator matrix has a particular repeating form. Many queueing models fall into this category, such as M/M/1 queues, quasi-birth-and-death processes, and processes with M/G/1 or GI/M/1 generator matrices. In this paper, a new iterative scheme is proposed for computing the stationary probabilities of such processes. An infinite-state process is approximated by a finite-state process by lumping an infinite number of states into a super-state. What we call the feedback rate, the conditional expected rate of flow from the super-state to the remaining states, given that the process is in the super-state, is approximated simultaneously with the steady-state probabilities. The method is theoretically developed and numerically tested for quasi-birth-and-death processes. It turns out that the new concept of the feedback rate can be effectively used in computing the stationary probabilities.
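To make the lumping idea concrete, here is a toy version for the M/M/1 queue, a sketch using our own fixed-point update for the feedback rate rather than the paper's scheme: states 0,…,N−1 are kept, all states ≥ N are lumped into one super-state, and the feedback rate r is updated from the computed tail ratio.

```python
import numpy as np

def lumped_mm1(lam, mu, N, iters=20):
    """Approximate M/M/1 stationary probabilities by truncating at N states
    and lumping all states >= N into one super-state. The feedback rate r
    (rate of flow from the super-state back into state N-1) is updated by
    a simple fixed point -- illustrative, not the paper's algorithm."""
    r = mu  # initial guess for the feedback rate
    for _ in range(iters):
        Q = np.zeros((N + 1, N + 1))        # states 0..N-1 plus super-state N
        for k in range(N - 1):
            Q[k, k + 1] = lam
            Q[k + 1, k] = mu
        Q[N - 1, N] = lam                   # flow into the super-state
        Q[N, N - 1] = r                     # feedback out of the super-state
        np.fill_diagonal(Q, -Q.sum(axis=1))
        A = np.vstack([Q.T, np.ones(N + 1)])         # pi Q = 0, sum(pi) = 1
        b = np.zeros(N + 2); b[-1] = 1.0
        pi, *_ = np.linalg.lstsq(A, b, rcond=None)
        rho_hat = pi[N - 1] / pi[N - 2]     # tail assumed geometric
        r = mu * (1.0 - rho_hat)            # P(X = N | X >= N) = 1 - rho
    return pi, r

pi, r = lumped_mm1(lam=0.8, mu=1.0, N=10)
print(pi[:3], r)   # compare with exact pi_k = 0.2 * 0.8**k and r = 0.2
```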
We obtain characterizations of densities on the real line and provide solutions to stochastic equations using the Gibbs sampler. Particular stochastic equations considered are of the type X =_d B(X + C) and X =_d BX + C, where =_d denotes equality in distribution.
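As a quick illustration of such a fixed-point equation (forward iteration of the stochastic recursion, not the Gibbs sampler used in the paper), with the illustrative choices B ~ Uniform(0,1) and C ~ Exp(1) the iterates X ← B(X + C) converge in distribution to a solution of X =_d B(X + C); taking expectations on both sides gives E[X] = 1 here.

```python
import random

def sample_fixed_point(n_iter=500, seed=0):
    """Iterate X <- B*(X + C) with independent B ~ Uniform(0,1), C ~ Exp(1).
    The iterates converge in distribution to the solution of
    X =_d B(X + C); the laws of B and C are illustrative choices."""
    rng = random.Random(seed)
    x = 0.0
    for _ in range(n_iter):
        b = rng.random()                 # B ~ Uniform(0, 1)
        c = rng.expovariate(1.0)         # C ~ Exp(1)
        x = b * (x + c)
    return x

samples = [sample_fixed_point(seed=s) for s in range(1000)]
print(sum(samples) / len(samples))       # Monte Carlo estimate of E[X] (= 1)
```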
We introduce a continuous-time Markov chain model for the evolution of microsatellites, simple sequence repeats in DNA. We prove the existence of a unique stationary distribution for our model, and fit the model to data from approximately 10^6 base pairs of DNA from fruit flies, mice, and humans. The slippage rates from the best fit for our model are consistent with experimental findings.
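A toy continuous-time slippage chain in this spirit can be simulated as follows; the rates below are illustrative placeholders, not the paper's model or its fitted rates, and the loss rate is set slightly above the gain rate so that repeat lengths do not drift upwards.

```python
import random

def simulate_repeat_length(n0=10, t_max=1000.0, a=0.010, b=0.012, seed=0):
    """Toy continuous-time chain for microsatellite length: a repeat of
    length n gains one unit at rate a*n and loses one at rate b*n (n > 1),
    reflecting at n = 1. All rates are illustrative assumptions."""
    rng = random.Random(seed)
    n, t = n0, 0.0
    while True:
        up = a * n
        down = b * n if n > 1 else 0.0   # no loss below length 1
        total = up + down
        t += rng.expovariate(total)      # exponential holding time
        if t > t_max:
            return n
        n += 1 if rng.random() < up / total else -1

print(simulate_repeat_length())
```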
An open hierarchical (manpower) system divided into a totally ordered set of k grades is discussed. The transitions occur only from one grade to the next or to an additional (k+1)th grade representing the external environment of the system. The model used to describe the dynamics of the system is a continuous-time homogeneous Markov chain with k+1 states and infinitesimal generator R = (r_{ij}) satisfying r_{ij} = 0 if i > j or i + 1 < j ≤ k (i, j = 1,…,k+1), the transition matrix P between times 0 and 1 being P = exp(R). In this paper, two-wave panel data about the hierarchical system are considered, together with the resulting fact that, in general, the maximum-likelihood estimate of the transition matrix cannot be written as the exponential of an infinitesimal generator R of the form described above. The purpose of this paper is to investigate when this failure can be ascribed to the effect of sampling variability.
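A basic embeddability check of this kind (a diagnostic sketch, not the paper's sampling-variability analysis) takes the principal matrix logarithm of an estimated transition matrix and tests whether it is a valid generator; the example matrix below is made up.

```python
import numpy as np
from scipy.linalg import logm

def generator_if_embeddable(P, tol=1e-8):
    """Try to recover an infinitesimal generator R with P = exp(R).
    Returns R if the principal matrix logarithm of P is a valid generator
    (real, non-negative off-diagonal entries, zero row sums), else None."""
    R = logm(P)
    if np.max(np.abs(R.imag)) > tol:
        return None                      # complex logarithm: not embeddable
    R = R.real
    off = R - np.diag(np.diag(R))
    if off.min() < -tol or np.max(np.abs(R.sum(axis=1))) > tol:
        return None                      # violates generator constraints
    return R

P = np.array([[0.9, 0.08, 0.02],        # made-up 3-state example:
              [0.0, 0.85, 0.15],        # two grades plus the external
              [0.0, 0.00, 1.00]])       # (k+1)th "environment" state
print(generator_if_embeddable(P))
```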
Markovian algorithms for estimating the global maximum or minimum of real-valued functions defined on some domain Ω ⊂ ℝ^d are presented. Conditions on the search schemes that preserve the asymptotic distribution are derived. Global and local search schemes satisfying these conditions are analysed and shown to yield sharper confidence intervals when compared to the i.i.d. case.
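For flavour, here is a toy Markovian search mixing global (fresh i.i.d. uniform) and local (perturb the current best) moves; it is illustrative only and is not one of the schemes analysed in the paper.

```python
import random

def random_search(f, dim, n, step=0.1, seed=0):
    """Markovian search for the maximum of f on [0,1]^dim: with probability
    1/2 sample a fresh uniform point (global move), otherwise perturb the
    current best point (local move). A toy scheme in the spirit of the
    paper, not one of the schemes it analyses."""
    rng = random.Random(seed)
    best_x = [rng.random() for _ in range(dim)]
    best = f(best_x)
    for _ in range(n):
        if rng.random() < 0.5:           # local move around the record
            x = [min(1.0, max(0.0, xi + rng.gauss(0, step))) for xi in best_x]
        else:                            # global i.i.d. move
            x = [rng.random() for _ in range(dim)]
        fx = f(x)
        if fx > best:
            best, best_x = fx, x
    return best

f = lambda x: -sum((xi - 0.3) ** 2 for xi in x)  # maximum 0 at (0.3,...,0.3)
print(random_search(f, dim=3, n=5000))
```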
We consider a singularly perturbed (finite state) Markov chain and provide a complete characterization of the fundamental matrix. In particular, we obtain a formula for the regular part that is simpler than the one previously obtained by Schweitzer, and the singular part is obtained via a reduction process similar to Delebecque's reduction for the stationary distribution. In contrast to previous approaches, ours works with aggregate Markov chains of much smaller dimension than the original chain, an essential feature for practical computation. An application to mean first-passage times is also presented.
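For reference, the objects involved can be computed by brute force for a small nearly uncoupled chain (this direct computation is exactly what the paper's aggregation approach avoids for large chains); the perturbed matrix below is a made-up example.

```python
import numpy as np

def fundamental_matrix(P):
    """Direct computation of the fundamental matrix Z = (I - P + Pi)^{-1}
    of an ergodic chain, where each row of Pi is the stationary
    distribution, plus the mean first-passage times
    m_ij = (z_jj - z_ij) / pi_j (for i != j)."""
    n = P.shape[0]
    A = np.vstack([(np.eye(n) - P).T, np.ones(n)])   # pi(I - P) = 0, sum = 1
    b = np.zeros(n + 1); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    Pi = np.outer(np.ones(n), pi)
    Z = np.linalg.inv(np.eye(n) - P + Pi)
    M = (np.diag(Z)[None, :] - Z) / pi[None, :]
    return Z, M

eps = 0.01                                # singular perturbation parameter
P = np.array([[1 - eps, eps, 0.0],        # nearly uncoupled 3-state chain
              [eps, 1 - 2 * eps, eps],
              [0.0, eps, 1 - eps]])
Z, M = fundamental_matrix(P)
print(M)                                  # mean first-passage times blow up as eps -> 0
```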
This paper is concerned with the standard bivariate death process as well as with some Markovian modifications and extensions of the process that are of interest especially in epidemic modeling. A new and powerful approach is developed that allows us to obtain the exact distribution of the population state at any point in time, and to highlight the actual nature of the solution. Firstly, using a martingale technique, a central system of relations with two indices for the temporal state distribution will be derived. A remarkable property is that for all the models under consideration, these relations exhibit a similar algebraic structure. Then, this structure will be exploited by having recourse to a theory of Abel–Gontcharoff pseudopolynomials with two indices. This theory generalizes the univariate case examined in a preceding paper and is briefly introduced in the Appendix.
The number Y_n of offspring of the most prolific individual in the nth generation of a Bienaymé–Galton–Watson process is studied. The asymptotic behaviour of Y_n as n → ∞ may be viewed as an extreme value problem for i.i.d. random variables with random sample size. Assuming that the offspring mean is finite, limit theorems for both Y_n and EY_n are obtained using some convergence results for branching processes as well as a transfer limit lemma for maxima. Subcritical, critical and supercritical branching processes are considered separately.
A trivariate equality of laws given by Doney and Yor is extended to a quadrivariate version. Some related explicit density calculations are carried out. It is shown that a bivariate case which has been used to establish the Dassios–Port–Wendel equality of laws is in fact equivalent to it.
Kohonen self-organizing interval maps are considered. In this model a linear graph is embedded randomly into the unit interval. At each time a point is chosen randomly according to a fixed distribution. The nearest vertex and some of its nearby neighbors are moved closer to the point. These models have been proposed as models of learning in the auditory cortex. The models possess not only the structure of a Markov chain, but also the added structure of a random dynamical system. This structure is used to show that, for a large class of these models, the initial conditions are unimportant in a strong sense and only the dynamics govern the future. A contractive condition is proven in spite of the fact that the maps are not continuous. This, in turn, shows that the Markov chain is uniformly ergodic.
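The update rule itself is simple; the sketch below implements a one-neighbor version on [0,1] with uniform stimuli (the learning rate, neighborhood size, and stimulus law are illustrative choices, not assumptions of the paper).

```python
import random

def kohonen_interval(n=20, steps=10_000, eps=0.05, seed=0):
    """Kohonen self-organizing map on [0,1]: a linear graph of n vertices
    is embedded randomly, and at each step the vertex nearest the stimulus
    and its graph neighbors move a fraction eps towards the stimulus."""
    rng = random.Random(seed)
    w = [rng.random() for _ in range(n)]          # random initial embedding
    for _ in range(steps):
        u = rng.random()                          # stimulus ~ Uniform(0, 1)
        i = min(range(n), key=lambda j: abs(w[j] - u))   # winning vertex
        for j in (i - 1, i, i + 1):               # winner and graph neighbors
            if 0 <= j < n:
                w[j] += eps * (u - w[j])
        # the update uses graph neighbors, so the map need not stay monotone,
        # and as a function of w it is not continuous (the winner can jump)
    return w

print(kohonen_interval())
```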
Classical results describe the asymptotic behaviour of a Galton–Watson branching process conditioned on non-extinction. We give new proofs of limit theorems in critical and subcritical cases. The proofs are based on the representation of conditioned Galton–Watson generation sizes as a sum of independent increments which is derived from the decomposition of the conditioned Galton–Watson family tree along the line of descent of the left-most particle.
A population-size-dependent branching process {Z_n} is considered, where the population's evolution is controlled by a Markovian environment process {ξ_n}. For this model, let m_{k,θ} and σ²_{k,θ} be the mean and the variance, respectively, of the offspring distribution when the population size is k and the environment is θ. Let B = {ω : Z_n(ω) = 0 for some n} and q = P(B). The asymptotic behaviour of lim_n Z_n is studied in the case where sup_θ |m_{k,θ} − m_θ| → 0 as k → ∞ for some real numbers {m_θ} such that inf_θ m_θ > 1. When the environmental sequence {ξ_n} is an irreducible positive recurrent Markov chain (in particular, when its state space is finite), certain extinction (q = 1) and non-certain extinction (q < 1) are studied.
We study a variety of optimal investment problems for objectives related to attaining goals by a fixed terminal time. We start by finding the policy that maximizes the probability of reaching a given wealth level by a given fixed terminal time, for the case where an investor can allocate his wealth at any time between n + 1 investment opportunities: n risky stocks, as well as a risk-free asset that has a positive return. This generalizes results recently obtained by Kulldorff and Heath for the case of a single investment opportunity. We then use this to solve related problems for cases where the investor has an external source of income, and where the investor is interested solely in beating the return of a given stochastic benchmark, as is sometimes the case in institutional money management. One of the benchmarks we consider for this last problem is that of the return of the optimal growth policy, for which the resulting controlled process is a supermartingale. Nevertheless, we still find an optimal strategy. For the general case, we provide a thorough analysis of the optimal strategy, and obtain new insights into the behavior of the optimal policy. For one special case, namely that of a single stock with constant coefficients, the optimal policy is independent of the underlying drift. We explain this by exhibiting a correspondence between the probability maximizing results and the pricing and hedging of a particular derivative security, known as a digital or binary option. In fact, we show that for this case, the optimal policy to maximize the probability of reaching a given value of wealth by a predetermined time is equivalent to simply buying a European digital option with a particular strike price and payoff. A similar result holds for the general case, but with the stock replaced by a particular (index) portfolio, namely the optimal growth or log-optimal portfolio.
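The digital-option correspondence can be made concrete in the constant-coefficient Black–Scholes setting: the time-0 price of a cash-or-nothing call paying 1 when S_T > k is e^{-rt}Φ(d_2), and the probability-maximizing wealth process behaves like a position in such options. The sketch below prices the option; all parameter values are illustrative and the reading of the equivalence is schematic.

```python
from math import exp, log, sqrt, erf

def norm_cdf(x):
    """Standard normal distribution function Phi(x)."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def digital_call(s, k, r, sigma, t):
    """Black-Scholes price of a cash-or-nothing (digital) call paying 1
    if S_t > k: exp(-r t) * Phi(d2). Illustrates the derivative security
    appearing in the probability-maximizing correspondence."""
    d2 = (log(s / k) + (r - 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    return exp(-r * t) * norm_cdf(d2)

# Schematically: an investor with wealth w targeting level b at horizon t
# holds w / digital_call(s, b, ...) digital options struck at b.
print(digital_call(s=100.0, k=110.0, r=0.03, sigma=0.2, t=1.0))
```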
In this paper we consider limit theorems for a random walk in a random environment, (X_n). Known results (recurrence-transience criteria, law of large numbers) in the case of independent environments are naturally extended to the case where the environments are only supposed to be stationary and ergodic. Furthermore, if ‘the fluctuations of the random transition probabilities around ½ are small’, we show that there exists an invariant probability measure for ‘the environments seen from the position of (X_n)’. In the case of uniquely ergodic (therefore non-independent) environments, this measure exists as soon as (X_n) is transient, so that the ‘slow diffusion phenomenon’ does not appear as it does in the independent case. Thus, under regularity conditions, we prove that, in this case, the random walk satisfies a central limit theorem for any fixed environment.
Well-known inequalities for the spectral gap of a discrete-time Markov chain, such as Poincaré's and Cheeger's inequalities, do not perform well if the transition graph of the Markov chain is strongly connected. For example, in the case of nearest-neighbour random walk on the n-dimensional cube, Poincaré's and Cheeger's inequalities are off by a factor of n. Using a coupling technique and a contraction principle, lower bounds on the spectral gap can be derived. Finally, we show that using the contraction principle yields a sharp estimate for nearest-neighbour random walk on the n-dimensional cube.
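The benchmark here is easy to verify numerically: the transition matrix of nearest-neighbour random walk on the n-cube has eigenvalues 1 − 2k/n (k = 0,…,n), so the spectral gap is exactly 2/n. A brute-force check for small n:

```python
import numpy as np

def hypercube_gap(n):
    """Spectral gap of nearest-neighbour random walk on the n-cube,
    computed by brute force on the 2^n x 2^n transition matrix;
    the known value is 2/n."""
    size = 1 << n
    P = np.zeros((size, size))
    for x in range(size):
        for b in range(n):
            P[x, x ^ (1 << b)] = 1.0 / n      # flip one coordinate
    eig = np.sort(np.linalg.eigvalsh(P))[::-1]  # P is symmetric
    return 1.0 - eig[1]

for n in (3, 4, 5):
    print(n, hypercube_gap(n), 2.0 / n)       # the two values agree
```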
Expressions for the multi-dimensional densities of Brownian excursion local time are derived by two different methods: a direct method based on Kac's formula for Brownian functionals and an indirect one based on a limit theorem for Galton–Watson trees.
A Markov decision model is considered for the control of a truncated general immigration process, which represents a pest population, by the introduction of total catastrophes. The optimality criterion is that of minimizing the expected long-run average cost per unit time. Firstly, a necessary and sufficient condition is found under which the policy of never controlling is optimal. If this condition fails, a parametric analysis, in which a fictitious parameter is varied over the entire real line, is used to establish the optimality of a control-limit policy. Furthermore, an efficient Markov decision algorithm operating on the class of control-limit policies is developed for the computation of the optimal policy.
Conditions are derived for the components of the normed limit of a multi-type branching process with varying environments to be continuous on (0, ∞). The main tool is an inequality for the concentration function of sums of independent random variables, due originally to Petrov. Using this, we show that if there is a discontinuity present, then a particular linear combination of the population types must converge to a non-random constant (Equation (1)). Ensuring this cannot happen provides the desired continuity conditions.