Let {ξn, n ≥ 1} be a sequence of independent real random variables, where ξn, n ≥ 2, are identically distributed with common distribution function F and ξ1 has an arbitrary distribution. Define Xn+1 = k max(Xn, ξn+1), Yn+1 = max(Yn, ξn+1) − c, Un+1 = l min(Un, ξn+1), Vn+1 = min(Vn, ξn+1) + c for n ≥ 1, with 0 < k < 1, l > 1, 0 < c < ∞, and X1 = Y1 = U1 = V1 = ξ1. We establish conditions under which the limit law of max(X1, …, Xn) coincides with that of max(ξ2, …, ξn+1) when both are appropriately normed. A similar exercise is carried out for the extreme statistics max(Y1, …, Yn), min(U1, …, Un) and min(V1, …, Vn).
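The four recursions above are straightforward to simulate. A minimal Python sketch, assuming standard-exponential ξn and illustrative parameter values k = 0.5, l = 2, c = 1 (these choices are not taken from the abstract):

```python
import random

def extreme_recursions(xi, k=0.5, l=2.0, c=1.0):
    """Compute the four recursively defined sequences:
    X_{n+1} = k*max(X_n, xi_{n+1}),  Y_{n+1} = max(Y_n, xi_{n+1}) - c,
    U_{n+1} = l*min(U_n, xi_{n+1}),  V_{n+1} = min(V_n, xi_{n+1}) + c,
    with X_1 = Y_1 = U_1 = V_1 = xi_1."""
    X, Y, U, V = [xi[0]], [xi[0]], [xi[0]], [xi[0]]
    for x in xi[1:]:
        X.append(k * max(X[-1], x))
        Y.append(max(Y[-1], x) - c)
        U.append(l * min(U[-1], x))
        V.append(min(V[-1], x) + c)
    return X, Y, U, V

random.seed(0)
xi = [random.expovariate(1.0) for _ in range(10000)]
X, Y, U, V = extreme_recursions(xi)
# The abstract compares max(X_1, ..., X_n) with max(xi_2, ..., xi_{n+1});
# here we simply print both running extremes for inspection.
print(max(X), max(xi[1:]))
```

Note that since 0 < k < 1, an easy induction gives X_n ≤ max(ξ1, …, ξn), so the running maximum of the X-sequence never exceeds that of the underlying ξ-sequence.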
A problem of regrinding and recycling worn train wheels leads to a Markov population process with distinctive properties, including a product-form equilibrium distribution. A convenient framework for analyzing this process is via the notion of dynamic reversal, a natural extension of ordinary (time) reversal. The dynamically reversed process is of the same type as the original process, which allows a simple derivation of some important properties. The process seems not to belong to any class of Markov processes for which stationary distributions are known.
We derive two kinds of rate conservation laws, called TRCLs (time-dependent rate conservation laws), for describing the time-dependent behavior of a process defined with a stationary marked point process and starting at time 0. It is shown that TRCLs are useful for studying the transient behavior of risk and storage processes with stationary claim and supply processes and with general premium and release rates, respectively. Detailed discussions are given of the severity of the risk process, and of the workload process of a single-server queue.
The diffusions on the shape and size-and-shape spaces induced by Brownian motions on the pre-size-and-shape spaces have been investigated in several papers. We here address the dual problem: the character of the diffusions on the pre-shape and pre-size-and-shape spaces which induce Brownian motions on the shape and size-and-shape spaces. In particular, we show that the shape and size-and-shape spaces for k labelled points in ℝm are stochastically complete if k > m, and obtain the heat kernels of certain diffusions which induce Brownian motions on the size-and-shape spaces.
Given a family of Markov chains with a single recurrent class, we present a potential application of Schweitzer's exact formula relating the steady-state probability and fundamental matrices of any two chains in the family. We propose a new policy iteration scheme for Markov decision processes where in contrast to policy iteration, the new criterion for selecting an action ensures the maximal one-step average cost improvement. Its computational complexity and storage requirement are analysed.
We describe a computational procedure for evaluating the quasi-stationary distributions of a continuous-time Markov chain. Our method, which is an ‘iterative version’ of Arnoldi's algorithm, is appropriate for dealing with cases where the matrix of transition rates is large and sparse, but does not exhibit a banded structure which might otherwise be usefully exploited. We illustrate the method with reference to an epidemic model and we compare the computed quasi-stationary distribution with an appropriate diffusion approximation.
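As a toy illustration of the quantity being computed: the quasi-stationary distribution is the left eigenvector of the transient sub-generator associated with the decay parameter. The sketch below uses a plain power-type iteration (not the Arnoldi-based scheme of the paper, which targets large sparse matrices), on an invented small birth–death chain absorbing at 0:

```python
def quasi_stationary(Q, h=0.01, tol=1e-12, max_iter=100000):
    """Power-type iteration for the quasi-stationary distribution of an
    absorbing CTMC: the probability vector m on the transient states with
    m Q = -a m for the decay rate a > 0. We iterate m <- m (I + h*Q),
    renormalising each step; h is small enough that I + h*Q is
    non-negative. A simple stand-in for an Arnoldi-based solver."""
    n = len(Q)
    m = [1.0 / n] * n
    for _ in range(max_iter):
        new = [m[j] + h * sum(m[i] * Q[i][j] for i in range(n))
               for j in range(n)]
        s = sum(new)
        new = [x / s for x in new]
        if max(abs(a - b) for a, b in zip(new, m)) < tol:
            return new
        m = new
    return m

# Transient part of a linear birth-death chain on {1, 2, 3}, absorbing
# at 0: birth rate i*b (capped at state 3), death rate i*d.
b, d = 0.5, 1.0
Q = [[-(b + d),       b,        0.0],
     [2 * d,  -(2 * b + 2 * d), 2 * b],
     [0.0,            3 * d,   -3 * d]]
qsd = quasi_stationary(Q)
```

The returned vector sums to 1, and the defining relation m Q = −a m can be checked numerically with a = −Σj (mQ)j.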
A rumour model due to Maki and Thompson (1973) is slightly modified to incorporate a continuous-time random contact process and varying individual behaviours towards the rumour. Two important measures of the final extent of the rumour are the ultimate number of people who have heard the rumour and the total personal time units during which the rumour is spread. Our purpose in this note is to derive the exact joint distribution of these two statistics. This will be done by constructing a family of martingales for the rumour process and then using a particular family of Gontcharoff polynomials.
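The two statistics can be estimated by direct Monte Carlo simulation of the standard Maki–Thompson dynamics (a numerical sketch only, not the martingale/Gontcharoff analysis of the note; the unit contact rate is an assumed normalisation):

```python
import random

def maki_thompson(N, seed=None):
    """Simulate the Maki-Thompson rumour among N ignorants plus 1 initial
    spreader. Each spreader contacts a uniformly chosen other individual
    at rate 1; an ignorant contact becomes a spreader, while contacting a
    spreader or stifler turns the *initiating* spreader into a stifler.
    Returns (ultimate number who heard the rumour,
             total personal spreading time)."""
    rng = random.Random(seed)
    ignorants, spreaders, stiflers = N, 1, 0
    spread_time = 0.0
    while spreaders > 0:
        dt = rng.expovariate(spreaders)   # time to the next contact
        spread_time += spreaders * dt     # personal time units of spreading
        # contacted individual chosen uniformly among the other N people
        u = rng.random() * (ignorants + spreaders - 1 + stiflers)
        if u < ignorants:
            ignorants -= 1; spreaders += 1
        else:
            spreaders -= 1; stiflers += 1
    return N + 1 - ignorants, spread_time
```

For large N the proportion who ever hear the rumour concentrates near 0.797, the classical Maki–Thompson value, which gives a quick sanity check on the simulation.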
Through the study of a simple embedded martingale we obtain an extension of the Kesten–Stigum theorem and prove a central limit theorem for controlled Galton–Watson processes.
Exact and ordinary lumpability in finite Markov chains is considered. Both concepts naturally define an aggregation of the Markov chain, yielding an aggregated chain that allows the exact determination of several stationary and transient results for the original chain. We show which quantities can be determined without error from the aggregated process and describe methods to calculate bounds on the remaining results. Furthermore, the concept of lumpability is extended to near lumpability, yielding approximate aggregation.
We consider the problem of conditioning a non-explosive birth and death process to remain positive until time T, and consider weak convergence of this conditional process as T → ∞. By a suitable almost sure construction we prove weak convergence. The almost sure construction used is of independent interest but relies heavily on the strong monotonic properties of birth and death processes.
We show that the one-dimensional self-organizing Kohonen algorithm (with zero or two neighbours and constant step ε) is a Doeblin recurrent Markov chain provided that the stimuli distribution μ is lower bounded by the Lebesgue measure on some open set. Some properties of the invariant probability measure νε (support, absolute continuity, etc.) are established, as well as its asymptotic behaviour as ε ↓ 0 and its robustness with respect to μ.
A three-stage real diffusion process is used as a model of the T-cell count of an HIV-positive individual who is to receive antiviral therapy such as AZT. The ‘quality of life’ of such a person is identified as the sojourn time of the diffusion process above a certain critical T-cell level c. The time of introducing therapy is defined as the first-passage time of the diffusion to a prescribed level z > c. The distribution of the sojourn time of the diffusion above the level c depends on the level z at which therapy is initiated. The expected sojourn time is explicitly computed as a function of z for the particular diffusion process defining the model. There is a simple criterion for determining when to start therapy as early as possible.
Some exact and asymptotic joint distributions are given for certain random variables defined on the excursions of a simple symmetric random walk. We derive appropriate recursion formulas and apply them to get certain expressions for the joint generating or characteristic functions of the random variables.
We consider single-server queueing systems that are modulated by a discrete-time Markov chain on a countable state space. The underlying stochastic process is a Markov random walk (MRW) whose increments can be expressed as differences between service times and interarrival times. We derive the joint distributions of the waiting and idle times in the presence of the modulating Markov chain. Our approach is based on properties of the ladder sets associated with this MRW and its time-reversed counterpart. The special case of a Markov-modulated M/M/1 queueing system is then analysed and results analogous to the classical case are obtained.
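The waiting times in such a modulated queue satisfy the usual Lindley recursion driven by the MRW increments. A simulation sketch under assumed illustrative parameters (a two-phase chain with exponential interarrival and service times; this illustrates the MRW structure only, not the ladder-set analysis of the paper):

```python
import random

def mm_waiting_times(P, lam, mu, n, seed=0):
    """Lindley recursion W_{k+1} = max(W_k + S_k - A_{k+1}, 0) for a
    single-server queue modulated by a discrete-time Markov chain with
    transition matrix P: phase j gives exponential service rate mu[j]
    and exponential interarrival rate lam[j]."""
    rng = random.Random(seed)
    j = 0
    W = [0.0]
    for _ in range(n):
        S = rng.expovariate(mu[j])        # service time in current phase
        # step of the modulating chain
        u, acc = rng.random(), 0.0
        for next_j, pr in enumerate(P[j]):
            acc += pr
            if u < acc:
                j = next_j
                break
        A = rng.expovariate(lam[j])       # next interarrival time
        W.append(max(W[-1] + S - A, 0.0))
    return W
```

With mu[j] > lam[j] in every phase the queue is stable, and the empirical waiting-time distribution can be compared with the classical M/M/1 case when P is degenerate.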
An example of a simple integrable increasing process, possessing special properties relative to different filtrations, is examined from the point of view of probabilistic potential theory.
Estimation of parameters in diffusion models is usually handled by maximum likelihood and involves the calculation of a Radon–Nikodym derivative. This methodology is often not available when minor changes are made to the model. However, these complications can usually be avoided and results obtained under more general conditions using quasi-likelihood methods. The basic ideas are explained in this paper and are illustrated through discussion of the Cox–Ingersoll–Ross model and a modification of the Langevin model.
A measure-valued diffusion approximation to a two-level branching structure was introduced in Dawson and Hochberg (1991) where it was shown that conditioned on non-extinction at time t, and appropriately rescaled, the process converges as t → ∞ to a non-trivial limiting distribution. Here we discuss a different approach to conditioning on non-extinction (popular in one-level branching) and relate the two limiting distributions.
This paper is concerned with the problem of non-parametric estimation of the diffusion coefficient of a diffusion process on ℝ. The drift function can be unknown and is treated as a nuisance parameter. We propose an estimator of σ based on discrete observation of the diffusion X throughout a given finite time interval, and describe the asymptotic behaviour of this estimator as the step of discretization tends to zero. We prove consistency and asymptotic normality, the rate of convergence to the normal law being a random variable linked to the local time of the diffusion or to a suitable discrete approximation of it. This can also be interpreted as convergence to a mixture of normal laws.
The tail behaviour of the limit of the normalized population size in the simple supercritical branching process, W, is studied. Most of the results concern those cases when a tail of the distribution function of W decays exponentially quickly. In essence, knowledge of the behaviour of transforms can be combined with some ‘large-deviation’ theory to get detailed information on the oscillation of the distribution function of W near zero or at infinity. In particular we show how an old result of Harris (1948) on the asymptotics of the moment-generating function of W translates to tail behaviour.
A two-parameter Ehrenfest urn model is derived following the approach of Karlin and McGregor [7], in which Krawtchouk polynomials are used. Furthermore, formulas for the mean passage times of finite homogeneous Markov chains with general tridiagonal transition matrices are given. In the special case of the Ehrenfest model, these have a quite different structure compared with those of Blom [2] or Kemperman [9].
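For the classical (one-parameter) Ehrenfest chain, mean passage times between states can also be obtained from the generic birth–death first-step recursion, rather than the Krawtchouk-polynomial or tridiagonal closed forms discussed above; a minimal sketch:

```python
def ehrenfest_hitting_time(N, i, j):
    """Mean first-passage time from state i to state j (i < j) in the
    classical Ehrenfest urn on {0, ..., N}: from state k a ball moves in
    with probability (N-k)/N and out with probability k/N. Uses the
    standard birth-death recursion
        E_k[T_{k+1}] = 1/p_k + (q_k/p_k) * E_{k-1}[T_k],
    then sums the one-step expectations from i up to j."""
    assert 0 <= i < j <= N
    up = [0.0] * N          # up[k] = E_k[T_{k+1}]
    up[0] = 1.0             # p_0 = 1: state 0 always moves up
    for k in range(1, N):
        p, q = (N - k) / N, k / N
        up[k] = 1 / p + (q / p) * up[k - 1]
    return sum(up[i:j])
```

For example, with N = 2 a direct first-step calculation gives E_1[T_2] = 3 and E_0[T_2] = 4, which the recursion reproduces.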