We consider the following secretary problem: items ranked from 1 to n are randomly selected without replacement, one at a time, and to ‘win’ is to stop at an item whose overall rank is less than or equal to s, given only the relative ranks of the items drawn so far. Our method of analysis is based on the existence of an imbedded Markov chain and uses the technique of backwards induction. In principle the approach can be used to give exact results for any value of s; we do the working for s = 3. We give exact results for the optimal strategy, the probability of success and the distribution of T, the total number of draws when the optimal strategy is implemented. We also give some asymptotic results for these quantities as n → ∞.
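The backwards-induction machinery is easiest to see in the classical s = 1 case (winning means stopping at the overall best item). The sketch below implements only that special case, not the paper's s = 3 working; the recursion and threshold rule are standard, and the function name is ours.

```python
import numpy as np

def classical_secretary(n):
    """Backward induction for the classical (s = 1) secretary problem.

    f[i] = optimal win probability given the first i items were rejected.
    A 'candidate' at draw i (relatively best so far) is overall best
    with probability i/n.
    """
    f = np.zeros(n + 1)          # f[n] = 0: no items left
    for i in range(n, 0, -1):
        stop = i / n             # win prob. if we stop at a candidate now
        f[i - 1] = (1 / i) * max(stop, f[i]) + (1 - 1 / i) * f[i]
    # optimal rule: stop at the first candidate at draw i with i/n >= f[i]
    r = next(i for i in range(1, n + 1) if i / n >= f[i])
    return r, f[0]

r, p = classical_secretary(100)
print(r, round(p, 4))   # cutoff near n/e, win probability near 1/e
```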
When Siegrist (1989) derived an expression for the probability that player A wins a game that consists of a sequence of Bernoulli trials, the winner being the first player to win n trials and have a lead of at least k, he noted the desirability of giving a direct probabilistic argument. Here we present such an argument, and extend the domain of applicability of the results beyond Bernoulli trials, to include cases (such as the tie-break in lawn tennis) where the probability of winning each trial cannot reasonably be taken as constant, and cases where there is Markov dependence between successive trials.
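Win probabilities of this kind are easy to check by simulation. A minimal Monte Carlo sketch for the i.i.d. Bernoulli case only (parameter names are illustrative):

```python
import random

def p_A_wins(p, n, k, trials=200_000):
    """Estimate P(A wins): first player to reach n points with a lead >= k."""
    wins = 0
    for _ in range(trials):
        a = b = 0
        while True:
            if random.random() < p:
                a += 1
            else:
                b += 1
            if a >= n and a - b >= k:
                wins += 1
                break
            if b >= n and b - a >= k:
                break
    return wins / trials

# e.g. a tennis tie-break is roughly first to 7 points with a lead of 2
print(p_A_wins(0.55, 7, 2))
```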
We consider a dam in which the release rate depends on both the state and some modulating process. Conditions for the existence of a limiting distribution are established in terms of an associated risk process. The case where the release rate is the product of the state and the modulating process is given special attention; in particular, explicit formulas are obtained for Markov modulation with a finite state space.
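For intuition, the multiplicative case r(x, z) = xz can be simulated exactly: between compound-Poisson inputs the content decays exponentially at the current modulating rate. A sketch under assumed dynamics (two-state modulation, exponential jump sizes; all parameters illustrative):

```python
import math, random

def simulate_dam(T=1000.0, lam=1.0, mean_jump=1.0, switch=(0.5, 0.5), z=(0.5, 2.0)):
    """Dam with release rate x*z: content decays as x*exp(-z*dt) between events.
    Two-state modulating chain with switching rates `switch`."""
    t, x, s = 0.0, 0.0, 0
    samples = []
    while t < T:
        dt_in = random.expovariate(lam)          # time to next input jump
        dt_sw = random.expovariate(switch[s])    # time to next modulation switch
        dt = min(dt_in, dt_sw)
        x *= math.exp(-z[s] * dt)                # exact decay at rate z[s]
        t += dt
        if dt_in < dt_sw:
            x += random.expovariate(1.0 / mean_jump)   # input arrives
        else:
            s = 1 - s                            # modulating state flips
        samples.append(x)
    return samples

xs = simulate_dam()
print(sum(xs) / len(xs))   # crude estimate of the limiting mean content
```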
This paper is concerned with the study of death processes with time-homogeneous non-linear death rates. An explicit formula is obtained for the joint distribution of the state X_T and the variable ∫₀^T g(X_t) dt, where g is any given real function and T corresponds to some appropriate stopping time. This is achieved by constructing a family of martingales and then by using a particular family of Abel–Gontcharoff pseudopolynomials (the theory of which has been introduced in a companion paper) and related Abelian-type expansions. Moreover, the distribution of the first crossing level of such a death process through a general upper boundary is also evaluated in terms of pseudopolynomials of that kind. The flexibility of the methods developed makes the extension to multidimensional death processes straightforward.
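The pair (X_T, ∫₀^T g(X_t) dt) is easy to sample exactly, since a death process is piecewise constant: the integral is a sum of holding times weighted by g. A Monte Carlo sketch with a fixed-time horizon (the death rate, g, and all parameters are illustrative assumptions, not the paper's analytic formula):

```python
import random

def sample_joint(x0=10, T=1.0, rate=lambda x: 0.1 * x * x, g=lambda x: x):
    """One draw of (X_T, int_0^T g(X_t) dt) for a pure death process
    with nonlinear death rate `rate`, stopped at the fixed time T."""
    t, x, integral = 0.0, x0, 0.0
    while x > 0:
        hold = random.expovariate(rate(x))    # exponential holding time in x
        if t + hold >= T:                     # horizon reached before next death
            return x, integral + g(x) * (T - t)
        integral += g(x) * hold
        t += hold
        x -= 1                                # one death
    return 0, integral + g(0) * (T - t)       # absorbed at 0 before T

draws = [sample_joint() for _ in range(10_000)]
print(sum(d[1] for d in draws) / len(draws))  # estimate of E[∫₀ᵀ g(X_t) dt]
```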
We give criteria for the finiteness or infiniteness of the passage-time moments for continuous non-negative stochastic processes in terms of sub/supermartingale inequalities for powers of these processes. We apply these results to one-dimensional diffusions and also reflected Brownian motion in a wedge. The discrete-time analogue of this problem was studied previously by Lamperti and more recently by Aspandiiarov, Iasnogorodski and Menshikov [2]. Our results are continuous analogues of those in [2], but our proofs are direct and do not rely on approximation by discrete-time processes.
In this paper we consider phase-type distributions, their Laplace transforms, which are rational functions, and their representations, which are finite-state Markov chains with an absorbing state. We first prove that, in any representation, the minimal number of states which are visited before absorption is equal to the difference between the degrees of the denominator and the numerator in the Laplace transform of the distribution. As an application, we prove that when the Laplace transform has a denominator with n real poles and a numerator of degree at most one, the distribution has order n. We show that, in general, this result can be extended neither to the case where the numerator has degree two nor to the case of non-real poles.
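For a representation (α, T) with no mass at zero, the Laplace transform is the rational function α(sI − T)⁻¹t with exit vector t = −T1. A numerical sketch checking this against the known Erlang transform (λ/(λ + s))ⁿ (the representation below is the standard Erlang chain, chosen purely for illustration):

```python
import numpy as np

n, lam = 3, 2.0
# Erlang(n, lam) as a PH representation: pass through n states at rate lam
alpha = np.zeros(n); alpha[0] = 1.0
T = -lam * np.eye(n) + lam * np.eye(n, k=1)   # subgenerator
t = -T @ np.ones(n)                           # exit rate vector

def ph_laplace(s):
    return alpha @ np.linalg.solve(s * np.eye(n) - T, t)

for s in (0.5, 1.0, 3.0):
    print(ph_laplace(s), (lam / (lam + s)) ** n)   # the two columns agree
```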
In this paper we consider a class of reliability structures which can be efficiently described through (imbedded in) finite Markov chains. Some general results are provided for the reliability evaluation and generating functions of such systems. Finally, it is shown that a great variety of well-known reliability structures can be accommodated in this general framework, and certain properties of those structures are obtained by using their Markov chain imbedding description.
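A standard example of such an imbedding is the consecutive-k-out-of-n:F system, which fails iff k consecutive components fail: track the current run of failed components as a Markov chain with an absorbing 'failed' state, and read off reliability as the probability of non-absorption after n steps. A sketch with i.i.d. components (parameters illustrative):

```python
import numpy as np

def consecutive_k_out_of_n_F(n, k, p):
    """Reliability of a consecutive-k-out-of-n:F system, component working
    probability p, via Markov chain imbedding.
    State j = current run of failed components (j = k is absorbing)."""
    q = 1 - p
    M = np.zeros((k + 1, k + 1))
    for j in range(k):
        M[j, 0] = p        # component works: the failure run resets
        M[j, j + 1] = q    # component fails: the run grows
    M[k, k] = 1.0          # absorbed: the system has failed
    dist = np.zeros(k + 1); dist[0] = 1.0
    for _ in range(n):
        dist = dist @ M
    return 1.0 - dist[k]

print(consecutive_k_out_of_n_F(n=10, k=2, p=0.9))
```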
A planar graph contains faces which can be classified into types depending on the number of edges on the face boundaries. Under various natural rules for randomly dividing faces by the addition of new edges, we investigate the limiting distribution of face type as the number of divisions increases.
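One such rule can be mimicked in a few lines. Assume (purely for illustration) that a face is chosen uniformly at random and split by a new edge whose endpoints divide its m boundary edges into groups of m₁ and m − m₁, with m₁ uniform; since the new edge borders both parts, the offspring faces have m₁ + 1 and m − m₁ + 1 edges. Tracking face sizes alone is enough for the limiting face-type frequencies:

```python
import random
from collections import Counter

def divide_faces(steps=100_000, start=4):
    """Track face sizes only: splitting an m-edge face by a new edge
    yields faces with m1+1 and m-m1+1 edges (the new edge borders both)."""
    faces = [start]
    for _ in range(steps):
        i = random.randrange(len(faces))       # uniformly chosen face
        m = faces[i]
        m1 = random.randint(1, m - 1)          # uniform split of the boundary
        faces[i] = m1 + 1
        faces.append(m - m1 + 1)
    return Counter(faces)

counts = divide_faces()
total = sum(counts.values())
for size in sorted(counts)[:6]:
    print(size, counts[size] / total)          # empirical face-type frequencies
```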
Markov chain processes are becoming increasingly popular as a means of modelling various phenomena in different disciplines. For example, a new approach to the investigation of the electrical activity of molecular structures known as ion channels is to analyse raw digitized current recordings using Markov chain models. An outstanding question which arises with the application of such models is how to determine the number of states required for the Markov chain to characterize the observed process. In this paper we derive a realization theorem showing that observations on a finite state Markov chain embedded in continuous noise can be synthesized as values obtained from an autoregressive moving-average data generating mechanism. We then use this realization result to motivate the construction of a procedure for identifying the state dimension of the hidden Markov chain. The identification technique is based on a new approach to the estimation of the order of an autoregressive moving-average process. Conditions for the method to produce strongly consistent estimates of the state dimension are given. The asymptotic distribution of the statistic underlying the identification process is also presented and shown to yield critical values commensurate with the requirements for strong consistency.
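The flavour of the realization result can be seen in the simplest case: a two-state chain observed in white noise has autocovariances γ(k) = c·ρᵏ for k ≥ 1 (ρ the chain's second eigenvalue) but an inflated γ(0), which is exactly the autocovariance signature of an ARMA(1, 1) process. A simulation sketch (all parameters illustrative; this is not the authors' identification statistic):

```python
import numpy as np

rng = np.random.default_rng(1)
P = np.array([[0.9, 0.1], [0.2, 0.8]])       # transition matrix; rho = 0.7
levels, sigma, T = np.array([0.0, 1.0]), 0.5, 200_000

s = np.empty(T, dtype=int); s[0] = 0
for t in range(1, T):                        # simulate the hidden chain
    s[t] = rng.choice(2, p=P[s[t - 1]])
y = levels[s] + sigma * rng.standard_normal(T)   # chain observed in noise

yc = y - y.mean()
gamma = [yc[:T - k] @ yc[k:] / T for k in range(4)]
print([gamma[k + 1] / gamma[k] for k in (1, 2)])  # both ratios ≈ rho = 0.7
```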
When analyzing the equilibrium behavior of M/G/1 type Markov chains by transform methods, restrictive hypotheses are often made to avoid technical problems that arise in applying results from complex analysis and linear algebra. It is shown that such restrictive assumptions are unnecessary, and an analysis of these chains using generating functions is given under only the natural hypotheses that first moments (or second moments in the null recurrent case) exist. The key to the analysis is the identification of an important subspace of the space of bounded solutions of the system of homogeneous vector-valued Wiener–Hopf equations associated with the chain. In particular, the linear equations in the boundary probabilities obtained from the transform method are shown to correspond to a spectral basis of the shift operator on this subspace. Necessary and sufficient conditions under which the chain is ergodic, null recurrent or transient are derived in terms of properties of the matrix-valued generating functions determined by transitions of the Markov chain. In the transient case, the Martin exit boundary is identified and shown to be associated with certain eigenvalues and vectors of one of these generating functions. An equilibrium analysis of the class of G/M/1 type Markov chains by similar methods is also presented.
We solve the Fokker–Planck equation for the Wiener process with drift in the presence of elastic boundaries and a fixed starting point. An explicit expression is obtained for the first passage density. The cases of purely absorbing and/or purely reflecting barriers arise for special choices of the parameters. These special cases are compared with results in Darling and Siegert [5] and Sweet and Hardin [15].
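In the purely absorbing special case the first passage density reduces to the classical inverse Gaussian form: for unit-variance Brownian motion with drift μ > 0 started at 0 and an absorbing barrier at a > 0, f(t) = a(2πt³)^(−1/2) exp(−(a − μt)²/(2t)). A quick numerical sanity check (this is the textbook special case, not the paper's elastic-boundary formula):

```python
import numpy as np

def fpt_density(t, a=1.0, mu=0.5):
    """Inverse Gaussian first-passage density: unit-variance BM with drift mu,
    started at 0, absorbing barrier at a."""
    return a / np.sqrt(2 * np.pi * t**3) * np.exp(-((a - mu * t) ** 2) / (2 * t))

t = np.linspace(1e-8, 200.0, 2_000_000)
dt = t[1] - t[0]
print(fpt_density(t).sum() * dt)   # ≈ 1: passage is certain when mu > 0
```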
The series expansion for the solution of the integral equation for the first-passage-time probability density function, obtained by resorting to the fixed point theorem, is used to achieve approximate evaluations for which error bounds are indicated. A different use of the fixed point theorem is then made to determine lower and upper bounds for asymptotic approximations, and to examine their range of validity.
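The fixed point (successive approximation) scheme is easy to demonstrate on a toy Volterra equation f(t) = h(t) + ∫₀ᵗ K(t, s) f(s) ds: iterate the right-hand side and watch the sup-norm difference contract geometrically. A generic sketch (the kernel, forcing term, and grid are illustrative, not the first-passage-time equation itself):

```python
import numpy as np

T, N = 1.0, 1000
t = np.linspace(0, T, N)
dt = t[1] - t[0]
h = np.ones(N)                                        # forcing term h(t) = 1
K = 0.5 * np.exp(-np.abs(t[:, None] - t[None, :]))    # kernel K(t, s)
L = np.tril(K) * dt                                   # discretized Volterra operator

f = h.copy()
for it in range(30):
    f_new = h + L @ f                 # one Picard (fixed point) iteration
    err = np.max(np.abs(f_new - f))
    f = f_new
    if err < 1e-12:
        break
print(it, err)    # the iteration error contracts geometrically
```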
A Markov chain is used as a model for a sequence of random experiments. The waiting time for sequence patterns is considered. Recursive-type relations for the distribution of waiting times are obtained.
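Such recursions can be organized with an automaton that tracks how much of the pattern the recent history matches. A sketch computing P(T = t) exactly for a pattern of chain states (the transition matrix and pattern below are illustrative):

```python
import numpy as np

def waiting_time_dist(P, init, pattern, tmax):
    """P(T = t), t = 1..tmax, where T is the first time the last
    len(pattern) observations of the Markov chain equal `pattern`."""
    m = len(pattern)

    def progress(j, y):                # longest prefix of the pattern matching
        seq = list(pattern[:j]) + [y]  # the history suffix after observing y
        for L in range(min(len(seq), m), 0, -1):
            if seq[-L:] == list(pattern[:L]):
                return L
        return 0

    dist = {}                          # mass on (state, progress), unabsorbed
    out = np.zeros(tmax + 1)
    for x, px in enumerate(init):      # first observation
        j = progress(0, x)
        if j == m:
            out[1] += px
        else:
            dist[(x, j)] = dist.get((x, j), 0.0) + px
    for t in range(2, tmax + 1):
        new = {}
        for (x, j), mass in dist.items():
            for y, pxy in enumerate(P[x]):
                jn = progress(j, y)
                if jn == m:
                    out[t] += mass * pxy          # pattern completed at time t
                else:
                    new[(y, jn)] = new.get((y, jn), 0.0) + mass * pxy
        dist = new
    return out[1:]

P = [[0.7, 0.3], [0.4, 0.6]]
print(waiting_time_dist(P, [0.5, 0.5], (1, 0, 1), 10))
```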
In the Bayesian estimation of higher-order Markov transition functions on finite state spaces, a prior distribution may assign positive probability to arbitrarily high orders. If there are n observations available, we show (for natural priors) that, with probability one, as n → ∞ the Bayesian posterior distribution ‘discriminates accurately’ for orders up to β log n, if β is smaller than an explicitly determined β₀. This means that the ‘large deviations’ of the posterior are controlled by the relative entropies of the true transition function with respect to all others, much as the large deviations of the empirical distributions are governed by their relative entropies with respect to the true transition function. An example shows that the result can fail even for orders β log n if β is large.
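The posterior comparison of orders can be sketched concretely: with independent Dirichlet(1, ..., 1) priors on each context's transition row, the marginal likelihood of order r is a product of Dirichlet–multinomial terms over contexts, computable from transition counts. This toy version (binary chain, uniform priors, simulated data) shows the posterior concentrating on the true order; it is not the paper's β log n analysis:

```python
import numpy as np
from math import lgamma

def log_marginal(seq, r, k=2):
    """Log marginal likelihood of a k-state chain of order r under
    independent Dirichlet(1,...,1) priors on each context's row."""
    counts = {}
    for i in range(r, len(seq)):
        ctx, nxt = tuple(seq[i - r:i]), seq[i]
        row = counts.setdefault(ctx, np.zeros(k))
        row[nxt] += 1
    total = 0.0
    for row in counts.values():
        total += lgamma(k) - lgamma(k + row.sum())
        total += sum(lgamma(1 + c) for c in row)   # lgamma(1) terms vanish
    return total

rng = np.random.default_rng(0)
# simulate a true order-2 binary chain: next symbol depends on the last two
p1 = {(0, 0): 0.9, (0, 1): 0.2, (1, 0): 0.7, (1, 1): 0.4}   # P(next = 1 | ctx)
seq = [0, 0]
for _ in range(5000):
    seq.append(int(rng.random() < p1[tuple(seq[-2:])]))

scores = {r: log_marginal(seq, r) for r in range(4)}
print(max(scores, key=scores.get), scores)   # posterior mode should be r = 2
```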
We consider a real-valued random walk S which drifts to –∞ and is such that E(exp θS_1) < ∞ for some θ > 0, but for which Cramér's condition fails. We investigate the asymptotic tail behaviour of the distributions of the all-time maximum, the upwards and downwards first passage times, and the last passage times. As an application, we obtain new limit theorems for certain conditional laws.
A general model for the evolution of the frequency distribution of types in a population under mutation and selection is derived and investigated. The approach is sufficiently general to subsume classical models with a finite number of alleles, as well as models with a continuum of possible alleles as used in quantitative genetics. The dynamics of the corresponding probability distributions are governed by an integro-differential equation in the Banach space of Borel measures on a locally compact space. Existence and uniqueness of the solutions of the initial value problem are proved using basic semigroup theory. A complete characterization of the structure of stationary distributions is presented. Then, existence and uniqueness of stationary distributions are proved under mild conditions by applying operator-theoretic generalizations of Perron–Frobenius theory. For an extension of Kingman's original house-of-cards model, a classification of possible stationary distributions is obtained.
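In the finite-allele special case the general dynamics reduce to the classical mutation–selection (replicator–mutator) ODE ṗᵢ = pᵢ(mᵢ − m̄) + Σⱼ(uⱼᵢpⱼ − uᵢⱼpᵢ), whose stationary distribution can be found by simple forward integration. A sketch (fitnesses and mutation rates are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
k = 4
m = np.array([1.0, 0.8, 0.9, 0.6])        # Malthusian fitnesses
U = 0.01 * rng.random((k, k))             # mutation rates u[i, j]: i -> j
np.fill_diagonal(U, 0.0)

p = np.full(k, 1.0 / k)                   # initial frequency distribution
dt = 0.01
for _ in range(200_000):                  # Euler integration to equilibrium
    mbar = p @ m                          # mean fitness
    dp = p * (m - mbar) + U.T @ p - U.sum(axis=1) * p
    p += dt * dp
print(p, p.sum())                         # stationary distribution, sums to 1
```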
A Cox risk process with a piecewise constant intensity is considered, where the sequence (L_i) of successive levels of the intensity forms a Markov chain. The duration σ_i of the level L_i is assumed to depend on the past only through L_i. In the small-claim case a Lundberg inequality is obtained via a martingale approach. It is shown furthermore, by a Lundberg bound from below, that the resulting adjustment coefficient gives the best possible exponential bound for the ruin probability. In the case where the stationary distribution of L_i contains a discrete component, a Cramér–Lundberg approximation can be obtained. By way of example we consider the independent jump intensity model (Björk and Grandell 1988) and the risk model in a Markovian environment (Asmussen 1989).
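The exponential character of such bounds is easy to probe by simulation: estimate ψ(u) for several initial capitals u and check that log ψ(u) decays roughly linearly. A Monte Carlo sketch of a Markov-modulated Cox risk process (two intensity levels, exponential claims and level durations; a finite horizon, so ruin is slightly underestimated; all parameters illustrative):

```python
import random

def ruin_prob(u, c=1.2, levels=(0.5, 1.5), mean_dur=5.0, horizon=500.0, runs=5_000):
    """Estimate psi(u): Exp(1) claims, premium rate c, claim intensity
    switching between `levels`, each level held for an Exp(mean_dur) time."""
    ruined = 0
    for _ in range(runs):
        t, x, lev = 0.0, float(u), random.randrange(2)
        t_switch = random.expovariate(1.0 / mean_dur)
        while t < horizon:
            dt_claim = random.expovariate(levels[lev])
            if t + dt_claim > t_switch:          # intensity level switches first
                x += c * (t_switch - t)          # premium earned up to the switch
                t = t_switch
                lev = 1 - lev
                t_switch = t + random.expovariate(1.0 / mean_dur)
                continue
            x += c * dt_claim - random.expovariate(1.0)   # premium, then a claim
            t += dt_claim
            if x < 0:
                ruined += 1
                break
    return ruined / runs

for u in (0, 5, 10, 15):
    print(u, ruin_prob(u))   # log psi(u) roughly linear in u (Lundberg bound)
```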
Using an easy linear-algebraic method, we obtain spectral representations, without the need for eigenvector determination, of the transition probability matrices for completely general continuous time Markov chains with finite state space. Comparing the proof presented here with that of Brown (1991), who provided a similar result for a special class of finite Markov chains, we observe that ours is more concise.
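A representation in this spirit is Sylvester's formula (shown here as an illustration; the paper's construction may differ): when the generator Q has distinct eigenvalues, P(t) = e^{Qt} = Σᵢ e^{λᵢt} ∏_{j≠i}(Q − λⱼI)/(λᵢ − λⱼ), so only the eigenvalues are needed, never the eigenvectors. A numerical check:

```python
import numpy as np
from scipy.linalg import expm

Q = np.array([[-2.0, 1.0, 1.0],
              [1.0, -3.0, 2.0],
              [0.5, 0.5, -1.0]])          # generator of a 3-state chain
lam = np.linalg.eigvals(Q)                # eigenvalues only, no eigenvectors
t = 0.7

Pt = np.zeros_like(Q, dtype=complex)
for i, li in enumerate(lam):
    proj = np.eye(3, dtype=complex)       # spectral projector as a product
    for j, lj in enumerate(lam):
        if j != i:
            proj = proj @ (Q - lj * np.eye(3)) / (li - lj)
    Pt += np.exp(li * t) * proj

print(np.max(np.abs(Pt.real - expm(Q * t))))   # ≈ 0
```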
The statistical properties of a population of immigrant pairs of individuals subject to loss through emigration are calculated. Exact analytical results are obtained which exhibit characteristic even–odd effects. The population is monitored externally by counting the number of emigrants leaving in a fixed time interval. The integrated statistics for this process are evaluated and it is shown that under certain conditions only even numbers of individuals will be observed.
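The even–odd effects can be probed directly in simulation: immigrants arrive in pairs at rate α, individuals emigrate singly at per-capita rate μ, and one counts emigrants over a window of fixed length. A Gillespie-style sketch (rates, window, and warm-up are illustrative) whose even-count fraction can be compared with 1/2:

```python
import random
from collections import Counter

def emigrant_count(alpha=1.0, mu=0.2, window=5.0, warmup=100.0):
    """Count emigrants in a fixed window after a warm-up period.
    Pairs immigrate at rate alpha; each individual emigrates at rate mu."""
    t, n, count = 0.0, 0, 0
    while True:
        total = alpha + mu * n
        t += random.expovariate(total)
        if t >= warmup + window:
            return count
        if random.random() < alpha / total:
            n += 2                        # a pair of immigrants arrives
        else:
            n -= 1                        # one individual emigrates
            if t > warmup:
                count += 1

counts = Counter(emigrant_count() for _ in range(10_000))
even = sum(v for k, v in counts.items() if k % 2 == 0)
print(even / 10_000)                      # fraction of even emigrant counts
```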
We prove that every infinite-state stochastic matrix P, say, that is irreducible and consists of positive-recurrent states can be represented in the form I – P = (A – I)(B – S), where A is strictly upper-triangular, B is strictly lower-triangular, and S is diagonal. Moreover, the elements of A are expected values of random variables that we will specify, and the elements of B and S are probabilities of events that we will specify. The decomposition can be used to obtain steady-state probabilities, mean first-passage times and the fundamental matrix.