A deterministic function of a Markov process is called an aggregated Markov process. We give necessary and sufficient conditions for the equivalence of continuous-time aggregated Markov processes. In both discrete and continuous time, we show that any aggregated Markov process which satisfies mild regularity conditions can be directly converted to a canonical representation which is unique for each class of equivalent models and, furthermore, is a minimal parameterization of all that can be identified about the underlying Markov process. Hidden Markov models on finite state spaces may be framed as aggregated Markov processes by expanding the state space, and thus also have canonical representations.
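As an illustration of the definition, the following sketch simulates a small continuous-time Markov chain and records only a deterministic function (a lumping) of its state. The three-state generator Q and the label map are hypothetical choices for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-state generator (rows sum to zero); states 0 and 1 carry
# the same label 'A', so only 'A'/'B' switches are visible to the observer.
Q = np.array([[-2.0,  1.0,  1.0],
              [ 1.5, -3.0,  1.5],
              [ 0.5,  0.5, -1.0]])
label = {0: 'A', 1: 'A', 2: 'B'}

def simulate_aggregated(Q, label, x0=0, t_max=10.0):
    """Simulate the chain and return the observed (aggregated) path."""
    t, x = 0.0, x0
    path = [(0.0, label[x0])]
    while t < t_max:
        rate = -Q[x, x]
        t += rng.exponential(1.0 / rate)                 # holding time in x
        x = rng.choice(len(Q), p=Q[x].clip(min=0.0) / rate)  # embedded jump
        if label[x] != path[-1][1]:                      # only label changes seen
            path.append((t, label[x]))
    return path

print(simulate_aggregated(Q, label))
```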
We study a Markovian evolutionary process which encompasses the classical simple genetic algorithm. This process is obtained by randomly perturbing a very simple selection scheme. Using the Freidlin-Wentzell theory, we carry out a precise study of the asymptotic dynamics of the process as the perturbations disappear. We show how a delicate interaction between the perturbations and the selection pressure may force convergence toward the global maxima of the fitness function. We establish the existence of a critical population size, above which this kind of convergence can be achieved, and compute upper bounds on this critical population size for several examples. We also derive several conditions ensuring convergence in the homogeneous case; these provide the first mathematically well-founded convergence results for genetic algorithms.
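A minimal sketch of the kind of process in question, assuming the textbook combination of fitness-proportional selection with bit-flip mutation as the random perturbation, and a toy fitness function; none of the specific choices below come from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(pop):
    # Toy fitness on bit strings (count of ones), an illustrative stand-in.
    return pop.sum(axis=1)

def sga_step(pop, p_mut=0.01):
    """One generation of a simple GA: fitness-proportional selection
    followed by independent bit-flip mutation (the random perturbation)."""
    f = fitness(pop).astype(float)
    probs = f / f.sum() if f.sum() > 0 else np.full(len(pop), 1.0 / len(pop))
    pop = pop[rng.choice(len(pop), size=len(pop), p=probs)]  # selection
    flips = rng.random(pop.shape) < p_mut                    # mutation
    return np.where(flips, 1 - pop, pop)

m, n = 50, 20                     # population size m, chromosome length n
pop = rng.integers(0, 2, size=(m, n))
for _ in range(200):
    pop = sga_step(pop)
print("best fitness:", fitness(pop).max(), "of", n)
```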
The paper deals with exponential functionals of linear Brownian motion which arise in different contexts, such as continuous-time finance models and one-dimensional disordered models. We study some properties of these exponential functionals in relation to the problem of a particle coupled to a heat bath in a Wiener potential. Explicit expressions for the distribution of the free energy are presented.
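For concreteness, such an exponential functional can be sampled by discretizing the Brownian path; in the sketch below, the drift mu, the horizon t and the factor 2 in the exponent are illustrative choices, not the paper's normalization.

```python
import numpy as np

rng = np.random.default_rng(2)

def exp_functional(mu=-0.5, t=10.0, n_steps=10_000):
    """One sample of A_t = integral_0^t exp(2(B_s + mu*s)) ds by a Riemann
    sum over a discretized drifted Brownian path (illustrative constants)."""
    dt = t / n_steps
    increments = rng.normal(mu * dt, np.sqrt(dt), n_steps)
    path = np.concatenate(([0.0], np.cumsum(increments)))   # B_s + mu*s
    return np.exp(2.0 * path[:-1]).sum() * dt

samples = np.array([exp_functional() for _ in range(2000)])
print("sample mean of A_t:", samples.mean())
```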
We prove a new representation of the generator of a subordinate semigroup as a limit of bounded operators. Our construction yields, in particular, a characterization of the domain of the generator. The generator of a subordinate semigroup can be viewed as a function of the generator of the original semigroup. For a large class of these functions we show that operations at the level of functions have their counterparts at the level of operators.
A recent result of Takács (1995) gives explicitly the density of the time spent before t above a level x ≠ 0 by Brownian motion with drift. Takács' proof is by means of random walk approximations to Brownian motion, but in this paper we give two different proofs of this result by considerations involving only Brownian motion. We also give a reformulation of Takács' result which involves Brownian meanders, and an extension of Denisov's representation of Brownian motion in terms of two independent Brownian meanders.
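Occupation-time densities of this kind are easy to approximate by Monte Carlo; the sketch below, with illustrative values for the level x, the drift and the horizon, estimates the time spent above x from a discretized path.

```python
import numpy as np

rng = np.random.default_rng(3)

def occupation_time_above(x=0.5, mu=0.3, t=1.0, n_steps=5000):
    """Time spent above level x before t by B_s + mu*s, on a discrete grid."""
    dt = t / n_steps
    path = np.cumsum(rng.normal(mu * dt, np.sqrt(dt), n_steps))
    return dt * np.count_nonzero(path > x)

samples = np.array([occupation_time_above() for _ in range(20_000)])
# The histogram approximates the density that Takács gives in closed form;
# the atom at 0 comes from paths that never reach the level x before t.
hist, edges = np.histogram(samples, bins=40, range=(0.0, 1.0), density=True)
print("estimated P(no time above x before t):", np.mean(samples == 0.0))
```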
In this paper, we study Markov chains with infinite state block-structured transition matrices, whose states are partitioned into levels according to the block structure, and various associated measures. Roughly speaking, these measures involve first passage times or expected numbers of visits to certain levels without hitting other levels. They are very important and often play a key role in the study of a Markov chain. Necessary and/or sufficient conditions are obtained for a Markov chain to be positive recurrent, recurrent, or transient in terms of these measures. Results are obtained for general irreducible Markov chains as well as those with transition matrices possessing some block structure. We also discuss the decomposition or the factorization of the characteristic equations of these measures. In the scalar case, we locate the zeros of these characteristic functions and therefore use these zeros to characterize a Markov chain. Examples and various remarks are given to illustrate some of the results.
An infinite dam with input formed by a compound Poisson process is considered. As an output policy, we adopt the PλM-policy. The stationary distribution and expectation of the level of water in the reservoir are obtained.
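A sample-path sketch, assuming the usual reading of the PλM-policy (the gate stays closed until the level reaches λ, then water is released at rate M until the dam is empty) and illustrative exponentially distributed inputs:

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_dam(lam=5.0, M=2.0, arr_rate=1.0, mean_jump=1.5, t_max=10_000.0):
    """Level of the dam observed just before each input epoch (PASTA):
    the gate stays closed until the level reaches lam, then releases at
    rate M until the dam empties. Exponential jumps are illustrative."""
    t, level, gate_open = 0.0, 0.0, False
    levels = []
    while t < t_max:
        dt = rng.exponential(1.0 / arr_rate)     # time until the next input
        if gate_open:
            if level <= M * dt:                  # dam empties: gate closes
                level, gate_open = 0.0, False
            else:
                level -= M * dt
        levels.append(level)                     # level just before the jump
        level += rng.exponential(mean_jump)      # compound Poisson input
        if level >= lam:
            gate_open = True                     # threshold lam reached
        t += dt
    return np.mean(levels)

print("estimated stationary mean level:", simulate_dam())
```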
Criteria are determined for the variance to mean ratio to be greater than one (over-dispersed) or less than one (under-dispersed). This is done for random variables which are functions of a Markov chain in continuous time, and for the counts in a simple point process on the line. The criteria for the Markov chain are in terms of the infinitesimal generator and those for the point process in terms of the conditional intensity. Examples include a conjecture of Faddy (1994). The case of time-reversible point processes is particularly interesting, and here under-dispersion is not possible. In particular, point processes arising from irreducible, time-reversible Markov chains with finitely many states are always over-dispersed.
The multitype discrete-time indecomposable branching process with immigration is considered. Using a martingale approach, a limit theorem is proved for such processes when the totality of immigrating individuals at a given time depends on the evolution of the processes generated by previously immigrated individuals. Corollaries of the limit theorem are obtained for the cases of finite and infinite second moments of the offspring distribution in critical processes.
We consider Brownian motion with a negative drift conditioned to stay positive. We give a sufficient condition for an initial measure to be in the domain of attraction of a quasi-stationary distribution. We construct a counter-example that strongly suggests that this condition is optimal.
For truncated birth-and-death processes with two absorbing or two reflecting boundaries, necessary and sufficient conditions on the transition rates are given such that the transition probabilities satisfy a suitable spatial symmetry relation. This allows one to obtain simple expressions for first-passage-time densities and for certain avoiding transition probabilities. An application to an M/M/1 queueing system with two finite sequential queueing rooms of equal sizes is finally provided.
In this paper we study the so-called random coefficient autoregressive models (RCA models) and (generalized) autoregressive models with conditional heteroscedasticity (ARCH/GARCH models). Both models can be represented as random systems with complete connections. Within this framework we are led (under certain conditions) to CL-regular Markov processes and we give conditions under which (i) asymptotic stationarity, (ii) a law of large numbers and (iii) a central limit theorem can be shown for the corresponding models.
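A minimal sketch of an RCA(1) model, with Gaussian coefficient and innovation noise as illustrative choices; φ² + σb² < 1 is the usual second-order stationarity condition:

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_rca1(phi=0.5, sigma_b=0.3, sigma_e=1.0, n=100_000):
    """RCA(1): X_t = (phi + b_t) X_{t-1} + e_t with i.i.d. Gaussian b_t, e_t.
    phi**2 + sigma_b**2 < 1 gives second-order stationarity."""
    x = np.empty(n)
    x[0] = 0.0
    b = rng.normal(0.0, sigma_b, n)
    e = rng.normal(0.0, sigma_e, n)
    for t in range(1, n):
        x[t] = (phi + b[t]) * x[t - 1] + e[t]
    return x

x = simulate_rca1()
# Law of large numbers: the sample mean settles near the stationary mean 0;
# the sample variance is near sigma_e**2 / (1 - phi**2 - sigma_b**2).
print("sample mean:", x.mean(), " sample variance:", x.var())
```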
A new class of Gibbsian models with potentials associated with the connected components or homogeneous parts of images is introduced. For these models the neighbourhood of a pixel is not fixed as for Markov random fields, but is given by the components which are adjacent to the pixel. The relationship to Markov random fields and marked point processes is explored and spatial Markov properties are established. Extensions to infinite lattices are also studied, and statistical inference problems including geostatistical applications and statistical image analysis are discussed. Finally, simulation studies are presented which show that the models may be appropriate for a variety of interesting patterns, including images exhibiting intermediate degrees of spatial continuity and images of objects against background.
If (Fn)n∈ℕ is a sequence of independent and identically distributed random mappings from a second countable locally compact state space 𝕏 to 𝕏 which itself is independent of the 𝕏-valued initial variable X0, the discrete-time stochastic process (Xn)n≥0, defined by the recursion equation Xn = Fn(Xn−1) for n∈ℕ, has the Markov property. Since 𝕏 is Polish in particular, a complete metric d exists. The random mappings (Fn)n∈ℕ are assumed to be ℙ-a.s. Lipschitz continuous with respect to d, with Lipschitz constant l(Fn). Conditions on the distribution of l(Fn) are given for the existence of an invariant distribution of X0 making the process (Xn)n≥0 stationary and ergodic. Our main result corrects a central limit theorem by Łoskot and Rudnicki (1995) and removes an error in its proof. Instead of trying to compare the sequence (φ(Xn))n≥0 for some φ : 𝕏 → ℝ with a triangular scheme of independent random variables, our proof is based on an approximation by a martingale difference scheme.
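A one-dimensional sketch of such a recursion, using random affine maps F(x) = ax + b, for which the Lipschitz constant is l(F) = |a|; all distributional choices below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)

def iterate_random_maps(n=100_000):
    """X_n = F_n(X_{n-1}) with random affine maps F(x) = a*x + b on the line;
    here l(F_n) = |a_n|, and E log|a_n| < 0 gives contraction on average."""
    a = rng.uniform(-0.9, 0.9, n)      # Lipschitz coefficients l(F_n) = |a_n|
    b = rng.normal(0.0, 1.0, n)
    x = np.empty(n)
    x[0] = 0.0
    for t in range(1, n):
        x[t] = a[t] * x[t - 1] + b[t]
    return x

x = iterate_random_maps()
# CLT scaling for φ = identity (stationary mean 0 by symmetry): the partial
# sum S_n of φ(X_k) is of order sqrt(n).
print("S_n / sqrt(n):", x.sum() / np.sqrt(len(x)))
```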
In this paper we consider a network of queues with batch services, customer coalescence and state-dependent signaling. That is, customers are served in batches at each node, and coalesce into a single unit upon service completion. There are signals circulating in the network and, when a signal arrives at a node, a batch of customers is either deleted or triggered to move as a single unit within the network. The transition rates for both customers and signals are quite general and can depend on the state of the whole system. We show that this network possesses a product form solution. The existence of a steady state distribution is also discussed. This result generalizes some recent results of Henderson et al. (1994), as well as those of Chao et al. (1996).
We obtain bounds for the distribution of the number of comparisons needed by Hoare's randomized selection algorithm FIND and give a new proof of Grübel and Rösler's (1996) result on the convergence of this distribution. Our approach is based on the construction and analysis of a suitable associated Markov chain. Some numerical results for the quantiles of the limit distributions are included, leading for example to the statement that, for a set S with n elements and n large, FIND will need with probability 0.9 about 4.72n comparisons to find the median of S.
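A compact version of randomized FIND that counts comparisons (one pivot comparison per remaining element in each partitioning pass, a standard accounting) lets one check figures of this kind empirically; the sample sizes below are illustrative:

```python
import random

random.seed(7)

def find(items, k):
    """Hoare's randomized FIND (quickselect): return the k-th smallest
    element (k = 1 is the minimum) and the number of comparisons made."""
    comparisons = 0
    while True:
        pivot = items[random.randrange(len(items))]
        smaller = [x for x in items if x < pivot]
        larger = [x for x in items if x > pivot]
        comparisons += len(items) - 1        # each non-pivot element vs pivot
        if k <= len(smaller):
            items = smaller                  # target is left of the pivot
        elif k > len(items) - len(larger):
            k -= len(items) - len(larger)    # target is right of the pivot
            items = larger
        else:
            return pivot, comparisons        # pivot has rank k

n = 10_001
data = random.sample(range(10 * n), n)
counts = sorted(find(data, (n + 1) // 2)[1] for _ in range(500))
print("empirical 0.9-quantile of comparisons/n:", counts[449] / n)
```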
Let {Ai : i ≥ 1} be a sequence of non-negative random variables and let M be the class of all probability measures on [0,∞]. Define a transformation T on M by letting Tμ be the distribution of ∑i=1∞ AiZi, where the Zi are independent random variables with distribution μ, which are also independent of {Ai}. Under first moment assumptions imposed on {Ai}, we determine exactly when T has a non-trivial fixed point (of finite or infinite mean) and we prove that all fixed points have regular variation properties; under moment assumptions of order 1 + ε, ε > 0, we find all the fixed points and we prove that all non-trivial fixed points have stable-like tails. Convergence theorems are given to ensure that each non-trivial fixed point can be obtained as a limit of iterations (by T) with an appropriate initial distribution; convergence to the trivial fixed points δ0 and δ∞ is also examined, and a result like the Kesten-Stigum theorem is established in the case where the initial distribution has the same tails as a stable law. The problem of convergence with an arbitrary initial distribution is also considered when there is no non-trivial fixed point. Our investigation has applications in the study of: (a) branching processes; (b) invariant measures of some infinite particle systems; (c) the model for turbulence of Yaglom and Mandelbrot; (d) flows in networks and Hausdorff measures in random constructions; and (e) the sorting algorithm Quicksort. In particular, it turns out that the basic functional equation in the branching random walk always has a non-trivial solution.
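The iterations of T can be watched empirically by resampling; in the sketch below the weight sequence (two terms, Ai uniform on [0,1], so that E∑Ai = 1) is an illustrative choice, not the paper's setting:

```python
import numpy as np

rng = np.random.default_rng(8)

def apply_T(sample, n_terms=2):
    """One iteration of T: draw weights (A_1, ..., A_N) and independent Z_i
    from the current empirical sample, and return samples of sum A_i Z_i."""
    m = len(sample)
    a = rng.uniform(0.0, 1.0, size=(m, n_terms))   # illustrative weights
    z = rng.choice(sample, size=(m, n_terms))      # i.i.d. draws with law mu
    return (a * z).sum(axis=1)

sample = rng.exponential(1.0, 100_000)             # initial distribution mu_0
for _ in range(50):
    sample = apply_T(sample)
# Since E(A_1 + A_2) = 1 here, the mean is preserved along the iterations.
print("mean after 50 iterations of T:", sample.mean())
```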
In this paper we consider an explicit solution of an optimal stopping problem arising in connection with a dice game. An optimal stopping rule and the maximum expected reward in this problem can easily be computed by means of the distributions involved and the specific rules of the game.
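The paper's game is not spelled out here, so the following sketch uses a classic stand-in to show the flavour of the computation: roll a fair die up to n times, stopping whenever you like to receive the current face value; backward induction yields both the optimal rule and the value.

```python
from fractions import Fraction

def die_game_values(n_rolls=3):
    """Backward induction for a classic dice stopping game (an illustrative
    stand-in, not necessarily the paper's game): with k rolls left, stop on
    face x exactly when x >= value of continuing with k-1 rolls."""
    v = Fraction(0)                    # value with no rolls left
    values = []
    for _ in range(n_rolls):
        v = sum(max(Fraction(x), v) for x in range(1, 7)) / 6
        values.append(v)
    return values

for k, v in enumerate(die_game_values(), start=1):
    print(f"value with {k} roll(s) left: {v} = {float(v):.4f}")
```

With three rolls the values are 7/2, 17/4 and 14/3: stop on the first roll only for a 5 or 6, on the second roll for 5 or 6 as well, and always accept the last roll.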
Asymptotic expansions are obtained for the distribution function of a studentized estimator of the offspring mean sequence in an array branching process with immigration. The expansion result is shown to hold in a test function topology. As an application of this result, it is shown that the bootstrapping distribution of the estimator of the offspring mean in a sub-critical branching process with immigration also admits the same expansion (in probability). From these considerations, it is concluded that the bootstrapping distribution provides a better approximation asymptotically than the normal distribution.
Dynamic asset allocation strategies that are continuously rebalanced so as to always keep a fixed constant proportion of wealth invested in the various assets at each point in time play a fundamental role in the theory of optimal portfolio strategies. In this paper we study the rate of return on investment, defined here as the net gain in wealth divided by the cumulative investment, for such investment strategies in continuous time. Among other results, we prove that the limiting distribution of this measure of return is a gamma distribution. This limit theorem allows for comparisons of different strategies. For example, the mean return on investment is maximized by the same strategy that maximizes logarithmic utility, which is also known to maximize the exponential rate at which wealth grows. The return from this policy turns out to have other stochastic dominance properties as well. We also study the return on the risky investment alone, defined here as the present value of the gain from investment divided by the present value of the cumulative investment in the risky asset needed to achieve the gain. We show that for the log-optimal, or optimal growth policy, this return tends to an exponential distribution. We compare the return from the optimal growth policy with the return from a policy that invests a constant amount in the risky stock. We show that for the case of a single risky investment, the constant investor's expected return is twice that of the optimal growth policy. This difference can be considered the cost for insuring that the proportional investor does not go bankrupt.
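A quick simulation illustrates the role of the log-optimal (optimal growth) fraction f* = (μ − r)/σ² among constant-proportion strategies under geometric Brownian motion; all parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)

def growth_rate(f, mu=0.08, r=0.02, sigma=0.2, t=50.0, n_steps=5000):
    """Realized exponential growth rate of wealth when a constant fraction f
    is kept in a risky asset (GBM) and 1 - f earns the risk-free rate r;
    increments of log-wealth follow Ito's formula."""
    dt = t / n_steps
    dlogw = ((r + f * (mu - r) - 0.5 * f**2 * sigma**2) * dt
             + f * sigma * rng.normal(0.0, np.sqrt(dt), n_steps))
    return dlogw.sum() / t

kelly = (0.08 - 0.02) / 0.2**2       # log-optimal fraction (mu - r)/sigma^2
for f in (0.5, kelly, 2.5):
    rates = [growth_rate(f) for _ in range(200)]
    print(f"f = {f:.2f}: mean growth rate ~ {np.mean(rates):.4f}")
```

The middle fraction (here 1.50) maximizes the realized growth rate, consistent with the optimal growth policy referred to in the abstract.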