Competing patterns are compound patterns that compete to be the first to occur pattern-specific numbers of times. They represent a generalisation of the sooner waiting time problem and of start-up demonstration tests with both acceptance and rejection criteria. Through the use of finite Markov chain imbedding, the waiting time distribution of competing patterns in multistate trials that are Markovian of a general order is derived. Also obtained are probabilities that each particular competing pattern will be the first to occur its respective prescribed number of times, both in finite time and in the limit.
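The setup can be illustrated with a small Monte Carlo sketch (hypothetical code, not from the paper): two competing patterns in i.i.d. binary trials, each needing a prescribed number of occurrences, with the function reporting which pattern completes first and when. Overlapping occurrences are counted, and simultaneous completion is broken in favour of the first-listed pattern; both are modelling choices made here for concreteness.

```python
import random

def first_to_complete(patterns, targets, p_heads=0.5, rng=None, max_trials=100_000):
    """Run i.i.d. H/T trials until one pattern has occurred its target
    number of times; return (index of winning pattern, trial count).
    Occurrences are counted with overlap; ties go to the earlier index."""
    rng = rng or random.Random()
    counts = [0] * len(patterns)
    history = ""
    for trial in range(1, max_trials + 1):
        history += "H" if rng.random() < p_heads else "T"
        for i, pat in enumerate(patterns):
            if history.endswith(pat):
                counts[i] += 1
                if counts[i] == targets[i]:
                    return i, trial
    raise RuntimeError("no pattern completed within max_trials")
```

Averaging the winner's index over many runs estimates the probability that each pattern is first to complete; the paper obtains these quantities exactly via finite Markov chain imbedding.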
We extend the definition of level-crossing ordering of stochastic processes, proposed by Irle and Gani (2001), to the case in which the times to exceed levels are compared using an arbitrary stochastic order, and work, in particular, with integral stochastic orders closed for convolution. Using a sample-path approach, we establish level-crossing ordering results for the case in which the slower of the processes involved in the comparison is skip-free to the right. These results are specially useful in simulating processes that are ordered in level crossing, and extend results of Irle and Gani (2001), Irle (2003), and Ferreira and Pacheco (2005) for skip-free-to-the-right discrete-time Markov chains, semi-Markov processes, and continuous-time Markov chains, respectively.
We establish Klar's (2002) conjecture about sharp reliability bounds for life distributions in the ℒα-class in reliability theory. The key idea is to construct a set of two-point distributions whose support points satisfy a certain system of equalities and inequalities.
We consider the class of local subexponential distributions and the ν-fold convolution F^{*ν} of a distribution F, where ν belongs to one of the following three cases: ν is a random variable taking only a finite number of values, in particular ν ≡ n for some n ≥ 2; ν is a Poisson random variable; or ν is a geometric random variable. Along the lines of Embrechts, Goldie, and Veraverbeke (1979), a local asymptotic equivalence for F^{*ν} is proved under certain conditions. This result is applied to the infinitely divisible laws, and some new results are established. The results obtained extend the corresponding findings of Asmussen, Foss, and Korshunov (2003).
We define an inverse subordinator as the passage times of a subordinator to increasing levels. It has previously been noted that such processes have many similarities to renewal processes. Here we present an expression for the joint moments of the increments of an inverse subordinator. This is an analogue of a result for renewal processes. The main tool is a theorem on processes which are both renewal processes and Cox processes.
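As a concrete illustration (a sketch under assumed dynamics, not the paper's general setting), the inverse subordinator can be simulated for a drift plus compound-Poisson subordinator with exponential jumps, recording the first time each increasing level is exceeded:

```python
import random

def passage_times(levels, drift, rate, jump_scale, rng=None):
    """First times at which S(t) = drift*t + compound Poisson(rate) with
    Exp(mean jump_scale) jumps exceeds each level; `levels` is assumed
    strictly increasing.  Returns the inverse subordinator at each level."""
    rng = rng or random.Random()
    times = []
    t, s = 0.0, 0.0
    next_jump = rng.expovariate(rate)
    for x in levels:
        while s < x:
            reach = s + drift * (next_jump - t)   # value just before next jump
            if reach >= x:                        # level hit on a drift segment
                t += (x - s) / drift
                s = x
            else:                                 # take the jump
                t, s = next_jump, reach + rng.expovariate(1.0 / jump_scale)
                next_jump = t + rng.expovariate(rate)
        times.append(t)
    return times
```

The returned passage times are nondecreasing in the level, mirroring the fact that the inverse subordinator is itself a nondecreasing process.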
In this paper, we propose a new urn model. A single urn contains b black balls and w white balls. For each observation, we randomly draw m balls and note their colors, say k black balls and m − k white balls. We return the drawn balls to the urn with an additional ck black balls and c(m − k) white balls. We repeat this procedure n times and denote by Xn the fraction of black balls after the nth draw. To investigate the asymptotic properties of Xn, we first perform some computational studies. We then show that {Xn} forms a martingale, which converges almost surely to a random variable X. The distribution of X is then shown to be absolutely continuous.
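A direct simulation of this urn scheme (illustrative code, not from the paper) draws m balls without replacement, reinforces with c new balls per drawn ball of each colour, and tracks the black-ball fraction X_n:

```python
import random

def urn_fractions(b, w, m, c, n, rng=None):
    """Simulate n draws of the urn; return the fraction of black balls
    after each draw.  Each draw takes m balls without replacement, then
    returns them together with c extra balls of each drawn colour."""
    rng = rng or random.Random()
    black, white = b, w
    fractions = []
    for _ in range(n):
        # draw m balls without replacement; k = number of black balls drawn
        k, rem_black, rem_total = 0, black, black + white
        for _ in range(m):
            if rng.random() < rem_black / rem_total:
                k += 1
                rem_black -= 1
            rem_total -= 1
        black += c * k
        white += c * (m - k)
        fractions.append(black / (black + white))
    return fractions
```

Averaging the final fraction over many runs illustrates the martingale property: the mean of X_n stays at the initial fraction b/(b + w).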
We derive a tight perturbation bound for hidden Markov models. Using this bound, we show that, in many cases, the distribution of a hidden Markov model is considerably more sensitive to perturbations in the emission probabilities than to perturbations in the transition probability matrix and the initial distribution of the underlying Markov chain. Our approach can also be used to assess the sensitivity of other stochastic models, such as mixture processes and semi-Markov processes.
We provide sufficient conditions for the following types of random variable to have the increasing-failure-rate (IFR) property: sums of a random number of random variables; the time at which a Markov chain crosses a random threshold; the time until a random number of events have occurred in an inhomogeneous Poisson process; and the number of events of a renewal process, and of a general counting process, that have occurred by a randomly distributed time.
We provide a scaling for compound Poisson distributions that leads (under certain conditions on the Fourier transform) to a weak convergence result as the parameter of the distribution tends to infinity. We show that the limiting probability measure belongs to the class of stable Cauchy laws with Fourier transform t ↦ exp(−c|t| − iat log|t|). We apply this convergence result to the standard discrete Luria–Delbrück distribution and derive an integral representation for the corresponding limiting density, as an alternative to that found in a closely related paper of Kepler and Oprea. Moreover, we verify local convergence and we derive an integral representation for the distribution function of the limiting continuous Luria–Delbrück distribution.
In this article, we consider the limit behavior of the hazard rate function of mixture distributions, assuming knowledge of the behavior of each individual distribution. We show that the asymptotic baseline function of the hazard rate function is preserved under mixture.
In this paper, we introduce the minimum dynamic discrimination information (MDDI) approach to probability modeling. The MDDI model relative to a given distribution G is that which has least Kullback-Leibler information discrepancy relative to G, among all distributions satisfying some information constraints given in terms of residual moment inequalities, residual moment growth inequalities, or hazard rate growth inequalities. Our results lead to MDDI characterizations of many well-known lifetime models and to the development of some new models. Dynamic information constraints that characterize these models are tabulated. A result for characterizing distributions based on dynamic Rényi information divergence is also given.
A random variable Y is branching stable (B-stable) for a nonnegative integer-valued random variable J with E(J) > 1 if Y^{*J} is equal in distribution to cY for some scalar c, where Y^{*J} is the sum of J independent copies of Y. We explore some aspects of this notion of stability and show that, for any Y_0 with finite nonzero mean, if we define Y_{n+1} = Y_n^{*J}/E(J) then the sequence Y_n converges in law to a random variable Y_∞ that is B-stable for J. Moreover, Y_∞ is the unique B-stable law with mean E(Y_0). We also present results relating to random variables Y_0 with zero mean or with infinite mean. The notion of B-stability arose in a scheme for cataloguing a large network of computers.
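The iteration Y_{n+1} = Y_n^{*J}/E(J) can be sampled directly by recursion over the implicit branching tree (illustrative code; the distributions of Y_0 and J below are arbitrary choices made here, and the cost grows like E(J)^n):

```python
import random

def sample_yn(n, sample_y0, sample_j, mean_j, rng):
    """Draw one realisation of Y_n, where Y_{k+1} = Y_k^{*J} / E(J):
    sum J independent copies of Y_k, then divide by E(J)."""
    if n == 0:
        return sample_y0(rng)
    j = sample_j(rng)
    return sum(sample_yn(n - 1, sample_y0, sample_j, mean_j, rng)
               for _ in range(j)) / mean_j

# Example choices: Y_0 ~ Exp(1) (mean 1) and J uniform on {1, 2, 3}
# (mean 2); the Monte Carlo mean of Y_n stays near E(Y_0) = 1, consistent
# with the mean-preservation property stated above.
```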
Random vectors in the positive orthant whose distributions possess hidden regular variation are a subclass of those whose distributions are multivariate regularly varying with asymptotic independence. The concept is an elaboration of the coefficient of tail dependence of Ledford and Tawn. We show that the rank transform that brings unequal marginals to the standard case also preserves the hidden regular variation. We discuss applications of the results to two examples, one involving flood risk and the other Internet data.
We consider a service system (QS) that operates according to the first-come-first-served (FCFS) discipline, and in which the service rate is an increasing function of the queue length. Customers arrive sequentially at the system, and decide whether or not to join using decision rules based upon the queue length on arrival. Each customer is interested in selecting a rule that meets a certain optimality criterion with regard to their expected sojourn time in the system; as a consequence, the decision rules of other customers must be taken into account. Within a particular class of decision rules for an associated infinite-player game, the structure of the Nash equilibrium routeing policies is characterized. We prove that, within this class, there exist a finite number of Nash equilibria, and that at least one of these is nonrandomized. Finally, with the aid of simulation experiments, we explore the extent to which the Nash equilibria are characteristic of customer joining behaviour under a learning rule based on system-wide data.
Stochastic processes with Student marginals and various types of dependence structure, allowing for both short- and long-range dependence, are discussed in this paper. A particular motivation is the modelling of risky asset time series.
As proposed by Irle and Gani in 2001, a process X is said to be slower in level crossing than a process Y if it takes X stochastically longer to exceed any given level than it does Y. In this paper, we extend a result of Irle (2003), relative to the level crossing ordering of uniformizable skip-free-to-the-right continuous-time Markov chains, to derive a new set of sufficient conditions for the level crossing ordering of these processes. We apply our findings to birth-death processes with and without catastrophes, and M/M/s/c systems.
In this paper we consider some dependence properties and orders among multivariate distributions, and we study their preservation under mixtures. Applications of these results in reliability, risk theory, and mixtures of discrete distributions are provided.
In bioinformatics, the notion of an ‘island’ enhances the efficient simulation of gapped local alignment statistics. This paper generalizes several results relevant to gapless local alignment statistics from one to higher dimensions, with a particular eye to applications in gapped alignment statistics. For example, reversal of paths (rather than of discrete time) generalizes a distributional equality, from queueing theory, between the Lindley (local sum) and maximum processes. Systematic investigation of an ‘ownership’ relationship among vertices in ℤ2 formalizes the notion of an island as a set of vertices having a common owner. Predictably, islands possess some stochastic ordering and spatial averaging properties. Less predictably, the average number of vertices in a subcritical stationary island is 1, generalizing a theorem of Kac about stationary point processes. The generalization leads to alternative ways of simulating some island statistics.
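The path-reversal identity mentioned here can be checked in one dimension (a minimal sketch, independent of the paper's higher-dimensional setting): for any increment sequence, the Lindley recursion evaluated on the sequence equals the running maximum of the partial sums of the reversed sequence.

```python
import random

def lindley(xs):
    """Lindley (local sum) recursion L_k = max(0, L_{k-1} + x_k); return L_n."""
    l = 0.0
    for x in xs:
        l = max(0.0, l + x)
    return l

def running_max(xs):
    """Maximum of 0 and all partial sums x_1, x_1 + x_2, ..."""
    s, m = 0.0, 0.0
    for x in xs:
        s += x
        m = max(m, s)
    return m
```

Pathwise, `lindley(xs)` equals `running_max(reversed(xs))`; for i.i.d. increments, reversal leaves the law of the path unchanged, which yields the distributional equality between the two processes.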
Let {Xn, n = 0, 1,…} be the sequence of the lower records for an arbitrary underlying distribution μ on [0, ∞). We show that the record sequence is equal in distribution to the sequence {g(τi)}, where {τi, i = 1, 2,…} are the successive points of a Poisson flow of unit intensity and g is a right-continuous and nonincreasing function defined by μ. This observation allows us to extend results of Bose et al. and simplify their proofs.
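Lower records themselves are easy to extract from a sample path, which gives a quick empirical handle on the record sequence {Xn} (illustrative code, not tied to the representation in the paper):

```python
import random

def lower_records(samples):
    """Return the sequence of lower records of the given sample path:
    each value strictly smaller than everything seen before it."""
    records, current = [], float("inf")
    for x in samples:
        if x < current:
            current = x
            records.append(x)
    return records
```

For i.i.d. draws from a continuous distribution on [0, ∞), the output is the strictly decreasing sequence X0 > X1 > …, starting from the first observation.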
We study a family of locally self-similar stochastic processes Y = {Y(t)}t∈ℝ with α-stable distributions, called linear multifractional stable motions. They have infinite variance and may possess skewed distributions. The linear multifractional stable motion processes include, in particular, the classical linear fractional stable motion processes, which have stationary increments and are self-similar with self-similarity parameter H. The linear multifractional stable motion process Y is obtained by replacing the self-similarity parameter H in the integral representation of the linear fractional stable motion process by a deterministic function H(t). Whereas the linear fractional stable motion is always continuous in probability, this is not in general the case for Y. We obtain necessary and sufficient conditions for the continuity in probability of the process Y. We also examine the effect of the regularity of the function H(t) on the local structure of the process. We show that under certain Hölder regularity conditions on the function H(t), the process Y is locally equivalent to a linear fractional stable motion process, in the sense of finite-dimensional distributions. We study Y by using a related α-stable random field and its partial derivatives.