The classical martingale characterizations of the Poisson process were obtained for point processes or for purely discontinuous martingales, i.e. under additional assumptions on the properties of the trajectories. Here our aim is to search for related characterizations without relying on properties of trajectories. In addition to a new martingale characterization, we present results based on conditional moments that jointly involve the past and the nearest future.
We exhibit solutions of Monge–Kantorovich mass transportation problems with constraints on the support of the feasible transportation plans and additional capacity restrictions. The Hoeffding–Fréchet inequalities are extended to bivariate distribution functions having fixed marginal distributions and satisfying additional constraints. Sharp bounds for various probabilistic functionals (e.g. L_p-distances, covariances, etc.) are given when the family of joint distribution functions has prescribed marginal distributions, satisfies restrictions on the support, and is bounded from above or below by other distribution functions.
In this paper, we are concerned with preservation properties of first and second order by an operator L representable in terms of a stochastic process Z with non-decreasing right-continuous paths. We introduce the derived operator D of L and the derived process V of Z in order to characterize the preservation of absolute continuity and convexity. To obtain different characterizations of the preservation of convexity, we introduce two kinds of duality, the first referring to the process Z and the second to the derived process V. We illustrate the preceding results by considering some examples of interest both in probability and in approximation theory, namely mixtures, centred subordinators, Bernstein polynomials and beta operators. In most of them, we find bidensities to describe the duality between the derived processes. A unified approach based on stochastic orders is given.
We will state a general version of Simpson's paradox, which corresponds to the loss of some dependence properties under marginalization. We will then provide conditions under which the paradox is avoided. Finally we will relate these Simpson-type paradoxes to some well-known paradoxes concerning the loss of ageing properties when the level of information changes.
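The classical numerical form of the paradox referred to above can be reproduced in a few lines: within each subgroup, treatment A has the higher success rate, yet after marginalizing over the subgroups the comparison reverses. The counts below are illustrative and are not taken from the paper.

```python
# Simpson's paradox: (successes, trials) per (treatment, subgroup); illustrative counts.
data = {
    ("A", "group 1"): (81, 87),
    ("A", "group 2"): (192, 263),
    ("B", "group 1"): (234, 270),
    ("B", "group 2"): (55, 80),
}

def rate(successes, trials):
    return successes / trials

# within each subgroup, A has the higher success rate
for group in ("group 1", "group 2"):
    ra, rb = rate(*data[("A", group)]), rate(*data[("B", group)])
    print(f"{group}: A = {ra:.2f}, B = {rb:.2f}, A better: {ra > rb}")

# marginalizing over the subgroups reverses the ordering
pooled = {t: tuple(map(sum, zip(data[(t, "group 1")], data[(t, "group 2")])))
          for t in ("A", "B")}
ra, rb = rate(*pooled["A"]), rate(*pooled["B"])
print(f"pooled : A = {ra:.2f}, B = {rb:.2f}, A better: {ra > rb}")
```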
For (μ, σ²) ≠ (0, 1) and 0 < z < ∞, we prove an inequality in which φ and Φ are, respectively, the p.d.f. and the c.d.f. of a standard normal random variable. This inequality is sharp in the sense that its right-hand side cannot be replaced by a larger quantity which depends only on μ and σ. In other words, for any given (μ, σ) ≠ (0, 1), the infimum, over 0 < z < ∞, of the left-hand side of the inequality is equal to the right-hand side. We also point out how this inequality arises in the context of defining individual bioequivalence.
The study of the distribution of the distance between words in a random sequence of letters is of interest in view of applications in genome sequence analysis. In this paper we give the exact probability distribution and the cumulative distribution function of the distance between two successive occurrences of a given word, and between the nth and the (n+m)th occurrences, under three models for the generation of the letters: i.i.d. with the same probability for each letter, i.i.d. with different probabilities, and a Markov process. The generating function and the first two moments are also given. The point of studying the distances instead of the counting process is that we obtain knowledge not only about the frequency of a word but also about its longitudinal distribution along the sequence.
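A minimal Monte Carlo sketch of the quantity studied above, under the simplest of the three models (i.i.d. equiprobable letters): it estimates the distribution of the distance between successive occurrences of a word, where distance is taken here as the difference between successive starting positions. The alphabet, word and sequence length are illustrative choices.

```python
import random
from collections import Counter

def occurrence_positions(seq, word):
    """Start positions of (possibly overlapping) occurrences of `word` in `seq`."""
    k = len(word)
    return [i for i in range(len(seq) - k + 1) if seq[i:i + k] == word]

random.seed(0)
alphabet = "ACGT"                                      # illustrative 4-letter alphabet
word = "ACG"
seq = "".join(random.choice(alphabet) for _ in range(200_000))

pos = occurrence_positions(seq, word)
dists = [b - a for a, b in zip(pos, pos[1:])]          # distances between successive occurrences
counts = Counter(dists)
print("empirical mean distance:", sum(dists) / len(dists))   # close to 4**3 = 64 here
for d in sorted(counts)[:10]:
    print(f"P(distance = {d:3d}) is approximately {counts[d] / len(dists):.4f}")
```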
Interest has been shown in Markovian sequences of geometric shapes. In most cases the equations for the invariant probability measures over shape space are extremely complicated and multidimensional. This paper deals with rectangles, which have a simple one-dimensional shape descriptor. We explore the invariant distributions of shape under a variety of randomised rules for splitting the rectangle into two sub-rectangles, together with numerous methods for selecting the next shape in the sequence. Many explicit results emerge. These help to fill a vacant niche in shape theory, whilst contributing, at the same time, new distributions on [0,1] and interesting examples of Markov processes or, in the language of another discipline, of stochastic dynamical systems.
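As a concrete illustration of the kind of Markov chain on shapes discussed above, the sketch below simulates one particular randomised splitting rule (cut the longer side at a uniformly chosen point and keep one of the two pieces at random) and estimates the resulting invariant distribution of the one-dimensional shape descriptor. The rule and the run lengths are illustrative choices, not the paper's full catalogue.

```python
import random

def step(s, rng):
    """One splitting step on the shape descriptor s = short side / long side in (0, 1]:
    the longer side (normalised to length 1) is cut at a uniform point, one piece is
    kept at random, and the other side keeps length s."""
    u = rng.random()
    piece = u if rng.random() < 0.5 else 1.0 - u
    return min(piece, s) / max(piece, s)

rng = random.Random(1)
s = 1.0                                   # start from a square
burn_in, n_samples = 1_000, 100_000
samples = []
for i in range(burn_in + n_samples):
    s = step(s, rng)
    if i >= burn_in:
        samples.append(s)

# crude histogram of the (approximate) invariant shape distribution on (0, 1]
bins = [0] * 10
for x in samples:
    bins[min(int(x * 10), 9)] += 1
for k, c in enumerate(bins):
    print(f"shape in ({k / 10:.1f}, {(k + 1) / 10:.1f}]: {c / n_samples:.3f}")
```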
We provide a probabilistic proof of Stein's factors based on properties of birth-and-death Markov chains, solving a tantalizing puzzle about using Markov chain knowledge to view the celebrated Stein–Chen method for Poisson approximation. This work complements that of Barbour (1988) for the case of Poisson random variable approximation.
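For orientation, the Stein–Chen method mentioned above controls the total variation distance between a sum of independent Bernoulli(p_i) variables and the Poisson(λ) law with λ = Σ p_i. The sketch below checks the classical bound (1 − e^{−λ}) λ^{−1} Σ p_i² numerically for an illustrative set of probabilities; this is the standard bound from the Stein–Chen literature, not a result specific to the paper.

```python
import math

def bernoulli_sum_pmf(ps):
    """Exact pmf of a sum of independent Bernoulli(p_i) variables, by convolution."""
    pmf = [1.0]
    for p in ps:
        new = [0.0] * (len(pmf) + 1)
        for k, q in enumerate(pmf):
            new[k] += q * (1.0 - p)
            new[k + 1] += q * p
        pmf = new
    return pmf

ps = [0.1, 0.05, 0.2, 0.02, 0.15, 0.08]                # illustrative success probabilities
lam = sum(ps)
pmf = bernoulli_sum_pmf(ps)
pois = [math.exp(-lam) * lam ** k / math.factorial(k) for k in range(len(pmf))]

# total variation distance: half the sum of absolute differences, plus half the
# Poisson mass lying beyond the support of the Bernoulli sum
tv = 0.5 * (sum(abs(a - b) for a, b in zip(pmf, pois)) + (1.0 - sum(pois)))
bound = (1.0 - math.exp(-lam)) / lam * sum(p * p for p in ps)
print(f"d_TV = {tv:.5f} <= Stein-Chen bound = {bound:.5f}")
```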
Let ζ be a Markov chain on a finite state space D, let f be a function from D to ℝ^d, and let S_n = ∑_{k=1}^n f(ζ_k). We prove an invariance theorem for S_n and derive an explicit expression for the limit covariance matrix. We give its exact value for p-reinforced random walks on ℤ^2 with p = 1, 2, 3.
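A Monte Carlo sketch of the limit covariance in this setting, for a small illustrative chain and a function f into ℝ^2: the sample covariance of S_n across independent runs, divided by n, approximates the covariance matrix appearing in the invariance theorem. The chain, f and the run lengths are illustrative choices, and the result is only an estimate.

```python
import random

states = [0, 1, 2]
P = {0: [0.5, 0.3, 0.2],                  # transition probabilities P[i][j]
     1: [0.2, 0.5, 0.3],
     2: [0.3, 0.3, 0.4]}
f = {0: (1.0, 0.0), 1: (0.0, 1.0), 2: (-1.0, -1.0)}    # f : D -> R^2

def run_sum(n, rng):
    """Return S_n = sum_{k=1}^n f(state_k) along one trajectory of the chain."""
    state = rng.choice(states)            # arbitrary initial state; washes out for large n
    s = [0.0, 0.0]
    for _ in range(n):
        state = rng.choices(states, weights=P[state])[0]
        s[0] += f[state][0]
        s[1] += f[state][1]
    return s

rng = random.Random(0)
n, reps = 1_000, 1_000
xs = [run_sum(n, rng) for _ in range(reps)]
means = [sum(x[i] for x in xs) / reps for i in range(2)]
# sample covariance of S_n over the replications, divided by n
cov = [[sum((x[i] - means[i]) * (x[j] - means[j]) for x in xs) / (reps * n)
        for j in range(2)] for i in range(2)]
print("estimated limit covariance matrix of S_n / sqrt(n):")
for row in cov:
    print([round(c, 3) for c in row])
```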
In a real (n-1)-dimensional affine space E, consider a tetrahedron T_0, i.e. the convex hull of n points α_1, α_2, …, α_n of E. Choose n independent points β_1, β_2, …, β_n randomly and uniformly in T_0, thus obtaining a new tetrahedron T_1 contained in T_0. Repeat the operation with T_1 instead of T_0, obtaining T_2, and so on. The sequence of the T_k shrinks to a point Y of T_0, and this note computes the distribution of the barycentric coordinates of Y with respect to (α_1, α_2, …, α_n) (Corollary 2.3). We also obtain the explicit distribution of Y in more general cases. The technique used is to reduce the problem to the study of a random walk on the semigroup of stochastic (n,n) matrices, and this note is a geometrical application of an earlier result of Chamayou and Letac (1994).
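A simulation sketch of the iterated construction above for n = 3: each matrix of barycentric coordinates of the new vertices has i.i.d. rows that are uniform on the simplex (Dirichlet(1, …, 1)), and the running product of these random stochastic matrices converges to a rank-one matrix whose common row gives the barycentric coordinates of Y. The iteration and replication counts are illustrative choices; only a symmetry check on the mean is reported.

```python
import random

def uniform_barycentric(n, rng):
    """Barycentric coordinates of a point uniform in an (n-1)-simplex, i.e. Dirichlet(1, ..., 1)."""
    e = [rng.expovariate(1.0) for _ in range(n)]
    s = sum(e)
    return [x / s for x in e]

def limit_point(n, iters, rng):
    """Approximate the barycentric coordinates of Y: the rows of B_k ... B_2 B_1 are the
    coordinates of the vertices of T_k with respect to T_0 and all converge to those of Y."""
    prod = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]   # identity
    for _ in range(iters):
        b = [uniform_barycentric(n, rng) for _ in range(n)]
        prod = [[sum(b[i][k] * prod[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]                                          # multiply on the left
    return prod[0]

rng = random.Random(42)
n, iters, reps = 3, 60, 2_000
ys = [limit_point(n, iters, rng) for _ in range(reps)]
mean = [sum(y[j] for y in ys) / reps for j in range(n)]
print("sample mean of the barycentric coordinates of Y:", [round(m, 3) for m in mean])
print("(by symmetry the exact mean is (1/3, 1/3, 1/3))")
```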
Let p_{α,θ} be the Linnik probability density with parameters (α, θ). The following problem is studied: let (α, θ) and (β, ϑ) be two points of the parameter domain PD. When is it possible to represent p_{β,ϑ} as a scale mixture of p_{α,θ}? A subset of the admissible pairs (α, θ), (β, ϑ) is described.
This paper deals with the existence of bivariate Fréchet optimal lower bounds for two sets of events, and provides a practical approach to finding such bounds. The main devices used are linear programming ideas coupled with the construction of probability spaces. The highlight of the paper is that perturbation terms in the optimization process, even when a tie occurs, are not necessary in this practical implementation.
Let X = (X_1, …, X_n) be a random binary vector with a known joint distribution P. The coordinates are to be inspected sequentially in order to determine whether X_i = 0 for every i, i = 1, …, n. We find bounds for the ratio of the expected number of coordinates inspected under the optimal and the greedy searching policies.
The inverse absorption distribution is shown to be a q-Pascal analogue of the Kemp and Kemp (1991) q-binomial distribution. The probabilities for the direct absorption distribution are obtained via the inverse absorption probabilities and exact expressions for its first two factorial moments are derived using q-series transformations of its probability generating function. Alternative models for the distribution are given.
Criteria are determined for the variance-to-mean ratio to be greater than one (over-dispersed) or less than one (under-dispersed). This is done for random variables which are functions of a Markov chain in continuous time, and for the counts in a simple point process on the line. The criteria for the Markov chain are in terms of the infinitesimal generator, and those for the point process are in terms of the conditional intensity. The examples discussed include a conjecture of Faddy (1994). The case of time-reversible point processes is particularly interesting, and here under-dispersion is not possible. In particular, point processes which arise from time-reversible, irreducible Markov chains with finitely many states are always over-dispersed.
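As a simple numerical companion to the over-dispersion criteria above, the sketch below simulates the counts of a two-state Markov-modulated Poisson process on an interval and estimates their variance-to-mean ratio; such doubly stochastic processes are standard examples of over-dispersed counts. The rates, horizon and replication number are illustrative choices, not constructions from the paper.

```python
import random

def count_points(T, rng, q=(1.0, 1.0), lam=(0.5, 3.0)):
    """Number of points in [0, T] of a Poisson process whose intensity lam[state] is
    modulated by a two-state continuous-time Markov chain with switching rates q."""
    t, state, count = 0.0, 0, 0
    while t < T:
        end = min(t + rng.expovariate(q[state]), T)    # time of the next state switch (or T)
        s = t
        while True:                                    # Poisson(lam[state]) points on [t, end)
            s += rng.expovariate(lam[state])
            if s >= end:
                break
            count += 1
        t, state = end, 1 - state
    return count

rng = random.Random(7)
T, reps = 20.0, 10_000
counts = [count_points(T, rng) for _ in range(reps)]
mean = sum(counts) / reps
var = sum((c - mean) ** 2 for c in counts) / (reps - 1)
print(f"mean = {mean:.2f}, variance = {var:.2f}, variance/mean = {var / mean:.2f} (> 1: over-dispersed)")
```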
In this paper we study an approximation of system reliability using one-step conditioning. It is shown that, without greatly increasing the computational complexity, the conditional method may be used instead of the usual minimal cut and minimal path bounds to obtain more accurate approximations and bounds. We also study the conditions under which the approximations are bounds on the reliability. Some further extensions are also presented.
The characterization of the exponential distribution via the coefficient of variation of the blocking time in a queueing system with an unreliable server, as given by Lin (1993), is improved by substantially weakening the conditions. Based on the coefficient of variation of certain random variables, including the blocking time, the normal service time and the minimum of the normal service and server failure times, two new characterizations of the exponential distribution are obtained.
We consider the convex ordering for random vectors and some weaker versions of it, such as the convex ordering for linear combinations of random variables. First we establish conditions for stochastic equality of random vectors that are ordered by one of the convex orderings. Then we establish necessary and sufficient conditions for the convex ordering to hold in the case of multivariate normal distributions, and sufficient conditions for the positive linear convex ordering (without the restriction to multivariate normality).
The Brownian density process is a Gaussian distribution-valued process. It can be defined either as a limit of a functional over a Poisson system of independent Brownian particles or as a solution of a stochastic partial differential equation with respect to a Gaussian martingale measure. We show that, with an appropriate change in the initial distribution of the infinite particle system, the limiting density process is non-Gaussian and solves a stochastic partial differential equation in which the initial measure and the driving measure are non-Gaussian, possibly having infinite second moments.
Consider two systems, labeled system 1 and system 2, each with m components. Suppose that component i in system k, k = 1, 2, is subjected to a sequence of shocks occurring randomly in time according to a non-explosive counting process {Γ_i(t), t > 0}, i = 1, …, m. Assume that Γ_1, …, Γ_m are independent of M_k = (M_{k,1}, …, M_{k,m}), the number of shocks each component in system k can sustain without failure. Let Z_{k,i} be the lifetime of component i in system k. We find conditions on the processes Γ_1, …, Γ_m such that some stochastic orders between M_1 and M_2 are transformed into some stochastic orders between Z_1 and Z_2. Most results are obtained under the assumption that Γ_1, …, Γ_m are independent Poisson processes, but some generalizations are possible, as can be seen from the proofs of the theorems.