We study a not necessarily symmetric random walk with interactions on ℤ, which extends the one-dimensional discrete version of the Wiener sausage path measure. We prove the existence of a repulsion/attraction phase transition at the critical value λc = −μ of the repulsion coefficient λ, where μ is a drift parameter. In the self-repellent case, we determine the escape speed as a function of λ and μ, and we prove a law of large numbers for the end-point.
We give an alternative proof of a point-process version of the FKG–Holley–Preston inequality which provides a sufficient condition for stochastic domination of probability measures, and for positive correlations of increasing functions.
Age replacement policies are commonly used to reduce the number of in-service failures. In this paper we define a multivariate version of such a policy and develop some of its desirable properties. We also obtain an optimal age replacement policy.
Recently, Asmussen and Koole (Journal of Applied Probability 30, pp. 365–372) showed that any discrete- or continuous-time marked point process can be approximated by a sequence of arrival streams modulated by finite-state continuous-time Markov chains. If the original process is customer (time) stationary then so are the approximating processes. Also, the moments in the stationary case converge. For discrete marked point processes we construct a sequence of discrete processes modulated by discrete-time finite-state Markov chains. All the above features of the approximating sequences of Asmussen and Koole continue to hold. For discrete arrival sequences (to a queue) which are modulated by a countable-state Markov chain we form a different sequence of approximating arrival streams by which, unlike in the Asmussen and Koole case, even the stationary moments of waiting times can be approximated. Explicit constructions for the output process of a queue and the total input process of a discrete-time Jackson network with these characteristics are obtained.
We take a fresh look at some transient characteristics of an M/M/∞ queue, studied previously by Guillemin and Simonian using delicate complex analysis. Along the way we obtain the Laplace transform of the joint distribution of the duration, number of arrivals and swept area associated with a busy period of an M/M/1 queue.
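The busy-period functionals mentioned above are easy to estimate by simulation. The following is a minimal sketch (not the transform analysis of the paper): it samples the duration, number of arrivals, and swept area of one M/M/1 busy period; the function name and parameters are illustrative.

```python
import random

def busy_period(lam, mu, rng):
    """Simulate one M/M/1 busy period (arrival rate lam, service rate mu).
    Returns (duration, number of arrivals, swept area), where the swept
    area is the integral of the queue length over the busy period."""
    t, n, arrivals, area = 0.0, 1, 0, 0.0
    while n > 0:
        dt = rng.expovariate(lam + mu)   # time to the next event
        area += n * dt
        t += dt
        if rng.random() < lam / (lam + mu):
            n += 1                        # an arrival joins the queue
            arrivals += 1
        else:
            n -= 1                        # a service completes
    return t, arrivals, area
```

As a sanity check, for λ = 0.5 and μ = 1 the mean duration over many busy periods should be close to 1/(μ − λ) = 2, with about λ/(μ − λ) = 1 arrival per busy period on average.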
Consider two systems, labeled system 1 and system 2, each with m components. Suppose component i in system k, k = 1, 2, is subjected to a sequence of shocks occurring randomly in time according to a non-explosive counting process {Γi(t), t > 0}, i = 1, …, m. Assume that Γ1, …, Γm are independent of Mk = (Mk,1, …, Mk,m), the number of shocks each component in system k can sustain without failure. Let Zk,i be the lifetime of component i in system k. We find conditions on the processes Γ1, …, Γm such that certain stochastic orders between M1 and M2 are transformed into stochastic orders between Z1 and Z2. Most results are obtained under the assumption that Γ1, …, Γm are independent Poisson processes, but some generalizations are possible, as can be seen from the proofs of the theorems.
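In the Poisson special case the lifetimes have a simple representation: if shocks arrive according to a Poisson process of rate λ and (taking failure at the m-th shock for concreteness) the component fails at shock number m, then the lifetime is the epoch of the m-th shock, i.e. a sum of m independent exponential interarrival times, a Gamma(m, λ) random variable. A minimal sketch with illustrative names:

```python
import random

def lifetime(m, rate, rng):
    """Lifetime of a component that fails at its m-th shock when shocks
    arrive according to a Poisson process of the given rate: the epoch
    of the m-th shock, i.e. a Gamma(m, rate) random variable."""
    return sum(rng.expovariate(rate) for _ in range(m))
```

For example, with m = 3 and rate 2 the mean lifetime over many samples should be close to m/λ = 1.5.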
The gating mechanism of a single ion channel is usually modelled by a continuous-time Markov chain with a finite state space, partitioned into two classes termed ‘open’ and ‘closed’. It is possible to observe only which class the process is in. A burst of channel openings is defined to be a succession of open sojourns separated by closed sojourns all having duration less than t0. Let N(t) be the number of bursts commencing in (0, t]. Then
are measures of the degree of temporal clustering of bursts. We develop two methods for determining the above measures. The first method uses an embedded Markov renewal process and remains valid when the underlying channel process is semi-Markov and/or brief sojourns in either the open or closed classes of state are undetected. The second method uses a ‘backward’ differential-difference equation.
The observed channel process when brief sojourns are undetected can be modelled by an embedded Markov renewal process, whose kernel is shown, by exploiting connections with bursts when all sojourns are detected, to satisfy a differential-difference equation. This permits a unified derivation of both exact and approximate expressions for the kernel, and leads to a thorough asymptotic analysis of the kernel as the length of undetected sojourns tends to zero.
We develop a technique for establishing statistical tests with precise confidence levels for upper bounds on the critical probability in oriented percolation. We use it to show that pc < 0.647 with 99.999967% confidence. Since Monte Carlo simulations suggest that pc ≈ 0.6445, this bound is fairly tight.
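The Monte Carlo ingredient of such bounds can be illustrated with a short simulation of oriented bond percolation, in which each site at level n has two bonds, to its two neighbours at level n + 1, each open with probability p. This is a sketch only (the paper's contribution is the rigorous confidence statement, not the simulation); names and parameters are illustrative.

```python
import random

def survives(p, depth, rng):
    """One sample of oriented bond percolation started from the origin:
    does the open cluster reach level `depth`?  A site at level n+1 is
    reached if some reached site at level n has an open bond to it."""
    reached = {0}
    for _ in range(depth):
        nxt = set()
        for x in reached:
            if rng.random() < p:
                nxt.add(x)       # bond to (x, n+1) is open
            if rng.random() < p:
                nxt.add(x + 1)   # bond to (x+1, n+1) is open
        if not nxt:
            return False
        reached = nxt
    return True

def survival_rate(p, depth=100, trials=2000, seed=1):
    """Fraction of independent samples surviving to the given depth."""
    rng = random.Random(seed)
    return sum(survives(p, depth, rng) for _ in range(trials)) / trials
```

Below pc the survival rate to a moderate depth is essentially zero; well above pc it is close to the percolation probability θ(p).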
We consider a regenerative queueing process that is (partially) generated by an embedded phase-type renewal process. We show that, under some specified conditions, a performance measure is an analytic function of the rate of the renewal process. We then develop several methods for deriving its Taylor polynomial in the renewal rate. These polynomials are asymptotically exact as the rate decreases, and are thus called light-traffic approximations of the performance measure. We show via examples that these new methods are not only more efficient than existing ones but also more versatile, owing to their general setting, which allows one to conduct perturbation analysis and to study transient behavior.
We show that if an input process ζ to a queue is asymptotically stationary in some sense, and satisfies condition AB together with some other natural conditions, then the output processes (w, ζ) and (w, q, ζ) are asymptotically stationary in the same sense. Here, w and q are the waiting-time and queue-length processes, respectively.
The dynamical aspects of single channel gating can be modelled by a Markov renewal process, with states aggregated into two classes corresponding to the receptor channel being open or closed, and with brief sojourns in either class not detected. This paper is concerned with the relation, for a given record, between the amount of time in which the channel appears to be open and the amount in which it is actually open, and with the difference in their proportions; this may be used to obtain information about the unobserved actual process from the observed one. Results on exponential families, with extensions, are applied to obtain the relevant generating functions and asymptotic normal distributions, including explicit forms for the parameters. Numerical results are given as illustrations in special cases.
To study the limiting behaviour of the random running-time of the FIND algorithm, the so-called FIND process was introduced by Grübel and Rösler [1]. In this paper an approach for determining the nth moment function is presented. Applied to the second moment this provides an explicit expression for the variance.
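For orientation, FIND is Hoare's quickselect, and its random running time is usually measured in key comparisons. A minimal sketch (illustrative names, distinct keys assumed; this is the algorithm itself, not the FIND process of Grübel and Rösler):

```python
import random

def find(xs, k, rng):
    """Hoare's FIND (quickselect): return the (k+1)-th smallest element
    of xs (0-based k) together with the number of key comparisons used,
    assuming the elements of xs are distinct."""
    comparisons = 0
    xs = list(xs)
    while True:
        pivot = xs[rng.randrange(len(xs))]
        comparisons += len(xs) - 1        # pivot compared with every other key
        smaller = [x for x in xs if x < pivot]
        if k < len(smaller):
            xs = smaller                   # target lies left of the pivot
        elif k == len(smaller):
            return pivot, comparisons      # pivot is the target
        else:
            k -= len(smaller) + 1          # target lies right of the pivot
            xs = [x for x in xs if x > pivot]
```

Averaging the comparison count over many runs, for the k-th smallest of n keys, approximates the mean of the running-time distribution whose limit the FIND process describes.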
We consider some important systems in reliability theory situated in a random environment, where shocks occur and cause component failure in a specific way. We study some appropriate coefficients, which play an important role in the reduction of our systems to a linear combination of parallel subsystems.
n applicants of similar qualification are on an interview list, and their salary demands are drawn from a known continuous distribution. Two managers, I and II, interview them one at a time. After each interview, manager I has the first opportunity to hire the current applicant, unless he has already hired one. If manager I declines, manager II may hire the applicant, unless he has already hired one. If both managers decline, they move on to the next applicant and lose the chance of hiring the current one; if one of them does hire the current applicant, the interviews continue until the other manager has also hired one. By the end of the process, each manager must have hired an applicant. In this paper, we first derive the optimal strategy for each manager, maximizing the probability that his hire demands a lower salary than the other manager's hire. We then derive an algorithm for computing manager II's winning probability when both managers play optimally. Finally, we show that manager II's winning probability is strictly increasing in n, is always less than c, and converges to c as n → ∞, where c = 0.3275624139… is a solution of the equation ln(2) + x ln(x) = x.
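The constant c can be checked numerically. Since f(x) = ln(2) + x ln(x) − x is strictly decreasing on (0, 1) (its derivative is ln x < 0 there), the root in that interval is unique and simple bisection on a bracketing interval suffices; this sketch merely verifies the quoted value.

```python
import math

def f(x):
    # Strictly decreasing on (0, 1), since f'(x) = ln(x) < 0 there.
    return math.log(2) + x * math.log(x) - x

def bisect(g, lo, hi, tol=1e-13):
    """Bisection for a root of g in [lo, hi]; assumes a sign change."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

c = bisect(f, 0.1, 0.5)   # f(0.1) > 0 > f(0.5), so the root is bracketed
```

The computed value agrees with 0.3275624139… to the digits quoted above.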
An aggregation technique for ‘nearly completely decomposable’ Markovian systems was proposed by Courtois [3]. It is an approximate method in many cases (except for some queueing networks), so the error between the exact and the approximate solution is an important problem. It is known that the error is O(ε), where ε is defined as the maximum coupling between aggregates. Some authors have developed techniques to obtain an O(ε^k) error with k > 1, while others have developed a technique called ‘bounded aggregation’. All these techniques use linear-algebra tools and do not exploit the fact that the steady-state probability vector represents the distribution of a random variable. In this work we propose a stochastic approach and give a method to obtain stochastic bounds on all possible Markovian approximations of the two main dynamics: short-term and long-term.
A spatial process is considered in which two general birth-death processes are linked by migration of individuals. We examine conditions for weak symmetry and regularity, and develop necessary and sufficient conditions for recurrence. The results are easily extended to the k-process case.
In this paper we describe how the joint large deviation properties of traffic streams are altered when the traffic passes through a shared buffer according to a FCFS service policy with stochastic service capacity. We also consider the stationary case, proving large deviation principles for the state of the system in equilibrium and for departures from an equilibrium system.
Recently Miyazawa and Taylor (1997) proposed a new class of queueing networks with batch arrival, batch service, and assemble-transfer features. In such networks customers arrive and are served in batches, and a batch may change size when it transfers from one node to another. Under the assumption of an additional arrival process at each node when it is empty, they obtain a simple product-form steady-state probability distribution, which is a (stochastic) upper bound for the original network. This paper shows that this class of networks possesses a set of non-standard partial balance equations, and it is demonstrated that the additional arrival process introduced by Miyazawa and Taylor is there precisely to satisfy these equations, i.e. it is necessary and sufficient not only for a product-form solution, but also for the partial balance equations to hold.
For Markov chains of M/G/1 type that are not skip-free to the left, the corresponding G matrix is shown to have a special structure, being determined by its first block row. An algorithm that takes advantage of this structure is developed for computing G. For non-skip-free M/G/1-type Markov chains, the algorithm significantly reduces the computational complexity of calculating the G matrix, compared with reblocking to a system that is skip-free to the left and then applying the usual iteration schemes to find G. A similar algorithm for calculating the R matrix for G/M/1-type Markov chains that are not skip-free to the right is also described.
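For orientation, in the skip-free-to-the-left case G is the minimal nonnegative solution of G = Σ_{k≥0} Ak G^k, where A0, A1, … are the repeating blocks of the transition matrix. The following is a sketch of the classical fixed-point iteration for this equation (not the specialized non-skip-free algorithm of the paper); names are illustrative.

```python
import numpy as np

def g_matrix(A, tol=1e-12, max_iter=10000):
    """Classical fixed-point iteration G <- sum_k A[k] @ G^k for an
    M/G/1-type chain that is skip-free to the left.  A is the list of
    block matrices A_0, A_1, A_2, ... from the repeating portion of
    the transition matrix; sum(A) must be stochastic.  Starting from
    G = 0, the iterates increase to the minimal nonnegative solution."""
    m = A[0].shape[0]
    G = np.zeros((m, m))
    for _ in range(max_iter):
        Gk = np.eye(m)            # running power G^k
        new = np.zeros((m, m))
        for Ak in A:
            new += Ak @ Gk
            Gk = Gk @ G
        if np.max(np.abs(new - G)) < tol:
            return new
        G = new
    return G
```

In the scalar case (m = 1) with A0 = 0.2, A1 = 0.2, A2 = 0.6, the equation g = 0.2 + 0.2g + 0.6g² has roots 1/3 and 1, and the iteration converges to the minimal root 1/3.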