A set-valued analog of the elementary renewal theorem for Minkowski sums of random closed sets is considered. The corresponding renewal function is defined as H(K) = Σ_{n≥1} P{S_n ⊆ K}, where S_n = A_1 ⊕ ⋯ ⊕ A_n are Minkowski (element-wise) sums of i.i.d. random compact convex sets. In this paper we determine the limit of H(tK)/t as t tends to infinity. For K containing the origin as an interior point, H(tK)/t → inf{h_K(u)/Eh_A(u) : u ∈ S_E}, where h_K(u) is the support function of K and S_E is the set of all unit vectors u with Eh_A(u) > 0. Other set-valued generalizations of the renewal function are also suggested.
In this paper we consider a family of product-form loss models, including loss networks (or circuit-switched communication networks) and a class of resource-sharing models. There can be multiple classes of requests for multiple resources. Requests arrive according to independent Poisson processes. The requests can be for multiple units in each resource (the multi-rate case, e.g. several circuits on a trunk). There can be upper-limit and guaranteed-minimum sharing policies as well as the standard complete-sharing policy. If all the requirements of a request cannot be met upon arrival, then the request is blocked and lost. We develop an algorithm for computing the (exact) steady-state blocking probability of each class and other steady state descriptions in these loss models. The algorithm is based on numerically inverting generating functions of the normalization constants. In a previous paper we introduced this approach to product-form models and developed a full algorithm for a class of closed queueing networks. The inversion algorithm promises to be even more useful for loss models than for closed queueing networks because fewer alternative algorithms are available for loss models. Indeed, for many loss models with sharing policies other than traditional complete sharing, our algorithm is the first effective algorithm. Unlike some recursive algorithms, our algorithm has a low storage requirement. To treat the loss models here, we derive the generating functions of the normalization constants and develop a new scaling algorithm especially tailored to the loss models. In general, the computational complexity grows exponentially in the number of resources, but the computation can often be reduced dramatically by exploiting conditional decomposition based on special structure and by appropriately truncating large finite sums. We illustrate our numerical inversion algorithm by applying it to several examples. To validate our algorithm on small models, we also develop a direct algorithm. The direct algorithm itself is of interest, because it tends to be more efficient when the number of resources is large, but the number of request classes is small. Furthermore, it also allows a form of conditional decomposition based on special structure.
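To make the product-form and normalization-constant structure concrete, the sketch below computes exact blocking probabilities for a toy two-resource, two-class model with complete sharing by brute-force enumeration of the feasible states. This is only in the spirit of the direct algorithm for very small models, not the numerical inversion algorithm of the paper, and the capacities, requirements and offered loads are made-up assumptions.

```python
# Minimal brute-force sketch (not the paper's inversion algorithm): exact
# blocking probabilities for a small multi-class, multi-resource loss model
# under complete sharing, computed directly from the product-form
# distribution and its normalization constant.  All numbers are illustrative.
from itertools import product
from math import factorial

capacities = [10, 12]              # units available in each resource
requirements = [[1, 2], [3, 1]]    # requirements[k][j]: units of resource j per class-k request
offered_loads = [4.0, 2.0]         # rho_k = arrival rate / holding rate for class k

def feasible(n):
    """Check whether the population vector n fits within every resource."""
    return all(sum(n[k] * requirements[k][j] for k in range(len(n))) <= capacities[j]
               for j in range(len(capacities)))

# Enumerate all feasible population vectors and their unnormalized weights
# rho_1^{n_1}/n_1! * ... * rho_K^{n_K}/n_K!.
max_counts = [min(capacities[j] // requirements[k][j]
                  for j in range(len(capacities)) if requirements[k][j] > 0)
              for k in range(len(offered_loads))]
states = [n for n in product(*(range(m + 1) for m in max_counts)) if feasible(n)]
weight = {n: 1.0 for n in states}
for n in states:
    for k, rho in enumerate(offered_loads):
        weight[n] *= rho ** n[k] / factorial(n[k])
G = sum(weight.values())           # normalization constant

# Class-k blocking probability: probability mass of the states in which one
# more class-k request would violate some capacity constraint (PASTA).
for k in range(len(offered_loads)):
    blocked = sum(weight[n] for n in states
                  if not feasible(tuple(n[i] + (i == k) for i in range(len(n)))))
    print(f"class {k}: blocking probability {blocked / G:.4f}")
```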
Consider a single-server queue with zero buffer. The arrival process is a three-level Markov-modulated Poisson process with an arbitrary transition matrix. The time the system remains at level i (i = 1, 2, 3) is exponentially distributed with rate cα_i. The arrival rate at level i is λ_i and the service time is exponentially distributed with rate μ_i. In this paper we first derive an explicit formula for the loss probability and then prove that it is decreasing in the parameter c. This proves a conjecture of Ross and Rolski's for a single-server queue with zero buffer.
In this paper, we develop mathematical machinery for verifying that a broad class of general state space Markov chains reacts smoothly to certain types of perturbations in the underlying transition structure. Our main result provides conditions under which the stationary probability measure of an ergodic Harris-recurrent Markov chain is differentiable in a certain strong sense. The approach is based on likelihood ratio ‘change-of-measure’ arguments, and leads directly to a ‘likelihood ratio gradient estimator’ that can be computed numerically.
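The likelihood-ratio idea itself is easy to illustrate in a finite-horizon toy setting (the paper's actual contribution concerns derivatives of stationary distributions of Harris-recurrent chains). In the sketch below, a two-state chain with a single transition-probability parameter θ is simulated and the gradient of a finite-horizon occupancy functional is estimated by weighting the payoff with the accumulated score; the chain and all numbers are assumptions.

```python
# Minimal finite-horizon sketch of a likelihood-ratio ('score function')
# gradient estimator, shown only to illustrate the change-of-measure idea; the
# paper treats derivatives of stationary distributions of Harris-recurrent
# chains.  The two-state chain and all numbers below are assumptions.
import random

def simulate_gradient(theta, horizon=200, replications=5000, seed=1):
    """Estimate d/dtheta E[ fraction of time in state 1 over `horizon` steps ]
    for a two-state chain with P(0->1) = theta and P(1->0) = 0.5."""
    rng = random.Random(seed)
    grad_sum = 0.0
    for _ in range(replications):
        state, score, occupancy = 0, 0.0, 0
        for _ in range(horizon):
            if state == 0:
                move = rng.random() < theta
                # d/dtheta log P(transition): 1/theta if we moved, -1/(1-theta) otherwise
                score += 1.0 / theta if move else -1.0 / (1.0 - theta)
                state = 1 if move else 0
            else:
                state = 0 if rng.random() < 0.5 else 1
            occupancy += state
        # payoff times accumulated score gives an unbiased gradient estimate
        grad_sum += (occupancy / horizon) * score
    return grad_sum / replications

print(simulate_gradient(0.3))
```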
We consider a family of M(t)/M(t)/1/1 loss systems with arrival and service intensities (λ_t(c), μ_t(c)) = (λ_{ct}, μ_{ct}), where (λ_t, μ_t) are governed by an irreducible Markov process with infinitesimal generator Q = (q_{ij})_{m×m} such that (λ_t, μ_t) = (λ_i, μ_i) when the Markov process is in state i. Based on matrix analysis we show that the blocking probability is decreasing in c in the interval [0, c∗], where c∗ = 1/max_i Σ_{j≠i} q_{ij}/(λ_i + μ_i). Two special cases are studied for which the result can be extended to all c. These results support Ross's conjecture that a more regular arrival (and service) process leads to a smaller blocking probability.
We consider a migration process whose singleton process is a time-dependent Markov replacement process. For the singleton process, which may be treated as either open or closed, we study the limiting distribution, the distribution of the time to replacement and related quantities. For a replacement process in equilibrium we obtain a version of Little's law and we provide conditions for reversibility. For the resulting linear population process we characterize exponential ergodicity for two types of environmental behaviour, i.e. either convergent or cyclic, and finally for large population sizes a diffusion approximation analysis is provided.
Formulas for the asymptotic failure rate, long-term average availability, and the limiting distribution of the number of long ‘outages’ are obtained for a general class of two-state reliability models for maintained systems. The results extend known formulas for alternating renewal processes to a wider class of point processes that includes sequences of dependent or non-identically distributed operating and repair times.
A trivariate stochastic process is considered, describing a sequence of random shocks {X_n} at random intervals {Y_n} with random system state {J_n}. The trivariate stochastic process satisfies a Markov renewal property in that the magnitude of shocks and the shock intervals are correlated pairwise and the corresponding joint distributions are affected by transitions of the system state which occur after each shock according to a Markov chain. Of interest is a system lifetime terminated whenever a shock magnitude exceeds a prespecified level z. The distribution of system lifetime, its moments and a related exponential limit theorem are derived explicitly. A similar transform analysis is conducted for a second type of system lifetime with system failures caused by the cumulative magnitude of shocks exceeding a fixed level z.
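To make the model concrete, the following Monte Carlo sketch simulates the first type of lifetime (time until the first shock magnitude exceeds z) for a two-state version of the process. The transition matrix, the state-dependent interval and magnitude distributions, and the particular way the pairwise correlation is induced are all illustrative assumptions, not taken from the paper.

```python
# Monte Carlo sketch of the shock model described above: state-dependent,
# pairwise-correlated (interval, magnitude) pairs with a Markov state change
# after each shock.  The two-state parameters and the bivariate distribution
# are illustrative assumptions.
import random

P = [[0.7, 0.3], [0.4, 0.6]]        # state transition matrix applied after each shock
interval_rate = [1.0, 0.5]          # rate of the exponential shock interval in each state
magnitude_scale = [0.8, 1.5]        # scale of the exponential part of the magnitude

def lifetime(z, rng):
    """First-passage lifetime: total time until a shock magnitude exceeds z."""
    state, t = 0, 0.0
    while True:
        y = rng.expovariate(interval_rate[state])                    # shock interval
        x = 0.5 * y + rng.expovariate(1.0 / magnitude_scale[state])  # correlated magnitude
        t += y
        if x > z:
            return t
        state = rng.choices([0, 1], weights=P[state])[0]             # state change after the shock

rng = random.Random(2)
samples = [lifetime(z=5.0, rng=rng) for _ in range(10000)]
print("estimated mean lifetime:", sum(samples) / len(samples))
```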
We consider scheduling a batch of jobs with stochastic processing times on single or parallel machines, with the objective of minimizing the expected holding costs. Preemption of jobs is allowed, and the holding costs of preempted jobs may depend on the stage of completion. We provide a new proof of the optimality of a Gittins priority rule for the single machine and use the same proof to show that the Gittins priority rule is nearly optimal for parallel machines.
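For intuition about the Gittins priority rule, the sketch below takes the simplest single-machine special case with exponential processing times and linear holding costs, where the Gittins index of a job reduces to c_i μ_i and the optimal order is the familiar cμ rule. The job data are made-up assumptions, and the expected-cost formula uses only the means of the processing times.

```python
# Minimal sketch for the single-machine special case with exponential
# processing times and linear holding costs, where the Gittins index of job i
# reduces to c_i * mu_i and the optimal policy is the (non-preemptive) c-mu
# rule.  The job data below are illustrative assumptions.
jobs = [                      # (holding cost rate c_i, service rate mu_i)
    (3.0, 1.0),
    (1.0, 2.0),
    (5.0, 0.5),
]

def expected_holding_cost(order):
    """Sum over jobs of c_i times the expected completion time of job i."""
    total, elapsed = 0.0, 0.0
    for c, mu in order:
        elapsed += 1.0 / mu          # expected processing time adds to every later completion
        total += c * elapsed
    return total

gittins_order = sorted(jobs, key=lambda job: job[0] * job[1], reverse=True)
print("c-mu order:", gittins_order, "expected cost:", expected_holding_cost(gittins_order))
print("FCFS order:", jobs, "expected cost:", expected_holding_cost(jobs))
```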
It is shown by means of several examples that probability metrics are a useful tool to study the asymptotic behaviour of (stochastic) recursive algorithms. The basic idea of this approach is to find a ‘suitable' probability metric which yields contraction properties of the transformations describing the limits of the algorithm. In order to demonstrate the wide range of applicability of this contraction method we investigate examples from various fields, some of which have already been analysed in the literature.
Let X(t) be a non-homogeneous birth and death process. In this paper we develop a general method of estimating bounds for the state probabilities for X(t), based on inequalities for the solutions of the forward Kolmogorov equations. Specific examples covered include simple estimates of Pr(X(t) < j | X(0) = k) for the M(t)/M(t)/N/0 and M(t)/M(t)/N queue-length processes.
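The state probabilities being bounded can themselves be obtained numerically for small N by integrating the forward Kolmogorov equations directly. The sketch below does this for an M(t)/M(t)/N/0 system with an assumed sinusoidal arrival rate; all rates, N, the horizon and the crude Euler scheme are illustrative assumptions.

```python
# Sketch: numerically integrate the forward Kolmogorov equations for an
# M(t)/M(t)/N/0 loss system to obtain transient state probabilities such as
# Pr(X(t) < j | X(0) = k).  The sinusoidal arrival rate, N, and the horizon
# are illustrative assumptions.
import math

N = 5                                     # number of servers, no waiting room
lam = lambda t: 3.0 + 2.0 * math.sin(t)   # time-dependent arrival rate
mu = lambda t: 1.0                        # per-server service rate

def state_probabilities(t_end, k=0, dt=1e-4):
    """Euler integration of p'_j = lam*p_{j-1} - (lam*1{j<N} + j*mu)*p_j + (j+1)*mu*p_{j+1}."""
    p = [0.0] * (N + 1)
    p[k] = 1.0                            # X(0) = k
    steps = int(t_end / dt)
    for s in range(steps):
        t = s * dt
        l, m = lam(t), mu(t)
        dp = [0.0] * (N + 1)
        for j in range(N + 1):
            out = (l if j < N else 0.0) + j * m       # probability flow out of state j
            dp[j] -= out * p[j]
            if j > 0:
                dp[j] += l * p[j - 1]                 # arrival moves j-1 -> j
            if j < N:
                dp[j] += (j + 1) * m * p[j + 1]       # service completion moves j+1 -> j
        p = [p[j] + dt * dp[j] for j in range(N + 1)]
    return p

p = state_probabilities(t_end=2.0, k=0)
j = 3
print("Pr(X(2) < %d | X(0) = 0) ~= %.4f" % (j, sum(p[:j])))
```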
This paper deals with a service system in which the processor must serve two types of impatient units. In the case of blocking, units of the first type leave the system whereas units of the second type enter a pool and wait to be processed later.
We develop an exhaustive analysis of the system, including the embedded Markov chain, the fundamental period and various classical stationary probability distributions. More specific performance measures, such as the number of lost customers and other quantities, are also considered. The mathematical analysis of the model is based on the theory of Markov renewal processes, on Markov chains of M/G/1 type and on expressions of the ‘Takács equation’ type.
Many applications of smoothed perturbation analysis lead to estimators with hazard rate functions of underlying distributions. A key assumption used in proving unbiasedness of the resulting estimator is that the hazard rate function be bounded, a restrictive assumption which excludes all distributions with finite support. Here, we prove through a simple example that this assumption can in fact be removed.
Let ψ(u) be the ruin probability in a risk process with initial reserve u, Poisson arrival rate β, claim size distribution B and premium rate p(x) at level x of the reserve. Let γ(x) be the non-zero solution of the local Lundberg equation β(∫ e^{γ(x)y} B(dy) − 1) = γ(x)p(x). It is shown that e^{I(u)}ψ(u), where I(u) = ∫_0^u γ(x) dx, is non-decreasing and that log ψ(u) ≈ –I(u) in a slow Markov walk limit. Though the results and conditions are of large deviations type, the proofs are elementary and utilize piecewise comparisons with standard risk processes with a constant p. Also simulation via importance sampling using local exponential change of measure defined in terms of the γ(x) is discussed and some numerical results are presented.
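As a numerical illustration of the local adjustment coefficient, the sketch below treats the special case of exponential claims, for which the local Lundberg equation above has the closed-form solution γ(x) = δ − β/p(x). The premium function p(x) and all numerical values are assumptions, and e^{−I(u)} is reported only as the rough order suggested by log ψ(u) ≈ −I(u).

```python
# Sketch of the local adjustment coefficient for exponential claims, where the
# local Lundberg equation beta*(B_hat[gamma] - 1) = p(x)*gamma has the closed
# form gamma(x) = delta - beta/p(x).  The premium function p(x) and all
# numerical values are illustrative assumptions.
import math

beta = 1.0                         # Poisson arrival rate of claims
delta = 2.0                        # exponential claim-size parameter (mean 1/delta)
p = lambda x: 1.0 + 0.1 * x        # level-dependent premium rate (positive loading everywhere)

def gamma(x):
    return delta - beta / p(x)

def I(u, n=10000):
    """I(u) = integral_0^u gamma(x) dx by the trapezoidal rule."""
    h = u / n
    xs = [i * h for i in range(n + 1)]
    return h * (sum(gamma(x) for x in xs) - 0.5 * (gamma(0.0) + gamma(u)))

for u in (1.0, 5.0, 10.0):
    print(f"u={u:5.1f}  I(u)={I(u):7.3f}  exp(-I(u))={math.exp(-I(u)):.3e}")
```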
We study a fundamental feature of the generalized semi-Markov processes (GSMPs), called event coupling. The event coupling reflects the logical behavior of a GSMP that specifies which events can be affected by any given event. Based on the event-coupling property, GSMPs can be classified into three classes: the strongly coupled, the hierarchically coupled, and the decomposable GSMPs. The event-coupling property on a sample path of a GSMP can be represented by the event-coupling trees. With the event-coupling tree, we can quantify the effect of a single perturbation on a performance measure by using realization factors. A set of equations that specifies the realization factors is derived. We show that the sensitivity of steady-state performance with respect to a parameter of an event lifetime distribution can be obtained by a simple formula based on realization factors and that the sample-path performance sensitivity converges to the sensitivity of the steady-state performance with probability one as the length of the sample path goes to infinity. This generalizes the existing results of perturbation analysis of queueing networks to GSMPs.
We consider a multiserver queueing process specified by i.i.d. interarrival time, batch size and service time sequences. In the case that different servers have different service time distributions we say the system is heterogeneous. In this paper we establish conditions for the queueing process to be characterized as a geometrically Harris recurrent Markov chain, and we characterize the stationary probabilities of large queue lengths and waiting times. The queue length is asymptotically geometric and the waiting time is asymptotically exponential. Our analysis is a generalization of the well-known characterization of the GI/G/1 queue obtained using classical probabilistic techniques of exponential change of measure and renewal theory.
Availability is an important characteristic of a system. Different types of availability are defined. For the case when a sequence of bivariate random variables of lifetime and repair time are i.i.d., certain properties have been established previously. In practice, however, we need to consider the situation where these bivariate random variables are independent but not identically distributed. Properties of two measures of availability for the i.i.d. case are extended to this more general case.
We consider M/G/1 queues with exhaustive service and generalized vacations, where at the end of every busy period the server either follows a mixed vacation policy from a given vacation policy set or stays idle. A simple recursive formula for the moments of the stationary waiting time is provided. This formula results in the decomposition property for our model immediately. It also enables us to derive many existing results for the M/G/1 queues with various vacation policies.
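As a concrete special case of the decomposition property, under the classical multiple-vacations policy the mean stationary waiting time splits into the ordinary M/G/1 (Pollaczek–Khinchine) term plus the mean stationary residual vacation. The short sketch below evaluates this first-moment formula for made-up parameters; the paper's recursion covers general mixed vacation policies and higher moments.

```python
# Sketch of the first-moment decomposition in the classical multiple-vacations
# special case: E[W] = lambda*E[S^2]/(2*(1-rho)) + E[V^2]/(2*E[V]), i.e. the
# M/G/1 waiting time plus a stationary residual vacation term.  The parameter
# values (exponential service, constant-length vacations) are assumptions.
lam = 0.6                     # arrival rate
ES, ES2 = 1.0, 2.0            # mean and second moment of service (exponential, mean 1)
EV, EV2 = 0.5, 0.25           # mean and second moment of the vacation (constant 0.5)

rho = lam * ES
wait_mg1 = lam * ES2 / (2.0 * (1.0 - rho))    # Pollaczek-Khinchine mean wait
residual_vacation = EV2 / (2.0 * EV)          # mean stationary residual vacation
print("E[W] =", wait_mg1 + residual_vacation)
```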
We continue our investigation of the batch-arrival heterogeneous multiserver queue begun in Part I. In a general setting we prove the positive Harris recurrence of the system, and with no additional conditions we prove logarithmic tail limits for the stationary queue length and waiting time distributions.
Stochastic models for the origin and extinction of species have been rather neglected in applied probability. As an alternative to modelling speciation and extinction as intrinsically random, I shall describe and show simulations of a rule-based model. This involves mathematical representations of notions such as genetic type of species, environmental niche, fitness of a species in a niche, and adaptation. There are underlying random mechanisms for changes of niche sizes and for disconnection and reconnection of geographical regions, and these ultimately drive the evolution of species.
Other approaches to mathematical modelling of evolution are briefly mentioned.