The Ginibre point process (GPP) is one of the main examples of determinantal point processes on the complex plane. It is a recurring distribution in random matrix theory as well as a useful model in applied mathematics. In this paper we briefly review the usual methods for the simulation of the GPP. Then we introduce a modified version of the GPP which constitutes a determinantal point process more suited to certain applications, and we detail its simulation. This modified GPP has a fixed number of points and is supported on a compact subset of the plane. See Decreusefond et al. (2013) for an extended version of this paper.
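One classical route to an approximate simulation uses the fact that the eigenvalues of a matrix with i.i.d. standard complex Gaussian entries form a determinantal point process approximating the GPP. A minimal sketch under that well-known connection (the function name is ours; this is not the modified algorithm of the paper):

```python
import numpy as np

def ginibre_eigenvalues(n, seed=None):
    """Sample the eigenvalues of an n x n Ginibre matrix.

    Entries are i.i.d. standard complex Gaussians (variance 1); for
    large n the eigenvalues approximate the Ginibre point process
    restricted to the disk of radius sqrt(n)."""
    rng = np.random.default_rng(seed)
    g = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    return np.linalg.eigvals(g)

pts = ginibre_eigenvalues(200, seed=0)
```

By the circular law the points approximately fill the disk of radius √n with intensity 1/π; the modified process of the paper instead fixes the number of points and the compact support exactly.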
Shot-noise processes are used in applied probability to model a variety of physical systems in, for example, teletraffic theory, insurance and risk theory, and the engineering sciences. In this paper we prove a large deviation principle for the sample-paths of a general class of multidimensional state-dependent Poisson shot-noise processes. The result covers previously known large deviation results for one-dimensional state-independent shot-noise processes with light tails. We use the weak convergence approach to large deviations, which reduces the proof to establishing the appropriate convergence of certain controlled versions of the original processes together with relevant results on existence and uniqueness.
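For concreteness, a one-dimensional state-independent special case, S(t) = Σ_{T_i ≤ t} h(t − T_i) with Poisson arrival times T_i and response function h, can be simulated directly (a toy sketch with our own naming, far simpler than the general state-dependent model of the paper):

```python
import math
import random

def shot_noise_path(lam, t_max, h, dt=0.01, seed=None):
    """Simulate S(t) = sum over Poisson arrivals T_i <= t of h(t - T_i)
    on a regular time grid over [0, t_max]."""
    rng = random.Random(seed)
    # Poisson(lam) arrival times on (0, t_max]
    arrivals, t = [], 0.0
    while True:
        t += rng.expovariate(lam)
        if t > t_max:
            break
        arrivals.append(t)
    grid = [k * dt for k in range(int(round(t_max / dt)) + 1)]
    path = [sum(h(s - ti) for ti in arrivals if ti <= s) for s in grid]
    return grid, path

# exponentially decaying shots, rate-2 arrivals
grid, path = shot_noise_path(lam=2.0, t_max=10.0, h=lambda u: math.exp(-u), seed=1)
```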
In this paper we consider an M/M/c queue modified to allow both mass arrivals when the system is empty and the workload to be removed. Properties of queues which terminate when the server becomes idle are first developed. Recurrence properties, equilibrium distribution, and equilibrium queue-size structure are studied for the case of resurrection and no mass exodus. All of these results are then generalized to allow for the removal of the entire workload. In particular, we obtain the Laplace transform of the transition probability for the absorptive M/M/c queue.
In this paper we analyze a tollbooth tandem queueing problem with an infinite number of servers. A customer starts service immediately upon arrival but cannot leave the system before all customers who arrived before him/her have left, i.e. customers depart the system in the same order as they arrive. Distributions of the total number of customers in the system, the number of departure-delayed customers in the system, and the number of customers in service at time t are obtained in closed form. Distributions of the sojourn times and departure delays of customers are also obtained explicitly. Both transient and steady state solutions are derived first for Poisson arrivals, and then extended to cases with batch Poisson and nonstationary Poisson arrival processes. Finally, we report several stochastic ordering results on how system performance measures are affected by arrival and service processes.
In this paper we describe a perfect simulation algorithm for the stable M/G/c queue. Sigman (2011) showed how to build a dominated coupling-from-the-past algorithm for perfect simulation of the super-stable M/G/c queue operating under first-come-first-served discipline. Sigman's method used a dominating process provided by the corresponding M/G/1 queue (using Wolff's sample path monotonicity, which applies when service durations are coupled in order of initiation of service). The method exploited the fact that the workload process for the M/G/1 queue remains the same under different queueing disciplines, in particular under the processor sharing discipline, for which a dynamic reversibility property holds. We generalise Sigman's construction to the stable case by comparing the M/G/c queue to a copy run under random assignment. This allows us to produce a naïve perfect simulation algorithm based on running the dominating process back to the time it first empties. We also construct a more efficient algorithm that uses sandwiching by lower and upper processes constructed as coupled M/G/c queues started respectively from the empty state and the state of the M/G/c queue under random assignment. A careful analysis shows that appropriate ordering relationships can still be maintained, so long as service durations continue to be coupled in order of initiation of service. We summarise statistical checks of simulation output, and demonstrate that the mean run-time is finite so long as the second moment of the service duration distribution is finite.
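The coupling-from-the-past idea underlying such algorithms can be illustrated, in miniature, on a finite monotone chain, where no dominating process is needed and plain Propp–Wilson coupling from the past applies (a toy sketch with our own names, far simpler than the M/G/c construction):

```python
import random

def cftp_sample(n_states, step, seed=None):
    """Propp-Wilson coupling from the past on states {0,...,n_states-1}.

    `step(state, u)` advances one state using common randomness u in [0,1).
    All starting states are run from time -T with the SAME random inputs,
    doubling T until they coalesce; the common value at time 0 is then an
    exact draw from the stationary distribution."""
    rng = random.Random(seed)
    us = []   # us[t] is the input used at time -(t+1), reused as T grows
    T = 1
    while True:
        while len(us) < T:
            us.append(rng.random())
        states = list(range(n_states))
        for t in range(T - 1, -1, -1):   # times -T, ..., -1
            states = [step(s, us[t]) for s in states]
        if len(set(states)) == 1:
            return states[0]
        T *= 2

# toy birth-death chain on {0,...,4}: up w.p. 0.3, down w.p. 0.5, else stay
def step(s, u):
    if u < 0.3:
        return min(s + 1, 4)
    if u < 0.8:
        return max(s - 1, 0)
    return s

sample = cftp_sample(5, step, seed=42)
```

Because `step` is monotone in `s` for fixed `u`, tracking all states is equivalent to sandwiching between the bottom and top paths, which is the same ordering device the M/G/c algorithm relies on.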
We study cyclic polling models with exhaustive service at each queue under a variety of non-FCFS (first-come-first-served) local service orders, namely last-come-first-served with and without preemption, random-order-of-service, processor sharing, multiclass priority scheduling with and without preemption, shortest-job-first, and shortest-remaining-processing-time. For each of these policies, we first express the waiting-time distributions in terms of intervisit-time distributions. Next, we use these expressions to derive the asymptotic waiting-time distributions under heavy-traffic assumptions, i.e. when the system tends to saturate. The results show that in all cases the asymptotic waiting-time distribution at queue i is fully characterized and of the form Γ Θi, with Γ and Θi independent, and where Γ is gamma distributed with known parameters (and the same for all scheduling policies). We derive the distribution of the random variable Θi, which explicitly expresses the impact of the local service order on the asymptotic waiting-time distribution. The results provide new fundamental insight into the impact of the local scheduling policy on the performance of a general class of polling models. The asymptotic results suggest simple closed-form approximations for the complete waiting-time distributions for stable systems with arbitrary load values.
Predictability of revenue and costs to both operators and users is critical for payment schemes. We study the issue of the design of payment schemes in networks with bandwidth sharing. The model we consider is a processor sharing system that is accessed by various classes of users with different processing requirements or file sizes. The users are charged according to a Vickrey–Clarke–Groves mechanism because of its efficiency and fairness when logarithmic utility functions are involved. Subject to a given mean revenue for the operator, we study whether it is preferable for a user to pay upon arrival, depending on the congestion level, or whether the user should opt to pay at the end. This leads to a study of the volatility of payment schemes and we show that opting for prepayment is preferable from a user point of view. The analysis yields new results on the asymptotic behavior of conditional response times for processor sharing systems and connections to associated orthogonal polynomials.
We present the first class of perfect sampling (also known as exact simulation) algorithms for the steady-state distribution of non-Markovian loss systems. We use a variation of dominated coupling from the past. We first simulate a stationary infinite server system backwards in time and analyze the running time in heavy traffic. In particular, we are able to simulate stationary renewal marked point processes in unbounded regions. We then use the infinite server system as an upper bound process to simulate the loss system. The running time analysis of our perfect sampling algorithm for loss systems is performed in the quality-driven (QD) and the quality-and-efficiency-driven (QED) regimes. In both cases, we show that our algorithm achieves subexponential complexity as both the number of servers and the arrival rate increase. Moreover, in the QD regime, our algorithm achieves a nearly optimal rate of complexity.
In this paper we consider the stationary PH/M/c queue with deterministic impatience times (PH/M/c+D). We show that the probability density function of the virtual waiting time takes the form of a matrix exponential whose exponent is given explicitly by system parameters.
In this paper we investigate the limiting behavior of the failure rate for the convolution of two or more life distributions. In a previous paper on mixtures, Block, Mi and Savits (1993) showed that the limiting behavior of the failure rate of a mixture is determined by that of its strongest component. We show a similar result here for convolutions. We also show by example that, unlike for a mixture population, the ultimate direction of monotonicity of the failure rate does not necessarily follow that of the strongest component.
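A concrete instance: for the convolution of two exponential lifetimes the failure rate is available in closed form, and it converges to the smaller rate, i.e. to the failure rate of the stronger (longer-lived) component. A quick numerical check (names are ours):

```python
import math

def hypoexp_hazard(t, l1, l2):
    """Failure rate of the convolution (sum) of Exp(l1) and Exp(l2), l1 != l2.

    Density:  f(t) = l1*l2/(l2-l1) * (exp(-l1*t) - exp(-l2*t))
    Survival: S(t) = (l2*exp(-l1*t) - l1*exp(-l2*t)) / (l2-l1)
    """
    f = l1 * l2 / (l2 - l1) * (math.exp(-l1 * t) - math.exp(-l2 * t))
    s = (l2 * math.exp(-l1 * t) - l1 * math.exp(-l2 * t)) / (l2 - l1)
    return f / s

# the hazard approaches min(l1, l2) = 1, the rate of the stronger component
rates = [hypoexp_hazard(t, 1.0, 3.0) for t in (1.0, 5.0, 20.0)]
```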
This paper is concerned with the bottom-up hierarchical system and public debate model proposed by Galam (2008), as well as a spatial version of the public debate model. In all three models, there is a population of individuals who are characterized by one of two competing opinions, say opinion −1 and opinion +1. This population is further divided into groups of common size s. In the bottom-up hierarchical system, each group elects a representative candidate, whereas in the other two models, all the members of each group discuss at random times until they reach a consensus. At each election/discussion, the winning opinion is chosen according to Galam's majority rule: the opinion with the majority of representatives wins when there is a strict majority, while one opinion, say opinion −1, is chosen by default in the case of a tie. For the public debate models we also consider the following natural updating rule that we call the proportional rule: the winning opinion is chosen at random with a probability equal to the fraction of its supporters in the group. The three models differ in terms of their population structure: in the bottom-up hierarchical system, individuals are located on a finite regular tree, in the nonspatial public debate model, they are located on a complete graph, and in the spatial public debate model, they are located on the d-dimensional regular lattice. For the bottom-up hierarchical system and the nonspatial public debate model, Galam studied the probability that a given opinion wins under the majority rule, assuming that individuals' opinions are initially independent, so that the initial number of supporters of a given opinion is a binomial random variable. The first objective of this paper is to revisit Galam's result, assuming that the initial number of individuals in favor of a given opinion is a fixed deterministic number.
Our analysis reveals phase transitions that are sharper under our assumption than under Galam's assumption, particularly with small population size. The second objective is to determine whether both opinions can coexist at equilibrium for the spatial public debate model under the proportional rule, which depends on the spatial dimension.
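The two updating rules are simple to state in code (a toy sketch with our own names; opinions are coded ±1, and ties go to −1 as in the majority rule above):

```python
import random

def majority_rule(group):
    """Galam's majority rule: the opinion with a strict majority wins;
    in the case of a tie, opinion -1 wins by default."""
    return +1 if sum(group) > 0 else -1

def proportional_rule(group, rng):
    """Proportional rule: the winning opinion is drawn at random with
    probability equal to the fraction of its supporters in the group."""
    share = group.count(+1) / len(group)
    return +1 if rng.random() < share else -1

winner = majority_rule([+1, +1, -1, -1])   # a tie in a group of size s = 4
```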
This paper is concerned with the Birnbaum importance measure of a component in a binary coherent system. A representation for the Birnbaum importance of a component is obtained when the system consists of exchangeable dependent components. The results are closely related to the concept of the signature of a coherent system. Some examples are presented to illustrate the results.
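For reference, the Birnbaum importance of component i is I_B(i) = P(system works | i works) − P(system works | i fails). The following sketch evaluates it by enumeration for independent components (an illustration of the definition only; the paper's results concern exchangeable dependent components):

```python
from itertools import product

def birnbaum_importance(structure, p, i):
    """Birnbaum importance of component i in a binary coherent system
    with independent components of reliabilities p[0..n-1], computed
    by enumerating the states of the other components."""
    def reliability(pinned):
        free = [j for j in range(len(p)) if j != i]
        total = 0.0
        for bits in product((0, 1), repeat=len(free)):
            state = [0] * len(p)
            state[i] = pinned           # component i forced up or down
            prob = 1.0
            for j, b in zip(free, bits):
                state[j] = b
                prob *= p[j] if b else 1.0 - p[j]
            total += prob * structure(state)
        return total
    return reliability(1) - reliability(0)

def two_out_of_three(x):
    """Structure function of a 2-out-of-3 system."""
    return 1 if sum(x) >= 2 else 0

ib = birnbaum_importance(two_out_of_three, [0.9, 0.8, 0.7], 0)
```

For the 2-out-of-3 system above, I_B(0) = P(at least one of the others works) − P(both others work) = 0.94 − 0.56 = 0.38.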
Consider an absolutely continuous distribution F on [0, ∞) with finite mean μ and hazard rate function h(t) ≤ b for all t. For bμ close to 1, we would expect F to be approximately exponential. In this paper we obtain sharp bounds for the Kolmogorov distance between F and an exponential distribution with mean μ, as well as between F and an exponential distribution with failure rate b. We apply these bounds to several examples. Applications are presented to geometric convolutions, birth and death processes, first-passage times, and decreasing mean residual life distributions.
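The distance in question is the Kolmogorov (uniform) distance between distribution functions, sup_t |F(t) − G(t)|. A grid-based numerical evaluation for a concrete F, a Gamma(2) lifetime whose hazard β²t/(1 + βt) is bounded by b = β, against the exponential with the same mean (names are ours):

```python
import math

def kolmogorov_distance(F, G, t_max, n=100000):
    """Approximate sup_t |F(t) - G(t)| on a fine grid over [0, t_max]."""
    return max(abs(F(k * t_max / n) - G(k * t_max / n)) for k in range(n + 1))

beta = 2.0
mu = 2.0 / beta   # mean of the Gamma(2, beta) distribution
F = lambda t: 1.0 - (1.0 + beta * t) * math.exp(-beta * t)   # Gamma(2, beta) cdf
G = lambda t: 1.0 - math.exp(-t / mu)                        # exponential, mean mu
d = kolmogorov_distance(F, G, t_max=20.0)
```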
We consider a production-inventory model operating in a stochastic environment that is modulated by a finite state continuous-time Markov chain. When the inventory level reaches zero, an order is placed from an external supplier. The costs (purchasing and holding costs) are modulated by the state at the order epoch time. Applying a matrix analytic approach, fluid flow techniques, and martingales, we develop methods to obtain explicit equations for these cost functionals in the discounted case and under the long-run average criterion. Finally, we extend the model to allow backlogging.
The goal of this paper is to identify exponential convergence rates and to find computable bounds for them for Markov processes representing unreliable Jackson networks. First, we use the bounds of Lawler and Sokal (1988) in order to show that, for unreliable Jackson networks, the spectral gap is strictly positive if and only if the spectral gaps for the corresponding coordinate birth and death processes are positive. Next, utilizing some results on birth and death processes, we find bounds on the spectral gap for network processes in terms of the hazard and equilibrium functions of the one-dimensional marginal distributions of the stationary distribution of the network. In this case these distributions must be strongly light-tailed, in the sense that their discrete hazard functions are separated from 0. We relate these hazard functions to the corresponding networks' service rate functions using the equilibrium rates of the stationary one-dimensional marginal distributions. Finally, we compare the bounds obtained on the spectral gap with some other known bounds.
In this paper we use the Siegert formula to derive alternative expressions for the moments of the first passage time of the Ornstein-Uhlenbeck process through a constant threshold. The expression for the nth moment is recursively linked to the lower-order moments and consists of only n terms. These compact expressions can substantially facilitate (numerical) applications, even for higher-order moments.
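Such moment expressions can be checked against crude Monte Carlo: simulate the Ornstein-Uhlenbeck dynamics dX_t = −θX_t dt + σ dW_t by Euler-Maruyama and record the first time the path exceeds the threshold (a rough sketch with our own parameter names; discretization and censoring bias are ignored):

```python
import math
import random

def ou_first_passage(theta, sigma, x0, threshold, dt=1e-3, t_max=50.0, seed=None):
    """Euler-Maruyama sample of the first passage time of
    dX = -theta*X dt + sigma dW through a constant threshold,
    censored at t_max."""
    rng = random.Random(seed)
    x, t = x0, 0.0
    noise_scale = sigma * math.sqrt(dt)
    while t < t_max:
        if x >= threshold:
            return t
        x += -theta * x * dt + noise_scale * rng.gauss(0.0, 1.0)
        t += dt
    return t_max   # censored path

# 200 sample first-passage times from x0 = 0 to threshold 1
times = [ou_first_passage(1.0, 1.0, 0.0, 1.0, seed=7 + k) for k in range(200)]
mean_fpt = sum(times) / len(times)
```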
In many-server systems it is crucial to staff the right number of servers so that targeted service levels are met. These staffing problems typically lead to constraint satisfaction problems that are difficult to solve. During the last decade, a powerful many-server asymptotic theory has been developed to solve such problems and optimal staffing rules are known to obey the square-root staffing principle. In this paper we develop many-server asymptotics in the so-called quality-and-efficiency-driven (QED) regime, and present refinements to many-server asymptotics and square-root staffing for a Markovian queueing model with admission control and retrials.
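The square-root staffing principle itself is one line: with offered load R = λ/μ, staff c = ⌈R + β√R⌉ servers, where the quality-of-service parameter β is tuned to the target service level (a generic illustration of the principle, not the paper's refined rule):

```python
import math

def square_root_staffing(arrival_rate, service_rate, beta):
    """Square-root staffing: with offered load R = lambda/mu,
    staff c = ceil(R + beta * sqrt(R)) servers."""
    R = arrival_rate / service_rate
    return math.ceil(R + beta * math.sqrt(R))

# offered load R = 100 and beta = 1.5 gives c = ceil(100 + 15) = 115
c = square_root_staffing(arrival_rate=100.0, service_rate=1.0, beta=1.5)
```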
In this paper a stochastic failure model for a system with stochastically dependent competing failures is analyzed. The system is subject to two types of failure: degradation failure and catastrophic failure. Both types of failure share an initial common source: an external shock process. This implies that they are stochastically dependent. In our development of the model, the type of dependency between the two kinds of failure will be characterized. Conditional properties of the two competing risks are also investigated. These properties are the fundamental basis for the development of the maintenance strategy studied in this paper. Considering this maintenance strategy, the long-run average cost rate is derived and the optimal maintenance policy is discussed.
Burn-in is a method of ‘elimination’ of initial failures (infant mortality). In the conventional burn-in procedures, to burn-in a component or a system means to subject it to a fixed time period of simulated use prior to actual operation. Those which fail during the burn-in procedure are scrapped, and only those which survive it are considered to be of satisfactory quality. Thus, in this case, the only information used for the elimination procedure is the lifetime of the corresponding item. In this paper we consider a new burn-in procedure which additionally employs a dependent covariate process in the elimination procedure. Through comparison with the conventional burn-in procedure, we show that the new burn-in procedure is preferable under commonly satisfied conditions. The problem of determining the optimal burn-in parameters is also considered and the properties of the optimal parameters are derived. A numerical example is provided to illustrate the theoretical results obtained in this paper.