We introduce a new 1-dependent percolation model to describe and analyze the spread of an epidemic on a general directed and locally finite graph. We assign a two-dimensional random weight vector to each vertex of the graph in such a way that the weights of different vertices are independent and identically distributed, but the two entries of the vector assigned to a vertex need not be independent. The probability for an edge to be open depends on the weights of its end vertices, but, conditionally on the weights, the states of the edges are independent of each other. In an epidemiological setting, the vertices of a graph represent the individuals in a (social) network and the edges represent the connections in the network. The weights assigned to an individual denote its (random) infectivity and susceptibility, respectively. We show that one can bound the percolation probability and the expected size of the cluster of vertices that can be reached by an open path starting at a given vertex from above by the corresponding quantities for independent bond percolation with a certain density; this generalizes a result of Kuulasmaa (1982). Many models in the literature are special cases of our general model.
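The model described above can be sampled directly. In the sketch below, the edge-opening rule 1 − exp(−infectivity·susceptibility) is a hypothetical choice for illustration only; the model merely requires that the opening probability depend on the weights of the edge's end vertices, with edge states conditionally independent given the weights.

```python
import math
import random

random.seed(0)

def open_prob(infectivity_u, susceptibility_v):
    # Hypothetical functional form; the model only requires that the
    # probability depend on the weights of the edge's end vertices.
    return 1.0 - math.exp(-infectivity_u * susceptibility_v)

def percolate(edges, weights):
    """Draw edge states: conditionally on the vertex weights, the
    states of different edges are independent."""
    return {(u, v) for (u, v) in edges
            if random.random() < open_prob(weights[u][0], weights[v][1])}

def open_cluster(start, open_edges):
    """Vertices reachable from `start` along a directed open path."""
    adj = {}
    for (u, v) in open_edges:
        adj.setdefault(u, []).append(v)
    seen, stack = {start}, [start]
    while stack:
        for v in adj.get(stack.pop(), []):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen
```

On a complete directed graph with i.i.d. weight vectors, `open_cluster(0, percolate(edges, weights))` is the set of individuals that vertex 0 can infect; its size is the cluster quantity bounded in the abstract.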
We consider several versions of the job assignment problem for an M/M/m queue with servers of different speeds. When there are two classes of customers, primary and secondary, the number of secondary customers is infinite, and idling is not permitted, we develop an intuitive proof that the optimal policy that minimizes the mean waiting time has a threshold structure. That is, for each server, there is a server-dependent threshold such that a primary customer will be assigned to that server if and only if the queue length of primary customers meets or exceeds the threshold. Our key argument can be generalized to extend the structural result to models with impatient customers, discounted waiting time, batch arrivals and services, geometrically distributed service times, and a random environment. We show how to compute the optimal thresholds, and study the impact of heterogeneity in server speeds on mean waiting times. We also apply the same machinery to the classical slow-server problem without secondary customers, and obtain more general results for the two-server case and strengthen existing results for more than two servers.
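The threshold structure amounts to a one-line routing rule. In the sketch below, the tie-break among eligible servers (prefer the fastest) is an illustrative choice, not part of the structural result; the optimal threshold values themselves depend on the model parameters.

```python
def assign_primary(primary_queue_len, idle_servers):
    """Threshold policy: `idle_servers` is a list of (speed, threshold)
    pairs. A primary customer may be assigned to a server if and only
    if the primary queue length meets or exceeds that server's
    threshold; among eligible servers we route to the fastest."""
    eligible = [s for s in idle_servers if primary_queue_len >= s[1]]
    return max(eligible)[0] if eligible else None
```

When no threshold is met, the primary customer waits (idling of servers on secondary work is what makes this consistent with the no-idling assumption: an unassigned server serves a secondary customer).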
Consider a continuous-time Markov process with transition rates matrix Q in the state space Λ ∪ {0}. In the associated Fleming-Viot process N particles evolve independently in Λ with transition rates matrix Q until one of them attempts to jump to state 0. At this moment the particle jumps to one of the positions of the other particles, chosen uniformly at random. When Λ is finite, we show that the empirical distribution of the particles at a fixed time converges as N → ∞ to the distribution of a single particle at the same time conditioned on not touching {0}. Furthermore, the empirical profile of the unique invariant measure for the Fleming-Viot process with N particles converges as N → ∞ to the unique quasistationary distribution of the one-particle motion. A key element of the approach is to show that the two-particle correlations are of order 1/N.
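A minimal Gillespie-style simulation of this particle system is easy to write down; the two-state Λ and the specific rates in the usage note below are hypothetical. The key step is the redistribution move: a particle attempting to jump to 0 is instead placed at the position of a uniformly chosen other particle.

```python
import random
from collections import Counter

random.seed(1)

def fleming_viot(rates, n_particles, t_end):
    """`rates[x]` maps each state x in Lambda to {y: q_xy}, with the
    key 'absorb' giving the rate q_x0 of attempting to jump to 0.
    Returns the empirical distribution of the particles at time t_end."""
    particles = [min(rates)] * n_particles        # all start in one state
    out_rate = {x: sum(r.values()) for x, r in rates.items()}
    t = 0.0
    while True:
        total = sum(out_rate[p] for p in particles)
        t += random.expovariate(total)
        if t > t_end:
            break
        # choose the moving particle proportionally to its jump rate
        u = random.random() * total
        for i, p in enumerate(particles):
            u -= out_rate[p]
            if u <= 0:
                break
        # choose its destination proportionally to the individual rates
        v = random.random() * out_rate[particles[i]]
        for dest, q in rates[particles[i]].items():
            v -= q
            if v <= 0:
                break
        if dest == 'absorb':
            j = random.randrange(n_particles - 1)  # uniform other particle
            particles[i] = particles[j if j < i else j + 1]
        else:
            particles[i] = dest
    counts = Counter(particles)
    return {x: counts[x] / n_particles for x in rates}
```

With, say, Λ = {1, 2} and rates = {1: {2: 1.0, 'absorb': 0.5}, 2: {1: 1.0, 'absorb': 0.1}}, the returned empirical profile approximates, for large N and large t_end, the quasistationary distribution of the one-particle motion.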
We consider the level hitting times τy = inf{t ≥ 0 | Xt = y} and the running maximum process Mt = sup{Xs | 0 ≤ s ≤ t} of a growth-collapse process (Xt)t≥0, defined as a [0, ∞)-valued Markov process that grows linearly between random ‘collapse’ times at which downward jumps with state-dependent distributions occur. We show how the moments and the Laplace transform of τy can be determined in terms of the extended generator of Xt, and give a power series expansion of the reciprocal of E[e−sτy]. We prove asymptotic results for τy and Mt: for example, if m(y) = E[τy] is of rapid variation then Mt / m−1(t) → 1 weakly as t → ∞, where m−1 is the inverse function of m, while if m(y) is of regular variation with index a ∈ (0, ∞) and Xt is ergodic, then Mt / m−1(t) converges weakly to a Fréchet distribution with exponent a. In several special cases we provide explicit formulae.
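For a concrete special case one can sample τy directly. The sketch below assumes unit growth speed, collapse intensity proportional to the current level, and multiplicative Uniform(0, 1) collapses; all three are illustrative choices within the class of processes the abstract allows.

```python
import random

random.seed(2)

def sample_tau(y, rate=1.0, max_collapses=10**6):
    """One sample of tau_y = inf{t : X_t = y} for X started at 0 with
    unit growth speed, collapses at state-dependent intensity rate*X_t,
    and collapse jumps X -> U*X with U ~ Uniform(0, 1)."""
    x, t = 0.0, 0.0
    for _ in range(max_collapses):
        # Time s to the next collapse: invert the cumulative hazard
        # H(s) = rate * (x*s + s**2/2) at an Exp(1) level e.
        e = random.expovariate(1.0)
        s = -x + (x * x + 2.0 * e / rate) ** 0.5
        if x + s >= y:              # level y is reached before collapsing
            return t + (y - x)
        t += s
        x = (x + s) * random.random()
    raise RuntimeError("level not reached within max_collapses collapses")
```

Averaging such samples estimates m(y) = E[τy]; since the process starts at 0 and grows at unit speed, every sample satisfies τy ≥ y.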
Brown (1980), (1981) proved that the renewal function is concave if the interrenewal distribution is DFR (decreasing failure rate), and conjectured the converse. This note settles Brown's conjecture with a class of counterexamples. We also give a short proof of Shanthikumar's (1988) result that the DFR property is closed under geometric compounding.
We consider the first passage percolation problem on the random graph with vertex set ℕ × {0, 1}, edges joining vertices at a Euclidean distance equal to unity, and independent exponential edge weights. We provide a central limit theorem for the first passage times ln between the vertices (0, 0) and (n, 0), thus extending earlier results about the almost-sure convergence of ln / n as n → ∞. We use generating function techniques to compute the n-step transition kernels of a closely related Markov chain, which can be used to explicitly calculate the asymptotic variance in the central limit theorem.
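Once the exponential edge weights are drawn, the first passage time ln is simply a shortest-path computation on the ladder graph; a minimal Dijkstra sketch:

```python
import heapq
import random

random.seed(3)

def first_passage_time(n):
    """First passage time l_n from (0, 0) to (n, 0) on the ladder graph
    with vertices {0..n} x {0, 1}, horizontal edges and vertical rungs,
    each edge carrying an independent Exp(1) weight."""
    def nbrs(vtx):
        i, j = vtx
        out = [(i, 1 - j)]                      # vertical rung
        if i > 0:
            out.append((i - 1, j))
        if i < n:
            out.append((i + 1, j))
        return out

    weights = {}
    def weight(u, v):
        key = (min(u, v), max(u, v))
        if key not in weights:                  # draw each edge once
            weights[key] = random.expovariate(1.0)
        return weights[key]

    dist = {(0, 0): 0.0}
    pq = [(0.0, (0, 0))]
    while pq:
        d, u = heapq.heappop(pq)
        if u == (n, 0):
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v in nbrs(u):
            nd = d + weight(u, v)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
```

Averaging `first_passage_time(n) / n` over independent draws illustrates the almost-sure convergence of ln / n that the central limit theorem refines.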
Web servers have to be protected against overload since overload can lead to a server breakdown, which in turn causes high response times and low throughput. In this paper, a stochastic model for breakdowns of server systems due to overload is proposed and an admission control policy which protects Web servers by controlling the amount and rate of work entering the system is studied. Requests from the clients arrive at the server following a nonhomogeneous Poisson process and each requested job takes a random time to be completed. It is assumed that the breakdown rate of the server depends on the number of jobs which are currently being performed by the server. Based on the proposed model, the reliability function and the breakdown rate function of the server system are derived. Furthermore, the long-run expected number of jobs completed per unit time is derived as the efficiency measure, and the optimal admission control policy which maximizes the efficiency is discussed.
In this paper we consider a single-server queue with Lévy input and, in particular, its workload process (Qt)t≥0, focusing on its correlation structure. With the correlation function defined as r(t) := cov(Q0, Qt) / var Q0 (assuming that the workload process is in stationarity at time 0), we first study its transform ∫0∞ r(t)e−ϑt dt, both for when the Lévy process has positive jumps and when it has negative jumps. These expressions allow us to prove that r(·) is positive, decreasing, and convex, relying on the machinery of completely monotone functions. For the light-tailed case, we estimate the behavior of r(t) for large t. We then focus on techniques to estimate r(t) by simulation. Naive simulation techniques require roughly (r(t))−2 runs to obtain an estimate of a given precision, but we develop a coupling technique that leads to substantial variance reduction (the required number of runs being roughly (r(t))−1). If this is augmented with importance sampling, it even leads to a logarithmically efficient algorithm.
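To make the simulation discussion concrete, here is the naive baseline estimator, for an illustrative compound Poisson input (Exp(μ) jobs arriving at rate λ) with unit service drift, approximated on a discrete time grid after a burn-in; the coupling and importance-sampling refinements of the paper are not shown.

```python
import random

random.seed(4)

def workload_pair(t, dt=0.02, lam=0.5, mu=1.0, burn=60.0):
    """Approximate (Q_0, Q_t) for the stationary workload of a queue
    fed by compound Poisson input, via Euler stepping after a burn-in."""
    q, clock, q0 = 0.0, -burn, None
    while clock < t:
        if random.random() < lam * dt:        # arrival in this slot
            q += random.expovariate(mu)
        q = max(0.0, q - dt)                  # deplete at unit rate
        clock += dt
        if q0 is None and clock >= 0.0:
            q0 = q
    return q0, q

def naive_r(t, runs=200):
    """Naive Monte Carlo estimate of r(t) = cov(Q_0, Q_t)/var(Q_0);
    as the abstract notes, a given relative precision needs roughly
    (r(t))**-2 such runs."""
    pairs = [workload_pair(t) for _ in range(runs)]
    m0 = sum(p[0] for p in pairs) / runs
    mt = sum(p[1] for p in pairs) / runs
    cov = sum((p[0] - m0) * (p[1] - mt) for p in pairs) / runs
    var = sum((p[0] - m0) ** 2 for p in pairs) / runs
    return cov / var
```

Because independent runs are used for each pair, the estimator's relative error blows up as r(t) → 0, which is precisely what motivates the coupling construction.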
In extreme shock models, only the impact of the current, possibly fatal shock is usually taken into account, whereas in cumulative shock models, the impact of the preceding shocks is accumulated as well. A shock model which combines these two types is called a ‘combined shock model’. In this paper we study new classes of extreme shock models and, based on the obtained results and model interpretations, we extend these results to several specific combined shock models. For systems subject to nonhomogeneous Poisson processes of shocks, we derive the corresponding survival probabilities and discuss some meaningful interpretations and examples.
Optimal control of stochastic bandwidth-sharing networks is typically difficult. In order to facilitate the analysis, deterministic analogues of stochastic bandwidth-sharing networks, the so-called fluid models, are often taken for analysis, as their optimal control can be found more easily. The tracking policy translates the fluid optimal control policy back to a control policy for the stochastic model, so that the fluid optimality can be achieved asymptotically when the stochastic model is scaled properly. In this work we study the efficiency of the tracking policy, that is, how fast the fluid optimality can be achieved in the stochastic model with respect to the scaling parameter. In particular, our result shows that, under certain conditions, the tracking policy can be as efficient as feedback policies.
Denote the Palm measure of a homogeneous Poisson process Hλ with two points 0 and x by P0,x. We prove that there exists a constant μ ≥ 1 such that P0,x(D(0, x) / μ||x||2 ∉ (1 − ε, 1 + ε) | 0, x ∈ C∞) decays exponentially as ||x||2 tends to ∞, where D(0, x) is the graph distance between 0 and x in the infinite component C∞ of the random geometric graph G(Hλ; 1). We derive a large deviation inequality for an asymptotic shape result. Our results have applications in many fields, in particular in wireless sensor networks.
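The quantity D(0, x) compared with μ||x||2 above is the usual hop-count distance in the geometric graph; a BFS sketch on a deterministic point set (in the model the points would be a Poisson sample and the radius would be 1):

```python
from collections import deque

def graph_distance(points, a, b, radius=1.0):
    """BFS graph distance between points a and b of the geometric graph
    G(points; radius), whose edges join pairs of points at Euclidean
    distance at most `radius`; returns None if a and b lie in different
    components."""
    r2 = radius * radius
    dist = {a: 0}
    queue = deque([a])
    while queue:
        u = queue.popleft()
        if u == b:
            return dist[u]
        for v in points:
            if v not in dist and (u[0]-v[0])**2 + (u[1]-v[1])**2 <= r2:
                dist[v] = dist[u] + 1
                queue.append(v)
    return None
```

The O(n) neighbor scan per vertex is fine for a sketch; a grid or k-d tree would be used for large Poisson samples.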
We consider a multiclass single-server queueing network as a model of a packet switching network. The rates at which packets are sent into this network are controlled by queues which act as congestion windows. By considering a sequence of congestion controls, we analyse a sequence of stationary queueing networks. In this asymptotic regime, the service capacity of the network remains constant and the sequence of congestion controllers acts to exploit the network's capacity by increasing the number of packets within the network. We show that the stationary throughput of routes on this sequence of networks converges to an allocation that maximises aggregate utility subject to the network's capacity constraints. To perform this analysis, we require that our utility functions satisfy an exponential concavity condition. This family of utilities includes weighted α-fair utilities for α > 1.
Customers arrive sequentially at times x1 < x2 < · · · < xn and stay for independent random times Z1, …, Zn > 0. The Z-variables all have the same distribution Q. We are interested in situations where the data are incomplete in the sense that only the order statistics associated with the departure times xi + Zi are known, or that the only available information is the order in which the customers arrive and depart. In the former case we explore possibilities for the reconstruction of the correct matching of arrival and departure times. In the latter case we propose a test for exponentiality.
A simple model for a randomly oscillating variable is suggested, which is a variant of the two-state random velocity model. As in the latter model, the variable keeps rising or falling with constant velocity for some time before randomly reversing its direction. In contrast, however, its propensity to reverse depends on its current value, and it is for this desirable feature that the model is proposed here. This feature has two implications: (a) neither the changing variable nor its velocity is Markovian, although the joint process is, and (b) the linear differential equations arising in the case of our model do not have constant coefficients. The results given in this paper are meant to illustrate the straightforward nature of some of the calculations involved and to highlight the relationship with one-dimensional diffusions.
In this paper we consider the stochastic analysis of information ranking algorithms for large interconnected data sets, e.g. Google's PageRank algorithm for ranking pages on the World Wide Web. The stochastic formulation of the problem results in an equation of the form R =D ∑i=1N Ci Ri + Q, where N, Q, {Ri}i≥1, and {C, Ci}i≥1 are independent nonnegative random variables, the {C, Ci}i≥1 are identically distributed, the {Ri}i≥1 are independent copies of R, and ‘=D’ stands for equality in distribution. We study the asymptotic properties of the distribution of R that, in the context of PageRank, represents the frequencies of highly ranked pages. The preceding equation is interesting in its own right since it belongs to a more general class of weighted branching processes that have been found to be useful in the analysis of many other algorithms. Our first main result shows that if E[N]E[Cα] = 1, α > 0, and Q, N satisfy additional moment conditions, then R has a power law distribution of index α. This result is obtained using a new approach based on an extension of Goldie's (1991) implicit renewal theorem. Furthermore, when N is regularly varying of index α > 1, E[N]E[Cα] < 1, and Q, C have higher moments than α, then the distributions of R and N are tail equivalent. The latter result is derived via a novel sample path large deviation method for recursive random sums. Similarly, we characterize the situation when the distribution of R is determined by the tail of Q. The preceding approaches may be of independent interest, as they can be used for analyzing other functionals on trees. We also briefly discuss the engineering implications of our results.
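The fixed point R =D ∑i=1N Ci Ri + Q can be approximated by unrolling the weighted branching recursion to a finite depth. The specific laws below are illustrative choices, not from the paper: deterministic N = 2, C ~ Uniform(0, 0.5), and Q ~ Exp(1), which satisfy E[N]E[C] = 0.5 < 1, so the limit mean is E[Q] / (1 − E[N]E[C]) = 2.

```python
import random

random.seed(5)

def sample_R(depth, n=2, c_high=0.5):
    """Approximate sample of the distributional fixed point
    R = sum_{i=1}^{N} C_i R_i + Q, truncating the branching recursion
    at `depth` and putting R = 0 at the leaves."""
    total = random.expovariate(1.0)            # Q ~ Exp(1)
    if depth > 0:
        for _ in range(n):                     # N = n, deterministic here
            total += random.uniform(0.0, c_high) * sample_R(depth - 1, n, c_high)
    return total
```

With these choices the truncation bias decays geometrically in the depth, so a depth of 10 already gives the limit mean to three decimals; heavier-tailed choices of Q, C, or N produce the power-law behavior that the paper analyzes.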
In this paper we consider the first passage process of a spectrally negative Markov additive process (MAP). The law of this process is uniquely characterized by a certain matrix function, which plays a crucial role in fluctuation theory. We show how to identify this matrix using the theory of Jordan chains associated with analytic matrix functions. This result provides us with a technique that can be used to derive various further identities.
In this paper we study the moment generating function order and the new better than used in the moment generating function order (NBUMG) life distributions. A closure property of this order under an independent random sum is deduced, and stochastic comparisons among the block replacement policy, the age replacement policy, the complete repair policy, and the minimal repair policy of an NBUMG component are investigated.
We consider a Markov-modulated Brownian motion reflected to stay in a strip [0, B]. The stationary distribution of this process is known to have a simple form under some assumptions. We provide a short probabilistic argument leading to this result and explain its simplicity. Moreover, this argument allows for generalizations including the distribution of the reflected process at an independent, exponentially distributed epoch. Our second contribution concerns transient behavior of the model. We identify the joint law of the processes defining the model at inverse local times.
A sequence of random variables is said to be extended negatively dependent (END) if the tails of its finite-dimensional distributions in the lower-left and upper-right corners are dominated by a multiple of the tails of the corresponding finite-dimensional distributions of a sequence of independent random variables with the same marginal distributions. The goal of this paper is to establish the strong law of large numbers for a sequence of END and identically distributed random variables. In doing so we derive some new inequalities of large deviation type for the sums of END and identically distributed random variables being suitably truncated. We also show applications of our main result to risk theory and renewal theory.
Some major companies have the policy of annually giving numerical scores to their employees according to their performance, firing those whose performance scores are below a given percentile of the scores of all employees, and then recruiting new employees to replace those who were fired. We introduce a probabilistic model to describe how this practice affects the quality of employee performance as measured over time by the annual scores. Let n be the number of years that the policy has been in effect, and let Fn(x) be the distribution function of the evaluation scores in year n. We show, under certain technical assumptions, that the sequence (Fn(x)) satisfies a particular nonlinear difference equation, and furnish estimates of the solution of the equation and expressions for the quantiles of Fn. The mathematical tools that are used include convex functions, difference equations, and extreme value theory for independent and identically distributed random variables.