In many practical reliability engineering settings, the components of a system are dependent, since they typically operate in a shared environment. In this paper we establish sufficient conditions for comparing two coherent systems under different random environments in the sense of the usual stochastic, hazard rate, reversed hazard rate, and likelihood ratio orders. Applications and numerical examples are provided to illustrate all the theoretical results established here.
In this paper we prove that a parallel system consisting of Weibull components with different scale parameters ages faster, in the convex transform order, than a parallel system comprising Weibull components with equal scale parameters, when the lifetimes of the components of the two systems have different shape parameters satisfying some restriction. Moreover, in comparing these two systems, we show that the dispersive order is equivalent to the usual stochastic order, and that the right-spread order is equivalent to the increasing convex order. Further, some of the known results in the literature concerning comparisons of k-out-of-n systems in the exponential model are extended to the Weibull model. We also provide solutions to two open problems mentioned by Balakrishnan and Zhao (2013) and Zhao et al. (2016).
We consider a special version of random sequential adsorption (RSA) with nearest-neighbor interaction on infinite tree graphs. In classical RSA, starting with a graph with initially inactive nodes, each of the nodes of the graph is inspected in a random order and is irreversibly activated if none of its nearest neighbors are active yet. We generalize this nearest-neighbor blocking effect to a degree-dependent threshold-based blocking effect. That is, each node of the graph is assumed to have its own degree-dependent threshold and if, upon inspection of a node, the number of active direct neighbors is less than that node's threshold, the node will become irreversibly active. We analyze the activation probability of nodes on an infinite tree graph, given the degree distribution of the tree and the degree-dependent thresholds. We also show how to calculate the correlation between the activity of nodes as a function of their distance. Finally, we propose an algorithm which can be used to solve the inverse problem of determining how to set the degree-dependent thresholds in infinite tree graphs in order to reach some desired activation probabilities.
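The activation rule described above is straightforward to simulate on a finite tree. The sketch below (function and variable names are illustrative, not from the paper) inspects nodes in a uniformly random order and irreversibly activates a node whenever its number of already-active neighbours is below its threshold; classical RSA is recovered by setting every threshold to 1.

```python
import random

def threshold_rsa(adjacency, threshold):
    """One realization of threshold-based random sequential adsorption.

    adjacency: dict mapping node -> list of neighbours.
    threshold: dict mapping node -> activation threshold; a node activates
               iff, when inspected, fewer than threshold[v] of its
               neighbours are already active.
    Returns the set of active nodes.
    """
    order = list(adjacency)
    random.shuffle(order)              # nodes are inspected in uniform random order
    active = set()
    for v in order:
        active_neighbours = sum(1 for u in adjacency[v] if u in active)
        if active_neighbours < threshold[v]:
            active.add(v)              # activation is irreversible
    return active

# Classical RSA is the special case threshold ≡ 1: any active neighbour blocks,
# so the final active set is a maximal independent set.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
ones = {v: 1 for v in path}
random.seed(0)
est = sum(len(threshold_rsa(path, ones)) for _ in range(10000)) / 10000
print(est)  # Monte Carlo estimate of the mean number of active nodes on this path
```

For threshold ≡ 1 on the 5-node path this estimates the well-studied RSA "jamming" density; degree-dependent thresholds simply replace the constant 1 above.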
In this paper we investigate the stochastic properties of the number of failed components of a three-state network. We consider a network made up of n components which is designed for a specific purpose according to the performance of its components. The network starts operating at time t = 0 and it is assumed that, at any time t > 0, it can be in one of the states up, partial performance, or down. We further suppose that the state of the network is inspected at two time instants t1 and t2 (t1 < t2). Using the notion of the two-dimensional signature, the probability distribution of the number of failed components of the network is calculated, at t1 and t2, under several scenarios for the states of the network. Stochastic and ageing properties of the proposed failure probabilities are studied under different conditions. We present some optimal age replacement policies to show applications of the proposed criteria. Several illustrative examples are also provided.
A parallel server system with n identical servers is considered. The service time distribution has a finite mean 1 / μ, but otherwise is arbitrary. Arriving customers are routed to one of the servers immediately upon arrival. The join-idle-queue routeing algorithm is studied, under which an arriving customer is sent to an idle server if one is available, and otherwise to a server chosen uniformly at random. We consider the asymptotic regime where n → ∞ and the customer input flow rate is λn. Under the condition λ / μ < ½, we prove that, as n → ∞, the sequence of (appropriately scaled) stationary distributions concentrates at the natural equilibrium point, with the fraction of occupied servers being constant at λ / μ. In particular, this implies that the steady-state probability of an arriving customer waiting for service vanishes.
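The routeing rule is easy to explore numerically. The following crude event-driven simulation is a sketch only (it uses exponential service times for simplicity, whereas the paper allows a general distribution, and the function name and parameters are illustrative): arrivals form a Poisson process of rate λn, an arrival joins an idle server if one exists and otherwise a uniformly random server's queue.

```python
import heapq
import random

def simulate_jiq(n, lam, mu, horizon, seed=0):
    """Sketch of join-idle-queue routeing with n servers, Poisson(lam*n)
    arrivals, and Exp(mu) service times.  Returns (fraction of arrivals
    that had to wait, time-averaged fraction of busy servers)."""
    rng = random.Random(seed)
    queue = [0] * n                     # number of customers at each server
    events = [(rng.expovariate(lam * n), 'arrival', None)]
    waited = arrivals = 0
    busy_time, last_t = 0.0, 0.0
    while events:
        t, kind, server = heapq.heappop(events)
        if t > horizon:
            break
        # accumulate time-average of the fraction of busy servers
        busy_time += sum(1 for q in queue if q > 0) / n * (t - last_t)
        last_t = t
        if kind == 'arrival':
            arrivals += 1
            idle = [i for i in range(n) if queue[i] == 0]
            i = rng.choice(idle) if idle else rng.randrange(n)
            if not idle:
                waited += 1             # no idle server: this customer waits
            queue[i] += 1
            if queue[i] == 1:           # server was idle: start service now
                heapq.heappush(events, (t + rng.expovariate(mu), 'done', i))
            heapq.heappush(events, (t + rng.expovariate(lam * n), 'arrival', None))
        else:                           # service completion at `server`
            queue[server] -= 1
            if queue[server] > 0:
                heapq.heappush(events, (t + rng.expovariate(mu), 'done', server))
    return waited / max(arrivals, 1), busy_time / max(last_t, 1e-12)
```

With λ / μ well below ½ (say n = 50, λ = 0.3, μ = 1) the simulated busy fraction sits near λ / μ and the waiting fraction is essentially zero, in line with the theorem.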
We study how to sample paths of a random walk up to the first time it crosses a fixed barrier, in the setting where the step sizes are independent and identically distributed with negative mean and have a regularly varying right tail. We introduce a desirable property for a change of measure to be suitable for exact simulation. We study whether the change of measure of Blanchet and Glynn (2008) satisfies this property and show that it does so if and only if the tail index α of the right tail lies in the interval (1, 3/2).
We consider transport networks with nodes scattered at random in a large domain. At certain local rates, the nodes generate traffic flows according to some navigation scheme in a given direction. In the thermodynamic limit of a growing domain, we present an asymptotic formula expressing the local traffic flow density at any given location in the domain in terms of three fundamental characteristics of the underlying network: the spatial intensity of the nodes together with their traffic generation rates, and of the links induced by the navigation. This formula holds for a general class of navigations satisfying a link-density and a sub-ballisticity condition. As a specific example, we verify these conditions for navigations arising from a directed spanning tree on a Poisson point process with inhomogeneous intensity function.
Current literature on stochastic dominance assumes utility/loss functions to be the same across random variables. However, decision models with inconsistent utility functions have been proposed in the literature. The use of inconsistent loss functions when comparing two random variables can also be appropriate under other problem settings. In this paper we generalize almost stochastic dominance to problems with inconsistent utility/loss functions. In particular, we propose a set of conditions that is necessary and sufficient for clear preferences when the utility/loss functions are allowed to vary across different random variables.
We consider the Δ(i)/G/1 queue, in which a total of n customers join a single-server queue for service. Customers join the queue independently after exponential times. We consider heavy-tailed service-time distributions with tails decaying as x^{−α}, α ∈ (1, 2). We consider the asymptotic regime in which the population size grows to ∞ and establish that the scaled queue-length process converges to an α-stable process with a negative quadratic drift. We leverage this asymptotic result to characterize the head start that is needed to create a long period of uninterrupted activity (a busy period). The heavy-tailed service times should be contrasted with the case of light-tailed service times, for which a similar scaling limit arises (Bet et al. (2015)), but then with a Brownian motion instead of an α-stable process.
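A sample path of this model can be generated directly (the function below is an illustrative sketch, not from the paper): each of the n customers joins after an independent Exp(1) clock, so the arrival epochs are order statistics of i.i.d. exponentials; service times are Pareto with tail exponent α ∈ (1, 2); and departure times follow the FIFO single-server recursion D_k = max(A_k, D_{k−1}) + S_k.

```python
import random

def queue_length_path(n, alpha, seed=0):
    """Sample the embedded queue length of a Δ(i)/G/1-type queue (sketch).

    n customers join after i.i.d. Exp(1) times; service times are Pareto
    with tail index alpha in (1, 2), i.e. P(S > x) = x^(-alpha) for x >= 1.
    Returns (arrival epochs, number in system just after each arrival).
    """
    rng = random.Random(seed)
    arrivals = sorted(rng.expovariate(1.0) for _ in range(n))
    services = [rng.paretovariate(alpha) for _ in range(n)]

    # FIFO recursion: D_k = max(A_k, D_{k-1}) + S_k
    departures, prev = [], 0.0
    for a, s in zip(arrivals, services):
        prev = max(a, prev) + s
        departures.append(prev)

    # number in system just after the k-th arrival
    lengths = [sum(1 for d in departures[:k + 1] if d > a)
               for k, a in enumerate(arrivals)]
    return arrivals, lengths
```

Plotting `lengths` for large n and α close to 1 makes the occasional huge service time visible as a long excursion of the queue-length path, the phenomenon behind the α-stable limit.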
We study the rare-event behavior of the workload process in a transitory queue, where the arrival epochs (or 'points') of a finite number of jobs are assumed to be the ordered statistics of independent and identically distributed (i.i.d.) random variables. The service times (or 'marks') of the jobs are assumed to be i.i.d. random variables with a general distribution that are jointly independent of the arrival epochs. Under the assumption that the service times are strictly positive, we derive the large deviations principle (LDP) satisfied by the workload process. The analysis leverages the connection between ordered statistics and self-normalized sums of exponential random variables to establish the LDP. In this paper we present the first analysis of rare events in transitory queueing models, supplementing prior work that has focused on fluid and diffusion approximations.
Multistate monotone systems are used to describe technological or biological systems when the system itself and its components can perform at different operationally meaningful levels. This generalizes the binary monotone systems used in standard reliability theory. In this paper we consider the availabilities of the system in an interval, i.e. the probabilities that the system performs above the different levels throughout the whole interval. In complex systems it is often impossible to calculate these availabilities exactly, but if the component performance processes are independent, it is possible to construct lower bounds based on the component availabilities to the different levels over the interval. In this paper we show that by treating the component availabilities over the interval as if they were availabilities at a single time point, we obtain an improved lower bound. Unlike previously given bounds, the new bound does not require the identification of all minimal path or cut vectors.
We consider the complete graph K_n on n vertices with i.i.d. exponential edge lengths of mean n. Writing C_{ij} for the weight of the smallest-weight path between vertices i, j ∈ [n], Janson [18] showed that max_{i,j∈[n]} C_{ij}/log n converges in probability to 3. We extend this result by showing that max_{i,j∈[n]} C_{ij} − 3 log n converges in distribution to a limiting random variable that can be identified via a maximization procedure on a limiting infinite random structure. Interestingly, this limiting random variable has also appeared as the weak limit of the re-centred graph diameter of the barely supercritical Erdős–Rényi random graph in [22].
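The quantity in question can be explored numerically for small n. The sketch below (illustrative only; the function name is not from the paper) draws i.i.d. exponential edge weights with mean n on the complete graph and computes the maximal smallest-weight path over all pairs by running Dijkstra's algorithm from every source.

```python
import heapq
import random

def max_smallest_weight_path(n, seed=0):
    """On the complete graph with i.i.d. Exp(mean n) edge weights, return
    the maximum over all pairs i, j of the smallest-weight path between
    them (Dijkstra from every source).  Sketch for small n."""
    rng = random.Random(seed)
    w = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            w[i][j] = w[j][i] = rng.expovariate(1.0 / n)   # Exp with mean n
    best = 0.0
    for src in range(n):
        dist = [float('inf')] * n
        dist[src] = 0.0
        heap = [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist[u]:
                continue                 # stale heap entry
            for v in range(n):
                if v != u and d + w[u][v] < dist[v]:
                    dist[v] = d + w[u][v]
                    heapq.heappush(heap, (dist[v], v))
        best = max(best, max(dist))
    return best
```

Dividing the returned value by log n and averaging over many seeds gives a Monte Carlo impression of the convergence to 3, though the O(1) fluctuations that the paper characterizes are clearly visible at moderate n.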
Consider the complete graph on n vertices, with edge weights drawn independently from the exponential distribution with unit mean. Janson showed that the typical distance between two vertices scales as log n/n, whereas the diameter (maximum distance between any two vertices) scales as 3 log n/n. Bollobás, Gamarnik, Riordan and Sudakov showed that, for any fixed k, the weight of the Steiner tree connecting k typical vertices scales as (k − 1)log n/n, which recovers Janson's result for k = 2. We extend this to show that the worst-case k-Steiner tree, over all choices of k vertices, has weight scaling as (2k − 1)log n/n. Finally, we generalize this result to Steiner trees with a mixture of typical and worst-case vertices.
Modern processing networks often consist of heterogeneous servers with widely varying capabilities, and process job flows with complex structure and requirements. A major challenge in designing efficient scheduling policies in these networks is the lack of reliable estimates of system parameters, and an attractive approach for addressing this challenge is to design robust policies, i.e. policies that do not use system parameters such as arrival and/or service rates for making scheduling decisions. In this paper we propose a general framework for the design of robust policies. The main technical novelty is the use of a stochastic gradient projection method that reacts to queue-length changes in order to find a balanced allocation of service resources to incoming tasks. We illustrate our approach on two broad classes of processing systems, namely the flexible fork-join networks and the flexible queueing networks, and prove the rate stability of our proposed policies for these networks under nonrestrictive assumptions.
We consider a server with large capacity delivering video files encoded in various resolutions. We assume that the system is under saturation in the sense that the total demand exceeds the server capacity C. In such a case, requests may be rejected. For the policies considered in this paper, instead of rejecting a video request, it is downgraded. When the occupancy of the server is above some value C0 < C, the server delivers the video at a minimal bit rate. The quantity C0 is the bit rate adaptation threshold. For these policies, request blocking is thus replaced with bit rate adaptation. Under the assumptions of Poisson request arrivals and exponential service times, we show that, by rescaling the system, a process associated with the occupancy of the server converges to some limiting process whose invariant distribution is computed explicitly. This allows us to derive an asymptotic expression for the key performance measure of such a policy, namely the equilibrium probability that a request is transmitted at the requested bit rate. Numerical applications of these results are presented.
We study a nonpreemptive scheduling problem on two parallel identical machines with a dedicated loading server and a dedicated unloading server. Each job has to be loaded by the loading server before being processed on one of the machines and unloaded immediately by the unloading server after its processing. The loading and unloading times are both equal to one unit of time. The goal is to minimize the makespan. Since the problem is NP-hard, we apply the classical list scheduling and largest processing time heuristics, and show that they have worst-case ratios of $8/5$ and $6/5$, respectively.
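For readers unfamiliar with the two heuristics, the sketch below implements them in their classical form on two identical machines, omitting the unit-time loading/unloading servers of the model above; the function names are illustrative. In this simplified setting the classical worst-case ratios are $3/2$ for list scheduling and $7/6$ for LPT on two machines, which differ from the $8/5$ and $6/5$ the paper obtains for the model with servers.

```python
def list_schedule(jobs):
    """Greedy list scheduling on two identical machines: assign each job,
    in the given order, to the currently least-loaded machine.
    Returns the makespan (maximum machine load)."""
    loads = [0, 0]
    for p in jobs:
        i = 0 if loads[0] <= loads[1] else 1
        loads[i] += p
    return max(loads)

def lpt_schedule(jobs):
    """Largest processing time first: sort jobs in non-increasing order of
    processing time, then apply list scheduling."""
    return list_schedule(sorted(jobs, reverse=True))

# A bad order for plain list scheduling: [1, 1, 2] yields makespan 3,
# while sorting first (LPT) recovers the optimum 2.
print(list_schedule([1, 1, 2]), lpt_schedule([1, 1, 2]))  # prints "3 2"
```

The classic tight instance for LPT on two machines is `[3, 3, 2, 2, 2]`: LPT produces makespan 7 against the optimum 6, matching the $7/6$ bound.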
We analyse a parallel (identical) machine scheduling problem with job delivery to a single customer. For this problem, each job needs to be processed non-pre-emptively on one of $m$ parallel machines and then transported to a customer by one vehicle with a limited physical capacity. The optimization goal is to minimize the makespan, the time at which all the jobs are processed and delivered and the vehicle returns to the machines. We present an approximation algorithm with a tight worst-case performance ratio of $7/3-1/m$ for the general case, $m\geq 3$.
Computer and communication networks are designed so that they are not easily disrupted by external attack and are easily reconstructed when disruption does occur. These desirable properties of networks can be measured by various parameters, such as connectivity, toughness, and scattering number. Among these parameters, the isolated scattering number is a comparatively good measure of the vulnerability of networks. In this paper we first prove that for split graphs this number can be computed in polynomial time. We then determine the isolated scattering number of the Cartesian product and the Kronecker product of special graphs and of special permutation graphs.