In this paper we treat a two-stage grouping procedure for building a k-out-of-n system from several clusters of components. We use a static framework in which the component reliabilities are fixed. Under this framework, we address the impact of the selecting strategies, the sampling probabilities, and the component reliabilities on the reliability of the constructed system. An interesting finding is that the level of the component reliabilities is a decisive factor in determining how the selecting strategies and the component reliabilities affect the system reliability. The new results generalize and extend those established earlier in the literature, such as Di Crescenzo and Pellerey (2011), Hazra and Nanda (2014), Navarro, Pellerey, and Di Crescenzo (2015), and Hazra, Finkelstein, and Cha (2017). Several Monte Carlo simulation experiments are provided to illustrate the theoretical results.
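To make the setting above concrete, here is a minimal Monte Carlo sketch (not taken from the paper) of a two-stage construction: each of the n components is drawn from one of two clusters with a fixed sampling probability, and the k-out-of-n system works when at least k of the sampled components work. The cluster reliabilities, the sampling probability, and the specific two-stage rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2024)

def build_and_test_system(n, k, p_select, rel_a, rel_b, trials=100_000):
    """Monte Carlo estimate of k-out-of-n system reliability when each of the
    n components comes from cluster A (reliability rel_a) with probability
    p_select and from cluster B (reliability rel_b) otherwise."""
    working = np.zeros(trials, dtype=int)
    for _ in range(n):
        from_a = rng.random(trials) < p_select          # stage 1: pick a cluster
        rel = np.where(from_a, rel_a, rel_b)            # stage 2: component works or not
        working += (rng.random(trials) < rel).astype(int)
    return np.mean(working >= k)                        # system works iff >= k components work

# Example: compare low- and high-reliability regimes for a 3-out-of-5 system.
for rel_a, rel_b in [(0.2, 0.4), (0.7, 0.9)]:
    est = build_and_test_system(n=5, k=3, p_select=0.5, rel_a=rel_a, rel_b=rel_b)
    print(f"cluster reliabilities ({rel_a}, {rel_b}): estimated system reliability {est:.4f}")
```

Comparing the low- and high-reliability regimes in this toy example is one crude way to probe the abstract's observation that the level of the component reliabilities governs how the construction affects system reliability.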
An abelian processor is an automaton whose output is independent of the order of its inputs. Bond and Levine have proved that a network of abelian processors performs the same computation regardless of processing order (subject only to a halting condition). We prove that any finite abelian processor can be emulated by a network of certain very simple abelian processors, which we call gates. The most fundamental gate is a toppler, which absorbs input particles until their number exceeds some given threshold, at which point it topples, emitting one particle and returning to its initial state. With the exception of an adder gate, which simply combines two streams of particles, each of our gates has only one input wire, which sends letters (‘particles’) from a unary alphabet. Our results can be reformulated in terms of the functions computed by processors, and one consequence is that any increasing function from ℕ^k to ℕ^ℓ that is the sum of a linear function and a periodic function can be expressed in terms of (possibly nested) sums of floors of quotients by integers.
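The toppler and adder gates can be sketched directly from the description above; the class interface and the convention that the initial state has an empty particle count are our assumptions, not the paper's.

```python
class Toppler:
    """Minimal toppler gate (our reading of the description above): it absorbs
    unary input particles until their number exceeds a given threshold, then
    topples, emitting one particle and returning to its initial state."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.count = 0                      # assumed empty initial state

    def feed(self, particles):
        """Feed `particles` input particles one at a time; return the number emitted."""
        emitted = 0
        for _ in range(particles):
            self.count += 1
            if self.count > self.threshold:
                emitted += 1
                self.count = 0              # back to the initial state after toppling
        return emitted


class Adder:
    """Adder gate: simply combines (sums) two streams of particles."""
    def feed(self, a, b):
        return a + b


# A toppler with threshold m emits floor(x / (m + 1)) particles on input x:
print([Toppler(threshold=2).feed(x) for x in range(10)])   # [0, 0, 0, 1, 1, 1, 2, 2, 2, 3]
```

A toppler with threshold m thus computes x ↦ ⌊x/(m+1)⌋ on a stream of x particles, which hints at why nested sums of floors of quotients appear in the representation result.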
The nonstationary Erlang-A queue is a fundamental queueing model that is used to describe the dynamic behavior of large-scale multiserver service systems that may experience customer abandonments, such as call centers, hospitals, and urban mobility systems. In this paper we develop novel approximations to all of its transient and steady state moments, the moment generating function, and the cumulant generating function. We also provide precise bounds on the difference between our approximations and the true model. More importantly, we show that our approximations have explicit stochastic representations as shifted Poisson random variables. Moreover, we show that our approximations and bounds also hold for nonstationary Erlang-B and Erlang-C queueing models under certain stability conditions.
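The paper's approximations themselves are not reproduced here, but the underlying nonstationary Erlang-A model (an M_t/M/n + M queue with a time-varying arrival rate, exponential services, and exponential abandonment) is easy to simulate. The thinning-based sketch below, with illustrative parameter names and values, is the kind of baseline against which such moment approximations could be checked.

```python
import math
import random

def simulate_erlang_A(T, lam, n_servers, mu, theta, lam_max, seed=0):
    """Simulate one path of a nonstationary Erlang-A (M_t/M/n + M) queue on [0, T].
    lam: function t -> arrival rate; mu: service rate per busy server;
    theta: abandonment rate per waiting customer.  Arrivals are generated by
    thinning a homogeneous Poisson process of rate lam_max >= lam(t)."""
    rng = random.Random(seed)
    t, q = 0.0, 0                           # q = number of customers in system
    path = [(0.0, 0)]
    while t < T:
        busy = min(q, n_servers)
        waiting = max(q - n_servers, 0)
        rate = lam_max + busy * mu + waiting * theta   # upper bound on total event rate
        t += rng.expovariate(rate)
        if t >= T:
            break
        u = rng.random() * rate
        if u < lam(t):                      # accepted (thinned) arrival
            q += 1
        elif u < lam_max:                   # rejected arrival proposal: no change
            pass
        elif u < lam_max + busy * mu:       # service completion
            q -= 1
        else:                               # abandonment from the queue
            q -= 1
        path.append((t, q))
    return path

# Example: sinusoidal arrival rate, 50 servers (illustrative values only).
lam = lambda t: 40 + 10 * math.sin(t)
path = simulate_erlang_A(T=20.0, lam=lam, n_servers=50, mu=1.0, theta=0.5, lam_max=50.0)
print("queue length at end of horizon:", path[-1][1])
```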
In this paper we introduce and solve a generalization of the classic average cost Brownian control problem in which a system manager dynamically controls the drift rate of a diffusion process X. At each instant, the system manager chooses the drift rate from a pair {u, v} of available rates and can invoke instantaneous controls either to keep X from falling or to keep it from rising. The objective is to minimize the long-run average cost consisting of holding or delay costs, processing costs, costs for invoking instantaneous controls, and fixed costs for changing the drift rate. We provide necessary and sufficient conditions on the cost parameters to ensure the problem admits a finite optimal solution. When it does, a simple control band policy specifying economic buffer sizes (α, Ω) and up to two switching points is optimal. The controller should invoke instantaneous controls to keep X in the interval (α, Ω). A policy with no switching points relies on a single drift rate exclusively. When there is no cost to change the drift rate, a policy with a single switching point s indicates that the controller should change to the slower drift rate when X exceeds s and use the faster drift rate otherwise. When there is a cost to change the drift rate, a policy with two switching points s < S indicates that the controller should maintain the faster drift rate until X exceeds S and maintain the slower drift rate until X falls below s.
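A rough simulation sketch of a control band policy with hysteresis may help visualize the policy structure described above. The reflection at α and Ω is implemented by simple truncation in an Euler scheme, and all parameter values are assumptions made for the example; this illustrates the policy class, not the paper's optimal solution.

```python
import numpy as np

def simulate_control_band(T, dt, alpha, omega, s, S, drift_fast, drift_slow,
                          sigma=1.0, x0=None, seed=1):
    """Euler sketch of a control band policy: instantaneous controls keep X in
    [alpha, omega]; the drift rate is switched with hysteresis, keeping the
    'fast' rate until X exceeds S and the 'slow' rate until X falls below s."""
    rng = np.random.default_rng(seed)
    x = (alpha + omega) / 2 if x0 is None else x0
    mode_fast = True
    lower_push = upper_push = 0.0            # cumulative instantaneous control used
    for _ in range(int(T / dt)):
        drift = drift_fast if mode_fast else drift_slow
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        if x < alpha:                        # control keeping X from falling below alpha
            lower_push += alpha - x
            x = alpha
        elif x > omega:                      # control keeping X from rising above omega
            upper_push += x - omega
            x = omega
        if mode_fast and x > S:              # switch to the slower drift rate
            mode_fast = False
        elif not mode_fast and x < s:        # switch back to the faster drift rate
            mode_fast = True
    return x, lower_push, upper_push

x_T, low, up = simulate_control_band(T=100.0, dt=1e-3, alpha=0.0, omega=5.0,
                                     s=1.5, S=3.5, drift_fast=0.8, drift_slow=-0.4)
print(f"X(T)={x_T:.3f}, lower control used={low:.2f}, upper control used={up:.2f}")
```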
In large storage systems, files are often coded across several servers to improve reliability and retrieval speed. We study load balancing under the batch sampling routeing scheme for a network of n servers storing a set of files using the maximum distance separable (MDS) code (cf. Li (2016)). Specifically, each file is stored in equally sized pieces across L servers such that any k pieces can reconstruct the original file. When a request for a file is received, the dispatcher routes the job to the k shortest queues among the L queues whose servers store a piece of the requested file. We establish a law of large numbers and a central limit theorem as the system becomes large (i.e. n → ∞), for the setting where all interarrival and service times are exponentially distributed. For the central limit theorem, the limit process takes values in ℓ^2, the space of square summable sequences. Due to the large size of such systems, a direct analysis of the n-server system is frequently intractable. The law of large numbers and diffusion approximations established in this work provide practical tools with which to perform such analysis. The power-of-d routeing scheme, also known as the supermarket model, is a special case of the model considered here.
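The routeing rule itself is simple to state in code. The sketch below (our own toy example, which ignores service completions and arrival dynamics) shows dispatch decisions under batch sampling: among the L servers holding a piece of the requested file, the k with the shortest queues each receive one piece-retrieval task.

```python
import random

def dispatch_mds_job(queue_lengths, file_servers, k):
    """Route one file request under batch sampling: among the L servers storing
    a piece of the file, send one piece-retrieval task to each of the k servers
    with the shortest queues."""
    chosen = sorted(file_servers, key=lambda i: queue_lengths[i])[:k]
    for i in chosen:
        queue_lengths[i] += 1
    return chosen

# Toy example: n = 8 servers, each file coded across L = 4 of them, k = 2.
random.seed(7)
n, L, k = 8, 4, 2
queues = [0] * n
for _ in range(20):                          # 20 incoming file requests
    servers_with_file = random.sample(range(n), L)
    dispatch_mds_job(queues, servers_with_file, k)
print("queue lengths after 20 requests:", queues)
```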
A relationally exchangeable structure is a random combinatorial structure whose law is invariant with respect to relabeling its relations, as opposed to its elements. Historically, exchangeable random set partitions have been the best known examples of relationally exchangeable structures, but the concept now arises more broadly when modeling interaction data in modern network analysis. Aside from exchangeable random partitions, instances of relational exchangeability include edge exchangeable random graphs and hypergraphs, path exchangeable processes, and a range of other network-like structures. We motivate the general theory of relational exchangeability, with special emphasis on the alternative perspective it provides and its benefits in certain applied probability problems. We then prove a de Finetti-type structure theorem for the general class of relationally exchangeable structures.
We discuss a rich family of directed series–parallel (SP) graphs grown by the simultaneous random series or parallel development of multiple edges. The family portrays a spectrum that spans a wide range of SP graphs: from simple models, where as few as a single edge is chosen for evolution at each discrete point in time, to complex hierarchical lattice networks grown by a take-all strategy, where all the edges in the existing network are developed.
The family of SP graphs we discuss is grown from an initial seed graph with $\tau_0$ edges under an arbitrary building sequence, $\{k_n\}_{n=1}^{\infty}$, of nonnegative integers (with $k_n \le \tau_0 + \sum_{i=1}^{n} k_i$, for arbitrary $\tau_0 \ge 1$), that specifies the number of edges subjected to evolution at time n. We study the average north polar degree and show that we can go beyond averages to strong laws. We also find the exact average number of critical edges. The asymptotics of the critical edges are facilitated under the regularity condition that $k_n/\sum_{i=1}^{n} k_i$ converges to a constant (as n → ∞), a natural condition easily met by practical strategies, such as single-edge evolution and take-all choice, and much in between.
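The growth rule can be illustrated with a short sketch: starting from a seed edge joining the north and south poles, at each step k_n existing edges are sampled and each is developed in series (subdivided by a new vertex) or in parallel (duplicated). The independent coin flip between series and parallel development, and the reading of the north polar degree as the degree of the north pole, are illustrative assumptions of ours.

```python
import random

def grow_sp_graph(steps, k_schedule, p_series=0.5, seed=11):
    """Grow a directed series-parallel graph from a single seed edge
    (north pole 0 -> south pole 1).  At step n, k_schedule(n) existing edges
    are sampled without replacement and each is developed in series
    (subdivided by a new vertex) or in parallel (duplicated)."""
    rng = random.Random(seed)
    edges = [(0, 1)]                         # the seed edge
    next_vertex = 2
    for n in range(1, steps + 1):
        k_n = min(k_schedule(n), len(edges))
        for idx in rng.sample(range(len(edges)), k_n):
            u, v = edges[idx]
            if rng.random() < p_series:      # series development: u -> w -> v
                edges[idx] = (u, next_vertex)
                edges.append((next_vertex, v))
                next_vertex += 1
            else:                            # parallel development: duplicate u -> v
                edges.append((u, v))
    return edges

edges = grow_sp_graph(steps=100, k_schedule=lambda n: 1)     # single-edge evolution
north_degree = sum(1 for u, v in edges if 0 in (u, v))
print(f"{len(edges)} edges; degree of the north pole: {north_degree}")
```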
We study the problem of choosing the best subset of $p$ features in linear regression, given $n$ observations. The problem naturally involves two objectives: minimizing the amount of bias and minimizing the number of predictors. Existing approaches transform the problem into a single-objective optimization problem. We explain the main weaknesses of existing approaches and, to overcome their drawbacks, we propose a bi-objective mixed integer linear programming approach. A computational study shows the efficacy of the proposed approach.
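The abstract does not give the authors' bi-objective formulation, but the flavour of casting subset selection as a mixed integer linear program can be conveyed by a generic big-M sketch that minimizes the sum of absolute residuals subject to a cardinality budget; sweeping the budget traces an accuracy-versus-sparsity trade-off. The formulation, solver, and data below are illustrative and are not the paper's.

```python
import numpy as np
import pulp

def best_subset_lad(X, y, max_predictors, big_m=100.0):
    """Generic MILP sketch of subset selection: minimize the sum of absolute
    residuals subject to at most `max_predictors` nonzero coefficients,
    using a big-M formulation (coefficients are bounded by big_m)."""
    n, p = X.shape
    prob = pulp.LpProblem("best_subset", pulp.LpMinimize)
    beta = [pulp.LpVariable(f"beta_{j}", lowBound=-big_m, upBound=big_m) for j in range(p)]
    z = [pulp.LpVariable(f"z_{j}", cat="Binary") for j in range(p)]
    r = [pulp.LpVariable(f"r_{i}", lowBound=0) for i in range(n)]      # |residual_i|
    prob += pulp.lpSum(r)                                              # objective
    for i in range(n):
        fit = pulp.lpSum(X[i, j] * beta[j] for j in range(p))
        prob += r[i] >= y[i] - fit
        prob += r[i] >= fit - y[i]
    for j in range(p):
        prob += beta[j] <= big_m * z[j]                                # beta_j = 0 unless z_j = 1
        prob += beta[j] >= -big_m * z[j]
    prob += pulp.lpSum(z) <= max_predictors                            # sparsity budget
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [pulp.value(b) for b in beta]

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 6))
y = X[:, 0] * 3.0 - X[:, 2] * 2.0 + rng.normal(scale=0.1, size=40)
print(best_subset_lad(X, y, max_predictors=2))
```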
The mean time to failure (MTTF) function in age replacement is used to evaluate the performance and effectiveness of the age replacement policy. In this paper, based on the MTTF function, we introduce two new nonparametric classes of lifetime distributions with nonmonotonic mean time to failure in age replacement: increasing then decreasing MTTF (IDMTTF) and decreasing then increasing MTTF (DIMTTF). The implications between these classes of distributions and some existing nonmonotonic ageing classes are studied. Characterizations of IDMTTF and DIMTTF in terms of the scaled total time on test transform are also obtained.
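For reference, the MTTF function in age replacement is usually taken to be the expected time to the first failure when a unit is replaced preventively at age t; assuming the paper uses this standard definition, it reads

```latex
% MTTF under age replacement at age t, for lifetime c.d.f. F with survival
% function \bar{F} = 1 - F (standard definition, assumed to match the paper's):
M(t) \;=\; \frac{\int_0^{t} \bar{F}(u)\,\mathrm{d}u}{F(t)}, \qquad t > 0,
```

so IDMTTF and DIMTTF describe M(t) first increasing then decreasing, and first decreasing then increasing, respectively.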
In this paper we present a set of results relating to the occupation time α(t) of a process X(·). The first set of results concerns exact characterizations of α(t), e.g. in terms of its transform up to an exponentially distributed epoch. In addition, we establish a central limit theorem (entailing that a centered and normalized version of α(t)∕t converges to a zero-mean normal random variable as t→∞) and the tail asymptotics of ℙ(α(t)∕t≥q). We apply our findings to spectrally positive Lévy processes reflected at the infimum and establish various new occupation time results for the corresponding model.
We consider the open problem of obtaining the optimal operational sequence for the 1-out-of-n system with warm standby. Using the virtual age concept and the cumulative exposure model, we show that the components should be activated in accordance with the increasing sequence of their lifetimes. Lifetimes of the components and the system are compared with respect to the stochastic precedence order and its generalization. Only specific cases of this optimization problem have previously been considered in the literature.
In this paper we continue the examination of inventory control in which the inventory is modeled by a diffusion process and a long-term average cost criterion is used to make decisions. The class of such models under consideration has general drift and diffusion coefficients, and boundary points that are consistent with the notion that demand should tend to reduce the inventory level. The conditions on the cost functions are greatly relaxed from those in Helmes et al. (2017). Characterization of the cost of a general (s, S) policy as a function of two variables naturally leads to a nonlinear optimization problem over the ordering levels s and S. Existence of an optimizing pair (s*, S*) is established for these models under very weak conditions; nonexistence of an optimizing pair is also discussed. Using average expected occupation and ordering measures and weak convergence arguments, weak conditions are given for the optimality of the (s*, S*) ordering policy in the general class of admissible policies. The analysis involves an auxiliary function that is globally C^2 and which, together with the infimal cost, solves a particular system of linear equations and inequalities related to but different from the long-term average Hamilton‒Jacobi‒Bellman equation. This approach provides an analytical solution to the problem rather than a solution involving intricate analysis of the stochastic processes. The range of applicability of these results is illustrated on a drifted Brownian motion inventory model, both unconstrained and reflected, and on a geometric Brownian motion inventory model under two different cost structures.
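As a rough illustration of the policy class analysed above (not of the paper's optimality conditions), the following Euler-scheme sketch simulates a drifted Brownian motion inventory under an (s, S) policy and estimates its long-run average cost; the cost parameters and their functional forms are assumptions made for the example.

```python
import numpy as np

def average_cost_sS(s, S, drift=-1.0, sigma=0.5, K=5.0, c=1.0, h=0.25,
                    T=10_000.0, dt=1e-2, seed=3):
    """Euler sketch of a drifted Brownian motion inventory under an (s, S)
    policy: whenever the inventory drops to s, an order instantly raises it to
    S.  Returns the simulated long-run average of fixed plus proportional
    ordering costs and linear holding costs."""
    rng = np.random.default_rng(seed)
    x = S
    total_cost = 0.0
    for _ in range(int(T / dt)):
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        if x <= s:                             # order up to S
            total_cost += K + c * (S - x)
            x = S
        total_cost += h * max(x, 0.0) * dt     # holding cost accrues continuously
    return total_cost / T

# Crude comparison of two candidate policies.
for s, S in [(0.0, 2.0), (0.0, 4.0)]:
    print(f"(s, S) = ({s}, {S}): average cost ~ {average_cost_sS(s, S):.3f}")
```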
We consider a Markov-modulated fluid flow production model under the D-policy, that is, as soon as the storage reaches level 0, the machine becomes idle until the total storage exceeds a predetermined threshold D. Thus, the production process alternates between a busy and an idle machine. During the busy period, the storage decreases linearly due to continuous production and increases due to supply; during the idle period, no production is rendered by the machine and the storage level increases by only supply arrivals. We consider two types of model with different supply process patterns: continuous inflows with linear rates (fluid type), and batch inflows, where the supplies arrive according to a Markov additive process (MAP) and their sizes are independent and have phase-type distributions depending on the type of arrival (MAP type). Four types of cost are considered: a setup cost, a production cost, a penalty cost for an idle machine, and a storage cost. Using tools from multidimensional martingale and hitting time theory, we derive explicit formulae for these cost functionals in the discounted case. Numerical examples, a sensitivity analysis, and insights are provided.
Sequential order statistics can be used to describe the ordered lifetimes of components of a system when the failure of a component may affect the reliability of the remaining components. After a reliability system consisting of n components fails, some of its components may still be alive. In this paper we first establish some univariate stochastic orderings and ageing properties of the residual lifetimes of the live components in a sequential (n-r+1)-out-of-n system. We also obtain a characterizing result for the exponential distribution based on uncorrelated residual lifetimes of live components. Finally, we provide some sufficient conditions for comparing vectors of residual lifetimes of the live components from two sequential (n-r+1)-out-of-n systems. The results established here extend some well-known results in the literature.
We use the Stein‒Chen method to obtain compound Poisson approximations for the distribution of the number of subgraphs in a generalised stochastic block model which are isomorphic to some fixed graph. This model generalises the classical stochastic block model to allow for the possibility of multiple edges between vertices. We treat both the case in which the fixed graph is a simple graph and the case in which it has multiple edges. The former results apply when the fixed graph is a member of the class of strictly balanced graphs, and the latter results apply to a suitable generalisation of this class to graphs with multiple edges. We also consider a further generalisation of the model to pseudo-graphs, which may include self-loops as well as multiple edges, and establish a parameter regime in the multiple edge stochastic block model in which Poisson approximations are valid. The results are applied to obtain Poisson and compound Poisson approximations (in different regimes) for subgraph counts in the Poisson stochastic block model and degree corrected stochastic block model of Karrer and Newman (2011).
One model of real-life spreading processes is that of first-passage percolation (also called the SI model) on random graphs. Social interactions often follow bursty patterns, which are usually modelled with independent and identically distributed heavy-tailed passage times on edges. On the other hand, random graphs are often locally tree-like, and spreading on trees with leaves might be very slow due to bottleneck edges with huge passage times. Here we consider the SI model with passage times following a power-law distribution ℙ(ξ>t) ∼ t^{-α} with infinite mean. For any finite connected graph G with a root s, we find the largest number of vertices κ(G,s) that are infected in finite expected time, and prove that for every k ≤ κ(G,s), the expected time to infect k vertices is at most O(k^{1/α}). Then we show that adding a single edge from s to a random vertex in a random tree 𝒯 typically increases κ(𝒯,s) from a bounded variable to a fraction of the size of 𝒯, thus severely accelerating the process. We examine this acceleration effect on some natural models of random graphs: critical Galton–Watson trees conditioned to be large, uniform spanning trees of the complete graph, and on the largest cluster of near-critical Erdős–Rényi graphs. In particular, at the upper end of the critical window, the process is already much faster than exactly at criticality.
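The heavy-tailed SI dynamics are straightforward to simulate on a fixed tree, where the infection time of a vertex is just the sum of the passage times along its path from the root. The sketch below uses a Pareto-type law with ℙ(ξ>t) = t^{-α} for t ≥ 1 (infinite mean when α ≤ 1) on a small path graph; the graph and parameter values are illustrative and are not the random tree models studied in the paper.

```python
import random

def si_spread_times(tree_edges, root, alpha=0.5, seed=5):
    """First-passage percolation (SI model) on a rooted tree with i.i.d.
    Pareto-type passage times satisfying P(xi > t) = t^(-alpha) for t >= 1
    (infinite mean when alpha <= 1).  Returns each vertex's infection time."""
    rng = random.Random(seed)
    neighbours = {}
    for u, v in tree_edges:
        neighbours.setdefault(u, []).append(v)
        neighbours.setdefault(v, []).append(u)
    times = {root: 0.0}
    stack = [root]
    while stack:
        u = stack.pop()
        for v in neighbours.get(u, []):
            if v not in times:
                xi = rng.random() ** (-1.0 / alpha)    # inverse transform: P(xi > t) = t^(-alpha)
                times[v] = times[u] + xi               # unique path in a tree
                stack.append(v)
    return times

# A path of length 10 from the root: bottleneck edges with huge passage times
# make the expected time to reach distant vertices infinite when alpha < 1.
path_edges = [(i, i + 1) for i in range(10)]
times = si_spread_times(path_edges, root=0, alpha=0.5)
print(sorted(times.items()))
```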
We consider a polling system with two queues, exhaustive service, no switchover times, and exponential service times with rate µ in each queue. The waiting cost depends on the position of the queue relative to the server: it costs a customer c per time unit to wait in the busy queue (where the server is) and d per time unit in the idle queue (where there is no server). Customers arrive according to a Poisson process with rate λ. We study the control problem of how arrivals should be routed to the two queues in order to minimize the expected waiting costs and characterize individually and socially optimal routeing policies under three scenarios of available information at decision epochs: no, partial, and complete information. In the complete information case, we develop a new iterative algorithm to determine individually optimal policies (which are symmetric Nash equilibria), and show that such policies can be described by a switching curve. We use Markov decision processes to compute the socially optimal policies. We observe numerically that the socially optimal policy is well approximated by a linear switching curve. We prove that the control policy described by this linear switching curve is indeed optimal for the fluid version of the two-queue polling system.
In this paper we are concerned with the reliability properties of two coherent systems having shared components. We assume that the components of the systems are two overlapping subsets of a set of n components with lifetimes X1,...,Xn. Further, we assume that the components of the systems fail according to the model of sequential order statistics (which is equivalent, under some mild conditions, to the failure model corresponding to a nonhomogeneous pure-birth process). The joint reliability function of the system lifetimes is expressed as a mixture of the joint reliability functions of the sequential order statistics, where the mixing probabilities are given by the bivariate signature matrix associated with the structures of the systems. We investigate some stochastic orderings and dependence properties of the system lifetimes. We also study conditions under which the joint reliability function of systems with shared components of order m can be equivalently written as the joint reliability function of systems of order n (n>m). In order to illustrate the results, we provide several examples.