We construct a response adaptive design, described in terms of a two-color urn model, targeting fixed asymptotic allocations. We prove asymptotic results for the process of colors generated by the urn and for the process of its compositions. An application of the proposed urn model to an estimation problem is presented.
A scale-free tree with parameter β is very close to a star if β is just a bit larger than −1, whereas it is close to a random recursive tree if β is very large. Through the Zagreb index, we consider the whole evolution of the scale-free tree model as β goes from −1 to +∞. The critical values of β are shown to be the first several nonnegative integers. We obtain the first two moments and the asymptotic behavior of this index of a scale-free tree for all β. The generalized plane-oriented recursive trees model is also mentioned in passing, as well as the Gordon-Scantlebury and the Platt indices, which are closely related to the Zagreb index.
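As a rough illustration of the quantity under study, the sketch below grows a scale-free tree under the usual preferential-attachment rule (a new node attaches to an existing node v with probability proportional to deg(v) + β; this construction and the parameter values are assumptions made only for this sketch) and evaluates its Zagreb index, the sum of squared degrees.

```python
import random

def zagreb_of_scale_free_tree(n, beta, rng):
    """Grow a scale-free tree on n nodes, attaching each new node to an existing
    node v with probability proportional to deg(v) + beta, and return the
    Zagreb index, i.e. the sum of squared degrees."""
    deg = [0] * n
    deg[0] = deg[1] = 1                      # start from a single edge 0 -- 1
    for v in range(2, n):
        weights = [deg[u] + beta for u in range(v)]
        parent = rng.choices(range(v), weights=weights, k=1)[0]
        deg[parent] += 1
        deg[v] = 1
    return sum(d * d for d in deg)

# crude Monte Carlo estimate of the mean Zagreb index for a few values of beta > -1
for beta in (-0.5, 1.0, 10.0):
    est = sum(zagreb_of_scale_free_tree(300, beta, random.Random(s)) for s in range(20)) / 20
    print(beta, est)
```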
Let {X(t):t∈ℝ} be the integrated on–off process with regularly varying on-periods, and let {Y(t):t∈ℝ} be a centered Lévy process with regularly varying positive jumps (independent of X(·)). We study the exact asymptotics of ℙ(sup_{t≥0}{X(t)+Y(t)−ct}>u) as u→∞, with special attention to the case r=c, where r is the increase rate of the on–off process during the on-periods.
Multivariate regular variation plays a role in assessing tail risk in diverse applications such as finance, telecommunications, insurance, and environmental science. The classical theory, being based on an asymptotic model, sometimes leads to inaccurate and useless estimates of probabilities of joint tail regions. This problem can be partly ameliorated by using hidden regular variation (see Resnick (2002) and Mitra and Resnick (2011)). We offer a more flexible definition of hidden regular variation that provides improved risk estimates for a larger class of tail risk regions.
Consider a generic data unit of random size L that needs to be transmitted over a channel of unit capacity. The channel availability dynamic is modeled as an independent and identically distributed sequence {A, A_i}_{i≥1} that is independent of L. During each period of time that the channel becomes available, say A_i, we attempt to transmit the data unit. If L < A_i, the transmission is considered successful; otherwise, we wait for the next available period A_{i+1} and attempt to retransmit the data from the beginning. We investigate the asymptotic properties of the number of retransmissions N and the total transmission time T until the data is successfully transmitted. In the context of studying the completion times in systems with failures where jobs restart from the beginning, it was first recognized by Fiorini, Sheahan and Lipsky (2005) and Sheahan, Lipsky, Fiorini and Asmussen (2006) that this model results in power-law and, in general, heavy-tailed delays. The main objective of this paper is to uncover the detailed structure of this class of heavy-tailed distributions induced by retransmissions. More precisely, we study how the functional relationship ℙ[L>x]^{-1} ≈ Φ(ℙ[A>x]^{-1}) impacts the distributions of N and T; the approximation ‘≈’ will be appropriately defined in the paper based on the context. Depending on the growth rate of Φ(·), we discover several criticality points that separate classes of different functional behaviors of the distribution of N. For example, we show that if log Φ(n) is slowly varying then log(1/ℙ[N>n]) is essentially slowly varying as well. Interestingly, if log Φ(n) grows slower than e^{√(log n)} then we have the asymptotic equivalence log ℙ[N>n] ≈ −log Φ(n). However, if log Φ(n) grows faster than e^{√(log n)}, this asymptotic equivalence does not hold and admits a different functional form. Similarly, different types of distributional behavior are shown for moderately heavy tails (Weibull distributions), where log ℙ[N>n] ≈ −(log Φ(n))^{1/(β+1)}, assuming that log Φ(n) ≈ n^β, as well as the nearly exponential ones of the form log ℙ[N>n] ≈ −n/(log n)^{1/γ}, γ>0, when Φ(·) grows faster than two exponential scales, log log Φ(n) ≈ n^γ.
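A minimal simulation of the retransmission mechanism described above, with illustrative Pareto choices for L and the A_i and an assumed bookkeeping of T (wasted periods plus the final transmission time); these choices are not taken from the paper.

```python
import random

def retransmission_sample(rng, alpha_L=2.5, alpha_A=2.0):
    """One realization of the retransmission model: data size L and i.i.d. channel
    availability periods A_i are Pareto here (an illustrative choice). Returns
    (N, T), the number of attempts and the total time until success."""
    L = rng.paretovariate(alpha_L)
    n, t = 0, 0.0
    while True:
        n += 1
        A = rng.paretovariate(alpha_A)
        if L < A:
            return n, t + L        # success: the last attempt needs only L time units
        t += A                     # failure: the whole availability period is wasted

rng = random.Random(42)
samples = [retransmission_sample(rng) for _ in range(10000)]
# empirical tail of N, illustrating the heavy-tailed delays discussed above
for n0 in (1, 2, 5, 10, 20):
    print(n0, sum(N >= n0 for N, _ in samples) / len(samples))
```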
We introduce and analyze a random tree model associated with Hoppe's urn. The tree is built successively by adding nodes to the existing tree, starting from the single root node. In each step a node is added to the tree as a child of an existing node, where the parent node is chosen randomly with probability proportional to its weight. The root node has weight ϑ>0, a given fixed parameter; all other nodes have weight 1. This resembles the stochastic dynamics of Hoppe's urn. For ϑ=1, the resulting tree is the well-studied random recursive tree. We analyze the height, internal path length, and number of leaves of the Hoppe tree with n nodes, as well as the depth of the last inserted node, asymptotically as n→∞. We derive expectations, variances, and asymptotic distributions of these parameters.
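The growth rule of the Hoppe tree translates directly into a short simulation; the sketch below builds one tree and reports the statistics mentioned above (the parameter values are arbitrary illustrative choices).

```python
import random

def hoppe_tree_stats(n, theta, rng):
    """Grow a Hoppe tree with n nodes: the root has weight theta, every other node
    weight 1; each new node picks its parent proportionally to these weights.
    Returns (height, internal path length, number of leaves, depth of last node)."""
    depth = [0]                    # depth of node 0 (the root)
    children = [0]                 # number of children of each node
    for v in range(1, n):
        total = theta + (v - 1)    # total weight of the v existing nodes
        u = rng.random() * total
        parent = 0 if u < theta else 1 + int(u - theta)   # non-root parents are uniform
        children[parent] += 1
        depth.append(depth[parent] + 1)
        children.append(0)
    leaves = sum(c == 0 for c in children)
    return max(depth), sum(depth), leaves, depth[-1]

rng = random.Random(1)
print(hoppe_tree_stats(10000, theta=5.0, rng=rng))
```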
In this paper we introduce discrete-time semi-Markov random evolutions (DTSMREs) and study asymptotic properties, namely, averaging, diffusion approximation, and diffusion approximation with equilibrium by the martingale weak convergence method. The controlled DTSMREs are introduced and Hamilton–Jacobi–Bellman equations are derived for them. The applications here concern the additive functionals (AFs), geometric Markov renewal chains (GMRCs), and dynamical systems (DSs) in discrete time. The rates of convergence in the limit theorems for DTSMREs and AFs, GMRCs, and DSs are also presented.
Scaling of proposals for Metropolis algorithms is an important practical problem in Markov chain Monte Carlo implementation. Analyses of the random walk Metropolis for high-dimensional targets with specific functional forms have shown that in many cases the optimal scaling is achieved when the acceptance rate is approximately 0.234, but that there are exceptions. We present a general set of sufficient conditions which are invariant to orthonormal transformation of the coordinate axes and which ensure that the limiting optimal acceptance rate is 0.234. The criteria are shown to hold for the joint distribution of successive elements of a stationary pth-order multivariate Markov process.
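A quick numerical illustration of the 0.234 rule for a random walk Metropolis chain; the isotropic Gaussian target and the 2.38/√d proposal scaling are standard illustrative choices, not taken from the paper.

```python
import numpy as np

def rwm_acceptance(dim, scale, n_iters=20000, seed=0):
    """Random walk Metropolis on a standard Gaussian target in `dim` dimensions;
    returns the observed acceptance rate for proposal std `scale`/sqrt(dim)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(dim)
    logp = -0.5 * x @ x
    accepted = 0
    for _ in range(n_iters):
        y = x + (scale / np.sqrt(dim)) * rng.standard_normal(dim)
        logq = -0.5 * y @ y
        if np.log(rng.random()) < logq - logp:
            x, logp = y, logq
            accepted += 1
    return accepted / n_iters

# with the classical scaling 2.38/sqrt(d), the acceptance rate hovers near 0.234
for d in (10, 50, 200):
    print(d, round(rwm_acceptance(d, 2.38), 3))
```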
In this paper we study limit theorems for the Feller game, which is constructed from one-dimensional simple symmetric random walks and corresponds to the St. Petersburg game. Motivated by a generalization of the St. Petersburg game investigated by Gut (2010), we generalize the Feller game by introducing a parameter α. We investigate limit distributions of the generalized Feller game corresponding to the results of Gut. First, we give the weak law of large numbers for α=1. Moreover, for 0<α≤1, we have convergence in distribution to a stable law with index α. Finally, some limit theorems for polynomial-size and geometric-size deviations are given.
Cloud computing shares a common pool of resources across customers at a scale that is orders of magnitude larger than traditional multiuser systems. Constituent physical compute servers are allocated multiple ‘virtual machines’ (VMs) to serve simultaneously. Each VM user should ideally be unaffected by others’ demand. Naturally, this environment produces new challenges for service providers in meeting customer expectations while extracting efficient utilization from server resources. We study a new cloud service metric that measures prolonged latency or delay suffered by customers. We model the workload process of a cloud server and analyze the process as the customer population grows. The capacity required to ensure that the average workload does not exceed a threshold over long segments is characterized. This can be used by cloud operators to provide service guarantees on avoiding long durations of latency. As part of the analysis, we provide a uniform large deviation principle for collections of random variables that is of independent interest.
Upper deviation results are obtained for the split time of a supercritical continuous-time Markov branching process. More precisely, we establish the existence of logarithmic limits for the likelihood that the split times of the process are greater than an identified value and determine an expression for the limiting quantity. We also give an estimate of the lower deviation probability of the split times, which shows that the scaling is completely different from that of the upper deviations.
We study the asymptotic behaviors of estimators of the parameters in an Ornstein–Uhlenbeck process with linear drift, such as the law of the iterated logarithm (LIL) and Berry–Esseen bounds. As an application of the Berry–Esseen bounds, the precise rates in the LIL for the estimators are obtained.
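As a rough illustration of drift-parameter estimation for an Ornstein–Uhlenbeck process, the sketch below simulates dX = θ(μ − X)dt + σ dW by an Euler scheme and recovers the linear drift by least squares on the discretized increments; the parametrization and the estimator are illustrative assumptions, not necessarily those analyzed in the paper.

```python
import numpy as np

def simulate_ou(theta, mu, sigma, T, dt, seed=0):
    """Euler scheme for dX = theta*(mu - X) dt + sigma dW, a common parametrization
    of an OU process with linear drift."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.empty(n + 1)
    x[0] = mu
    noise = sigma * np.sqrt(dt) * rng.standard_normal(n)
    for k in range(n):
        x[k + 1] = x[k] + theta * (mu - x[k]) * dt + noise[k]
    return x

def drift_least_squares(x, dt):
    """Estimate the linear drift a + b*x by regressing increments on the state."""
    dx = np.diff(x)
    design = np.column_stack([np.ones(len(dx)), x[:-1]]) * dt
    (a, b), *_ = np.linalg.lstsq(design, dx, rcond=None)
    return a, b

x = simulate_ou(theta=2.0, mu=1.0, sigma=0.5, T=200.0, dt=0.01, seed=1)
a, b = drift_least_squares(x, dt=0.01)
print("theta_hat =", -b, "mu_hat =", -a / b)   # drift a + b*x = theta*(mu - x)
```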
In this paper we extend the existing literature on the asymptotic behavior of the partial sums and the sample covariances of long-memory stochastic volatility models in the case of infinite variance. We also consider models with leverage, for which our results are entirely new in the infinite-variance case. Depending on the interplay between the tail behavior and the intensity of dependence, two types of convergence rates and limiting distributions can arise. In particular, we show that the asymptotic behavior of partial sums is the same for both long memory in stochastic volatility and models with leverage, whereas there is a crucial difference when sample covariances are considered.
Let {X_i} be a sequence of independent, identically distributed random variables with an intermediate regularly varying right tail F̄. Let (N, C_1, C_2,…) be a nonnegative random vector independent of the {X_i} with N ∈ ℕ ∪ {∞}. We study the weighted random sum S_N = ∑_{i=1}^N C_i X_i and its maximum, M_N = sup_{1≤k<N+1} ∑_{i=1}^k C_i X_i. This type of sum appears in the analysis of stochastic recursions, including weighted branching processes and autoregressive processes. In particular, we derive conditions under which P(M_N > x) ∼ P(S_N > x) ∼ E[∑_{i=1}^N F̄(x/C_i)] as x→∞. When E[X_1]>0 and the distribution of Z_N = ∑_{i=1}^N C_i is also intermediate regularly varying, we obtain the asymptotics P(M_N > x) ∼ P(S_N > x) ∼ E[∑_{i=1}^N F̄(x/C_i)] + P(Z_N > x/E[X_1]). For completeness, when the distribution of Z_N is intermediate regularly varying and heavier than F̄, we also obtain conditions under which the asymptotic relations P(M_N > x) ∼ P(S_N > x) ∼ P(Z_N > x/E[X_1]) hold.
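A Monte Carlo sketch of the first asymptotic relation, with illustrative choices (geometric N, uniform weights C_i, Pareto X_i, and a fixed threshold) that are assumptions made only for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, p_geo, x0 = 1.8, 0.3, 20.0                 # tail index, geometric parameter, threshold
tail = lambda y: np.minimum(1.0, y ** (-alpha))   # F-bar for a Pareto(alpha) with unit scale

n_sim = 50_000
hits_sum, hits_max, single_jump = 0, 0, 0.0
for _ in range(n_sim):
    N = rng.geometric(p_geo)                      # number of terms, N >= 1
    C = rng.uniform(0.0, 1.0, size=N)             # weights, independent of the X_i
    X = rng.pareto(alpha, size=N) + 1.0           # Pareto(alpha) samples on [1, inf)
    partial = np.cumsum(C * X)
    hits_sum += partial[-1] > x0                  # event {S_N > x0}
    hits_max += partial.max() > x0                # event {M_N > x0}
    single_jump += tail(x0 / C).sum()             # one sample of sum_i F-bar(x0 / C_i)

print("P(S_N > x):        ", hits_sum / n_sim)
print("P(M_N > x):        ", hits_max / n_sim)
print("E[sum F-bar(x/C_i)]:", single_jump / n_sim)
```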
In this paper we study the functional central limit theorem (CLT) for stationary Markov chains with a self-adjoint operator and general state space. We investigate the case when the variance of the partial sum is not asymptotically linear in n, and establish that conditional convergence in distribution of partial sums implies the functional CLT. The main tools are maximal inequalities that are further exploited to derive conditions for tightness and convergence to the Brownian motion.
Motivated by stability questions on piecewise-deterministic Markov models of bacterial chemotaxis, we study the long-time behavior of a variant of the classic telegraph process having a nonconstant jump rate that induces a drift towards the origin. We compute its invariant law and show exponential ergodicity, obtaining a quantitative control of the total variation distance to equilibrium at each instant of time. These results rely on an exact description of the excursions of the process away from the origin and on the explicit construction of an original coalescent coupling for both the velocity and position. Sharpness of the obtained convergence rate is discussed.
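For intuition, the sketch below simulates one simple process of this kind: the velocity flips at a larger rate when the particle moves away from the origin than when it moves toward it, which induces the drift back to the origin. The specific rate function, parameters, and time discretization are assumptions made for illustration only, not the model of the paper.

```python
import random

def telegraph_position(t_end, rate_away=2.0, rate_toward=1.0, dt=0.01, seed=0):
    """Telegraph-type particle: velocity +1 or -1, flipping at rate `rate_away`
    while moving away from the origin and at the smaller rate `rate_toward`
    while moving toward it, so the motion drifts back to 0."""
    rng = random.Random(seed)
    x, v = 0.0, 1.0
    for _ in range(int(t_end / dt)):
        rate = rate_away if x * v > 0 else rate_toward
        if rng.random() < rate * dt:    # flip the velocity with probability ~ rate*dt
            v = -v
        x += v * dt
    return x

# empirical snapshot of the position at a large time: it concentrates near the origin
xs = [telegraph_position(100.0, seed=s) for s in range(400)]
print(sum(xs) / len(xs), sum(abs(x) for x in xs) / len(xs))
```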
Using Stein's method, we derive explicit upper bounds on the total variation distance between a Poisson-binomial law (the distribution of a sum of independent but not necessarily identically distributed Bernoulli random variables) and a Pólya distribution with the same support, mean, and variance; a nonuniform bound on the pointwise distance between the probability mass functions is also given. A numerical comparison of alternative distributional approximations on a somewhat representative collection of case studies is also exhibited. The evidence shows that no single approximation is uniformly most accurate, though it suggests that the Pólya approximation might be preferred in several parameter domains encountered in practice.
The classical secretary problem for selecting the best item is studied when the actual values of the items are observed with noise. One of the main appeals of the secretary problem is that the optimal strategy is able to find the best observation with a nontrivial probability of about 0.37, even when the number of observations is arbitrarily large. The results are strikingly different when the qualities of the secretaries are observed with noise. If there is no noise then the only information that is needed is whether an observation is the best among those already observed. Since the observations are assumed to be independent and identically distributed, the solution to this problem is distribution free. In the case of noisy data, the results are no longer distribution free. Furthermore, we need to know the rank of the noisy observation among those already observed. Finally, the probability of finding the best secretary often goes to 0 as the number of observations, n, goes to ∞. The results depend heavily on the behavior of pn, the probability that the observation that is best among the noisy observations is also best among the noiseless observations. Results involving optimal strategies when only noisy data are available are described, and examples are given to elucidate the results.
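For intuition, the sketch below applies the classical 1/e-threshold rule to noisy observations (Gaussian qualities and Gaussian noise are illustrative assumptions, not the setting of the paper) and estimates how often the truly best candidate is selected; with zero noise the familiar probability near 0.37 reappears, and it degrades as the noise grows.

```python
import random

def secretary_success(n, noise_sd, n_trials=5000, seed=0):
    """Classical 1/e-threshold secretary rule applied to noisy observations:
    true qualities are i.i.d. standard normals, we observe quality + Gaussian noise,
    and we count how often the selected candidate is the truly best one."""
    rng = random.Random(seed)
    cutoff = max(1, round(n / 2.718281828))
    wins = 0
    for _ in range(n_trials):
        true_vals = [rng.gauss(0.0, 1.0) for _ in range(n)]
        noisy = [v + rng.gauss(0.0, noise_sd) for v in true_vals]
        best_true = max(range(n), key=lambda i: true_vals[i])
        benchmark = max(noisy[:cutoff])
        chosen = n - 1                       # forced to take the last candidate if none exceeds
        for i in range(cutoff, n):
            if noisy[i] > benchmark:         # first candidate that is best-so-far after cutoff
                chosen = i
                break
        wins += chosen == best_true
    return wins / n_trials

for sd in (0.0, 0.5, 2.0):
    print(sd, secretary_success(100, sd))
```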
Suppose that both you and your friend toss an unfair coin n times, for which the probability of heads is equal to α. What is the probability that you obtain at least d more heads than your friend if you make r additional tosses? We obtain asymptotic and monotonicity/convexity properties for this competing probability as a function of n, and demonstrate a surprising phase transition phenomenon as the parameters d, r, and α vary. Our main tools are integral representations based on Fourier analysis.
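The probability in question can be computed exactly by conditioning on the friend's number of heads; the sketch below does this for a few illustrative parameter values (the classical d = r = 1, α = 1/2 case gives exactly 1/2).

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def prob_lead(n, d, r, alpha):
    """P(X >= Y + d), where X ~ Bin(n + r, alpha) is your number of heads
    and Y ~ Bin(n, alpha) is your friend's, tossed independently."""
    total = 0.0
    for y in range(n + 1):
        py = binom_pmf(y, n, alpha)
        tail_x = sum(binom_pmf(x, n + r, alpha) for x in range(y + d, n + r + 1))
        total += py * tail_x
    return total

print(prob_lead(n=20, d=1, r=1, alpha=0.5))   # the classical puzzle: exactly 0.5
print(prob_lead(n=20, d=2, r=3, alpha=0.3))
```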
We study a class of tenable, irreducible, nondegenerate zero-balanced Pólya urn schemes. We give a full characterization of the class by necessary and sufficient conditions. Only forms with a certain cyclic structure in their replacement matrix are admissible. The scheme has a steady state in which the proportions are governed by the principal (left) eigenvector of the average replacement matrix. We study the gradual change of any such urn from its initial condition to the steady state as the number of balls n → ∞. We look at the status of an urn starting with an asymptotically positive proportion of each color after j_n draws. We identify three phases of j_n: the growing sublinear, the linear, and the superlinear. In the growing sublinear phase the number of balls of different colors has an asymptotic joint multivariate normal distribution, with mean and covariance structure influenced by the initial conditions. In the linear phase a different multivariate normal distribution kicks in, in which the influence of the initial conditions is attenuated. The steady state is not a good approximation until a certain superlinear amount of time has elapsed. We give interpretations of how the results in the different phases conjoin at the ‘seam lines’. In fact, these Gaussian phases are all manifestations of one master theorem. The results are obtained via multivariate martingale theory. We conclude with some illustrative examples.
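A small simulation of one scheme of this general type, a three-color cyclic zero-balanced urn (this particular replacement matrix and the chosen horizons are illustrative assumptions, not taken from the paper), showing the composition at sublinear, linear, and superlinear numbers of draws.

```python
import random

def urn_proportions(initial, draws, seed=0):
    """Zero-balanced cyclic urn: drawing color i removes one ball of color i and
    adds one ball of color (i+1) mod k, so the total number of balls never changes."""
    rng = random.Random(seed)
    counts = list(initial)
    k, total = len(counts), sum(initial)
    for _ in range(draws):
        u = rng.random() * total
        i, acc = 0, counts[0]
        while u >= acc:                     # sample a color proportionally to its count
            i += 1
            acc += counts[i]
        counts[i] -= 1
        counts[(i + 1) % k] += 1
    return [c / total for c in counts]

# start far from the uniform steady state and inspect sublinear / linear / superlinear horizons
n = 30000
initial = [int(0.7 * n), int(0.2 * n), n - int(0.7 * n) - int(0.2 * n)]
for draws in (int(n ** 0.5), n, 20 * n):
    print(draws, urn_proportions(initial, draws))
```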