On the probability of a Pareto record

Given a sequence of independent random vectors taking values in ${\mathbb R}^d$ and having common continuous distribution function $F$, say that the $n^{\rm \scriptsize th}$ observation sets a (Pareto) record if it is not dominated (in every coordinate) by any preceding observation. Let $p_n(F) \equiv p_{n, d}(F)$ denote the probability that the $n^{\rm \scriptsize th}$ observation sets a record. There are many interesting questions to address concerning $p_n$ and multivariate records more generally, but this short paper focuses on how $p_n$ varies with $F$, particularly if, under $F$, the coordinates exhibit negative dependence or positive dependence (rather than independence, a more-studied case). We introduce new notions of negative and positive dependence ideally suited for such a study, called negative record-setting probability dependence (NRPD) and positive record-setting probability dependence (PRPD), relate these notions to existing notions of dependence, and for fixed $d \geq 2$ and $n \geq 1$ prove that the image of the mapping $p_n$ on the domain of NRPD (respectively, PRPD) distributions is $[p^*_n, 1]$ (resp., $[n^{-1}, p^*_n]$), where $p^*_n$ is the record-setting probability for any continuous $F$ governing independent coordinates.


1. Introduction, background, and main results
1.1. Introduction, notation, and definitions. We begin with some definitions, including the Definition 1.2 of (multivariate) records as studied in this paper. For x, y ∈ R^d, we write x ≤ y or y ≥ x to mean that x_j ≤ y_j for 1 ≤ j ≤ d, and we write x ≺ y or y ≻ x to mean that x_j < y_j for 1 ≤ j ≤ d. For x ∈ R^d, we use the usual notation ‖x‖_1 := Σ_{j=1}^d |x_j|. We use the standard notation ⇒ for weak convergence of probability measures in Euclidean spaces (or their distribution functions).
Throughout this paper, X^(1), X^(2), … are assumed to be i.i.d. (independent and identically distributed) copies of a d-dimensional random vector X with distribution function F and law (or distribution) denoted by L(X). Throughout the paper we restrict attention to continuous F, mainly to avoid the complicating mathematical nuisance of ties, as explained in Remark 1.1(d).
Remark 1.1. (a) As noted by a reviewer of a previous draft, a distribution function F on R^d is continuous if and only if each of its d univariate marginals is. This is easy to prove from the observation in [4, Section 3 (only in first edition)] that the F corresponding to a random vector X is continuous at x ∈ R^d if and only if F(x) = P(X ≺ x).
(b) Specializing (a) to d = 1, the distribution function of a random variable Y is continuous if and only if P(Y = y) = 0 for each y ∈ R.
(c) We note in passing, however, that, in contradistinction to (b), atomlessness of a random vector does not imply continuity of the distribution function in dimensions 2 and higher; see, e.g., [6,Section 8.5].
(d) Combining (a)–(b), it follows that, if the d-dimensional random vector X has continuous distribution function F, then almost surely for every 1 ≤ j ≤ d there are no ties among X^(1)_j, X^(2)_j, ….
Definition 1.2. (a) For n ≥ 1, we say that X^(n) is a (Pareto) record (or that it sets a record at time n) if X^(n) ≤ X^(i) fails for all 1 ≤ i < n.
(b) If 1 ≤ k ≤ n, we say that X^(k) is a current record (or remaining record, or maximum) at time n if X^(k) ≤ X^(i) fails for all 1 ≤ i ≤ n with i ≠ k.
(c) For n ≥ 1 we let R_n denote the number of records X^(k) with 1 ≤ k ≤ n and let r_n denote the number of remaining records at time n.
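Definition 1.2 translates directly into a short computational check. The sketch below (with hypothetical helper names `dominates`, `record_indicators`, and `remaining_records`, not from the paper) counts R_n and r_n for simulated continuous data, where ties are almost surely absent.

```python
import random

def dominates(x, y):
    """True if x >= y coordinatewise, i.e. y <= x in the partial order of Section 1.1."""
    return all(xj >= yj for xj, yj in zip(x, y))

def record_indicators(obs):
    """records[k] is True iff obs[k] sets a Pareto record (Definition 1.2(a))."""
    return [not any(dominates(obs[i], obs[k]) for i in range(k))
            for k in range(len(obs))]

def remaining_records(obs):
    """Indices of current records (maxima) at time n = len(obs) (Definition 1.2(b))."""
    n = len(obs)
    return [k for k in range(n)
            if not any(i != k and dominates(obs[i], obs[k]) for i in range(n))]

random.seed(0)
obs = [tuple(random.random() for _ in range(2)) for _ in range(100)]
R_n = sum(record_indicators(obs))   # number of records among the first n observations
r_n = len(remaining_records(obs))   # number of remaining records at time n
```

Every remaining record at time n was a record when it first arrived, so r_n ≤ R_n always.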
Remark 1.3. It is clear from Definition 1.2 that if X̃ = (g_1(X_1), …, g_d(X_d)) where g_1, …, g_d are strictly increasing transformations, then the stochastic processes (R_n) and (r_n) are the same for the i.i.d. sequence X̃^(1), X̃^(2), … as for X^(1), X^(2), …. Further, since we assume that F is continuous, it follows from Remark 1.1 that the distribution function F̃ of X̃ is also continuous.
Remark 1.4. We note that the expected number E r_n of maxima at time n is n times the probability that X^(n) sets a record. Thus our main Theorems 1.11–1.12 about record-setting probabilities also give information about the expected number of maxima when i.i.d. vectors are sampled.
Omitting, for now, any dependence on F or d from the notation, the probability p_n that X^(n) sets a record is given by
p_n = E[(1 − H(−X))^{n−1}],    (1.1)
where H denotes the distribution function corresponding to −X; indeed, given X^(n) = x, each earlier observation independently dominates x with probability P(X ≥ x) = P(−X ≤ −x) = H(−x).
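Formula (1.1) expresses p_n as E[(1 − H(−X))^{n−1}]; for i.i.d. Uniform(0, 1) coordinates one has H(−x) = P(X ≥ x) = ∏_j (1 − x_j), so both sides can be estimated by Monte Carlo and compared. A sketch (illustrative parameters d = 2 and n = 3, chosen by us):

```python
import random

random.seed(1)
d, n, trials = 2, 3, 40_000

# Estimate via (1.1): p_n = E[(1 - H(-X))^(n-1)], with H(-x) = prod_j (1 - x_j)
# for i.i.d. Uniform(0,1) coordinates.
acc = 0.0
for _ in range(trials):
    x = [random.random() for _ in range(d)]
    h = 1.0
    for xj in x:
        h *= 1.0 - xj
    acc += (1.0 - h) ** (n - 1)
formula_est = acc / trials

# Direct estimate: how often does the n-th of n i.i.d. draws set a record?
hits = 0
for _ in range(trials):
    pts = [[random.random() for _ in range(d)] for _ in range(n)]
    last = pts[-1]
    dominated = any(all(p[j] >= last[j] for j in range(d)) for p in pts[:-1])
    hits += 0 if dominated else 1
direct_est = hits / trials
```

Both estimates should agree (up to Monte Carlo error) with the exact value p*_{3,2} = 1 − 1/2 + 1/9 = 11/18 from Section 3.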
Remark 1.5. For fixed d and n, the mapping from F (equivalently, from H) to p_n is many-to-one; recall Remark 1.3. In particular, p_n has the same value for all continuous F such that the coordinates of X are independent. Accordingly, call two continuous distribution functions F and F̃ record-probability (RP) equivalent if H(−X) and H̃(−X̃) have the same distribution; by (1.1), RP-equivalent distributions give the same value of p_n for every n. We are now prepared to define a partial order on the RP equivalence classes.
Definition 1.6. Let C and C̃ be RP equivalence classes with (arbitrarily chosen) respective representatives F and F̃. We say that C ≤ C̃ in the RP ordering (or, by abuse of terminology, that F ≤ F̃ in the RP ordering) if H(−X) ≥ H̃(−X̃) stochastically.
Remark 1.7. From (1.1) it follows immediately that if F ≤ F̃ in the RP ordering, then p_n ≤ p̃_n for every n.
Let C* denote the RP equivalence class corresponding to independent coordinates. We next introduce new notions of negative dependence and positive dependence; we relate these notions to more standard notions later, in Section 4.
Definition 1.8. We will say that F is negatively record-setting probability dependent (NRPD) if its RP equivalence class C satisfies C ≥ C* in the RP ordering.
Definition 1.9. We will say that F is positively record-setting probability dependent (PRPD) if its RP equivalence class C satisfies C ≤ C* in the RP ordering.
Remark 1.10. Thus any F having independent coordinates is both NRPD and PRPD.
We can now state our two main results, Theorems 1.11–1.12 below. For both, let p_n(F) ≡ p_{n,d}(F) denote the probability that the n-th observation X^(n) from the (continuous) distribution F sets a record, and let p*_n denote the value when F ∈ C*.
Remark 1.13(b). For n = 1 the results of Theorems 1.11–1.12 are trivial, since we have p_{1,d}(F) ≡ 1; so in proving the theorems we may assume n ≥ 2.
[Figure 1. The strategy for proving Theorems 1.11–1.12; here the random variable "PA_a" has the PA distribution F̃_a described in Section 6, and decreases to 0 in stochastic ordering as a ↑ (because the same is true in LR ordering).]
Corollary 1.14. For fixed d ≥ 2 and n ≥ 1 the image of the mapping p_n on the domain of all continuous distributions F is precisely the interval [n^{−1}, 1].
We outline here the strategy, as illustrated in Figure 1 and carried out in Section 7, for proving Theorems 1.11–1.12 (and subsequently Corollary 1.14). Let R_N and R_P denote the respective images. It is immediate from our definitions that R_N ⊆ [p*_n, 1] and R_P ⊆ [0, p*_n], and by considering just first coordinates (see Lemma 2.2) we quickly narrow the latter to R_P ⊆ [n^{−1}, p*_n]. To show the reverse containments, we then fill the interval [p*_n, 1] with elements of R_N by choosing distribution functions F from a certain class of marginalized-Dirichlet distributions and their weak limits, and we fill the interval [n^{−1}, p*_n] with elements of R_P by choosing distribution functions F from a certain class of distributions with positively associated coordinates (more specifically, certain scale mixtures of i.i.d. Exponential distributions) and their weak limits.
1.3. Brief literature review. Let us mention some related literature concerning Pareto records; we continue to assume F is continuous throughout this review. The book [1] is a standard reference for univariate records (the case d = 1). For multivariate records in the case of independent coordinates, we have already remarked that the record-setting probability p_n = p*_n does not depend on the distributions of the individual coordinates, but other aspects (such as the location of remaining records) do. Usually, as in [2] (see also the references therein), the coordinates are taken to be i.i.d., either Uniform(0, 1) or standard Exponential. Bai et al.
[2] obtain, for fixed d and for both R_n and r_n, asymptotic expansions as n → ∞ for the expected value and variance and a central limit theorem with a Berry–Esseen bound. The main contributions of Fill and Naiman [12] are localization theorems for the Pareto frontier (i.e., the topological boundary between the record-setting region and its complement when coordinates are i.i.d. standard Exponential), and some of those theorems are substantially sharpened in [13]. An importance-sampling algorithm for sampling records is presented, and partially analyzed, in [11]. A limiting distribution (again, for fixed d as n → ∞) is established for the number r_{n−1} + 1 − r_n of remaining records broken by X^(n), conditionally given that X^(n) sets a record, for d = 2 in [9] and for general d in [10].
An underlying theme of the present paper is that it is interesting to see how results (for example, concerning asymptotics for moments and distributions for R_n and r_n and localization of the frontier) vary with F. When F is the uniform distribution on the d-dimensional simplex, Hwang and Tsai [14] (see also the references therein, especially Bai et al. [3]) proceed in a fashion similar to that in [2] to obtain analogues of the asymptotic results of that earlier paper. It is worth noting that the computations are more involved in the simplex case than in [2], in part because results about r_n no longer translate immediately to results about R_n (the use of so-called concomitants, see Remark 3.1, becomes more involved), and that the results are enormously different; indeed, for example, as noted in the last line of the table on p. 1867 of [14], we have E r_n ∼ (ln n)^{d−1}/(d − 1)! for independent coordinates while E r_n ∼ Γ(1/d) n^{(d−1)/d} for uniform sampling from the d-simplex.
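The gap between the two growth rates is visible numerically even at modest n. The sketch below (with a hypothetical helper `count_maxima_2d`, a standard sorted sweep for d = 2) compares the number of maxima under independent Uniform coordinates and under uniform sampling from the simplex; parameter choices are ours.

```python
import random

random.seed(4)
n, reps = 2000, 3

def count_maxima_2d(pts):
    # Sweep in decreasing order of the first coordinate: a point is a maximum
    # iff its second coordinate exceeds that of every point seen so far.
    best_y = float("-inf")
    count = 0
    for _, y in sorted(pts, reverse=True):
        if y > best_y:
            count += 1
            best_y = y
    return count

def square():      # independent Uniform(0,1) coordinates
    return (random.random(), random.random())

def simplex():     # uniform on {x > 0, x_1 + x_2 < 1}, via normalized Exponentials
    e = [random.expovariate(1.0) for _ in range(3)]
    s = sum(e)
    return (e[0] / s, e[1] / s)

square_avg = sum(count_maxima_2d([square() for _ in range(n)]) for _ in range(reps)) / reps
simplex_avg = sum(count_maxima_2d([simplex() for _ in range(n)]) for _ in range(reps)) / reps
```

With n = 2000 and d = 2 one expects roughly ln n ≈ 7.6 maxima in the square but about Γ(1/2)√n ≈ 79 on the simplex.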
1.4. Organization. In Section 2 we record two simple but very useful general observations about the record-breaking probability p_n. In Section 3 we briefly review the special case of independent coordinates. In Section 4 we relate the notions of NRPD and PRPD to existing notions of negative and positive dependence. In Section 5 we introduce and treat a class of examples of NRPD distributions F closely related to Dirichlet distributions, and in Section 6 we introduce and treat a class of PRPD examples that are scale mixtures of i.i.d. Exponential coordinates. Finally, in Section 7 we prove Theorems 1.11–1.12 and Corollary 1.14 and make a few additional remarks concerning the variability of p_n.
1.5. Manifesto. In light of Theorems 1.11–1.12 (see also Figure 1 and the proof strategy discussed at the end of Section 1.2), we regard the marginalized-Dirichlet NRPD distributions and the scale-mixture PRPD distributions we will use to prove the theorems, if not as canonical examples, then at least as standard examples worthy of thorough consideration; in particular, it is of interest to study how the behaviors of these examples vary with their associated parameter values. Accordingly, we regard this paper as a pilot study of sorts, and we are presently working to extend (most of) the results of references [2], [14], [12]–[13], [11], [9], and [10] to these two classes of examples.

2. The record-breaking probability p_n: general information
To carry out our proof strategy for Theorems 1.11–1.12, we first need a result that p_n is continuous as a function of L(X) at any continuous distribution on R^d. For this result (Proposition 2.1), we do not need to assume that the distributions of the random vectors X(m) are continuous.
Our next result (Lemma 2.2) exhibits the smallest and largest possible values of p_n. At the other extreme (from the comonotone construction in the proof of Lemma 2.2, which gives p_n = n^{−1}), if d ≥ 2 and (for example) X ≥ 0 has any continuous distribution (such as any Dirichlet distribution) satisfying ‖X‖_1 = 1, then X^(1), X^(2), … form an antichain in the partial order ≤ on R^d, so p_n = 1.
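Both extremes can be checked mechanically. In the sketch below (helper names ours), distinct points normalized to ‖x‖_1 = 1 never dominate one another (weak domination plus equal coordinate sums forces equality), while for comonotone observations X = (Y, …, Y) the d-dimensional record indicators coincide with the univariate ones.

```python
import random

random.seed(2)
d, n = 3, 50

def weakly_dominates(x, y):
    return all(a >= b for a, b in zip(x, y))

# Antichain case: points with ||x||_1 = 1 (normalized Exponentials, i.e. a
# Dirichlet(1,...,1) vector) cannot dominate one another.
simplex_pts = []
for _ in range(n):
    e = [random.expovariate(1.0) for _ in range(d)]
    s = sum(e)
    simplex_pts.append(tuple(ej / s for ej in e))
antichain_ok = not any(weakly_dominates(x, y)
                       for i, x in enumerate(simplex_pts)
                       for j, y in enumerate(simplex_pts) if i != j)

# Comonotone case X = (Y, ..., Y): d-dimensional records are exactly the
# one-dimensional records of Y, so p_n = 1/n.
ys = [random.random() for _ in range(n)]
multi = [(y,) * d for y in ys]
rec_multi = [not any(weakly_dominates(multi[i], multi[k]) for i in range(k))
             for k in range(n)]
rec_uni = [all(ys[i] < ys[k] for i in range(k)) for k in range(n)]
```

Both properties hold exactly (not just approximately) for continuous data, since ties occur with probability zero.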
For further general information about p_n (in addition to Theorems 1.11–1.12, of course), see Remark 7.1.

3. Independent coordinates: p*_n
This brief section concerns the case where the coordinates of each observation are independent. As noted in Remark 1.5, p_n doesn't otherwise depend on F in this setting, so we may as well assume that the coordinates are i.i.d. Exponential(1). Then (writing p*_n for p_n in this special case)
p*_n ≡ p*_{n,d} = Σ_{k=1}^n (−1)^{k−1} C(n−1, k−1) k^{−d}.
Alternatively, as pointed out by a reviewer of a previous draft, the same expression can be obtained for p*_n by applying the principle of inclusion–exclusion to the events A_i := {X^(i) ≥ X^(n)} for 1 ≤ i ≤ n − 1: the intersection of any k of these events has probability (k + 1)^{−d}, since for each coordinate j the value X^(n)_j is the smallest of the k + 1 relevant values with probability (k + 1)^{−1}. The numbers appearing in the expression for p*_n are called Roman harmonic numbers, studied in [17], [18], and [20]. Further, the numbers p*_{n,d} increase strictly in d for fixed n ≥ 2, with limit 1 as d → ∞; and decrease strictly in n for fixed d ≥ 1, with limit 0 as n → ∞.
(b) For fixed d we have
p*_{n,d} ∼ (ln n)^{d−1}/[(d − 1)! n]  as n → ∞
(consistently with Remark 1.4 and the asymptotics for E r_n quoted in Section 1.3); Bai et al. [2] give a more extensive asymptotic expansion.

4. Negative dependence (including NRPD) and positive dependence (including PRPD)
In this section we review existing notions of negative and positive dependence in Subsections 4.1-4.2 and relate our new notions of NRPD and PRPD to them in Subsection 4.3.

4.1. Negative dependence. For a discussion of several notions of negative dependence, see [15]. The first two notions in the next definition can be found there, with focus on the first notion (NA); we have created the third by interpolating between the first two.
Definition 4.1. (a) Random variables X_1, …, X_k are said to be negatively associated (NA) if for every pair of disjoint subsets A_1 and A_2 of {1, …, k} we have Cov{f_1(X_i : i ∈ A_1), f_2(X_j : j ∈ A_2)} ≤ 0 whenever f_1 and f_2 are nondecreasing (in each argument) and the covariance is defined.
(b) Random variables X_1, …, X_k are said to be negatively upper orthant dependent (NUOD) if for all real numbers x_1, …, x_k we have
P(X_i > x_i for all 1 ≤ i ≤ k) ≤ ∏_{i=1}^k P(X_i > x_i).
(c) We say that random variables X_1, …, X_k are negatively upper orthant associated (NUOA) if for every pair of disjoint subsets A_1 and A_2 of {1, …, k} and all real numbers x_1, …, x_k we have
P(X_i > x_i for all i ∈ A_1 ∪ A_2) ≤ P(X_i > x_i for all i ∈ A_1) · P(X_j > x_j for all j ∈ A_2).
Remark 4.2. (b) Theorem 2.8 in [15] gives a way of constructing NA (X_1, …, X_k), namely, if G_1, …, G_k are independent random variables with log-concave densities, then the conditional distribution of G = (G_1, …, G_k) given Σ_{j=1}^k G_j is NA almost surely.
4.2. Positive dependence. For a general discussion of various notions of positive dependence focusing on the one in the next definition, see [8].
Definition 4.3. Random variables X = (X_1, …, X_k) are said to be positively associated (PA) (or simply associated) if Cov{f_1(X), f_2(X)} ≥ 0 whenever f_1 and f_2 are nondecreasing (in each argument) and the covariance is defined.
Remark 4.4. If the coordinates of X are conditionally independent given some random variable Z, and E[f(X) | Z] is a monotone function of Z for every nondecreasing f, then X is PA. The proof uses the law of total covariance (conditioning on Z), the fact ([8, Theorem 2.1]) that independent random variables are PA (applied to the conditional covariance), and the fact ([8, Property P_3], due to Chebyshev) that the set consisting of a single random variable is PA (applied to the covariance of the conditional expectations).

4.3. Relation with NRPD and PRPD. Our motivation for regarding NRPD and PRPD of respective Definitions 1.8 and 1.9 as notions of negative and positive dependence, respectively, is the following observation. One might suspect that NA implies NRPD and that PA implies PRPD; we are unable to prove either implication, but we can prove the weaker results (recall Remark 1.7) that NA implies p_2 ≥ p*_2 and PA implies p_2 ≤ p*_2. To establish the claimed inequalities, in the following proof replace ⋆ by ≤ if the observations are PA, by = if they have independent coordinates, and by ≥ if they are NA. The claim is that p_2 ⋆ 1 − 2^{−d}. To see this, recall (1.1). We then have
p_2 = 1 − E H(−X) = 1 − P(X*_j ≥ X_j for 1 ≤ j ≤ d) ⋆ 1 − ∏_{j=1}^d P(X*_j ≥ X_j) = 1 − 2^{−d},
where X* = (X*_1, …, X*_d) is an independent copy of X; for the starred step, note that the 2d-vector (X*_1, …, X*_d, −X_1, …, −X_d) inherits NA (resp., PA) from X, and apply the corresponding orthant inequality. Each factor P(X*_j ≥ X_j) equals 1/2 by continuity.

5. Marginalized-Dirichlet distributions
Definition 5.1. Given b = (b_1, …, b_k) ≻ 0, if Y = (Y_1, …, Y_{k−1}) has density proportional to ∏_{j=1}^k y_j^{b_j − 1} on the set where y_1, …, y_{k−1} > 0 and Σ_{j=1}^{k−1} y_j < 1, with y_k := 1 − Σ_{j=1}^{k−1} y_j, then we say that Y has the Dirichlet(b) distribution. We will have special interest in taking X = (X_1, …, X_d) to be the first d coordinates of (Y_1, …, Y_{d+1}) ∼ Dirichlet(1, …, 1, a); we denote the distribution of X in this case by Dir_a and the corresponding distribution function by F_a; we refer to the distributions Dir_a as marginalized-Dirichlet distributions. We will have occasional interest in taking X = (X_1, …, X_d) ∼ Dirichlet(1, …, 1) =: Dir(1).    (5.1)
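Dir_a is easy to sample via independent Gammas, and the survival function P(X ≥ x) = (1 − ‖x‖_1)^{d+a−1} obtained by successive integrations in the proof of Proposition 5.4 below can be checked by Monte Carlo; a sketch (parameter and test-point choices ours):

```python
import random

random.seed(5)
a, d, trials = 1.5, 2, 40_000

def sample_dir(a, d):
    # First d coordinates of a Dirichlet(1, ..., 1, a) vector, built from
    # independent Gammas: (Y_1, ..., Y_d) / (Y_1 + ... + Y_d + G_a).
    y = [random.expovariate(1.0) for _ in range(d)]   # Gamma(1) = Exponential(1)
    g = random.gammavariate(a, 1.0)
    s = sum(y) + g
    return [yj / s for yj in y]

x0 = [0.2, 0.3]   # an arbitrary fixed point with ||x0||_1 < 1
hits = sum(1 for _ in range(trials)
           if all(xj >= x0j for xj, x0j in zip(sample_dir(a, d), x0)))
mc = hits / trials
exact = (1 - sum(x0)) ** (d + a - 1)   # claimed survival probability P(X >= x0)
```

With these choices the exact value is 0.5^2.5 ≈ 0.177, and the Monte Carlo standard error is about 0.002.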
Consider F = F_a. The cases n = 1 (with p_n ≡ 1) and d = 1 (where the choice of a is irrelevant) being trivial, in the following monotonicity result we consider only n ≥ 2 and d ≥ 2.
Proposition 5.4. Fix d ≥ 2 and n ≥ 2, and let F = F_a, i.e., X ∼ Dir_a. Then F_a is strictly decreasing in the RP ordering, and therefore the probability p_n(a) := p_n(F_a) that X^(n) sets a record is strictly decreasing in a.

Proof. By successive integrations one finds, for x ≥ 0 with ‖x‖_1 < 1,
P(X ≥ x) = (1 − ‖x‖_1)^{d+a−1}, and hence H_a(−X) = (1 − ‖X‖_1)^{d+a−1}, where 1 − ‖X‖_1 ∼ Beta(a, d);
so the first assertion is an immediate consequence of Lemma 5.6 below, and the second assertion follows from Remark 1.7.
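Combining (1.1) with the proof above gives p_n(a) = E[(1 − Z^{d+a−1})^{n−1}] with Z ∼ Beta(a, d), which can be evaluated by simple quadrature to watch the strict decrease in a; a sketch (midpoint rule, grid size ours):

```python
from math import exp, lgamma, log

def p_n_dir(a, d, n, m=200_000):
    # Midpoint-rule quadrature for p_n(a) = E[(1 - Z^(d+a-1))^(n-1)], Z ~ Beta(a, d).
    log_c = lgamma(a + d) - lgamma(a) - lgamma(d)   # log(1 / B(a, d))
    total = 0.0
    for i in range(m):
        z = (i + 0.5) / m
        beta_pdf = exp(log_c + (a - 1.0) * log(z) + (d - 1.0) * log(1.0 - z))
        total += (1.0 - z ** (d + a - 1.0)) ** (n - 1) * beta_pdf
    return total / m

vals = [p_n_dir(a, d=2, n=3) for a in (0.5, 1.0, 2.0, 5.0)]
```

For d = 2 and n = 3 the values decrease through roughly 0.80, 0.73, 0.68, 0.64, staying strictly between p*_{3,2} = 11/18 ≈ 0.61 and 1, as Proposition 5.4 and Theorem 1.11 predict.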
Before proceeding to Lemma 5.6, we remind the reader of the definition of the likelihood ratio partial ordering (specialized to our setting of random variables taking values in the unit interval) and its connection to the well-known stochastic ordering.
Definition 5.5. Given two real-valued random variables S and T with respective everywhere strictly positive densities f and g with respect to Lebesgue measure on (0, 1), we say that S ≤ T in the likelihood ratio (LR) ordering if g(u)/f(u) is nondecreasing in u ∈ (0, 1). As noted (for example) in [19, Section 9.4], if S ≤ T in the LR ordering, then S ≤ T stochastically.
Lemma 5.6. Fix a real number d > 1, and let Z_a have the Beta(a, d) distribution. Then W_a := Z_a^{d+a−1} is strictly increasing in the likelihood ratio ordering, and therefore also in the stochastic ordering, as a ∈ (0, ∞) increases.
Proof. By elementary calculation, W_a has density g_a on (0, 1) given by the following expression, with c_a := (d + a − 1)B(a, d):
g_a(w) = c_a^{−1} w^{a/(d+a−1) − 1} (1 − w^{1/(d+a−1)})^{d−1}.
Letting 0 < a < b < ∞ and setting v := w^{−1/(d+b−1)} and t := (d + b − 1)/(d + a − 1) > 1, it then suffices to show for any fixed t > 1 that the ratio (v − 1)/(v^t − 1) decreases strictly as v increases over (1, ∞).
For this, we consider the log-ratio, whose derivative is
(d/dv)[ln(v − 1) − ln(v^t − 1)] = 1/(v − 1) − t v^{t−1}/(v^t − 1),
which is strictly negative for v > 1: the claim t v^{t−1}(v − 1) > v^t − 1 holds because the two sides agree at v = 1 and, for v > 1, the derivative in v of the left side exceeds that of the right side.
Similarly to Proposition 5.4, in our positive-association example we have the following claim:
Proposition 6.2. Fix d ≥ 2 and n ≥ 2, and let F = F̃_a. Then F̃_a is strictly increasing in the RP ordering, and therefore the probability p̃_n(a) := p_n(F̃_a) that X^(n) sets a record is strictly increasing in a.
Proof. A simple computation for x ≥ 0 gives P(X ≥ x) = (1 + ‖x‖_1)^{−a} and thus H̃_a(−X) = (1 + ‖X‖_1)^{−a}. Further, (1 + ‖X‖_1)^{−1} = G/(G + Σ_{j=1}^d G_j) ∼ Beta(a, d), so the first assertion is an immediate consequence of the following lemma, and the second assertion follows from Remark 1.7.
Lemma 6.3. Fix a real number d > 1, and let Z_a have the Beta(a, d) distribution. Then W_a := Z_a^a is strictly decreasing in the likelihood ratio ordering, and therefore also in the stochastic ordering, as a ∈ (0, ∞) increases.
7. Proofs of Theorems 1.11–1.12 and Corollary 1.14
We are now prepared to prove Theorems 1.11–1.12 and Corollary 1.14 according to the outline provided at the end of Section 1.2; see Figure 1.
Proof of Theorem 1.11. In light of Lemma 2.2, it suffices to show that the image of p_n on the domain of our marginalized-Dirichlet examples F_a is (p*_n, 1). We can regard p_n ≡ p_n(a) as a function on the domain (0, ∞) corresponding to our Dirichlet index a. Since the density f_a(x) corresponding to F_a at each fixed argument x is a continuous function of a, it follows from Scheffé's theorem (e.g., [5, Thm. 16.12]) that the corresponding distribution functions F_a are continuous in a in the topology of weak convergence. It then follows from Propositions 2.1 and 5.4 that the image in question is (p_n(∞−), p_n(0+)).
But, as a → ∞, it is easy to see that the density of a times an observation converges pointwise to the density for independent Exponentials. By Scheffé's theorem and Proposition 2.1, therefore, p_n(∞−) = p*_n.
To compute p_n(0+), we first observe that the distribution of an observation X(a) from F_a is that of
(Y_1, …, Y_d) / (Y_1 + ⋯ + Y_d + G_a),
where Y_1, …, Y_d are standard Exponential random variables, G_a is distributed (unit-scale) Gamma(a), and all d + 1 random variables are independent. It follows easily that X(a) converges in distribution to the distribution Dir(1) mentioned at (5.1) (for which p_n = 1, as mentioned in the proof of Lemma 2.2) as a → 0. Thus, by Proposition 2.1, p_n(0+) = 1.
Proof of Theorem 1.12. In light of Lemma 2.2, it suffices to show that the image of p_n on the domain of our PA examples F̃_a is (n^{−1}, p*_n). In this case we can regard p̃_n ≡ p̃_n(a) as a function on the domain (0, ∞) corresponding to our Gamma index parameter a. The value of the density of an observation at any given point is a continuous function of a ∈ (0, ∞). It follows from Scheffé's theorem that the corresponding distribution functions F̃_a are continuous in the topology of weak convergence. It then follows from Propositions 2.1 and 6.2 that the image in question is (p̃_n(0+), p̃_n(∞−)). But, as a → ∞, it is easy to see that the density of a times an observation converges pointwise to the density for independent standard Exponentials. By Scheffé's theorem and Proposition 2.1, therefore, p̃_n(∞−) = p*_n. To compute p̃_n(0+), we can without changing p̃_n(a) take an observation X(a) to have coordinates that are a times the logarithms of those described in our PA example. According to [16, Theorem 1] and Slutsky's theorem, X(a) converges in distribution to (Y, …, Y), where Y is standard Exponential. By Proposition 2.1, therefore, p̃_n(0+) = n^{−1}.
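The two limits in this proof can be checked numerically from the representation p̃_n(a) = E[(1 − Z^a)^{n−1}], Z ∼ Beta(a, d) (this is (7.1)). After the change of variables w = z^a (ours, for numerical convenience), the density of W = Z^a is (1 − w^{1/a})^{d−1}/(a B(a, d)) on (0, 1), which a midpoint rule handles well even for small a; a sketch:

```python
from math import exp, lgamma, log

def p_tilde(a, d, n, m=200_000):
    # E[(1 - Z^a)^(n-1)], Z ~ Beta(a, d), via the density of W = Z^a on (0, 1):
    # f_W(w) = (1 - w^(1/a))^(d-1) / (a * B(a, d)).
    log_c = lgamma(a + d) - lgamma(a) - lgamma(d) - log(a)   # log(1 / (a B(a, d)))
    total = 0.0
    for i in range(m):
        w = (i + 0.5) / m
        total += (1.0 - w) ** (n - 1) * exp(log_c + (d - 1.0) * log(1.0 - w ** (1.0 / a)))
    return total / m

p_small, p_mid, p_big = (p_tilde(a, d=2, n=3) for a in (0.02, 1.0, 200.0))
```

With d = 2 and n = 3 the values increase from about 1/3 (= n^{−1}) through exactly 1/2 at a = 1 toward p*_{3,2} = 11/18 ≈ 0.611, matching Proposition 6.2 and the limits p̃_n(0+) = n^{−1} and p̃_n(∞−) = p*_n.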
Proof of Corollary 1.14. The corollary follows immediately from Lemma 2.2 and Theorems 1.11–1.12. For a considerably simpler proof, one can use the fact (from Lemma 2.2) that there are distributions F_0 and F_1 satisfying p_n(F_0) = n^{−1} and p_n(F_1) = 1 for every n. By defining F_q to be the (1 − q, q) mixture of F_0 and F_1 for q ∈ [0, 1], we see from Proposition 2.1 (since F_q is clearly continuous in q in the weak topology) and the intermediate value theorem that the image of p_n on the domain {F_q : q ∈ [0, 1]} contains (and therefore, by Lemma 2.2, equals) [n^{−1}, 1].
Remark 7.1. We have now learned from Theorems 1.11–1.12 information about how p_n behaves as a function of the (continuous) distribution of X. As a complement, we conclude this paper with general (and rather more mundane) information about how p_n behaves as a function of n and as a function of d.
(a) As already noted, from (1.1) it is apparent that p_n is nonincreasing in n. By the dominated convergence theorem, p_n ↓ p_∞ := ∫_{{x : P(X ≥ x) = 0}} P(X ∈ dx) as n ↑ ∞. For each fixed d ≥ 2, the image of the mapping p_∞ on the domain of all continuous distributions on R^d is the entire interval [0, 1]. To see by example that q ∈ [0, 1] is in the image, choose the distribution F of X to be the (q, 1 − q)-mixture of any Dirichlet distribution and any of our marginalized-Dirichlet distributions F_a.
Fix n ≥ 1, and for any specified sequence (in d) of distributions of X(d) let p_n(∞) := lim_{d→∞} p_n(d). The image of the mapping p_n(∞) on the domain of all sequences of continuous distributions is [n^{−1}, 1]. This follows easily from Corollary 1.14. Indeed, given q ∈ [n^{−1}, 1], one can choose X = X(2) = (X_1, X_2) giving p_n(2) = q and then take X(d) = (X_1, X_1, …, X_1, X_2) for every d.
Theorem 1.11. For each fixed d ≥ 2 and n ≥ 1 the image of the mapping p_n on the domain of NRPD distributions is precisely the interval [p*_n, 1].
Theorem 1.12. For each fixed d ≥ 1 and n ≥ 1 the image of the mapping p_n on the domain of PRPD distributions is precisely the interval [n^{−1}, p*_n].
Remark 1.13. (a) For d = 1 and n ≥ 2 the conclusion of Theorem 1.11 is false, since then p_{n,1}(F) ≡ n^{−1}.

Proposition 2.1. Fix d ≥ 1 and n ≥ 1. If X(m) converges in distribution to X having a continuous distribution, then the corresponding record-setting probabilities satisfy p_n(m) → p_n as m → ∞.
Proof. The distribution functions H_m of −X(m) and H of −X satisfy H_m ⇒ H. Moreover, H is continuous, so H_m(y) converges to H(y) uniformly in y ([4, Problem 3 in Section 3 (only in first edition)]) and hence (recalling (1.1)) p_n(m) → p_n.

Lemma 2.2. Fix d ≥ 2 and n ≥ 1. We always have p_n ∈ [n^{−1}, 1], and p_n = n^{−1} and p_n = 1 are both possible.
Proof. If X^(n)_1 sets a one-dimensional record (which has probability n^{−1}), then X^(n) sets a d-dimensional record. Thus p_n ≥ n^{−1}, and equality holds if Y has any continuous distribution on R and X = (Y, …, Y).

These Roman harmonic numbers can be written as positive linear combinations of products of generalized harmonic numbers.
Remark 3.1. (a) In obvious notation, the numbers E r*_{n,d} = n p*_{n,d} = E R*_{n,d−1} increase strictly in n for fixed d ≥ 2, with limit ∞ as n → ∞. (Note: The equality in distribution of the random variables r_{n,d} and R_{n,d−1} for general continuous F follows by standard consideration of concomitants: consider X^(1), …, X^(n) sorted according to the value of the d-th coordinate.)

Remark 5.2. (a) When a = 1, the vector X is uniformly distributed in the (open) d-dimensional unit simplex
S_d := {x = (x_1, …, x_d) : x_j > 0 for j = 1, …, d and ‖x‖_1 < 1}.    (5.2)
This special case is the focus of [14].
(b) We find explicit computation (exact or asymptotic) of p_n intractable for general Dirichlet distributions.
Dirichlet distributions exhibit negative dependence among the coordinates according to standard notions [15]:
Remark 5.3. (a) The distribution F_a is NUOA [recall Definition 4.1(c)] for every a ∈ (0, ∞), by a simple calculation.
(b) The distribution F_a is NA if a ≥ 1. Indeed, as in Definition 5.1, let b = (b_1, …, b_k) ≻ 0. The proof [recall Remark 4.2(b)] that Dirichlet(b) is NA when b_j ≥ 1 for every j relies on the following two standard facts: (i) If G_j ∼ Gamma(b_j) are independent random variables (j = 1, …, k), then ‖G‖_1 ∼ Gamma(‖b‖_1) and G/‖G‖_1 ∼ Dirichlet(b), and these two are independent. (ii) For any b ≥ 1, the Gamma(b) density is log-concave.

6. Positively associated F̃_a: strict increasing monotonicity in the RP ordering
Distributions on R^d with positively associated coordinates can be constructed in similar fashion to the marginalized-Dirichlet distributions F_a [recall Remarks 4.4 and 5.3(b)]. Given a > 0, let F̃_a denote the PA distribution of X = (G_1/G, …, G_d/G) (a scale mixture of i.i.d. Exponentials), where the random variables G, G_1, …, G_d are independent, G ∼ Gamma(a), and G_j ∼ Exponential(1) ≡ Gamma(1) for j = 1, …, d.
Remark 6.1. (a) Scale mixtures of a finite number of i.i.d. Exponential random variables appear in a study of finite versions of de Finetti's theorem [7, (3.11)].
(b) We find explicit computation (exact or asymptotic) of p_n intractable for general scale mixtures, let alone for general PA distributions.
(b) To make sense of the question of how p_n varies as a function of d ∈ {1, 2, …}, one should specify a sequence of distributions, with the d-th distribution being over R^d. It is rather obvious that if d′ < d and X(d′) is obtained by selecting any deterministic set of d′ coordinates from X(d), then p_n(d′) ≤ p_n(d); in this sense, p_n(d) is nondecreasing in the dimension d.
For all of our standard examples (independent coordinates, our marginalized-Dirichlet distributions F_a, and our PA examples F̃_a) we have p_n(∞) = 1. In light of our earlier results, it suffices to prove this for the PA examples. For that, since the Beta(a, d) distributions converge weakly to unit mass at 0 as d → ∞, it follows from the consequence
p̃_n(a) = E(1 − Z_{a,d}^a)^{n−1}, where Z_{a,d} ∼ Beta(a, d),    (7.1)
of the proof of Proposition 6.2 that p_n(∞) = 1.