An inequality for log-concave functions and its use in the study of failure rates

We establish here an integral inequality for real log-concave functions, which can be viewed as an average monotone likelihood property. This inequality is then applied to examine the monotonicity of failure rates.


Introduction
Convex and concave functions play an important role in statistics, probability and, especially, in reliability theory, wherein they lead to some useful inequalities. Many aspects of these functions have been studied in detail in different contexts and we refer the readers to the classical monographs [5, 6, 16]. In this note, we derive a simple integral inequality for log-concave functions and then demonstrate its application in examining the monotonicity of failure rates.
A measurable function f : R → R_+ is said to be log-concave if:

f(tx + (1 − t)y) ≥ f(x)^t f(y)^{1−t},

for all x, y ∈ R and t ∈ (0, 1). It is easy to see that the support of a log-concave function is an interval and that the above definition amounts to the concavity on R of the function log f : R → R ∪ {−∞}, with possible value −∞ outside Supp f. Interested readers may refer to [19] for a recent survey on log-concave functions and relevant topics in statistics.
A function f : R_+ → R_+ is said to be hyperbolically monotone if the function x ↦ f(e^x) is log-concave on R; see, for example, Section 9.2 in [19] and the references therein for more information on this notion. In Lemma 2.3 of [2], a characterization of hyperbolic monotonicity is given as:

f(cx) f(y) ≥ f(x) f(cy),    (1)

for all y ≥ x ≥ 0 and c ≥ 1. In [2], this characterization has then been utilized to study the preservation of the increasing failure rate property under the formation of (n − k + 1)-out-of-n systems with discrete distributions. In this note, we give a short new proof of this preservation property by means of a characterization of hyperbolically monotone functions through an average version of the inequality in (1), holding without any restriction on x, y, c. This result turns out to be a consequence of the integral characterization of log-concave functions presented in the following section.
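The ordering in (1) lends itself to a quick numerical sanity check. The following minimal sketch, assuming the inequality in the form f(cx)f(y) ≥ f(x)f(cy), tests it for f(t) = 1/(1 + t²), a hyperbolically monotone function since log f(e^x) = −log(1 + e^{2x}) is concave:

```python
# Sanity check of f(cx) f(y) >= f(x) f(cy) for y >= x >= 0 and c >= 1,
# assuming this reading of the characterization of hyperbolic monotonicity.
# Test function: f(t) = 1/(1 + t^2), so that f(e^x) = 1/(1 + e^{2x})
# is log-concave on R (its logarithm, -log(1 + e^{2x}), is concave).

def f(t):
    return 1.0 / (1.0 + t * t)

hyperbolically_monotone_ok = all(
    f(c * x) * f(y) >= f(x) * f(c * y) - 1e-15
    for (x, y) in [(0.0, 1.0), (0.5, 2.0), (1.0, 3.0), (2.0, 7.0)]
    for c in [1.0, 1.5, 3.0, 10.0]
)
```

A grid check of this kind is of course no proof, but it catches a reversed inequality immediately.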

Main results
Theorem 2.1. Let f : R → R_+ be continuous. Then, f is log-concave if and only if:

( ∫_a^b f(t) dt )^2 ≥ ∫_a^b f(t − c) dt ∫_a^b f(t + c) dt,    (2)

for all a < b and c > 0.
Proof. The if part follows by midpoint concavity. Fix a ∈ R and c > 0. Dividing both sides of (2) by (b − a)^2 and letting b ↓ a, we obtain:

f(a)^2 ≥ f(a − c) f(a + c),

by continuity of f. This implies that Supp f = {x ∈ R : f(x) ≠ 0} is an interval, which we denote by I.
Setting x = a − c, y = a + c and g = log f, we get:

g((x + y)/2) ≥ (g(x) + g(y))/2,

for all x, y ∈ I. This shows that g is midpoint concave on I and, by Sierpiński's theorem (see [7], p. 12), that it is concave on I, hence also on the whole of R, since g = −∞ outside I.
For the only if part, we need to show that:

( ∫_0^1 h(t) dt )^2 ≥ ∫_0^1 h(t − c) dt ∫_0^1 h(t + c) dt,    (3)

for all c > 0, wherein we have set h(t) = f(a + t(b − a)), which is a log-concave function on R. We will present three different proofs. In the first one, we show that the mapping:

I(z) = ∫_0^1 ∫_0^1 h(s − z) h(t + z) ds dt

is non-increasing on (0, ∞). Then, by making the change of variables s = u − v and t = u + v, we have the decomposition I(z) = 2(I_1(z) + I_2(z)), with: By the continuity of h, we can differentiate under the integral sign and obtain: where the inequality comes from Eq. (2) in [3] with −z = x_1 ≤ x_2 = 2u − z and = 2z. Similarly, we obtain: 1 − z and again = 2z. This completes the proof of the theorem.
The second proof relies on discretization. For all p, N ≥ 1, the non-negative sequences {a_n, n ≥ 0} and {b_n, n ≥ 0}, defined by: are such that a_j b_k ≥ b_j a_k for all k ≥ j ≥ 0, by the log-concavity of h and appealing again to Eq. (2) in [3]. It is then easy to see that this implies: Fix now c > 0 and choose an integer p = p_N such that N^{−1} p_N → c as N → ∞. Multiplying by N^{−2} and letting N → ∞ in (4), by Riemann approximation, we obtain the required inequality in (3).
The third proof is more conceptual and hinges upon the Prékopa-Leindler inequality. Let μ be the positive measure on R^2 with density g(x, y) = h(x)h(y). As g is log-concave on R^2, the Prékopa-Leindler inequality (see Theorem 1.1 in [17]) implies that μ(tA + (1 − t)B) ≥ μ(A)^t μ(B)^{1−t} for all t ∈ (0, 1) and all measurable A, B ⊂ R^2 where, here and throughout, we have used the standard Minkowski notation:

tA + (1 − t)B = {ta + (1 − t)b : a ∈ A, b ∈ B}.

Setting A = [−z, 1 − z] × [z, 1 + z] and B = A + (2z, −2z), we have (A + B)/2 = A + (z, −z) = [0, 1]^2 by the convexity of A. This implies:

( ∫_0^1 h(t) dt )^2 = μ([0, 1]^2) ≥ μ(A)^{1/2} μ(B)^{1/2} = ∫_0^1 h(t − z) dt ∫_0^1 h(t + z) dt,

which implies the inequality in (3).
https://doi.org/10.1017/S0269964824000056 Published online by Cambridge University Press

Remark 2.2. (a) Above, the continuity condition is not necessary and can be relaxed. Indeed, the proof of the only if part relies only on log-concavity and, for the if part, we used Sierpiński's theorem, which holds under a sole measurability assumption. On the other hand, the argument for the if part also uses:

lim_{b↓a} (b − a)^{−1} ∫_a^b f(t) dt = f(a),    (5)

for all a ∈ R, which may fail if f is assumed only to be measurable. Observe that (5) means that every real number is a so-called Lebesgue density point for f, which holds true, for example, when f is right-continuous.

(b) The midpoint concavity argument and the Prékopa-Leindler inequality remain true in R^{2d}. Then, they imply the following multidimensional generalization: a continuous function f : R^d → R_+ is log-concave if and only if the inequality in (2) holds with a, b, c ∈ R^d, a < b componentwise, and the integrals taken over the box [a, b].

(c) Using either the first or the second proof of the only if part, we can show similarly that, for a continuous function f : R → R_+, the log-convexity of f on its support is equivalent to:

( ∫_a^b f(t) dt )^2 ≤ ∫_a^b f(t − c) dt ∫_a^b f(t + c) dt,

for all a < b and c > 0 such that a − c, b + c ∈ Supp f. Notice that, contrary to log-concavity, the support condition is important and the characterization becomes untrue without this condition; see the end of Section 3.1 for further discussion.
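The characterization of Theorem 2.1 can also be probed numerically. The following minimal sketch, assuming the inequality in the form (∫_a^b f(t) dt)² ≥ ∫_a^b f(t − c) dt ∫_a^b f(t + c) dt, approximates the integrals by composite Simpson quadrature for the log-concave function f(x) = e^{−x²}:

```python
import math

def simpson(func, a, b, n=1000):
    """Composite Simpson approximation of the integral of func over [a, b]."""
    if n % 2:
        n += 1
    h = (b - a) / n
    s = func(a) + func(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * func(a + i * h)
    return s * h / 3.0

f = lambda x: math.exp(-x * x)  # log-concave: log f(x) = -x^2 is concave

# Check (int_a^b f)^2 >= int_a^b f(t-c) dt * int_a^b f(t+c) dt on a small grid
theorem_ok = all(
    simpson(f, a, b) ** 2
    >= simpson(lambda t: f(t - c), a, b) * simpson(lambda t: f(t + c), a, b) - 1e-12
    for (a, b) in [(-1.0, 1.0), (0.0, 2.0), (-3.0, 0.5)]
    for c in [0.1, 1.0, 2.5]
)
```

The same script, run with a non-log-concave choice of f, is a convenient way to see the inequality fail.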
We now state the aforementioned characterization of hyperbolically monotone functions.
Corollary 2.3. Let f : R_+ → R_+ be continuous. Then, the function x ↦ f(e^x) is log-concave on R if and only if:

( ∫_a^b f(t) dt )^2 ≥ ∫_a^b f(ct) dt ∫_a^b f(t/c) dt,    (6)

for all 0 < a < b and c > 0.
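Before turning to the proof, the corollary can be sanity-checked numerically. The following minimal sketch, assuming the inequality in the form (∫_a^b f(t) dt)² ≥ ∫_a^b f(ct) dt ∫_a^b f(t/c) dt, uses f(t) = e^{−t}, which is hyperbolically monotone since log f(e^x) = −e^x is concave:

```python
import math

def simpson(func, a, b, n=1000):
    """Composite Simpson approximation of the integral of func over [a, b]."""
    if n % 2:
        n += 1
    h = (b - a) / n
    s = func(a) + func(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * func(a + i * h)
    return s * h / 3.0

f = lambda t: math.exp(-t)  # f(e^x) = exp(-e^x) is log-concave in x

# Check (int_a^b f)^2 >= int_a^b f(ct) dt * int_a^b f(t/c) dt on a small grid
corollary_ok = all(
    simpson(f, a, b) ** 2
    >= simpson(lambda t: f(c * t), a, b) * simpson(lambda t: f(t / c), a, b) - 1e-12
    for (a, b) in [(0.1, 1.0), (0.5, 2.0), (1.0, 5.0)]
    for c in [0.3, 1.5, 4.0]
)
```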
Proof. Clearly, x ↦ f(e^x) is log-concave on R if and only if x ↦ e^x f(e^x) is log-concave on R, which, by Theorem 2.1 and some straightforward simplification, is equivalent to:

( ∫_a^b e^t f(e^t) dt )^2 ≥ ∫_a^b e^t f(ce^t) dt ∫_a^b e^t f(c^{−1}e^t) dt,

for all a < b and c > 0. The result then follows readily from the change of variable u = e^t.

Remark 2.4. In [12], a related characterization of the log-concavity of f(e^x) has been given as the "monotone likelihood property". More precisely, the function x ↦ f(e^x) is log-concave if and only if x ↦ f(x)/f(cx) is monotone on (0, ∞) for every c > 0. In this regard, the characterization in (6), when rewritten as:

∫_a^b f(t) dt / ∫_a^b f(ct) dt ≥ ∫_a^b f(t/c) dt / ∫_a^b f(t) dt,

can be viewed as an "average monotone likelihood property".

Applications to the study of failure rates

On increasing failure rates and proportional failure rates
Let X be a nonnegative random variable with absolutely continuous cumulative distribution function F, survival function F̄ = 1 − F, and probability density function f. The function h = f/F̄, known as the failure rate function of X, is an important measure used extensively in reliability, survival analysis and stochastic modeling. The function x ↦ xh(x) has been referred to in [15] as a generalized failure rate and in [18] as a proportional failure rate. As a consequence of Theorem 2.1 and Corollary 2.3, we get a short proof of the following fact, which is well known for the function h (see [5, p. 76]) but less known for the function x ↦ xh(x).

Proposition 3.1. If f is log-concave, then h is non-decreasing. If x ↦ f(e^x) is log-concave, then x ↦ xh(x) is non-decreasing.

Proof. Suppose f is log-concave. Then, by taking a = t > 0 and letting b → ∞ in Theorem 2.1, we obtain:

F̄(t)^2 ≥ F̄(t − c) F̄(t + c),

for all c, t > 0. By midpoint concavity, this shows that F̄ is log-concave or, equivalently, that x ↦ F̄(x + c)/F̄(x) is non-increasing for every c > 0, as required. The proof of the second part is analogous, upon using:

F̄(t)^2 ≥ F̄(ct) F̄(t/c),

for all c, t > 0, which is obtained by taking a = t > 0 and letting b → ∞ in Corollary 2.3.
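A closed-form instance of Proposition 3.1 is the Gamma(2) density f(x) = x e^{−x}, which is log-concave (log f(x) = log x − x) and has F̄(x) = (1 + x)e^{−x}, so that h(x) = x/(1 + x); moreover, log f(e^x) = 2x − e^x is concave, so x ↦ xh(x) should be non-decreasing as well. A minimal sketch checking both monotonicity properties on a grid:

```python
import math

# Gamma(2) density: f(x) = x e^{-x}; log f(x) = log x - x is concave on (0, inf).
# Also log f(e^x) = 2x - e^x is concave, so both parts of the proposition apply.
f = lambda x: x * math.exp(-x)
sf = lambda x: (1.0 + x) * math.exp(-x)   # survival function, closed form
h = lambda x: f(x) / sf(x)                # failure rate, equals x / (1 + x)

xs = [0.05 * k for k in range(1, 400)]
h_nondecreasing = all(h(a) <= h(b) + 1e-12 for a, b in zip(xs, xs[1:]))
xh_nondecreasing = all(a * h(a) <= b * h(b) + 1e-12 for a, b in zip(xs, xs[1:]))
```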
In the above statement, we have used the fact that the non-decreasing property of h (resp. x ↦ xh(x)) is equivalent to the log-concavity of F̄ (resp. x ↦ F̄(e^x)). The following example demonstrates a situation wherein this is also equivalent to the log-concavity of f (resp. x ↦ f(e^x)).
Example 3.2. Suppose X has the generalized gamma distribution with density:

f(x) = |λ| x^{αλ − 1} e^{−x^λ} / Γ(α), x > 0,

where α > 0 and λ ≠ 0 are shape parameters; see [11]. This means that X = Γ_α^{1/λ}, where Γ_α is a standard gamma random variable with parameter α. We then have:

F̄(x) = Γ(α, y)/Γ(α) for λ > 0, F̄(x) = γ(α, y)/Γ(α) with γ(α, y) = α^{−1} y^α e^{−y} M(1, 1 + α; y) for λ < 0,

with y = x^λ and the standard notation for incomplete gamma and confluent hypergeometric functions, see (5.6) and (5.6) in [20]. It is easy to check that x ↦ f(e^x) is always log-concave, and so is F̄(e^x) by Corollary 2.3. Based on hypergeometric functions, this can also be observed from xh(x) = λ y^α / U(1 − α, 1 − α, y) for λ > 0, with:

U(1 − α, 1 − α, y) = y^α U(1, 1 + α, y),

by (2.1) and (1.4) in [20], and from xh(x) = |λ|α / M(1, 1 + α, y) for λ < 0, which is decreasing in y and hence increasing in x, by positivity of the coefficients in the series defining M.
As far as the log-concavity of F̄ is concerned, the situation depends on the sign of λ. For λ < 0, we have h(x) = |λ|α y^{−1/λ}/M(1, 1 + α, y) → 0 as x → 0 and x → ∞, so that h is never monotone and neither F̄ nor f are log-concave; on the other hand, when λ > 0, we have h(x) = λ y^{α − 1/λ}/U(1 − α, 1 − α, y) and: The first equivalence is direct, and the inclusion in the second equivalence follows from Corollary 2.3. For the second reverse inclusion, we first observe again from (2.1) and (1.4) in [20] that: From (3.1) in [20], the first quantity behaves like Γ(α)(1 − λ)y^{1/λ − α − 1} > 0 as y → 0 when α ≥ 1 and λ < 1, while the second quantity behaves like (1 − α)y^{1/λ − 2} > 0 as y → ∞ when α < 1. This shows that h has increase points on (0, ∞) if inf{α, λ} < 1. Notice that, by using the same argument, we can show that: for every λ > 0.
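The log-concavity of x ↦ f(e^x) for the generalized gamma family can also be seen directly: taking the density in the standard form f(t) = |λ| t^{αλ−1} e^{−t^λ}/Γ(α), one gets g(x) = log f(e^x) = log|λ| − log Γ(α) + (αλ − 1)x − e^{λx}, whose second derivative −λ²e^{λx} is negative for every α > 0 and λ ≠ 0. A minimal sketch checking midpoint concavity of g over several parameter choices:

```python
import math

def g(x, alpha, lam):
    # g(x) = log f(e^x) for the generalized gamma density
    # f(t) = |lam| t^(alpha*lam - 1) exp(-t^lam) / Gamma(alpha), t > 0.
    return (math.log(abs(lam)) - math.lgamma(alpha)
            + (alpha * lam - 1.0) * x - math.exp(lam * x))

# Midpoint concavity: g(x - d) + g(x + d) <= 2 g(x) for every x and d > 0.
gengamma_logconcave = all(
    g(x - d, a, l) + g(x + d, a, l) <= 2.0 * g(x, a, l) + 1e-9
    for a in [0.4, 1.0, 3.0]
    for l in [-2.0, -0.5, 0.7, 2.0]
    for x in [-1.0, 0.0, 0.8]
    for d in [0.3, 1.0]
)
```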
The following example demonstrates a situation wherein the statement of Proposition 3.1 may not be an equivalence.
Example 3.3. Suppose X has a generalized beta distribution of the first kind with density:

f(x) = |λ| x^{αλ − 1} (1 − x^λ)^{β − 1} / B(α, β),

with α, β > 0 and λ ≠ 0. This means that X = B_{α,β}^{1/λ}, where B_{α,β} is a standard beta random variable with parameters α, β. Notice that when α = 1, we get the so-called Kumaraswamy distribution with parameters (λ, β) (see [13]). Then, we have:

F̄(x) = 1 − B_y(α, β)/B(α, β) for λ > 0, F̄(x) = B_y(α, β)/B(α, β) for λ < 0,

with y = x^λ ∈ (0, 1) and the standard notation for the incomplete beta function and the Gaussian hypergeometric function. It is easy to check that x ↦ f(e^x) is log-concave for β ≥ 1 and log-convex for β ≤ 1. On the other hand, for λ > 0, Kummer's transformation on 2F1 implies: which is a decreasing function in x ∈ (0, 1) by positivity of the coefficients in the series representation of 2F1(α + β, 1, β + 1; 1 − y). This shows that x ↦ F̄(e^x) is log-concave for all α, β, λ > 0. But, for λ < 0, the same hypergeometric transformation leads to: which can be shown to be decreasing in x ∈ (1, ∞) for β > 1 and increasing in x ∈ (1, ∞) for β < 1. This implies that either β ≥ 1 and x ↦ f(e^x) and x ↦ F̄(e^x) are log-concave, or β ≤ 1 and x ↦ f(e^x) and x ↦ F̄(e^x) are log-convex. In particular, the statement of Proposition 3.1 is again an equivalence for λ < 0. It can also be shown that neither f nor F̄ are log-concave for λ < 0, while for λ > 0.
If we consider β < 1 and λ > 0 in the above example, it is of interest to notice that x ↦ f(e^x) is log-convex on R_− while x ↦ F̄(e^x) is log-concave on R. From Remark 2.2(c) and Corollary 2.3, this implies: for all a < c in (0, 1). We refer to [3] for further discussions on the asymmetry between log-concavity and log-convexity in a probabilistic framework. One may also refer to [9] for a characterization based on Lévy measures, in the framework of infinitely divisible distributions.

On failure rates of (n − k + 1)-out-of-n systems with discrete lifetimes
Let Z be a random variable with support S_Z ⊆ N, p_i = P[Z = i] its probability mass function (pmf), F_i = P[Z ≤ i] its cumulative distribution function (cdf), and F̄_i = P[Z > i] its survival function (sf). The failure rate of this distribution has been defined as (see [10], p. 45):

h(i) = p_i / F̄_{i−1},

for all i ∈ S_Z. Z is said to have the IFR property if its failure rate is non-decreasing. From the above, it means that i ↦ F̄_i/F̄_{i−1} is non-increasing or, equivalently, that {F̄_i} is a log-concave sequence, i.e.:

(F̄_i)^2 ≥ F̄_{i−1} F̄_{i+1},

for all i ≥ 0. Let Z_1, . . ., Z_n be n independent copies of Z and Z_{k:n} be the k-th order statistic, for 1 ≤ k ≤ n. This is the same as the lifetime of an (n − k + 1)-out-of-n system; see [5], for example (some properties of ageing notions and order statistics in the discrete case can be found in [1] and the references therein).
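As a numerical illustration of these notions, the Poisson distribution has a log-concave pmf and is a standard example of a discrete IFR distribution. A minimal sketch computing h(i) = p_i/F̄_{i−1} = P[Z = i]/P[Z ≥ i] for a Poisson(3) variable and checking both the monotonicity of h and the log-concavity of {F̄_i}:

```python
import math

lam = 3.0
N = 60
p = [math.exp(-lam) * lam**i / math.factorial(i) for i in range(N)]
sf = [sum(p[i + 1:]) for i in range(N)]   # survival function F̄_i = P[Z > i], via tail sums

# Discrete failure rate h(i) = p_i / F̄_{i-1}
hazard = [p[i] / sf[i - 1] for i in range(1, 25)]
hazard_nondecreasing = all(a <= b + 1e-12 for a, b in zip(hazard, hazard[1:]))

# Equivalent IFR condition: {F̄_i} is a log-concave sequence
sf_logconcave = all(sf[i] ** 2 >= sf[i - 1] * sf[i + 1] - 1e-12 for i in range(1, 25))
```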
The following theorem presents an alternative proof to the main result of [2], given in [4], p. 42, for example, which states that the IFR property is preserved by order statistics. A continuous version of this result had been established about six decades ago by Esary and Proschan [8].

Theorem 3.4. If Z has the IFR property, then Z_{k:n} has the IFR property for all n ≥ 1 and 1 ≤ k ≤ n.

Proof. Setting p_i^{k:n}, F_i^{k:n} and F̄_i^{k:n} for the respective pmf, cdf and sf of Z_{k:n}, we start with the expression:

F̄_i^{k:n} = Σ_{j=n−k+1}^n C(n, j) (F̄_i)^j (F_i)^{n−j},

where C(n, j) denotes the binomial coefficient. This implies:

F̄_i^{k:n} = ∫_0^{F̄_i} f_{k:n}(t) dt,

with the notation:

f_{k:n}(t) = (n − k + 1) C(n, n − k + 1) t^{n−k} (1 − t)^{k−1}, t ∈ (0, 1),

which is easily seen to be such that f_{k:n}(e^x) is a log-concave function for all n ≥ 1 and 1 ≤ k ≤ n.
Applying now Corollary 2.3 with a = 0, b = F̄_i and c = F̄_{i−1}/F̄_i, we obtain:

( F̄_i^{k:n} )^2 ≥ ∫_0^{F̄_i} f_{k:n}(ct) dt ∫_0^{F̄_i} f_{k:n}(t/c) dt = F̄_{i−1}^{k:n} ∫_0^{(F̄_i)^2/F̄_{i−1}} f_{k:n}(t) dt ≥ F̄_{i−1}^{k:n} F̄_{i+1}^{k:n},

where, for the second inequality, we have used that (F̄_i)^2 ≥ F̄_{i−1} F̄_{i+1}, which comes from the IFR property of Z.
Hence, the theorem.
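Theorem 3.4 can be illustrated numerically as well: starting from the IFR distribution Poisson(2), the sf of Z_{k:n} is obtained from the binomial formula P[Z_{k:n} > i] = Σ_{j=n−k+1}^n C(n, j)(F̄_i)^j(F_i)^{n−j}, and the log-concavity of the resulting sequence is checked directly. A minimal sketch for n = 5, k = 3:

```python
import math

lam = 2.0
N = 50
p = [math.exp(-lam) * lam**i / math.factorial(i) for i in range(N)]
sf = [sum(p[i + 1:]) for i in range(30)]   # F̄_i for Z ~ Poisson(2), an IFR distribution

def sf_order_stat(s, k, n):
    # P[Z_{k:n} > i] when P[Z > i] = s: at least n - k + 1 of the n copies exceed i
    return sum(math.comb(n, j) * s**j * (1.0 - s)**(n - j)
               for j in range(n - k + 1, n + 1))

n, k = 5, 3
sfk = [sf_order_stat(s, k, n) for s in sf]
# IFR preservation: {F̄_i^{k:n}} should again be a log-concave sequence
ifr_preserved = all(sfk[i] ** 2 >= sfk[i - 1] * sfk[i + 1] - 1e-12
                    for i in range(1, 20))
```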
The above method also allows us to obtain the stability result for the reversed failure rate of the discrete random variable Z. With the notation as above, the reversed failure rate is defined as:

r(i) = p_i / F_i.

Then, Z is said to have the DRFR property if r(i) is non-increasing in i, which means that {F_i} is a log-concave sequence. The following is a discrete counterpart to a result in Theorem 2.1 of Kundu, Nanda and Hu [14].

Theorem 3.5. If Z has the DRFR property, then Z_{k:n} has the DRFR property for all n ≥ 1 and 1 ≤ k ≤ n.

Proof. Let us consider the expression:

F_i^{k:n} = Σ_{j=k}^n C(n, j) (F_i)^j (F̄_i)^{n−j} = ∫_0^{F_i} f_{k:n}(t) dt,

for the cdf of the k-th order statistic, with the notation:

f_{k:n}(t) = k C(n, k) t^{k−1} (1 − t)^{n−k}, t ∈ (0, 1).

It is evident that f_{k:n}(e^x) is a log-concave function for all n ≥ 1 and 1 ≤ k ≤ n. The proof of this theorem then proceeds along the same lines as that of Theorem 3.4, using the inequality (F_i)^2 ≥ F_{i−1} F_{i+1}.
