1. Introduction
Stochastic orders are partial orders of distribution functions. Depending on the applications, various comparisons between distributions arise in the analysis of the related stochastic systems. Shaked and Shanthikumar [Reference Shaked and Shanthikumar21] and Müller and Stoyan [Reference Müller and Stoyan17] provide comprehensive discussions of stochastic orders. Among those, many orders can be characterized as integral stochastic orders [Reference Denuit and Müller2]. Specifically, let
$X_1$
and
$X_2$
be two random variables with distribution functions
$F_{X_1}$
and
$F_{X_2}$
, respectively, and let
$\mathcal{G}$
be a class of measurable functions. We say
$X_1 \leq_{\mathcal{G}} X_2$
(or
$F_{X_1} \leq_{\mathcal{G}} F_{X_2}$
) if and only if
$\mathbb{E}[g(X_1)] \leq \mathbb{E}[g(X_2)]$
for all
$g\in \mathcal{G}$
. Commonly studied function classes include the increasing (
$\textit{inc}$
), convex (
$\textit{cx}$
), and increasing convex (
$\textit{icx}$
) functions, which give rise to the characterizations of the usual stochastic order, the convex order, and the increasing convex order, respectively. One may also consider multivariate functions to define comparisons between random vectors (e.g. the supermodular order). In this paper, we focus on the univariate case.
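As a quick numerical illustration of the integral characterization, the following sketch (with hypothetical discrete distributions chosen purely for this example) checks $\mathbb{E}[g(X_1)] \leq \mathbb{E}[g(X_2)]$ for a few increasing test functions when $X_2$ dominates $X_1$ in the usual stochastic order:

```python
# Illustrative check of an integral stochastic order for two finite
# discrete distributions (hypothetical values chosen for this sketch).
# Here X2 dominates X1 in the usual stochastic order, so the expectation
# inequality should hold for every increasing test function g.

def expectation(support, probs, g):
    """Compute E[g(X)] for a discrete random variable."""
    return sum(p * g(x) for x, p in zip(support, probs))

x1, p1 = [1, 2, 3], [0.5, 0.3, 0.2]
x2, p2 = [2, 3, 4], [0.4, 0.4, 0.2]

# A few members of the class "inc" of increasing functions.
increasing_gs = [lambda x: x, lambda x: x**3, lambda x: min(x, 2.5)]

assert all(expectation(x1, p1, g) <= expectation(x2, p2, g)
           for g in increasing_gs)
```

Such a finite check only tests the listed functions, of course; the distributional characterizations discussed below replace it with a condition on the distribution functions themselves.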
The aforementioned function classes and the corresponding orders are non-parametric. Depending on the applications, one may want to extend a stochastic order through parameterization. Leshno and Levy [Reference Leshno and Levy7] first parameterize the usual stochastic order and the increasing concave order, which are named, respectively, ‘almost first-degree stochastic dominance’ (
$\epsilon$
-AFSD) and ‘almost second-degree stochastic dominance’ (
$\epsilon$
-ASSD). The term ‘almost’ indicates that the distributions may violate the non-parametric order but the violation is limited by the parameter
$\epsilon$
. Subsequently, many scholars, notably Guo et al. [Reference Guo, Zhu, Wong and Zhu4], Müller et al. [Reference Müller, Scarsini, Tsetlin and Winkler19], Tsetlin et al. [Reference Tsetlin, Winkler, Huang and Tzeng23], and Tzeng et al. [Reference Tzeng, Huang and Shih24], examine parametric stochastic orders that are useful for comparing utility functions in decision analysis. Levy et al. [Reference Levy, Leshno and Leibovitch10] report experimental results to demonstrate the violation of the conventional stochastic orders and provide economic interpretations for the almost orders. Studies also showcase the applications to investment valuation [Reference Levy8, Reference Levy9], insurance [Reference Zhao, Gao and Gu26], and supply chains [Reference Feng and Shanthikumar3]. Tsetlin et al. [Reference Tsetlin, Winkler, Huang and Tzeng23] introduce the ‘almost second degree risk’ (
$\epsilon$
-ASR) to complement some limitations of the ASSD. Müller et al. [Reference Müller, Scarsini, Tsetlin and Winkler18] define
$(1+\gamma)$
stochastic dominance (i.e.
$(1+\gamma)$
-SD), which is in between the first-degree (i.e. the usual stochastic order) and the second-degree stochastic dominance (i.e. the increasing concave order). Huang et al. [Reference Huang, Tzeng and Zhao5] and Mao et al. [Reference Mao, Wu and Hu16] named this order ‘fractional stochastic dominance’ for utility functions with bounded coefficient of risk aversion.
To systematically examine the parametric integral stochastic orders, we parameterize the function classes used to characterize the integral orders. One way to obtain a parametric function class is to impose restrictions on the property of some non-parametric class
$\mathcal{G}$
. Take the class of increasing functions as an example, i.e.
$\mathcal{G}=\textit{inc}$
. The increasingness of a function can be evaluated by
$ \delta_g(x_1,x_2)=(g(x_2)-g(x_1))/(x_2-x_1)$
for any
$x_1\lt x_2$
. We may restrict the increasingness with some parameter
$\Delta\geq 0$
to obtain subclasses of
$\mathcal{G}=\textit{inc}$
:
\begin{align*} \mathcal{G}(\Delta) &= \textit{inc}(\Delta)= \Bigl\{g \in \mathcal{G}\colon 0 \leq \delta_g(x_1,x_2)\leq (1+\Delta)\inf_{x_a\lt x_b} \delta_g(x_a,x_b), \forall x_1 \lt x_2\Bigr\}, \\[5pt] \mathcal{G}^+(\Delta) &= \textit{inc}^+(\Delta)= \bigl\{g \in \mathcal{G}\colon 0\leq \delta_g(x_1,x_2)\leq (1+\Delta) \delta_g(x_3,x_4), \forall x_1 \lt x_2\lt x_3\lt x_4\bigr\}, \\[5pt] \mathcal{G}^-(\Delta) &= \textit{inc}^-(\Delta)= \bigl\{g \in \mathcal{G}\colon 0 \leq \delta_g(x_3,x_4)\leq (1+\Delta) \delta_g(x_1,x_2), \forall x_1 \lt x_2\lt x_3\lt x_4\bigr\}. \end{align*}
The subclass
$\mathcal{G}(\Delta)$
is obtained by restricting the property of
$\mathcal{G}$
over the entire domain, while
$\mathcal{G}^-(\Delta)$
and
$\mathcal{G}^+(\Delta)$
are obtained by restricting the property from, respectively, the left and the right sides of the evaluation point. When varying
$\Delta$
from zero to infinity, we obtain a continuum of function classes. In particular,
$\Delta=0$
leads to the smallest subclass and
$\Delta=\infty$
gives the non-parametric class
$\mathcal{G}$
. We may also define a subclass with a two-sided restriction, i.e.
$\mathcal{G}^\pm(\Delta_+,\Delta_-) = \mathcal{G}^+(\Delta_+)\cap \mathcal{G}^-(\Delta_-)$
, which with the symmetric restriction reduces to
$\mathcal{G}^\pm(\Delta,\Delta)=\mathcal{G}(\Delta)$
.
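The slope restrictions above lend themselves to a brute-force numerical check. The sketch below (our illustration; the grid and test functions are assumed) tests membership in $\textit{inc}^-(\Delta)$ by verifying the defining inequality $0 \leq \delta_g(x_3,x_4)\leq (1+\Delta)\,\delta_g(x_1,x_2)$ over ordered quadruples drawn from a grid:

```python
# Numerical sketch (grid and test functions assumed): check membership in
# inc^-(Delta) by verifying 0 <= delta_g(x3,x4) <= (1+Delta)*delta_g(x1,x2)
# over all ordered quadruples x1 < x2 < x3 < x4 drawn from a grid.

from itertools import combinations

def slope(g, a, b):
    return (g(b) - g(a)) / (b - a)

def in_inc_minus(g, grid, delta):
    for q in combinations(grid, 4):          # combinations preserve order
        x1, x2, x3, x4 = q
        s_left, s_right = slope(g, x1, x2), slope(g, x3, x4)
        if not (0 <= s_right <= (1 + delta) * s_left + 1e-12):
            return False
    return True

grid = [i / 10 for i in range(11)]           # grid on [0, 1]
concave = lambda x: 2 * x - x * x            # increasing concave on [0, 1]
convex = lambda x: x * x                     # increasing convex on [0, 1]

assert in_inc_minus(concave, grid, 0.0)      # icv = inc^-(0)
assert not in_inc_minus(convex, grid, 1.0)   # right-side slopes too steep
```

With $\Delta = 0$ the check recovers increasing concave functions, while no finite $\Delta$ admits $x^2$ on this support, since its left-side slopes approach zero while its right-side slopes do not.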
The parametric integral stochastic order, associated with the subclasses of functions, is a relaxation of the non-parametric order
$\mathcal{G}$
, e.g.
$X_1 \leq_{\mathcal{G}(\Delta)} X_2$
if
$X_1 \leq_{\mathcal{G}} X_2$
. The amount of relaxation to the non-parametric order can be evaluated through a violation measure. We show that the stochastic orders associated with function classes
$\mathcal{G}(\Delta)$
,
$\mathcal{G}^-(\Delta)$
, and
$\mathcal{G}^+(\Delta)$
can be characterized by the overall, left-side, and right-side violations, respectively, to the distributional property of the order associated with
$\mathcal{G}$
. In particular, when the function class is restricted by a parameter
$\Delta$
, the minimum ratio of dominance violation for the corresponding non-parametric order is
$(1+\Delta)$
.
This approach, which leads to simple proofs, allows one to easily see the relationships among the various parametric orders proposed in previous studies (see Table 1). We find that
$\Delta= 1/\epsilon-2$
corresponds to the almost stochastic dominance and
$\Delta = 1/\gamma-1$
to the fractional stochastic dominance. In particular, the AFSD (corresponding to
$inc(\Delta)$
functions) and ASSD (corresponding to
$\textit{inc-cv}(\Delta)$
functions) are characterized by the parametric functions with symmetric two-sided restrictions, which give rise to symmetric two-sided relaxation of the non-parametric integral stochastic orders. The
$(1+\gamma)$
-SD (corresponding to
$inc^-(\Delta)$
functions) is characterized by the left-side restriction of increasing functions, leading to the left-side relaxation of the usual stochastic order. The ASR (corresponding to
$\textit{cv}(\Delta)$
functions) is obtained through a symmetric two-sided relaxation of the concave order. For higher-order functions, we show that the almost nth-degree risk (AnR) and the generalized almost nth-degree stochastic dominance (GAnSD) defined by Tsetlin et al. [Reference Tsetlin, Winkler, Huang and Tzeng23] can be obtained by imposing a symmetric two-sided restriction on the m-concave functions and on the m-increasing-concave functions, respectively. Our characterization depicts a full picture of parametric integral stochastic orders for functions with mth-order properties (see Table 3 in Section 5).
Table 1. Parametric function classes and almost stochastic orders defined in previous studies.

The remainder of the paper is organized as follows. We introduce the basic concepts and notation in Section 2. The discussion of parameterized usual stochastic order is summarized in Section 3 and that of the second-order stochastic dominance is provided in Section 4. The analysis is extended to higher orders in Section 5. We conclude the study in Section 6.
2. Preliminaries
To systematically characterize the parametric integral stochastic orders with unified notation, we introduce some basic concepts in this section.
As Denuit and Müller [Reference Denuit and Müller2] suggest, it is technically convenient to consider sufficiently smooth functions to analyze integral stochastic orders. We focus on functions with bounded variation:
\begin{align*} \overline{\mathcal{G}} = \bigl\{ g\colon \mathcal{X} \to \mathbb{R} \text{ such that } g \text{ is of bounded variation} \bigr\}, \end{align*}
where
$\mathcal{X}= [\underline{x}, \overline{x}]$
is the bounded support. We assume that the functions under consideration are integrable with respect to the measures under consideration (e.g. Lebesgue integrable) over
$\mathcal{X}$
. This ensures that we can invoke the fundamental theorem of calculus and apply integration by parts. The non-parametric function classes discussed in this paper are summarized in Table 2.
Table 2. Classes of non-parametric functions.

a. We may append i or d to describe monotone subclasses. For example,
$\textit{ilin}$
stands for increasingly linear functions, and
$\textit{dcx}$
for decreasing convex functions.
Throughout this paper, we compare distributions with a common bounded support
$\mathcal{X}$
. Our analysis can be easily extended to distributions with different supports as long as they are bounded. We note that the assumption of bounded support, though commonly adopted, is not innocuous in general; see [Reference Pomatto, Strack and Tamuz20] for example.
Definition 1. (Integral stochastic order.) A random variable
$X_1$
is said to be smaller than another random variable
$X_2$
in the
$\hat{\mathcal{G}}$
-order, i.e.
$X_1 \leq_{\hat{\mathcal{G}}} X_2$
, if and only if, for any function
$g \in \hat{\mathcal{G}} \subset \overline{\mathcal{G}}$
,
$\mathbb{E}[g(X_1)] \leq \mathbb{E}[g(X_2)]$
.
By Definition 1, when restrictions are imposed on the function class, the corresponding integral stochastic order is relaxed, suggesting the possibility of violating the distributional condition for the corresponding non-parametric order. Specifically, for
$\hat{\mathcal{G}}(\Delta) \subset\mathcal{G}$
, two random variables
$X_1$
and
$X_2$
satisfying the parametric order
$X_1 \leq_{\hat{\mathcal{G}}(\Delta)} X_2$
may violate the non-parametric order
$\mathcal{G}$
. We introduce a violation measure
$v(x;\ X_1 \leq_{\mathcal{G}} X_2)$
to represent the amount of violation to the distributional characterization of the non-parametric order at value x. This measure should be zero for all
$x\in\mathcal{X}$
if
$X_1 \leq_{\mathcal{G}} X_2$
. When the random variables
$X_1$
and
$X_2$
fail to be ordered by
$\mathcal{G}$
we use
\begin{align*} v_{O}(X_1 \leq_{\mathcal{G}} X_2) = \int_{x\in\mathcal{X}} v(x;\ X_1 \leq_{\mathcal{G}} X_2) \,\mathrm{d} x \end{align*}
and
\begin{align*} v_{\leq}(x;\ X_1 \leq_{\mathcal{G}} X_2) = \int_{\underline{x}}^{x} v(\xi;\ X_1 \leq_{\mathcal{G}} X_2) \,\mathrm{d} \xi \quad \biggl(v_{\geq}(x;\ X_1 \leq_{\mathcal{G}} X_2) = \int_{x}^{\overline{x}} v(\xi;\ X_1 \leq_{\mathcal{G}} X_2) \,\mathrm{d} \xi\biggr) \end{align*}
to evaluate the overall and left-side (right-side) partial violations, respectively. We can also measure the non-violation as
$nv(x;\ X_1 \leq_{\mathcal{G}} X_2) = v(x;\ X_1 \geq_{\mathcal{G}} X_2)$
, and define the corresponding aggregate amounts. We will see that the parametric orders can be characterized by using these violation measures. The violation measures are evaluated based on the distributions. We use
$F_Z$
and
$\bar F_Z$
to denote the cumulative distribution and survival functions, respectively, for a random variable Z.
3. Parametric monotone functions
In this section, we summarize the parametric orders by measuring the amount of violation to the usual stochastic order. We consider subclasses of increasing functions by restricting the increasingness of the functions, which leads to relaxations of the usual stochastic order over the entire domain, from the left side, right side, or both sides.
3.1. Violation to the usual stochastic order
Consider two random variables
$X_1$
and
$X_2$
defined on domain
$\mathcal{X}$
. If
$X_1 \leq_{st} X_2$
, then
$\bar F_{X_1}(x)-\bar F_{X_2}(x) \leq 0$
for all
$x\in \mathcal{X}$
. The violation to the usual stochastic order at
$x \in \mathcal{X}$
is thus
\begin{align*} v(x;\ X_1 \leq_{st} X_2) = \bigl(\bar F_{X_1}(x)-\bar F_{X_2}(x)\bigr)^+ . \end{align*}
If the usual stochastic order is violated at x, the amount of violation
$v(x;\ X_1 \leq_{st} X_2)$
is strictly positive. Then the overall, left-side, and right-side violations can be computed through (1)–(2).
The integral characterization of the usual stochastic order gives rise to the class of all increasing functions, i.e.
$X_1\leq_{st} X_2$
, if and only if
$\mathbb{E}[g(X_1)] \leq \mathbb{E}[g(X_2)]$
for all
$g\in \textit{inc}$
. When the usual stochastic order is violated, the relationship
$\mathbb{E}[g(X_1)] \leq \mathbb{E}[g(X_2)]$
fails to hold for some
$g\in \textit{inc}$
. To identify the subclass of functions that maintains the usual stochastic order, define
\begin{align*} g^{(1)}_{\min} & = \inf_{x_1, x_2}\{ \delta_g(x_1,x_2)\colon \underline{x} \leq x_1 < x_2 \leq \overline{x} \},\\[5pt] g^{(1)}_{\max} & = \sup_{x_1, x_2}\{ \delta_g(x_1,x_2)\colon \underline{x} \leq x_1 < x_2 \leq \overline{x} \}, \\[5pt] g^{(1)}_{\min^-}(x) & = \inf_{x_1,x_2}\{ \delta_g(x_1,x_2)\colon \underline{x} \leq x_1 < x_2 \leq x\},\\[5pt] g^{(1)}_{\max^-}(x) & = \sup_{x_1,x_2}\{ \delta_g(x_1,x_2)\colon \underline{x} \leq x_1 < x_2 \leq x\},\\[5pt] g^{(1)}_{\min^+}(x) & = \inf_{x_1,x_2}\{ \delta_g(x_1,x_2)\colon x \leq x_1 < x_2 \leq \overline{x}\} ,\\[5pt] g^{(1)}_{\max^+}(x) & = \sup_{x_1,x_2}\{ \delta_g(x_1,x_2)\colon x \leq x_1 < x_2 \leq \overline{x}\} , \end{align*}
where
Thus
$g^{(1)}_{\min}$
is the ‘smallest slope’ of function g, and $g^{(1)}_{\min^-}(x)$ ($g^{(1)}_{\min^+}(x)$) is the smallest slope of g below (above) x. With these, we can define subclasses of increasing functions parameterized by some
$\Delta\in[0,\infty)$
:
\begin{align*} \textit{inc}(\Delta) &= \Bigl\{g \in \textit{inc}\colon 0 \leq \delta_g(x_1,x_2)\leq (1+\Delta)\inf_{x_a\lt x_b} \delta_g(x_a,x_b), \forall x_1 \lt x_2\Bigr\}, \\[5pt] \textit{inc}^{+}(\Delta) &= \bigl\{g \in \textit{inc}\colon 0\leq \delta_g(x_1,x_2)\leq (1+\Delta) \delta_g(x_3,x_4), \forall x_1 \lt x_2\lt x_3\lt x_4\bigr\}, \\[5pt] \textit{inc}^{-}(\Delta) &= \bigl\{g \in \textit{inc}\colon 0 \leq \delta_g(x_3,x_4)\leq (1+\Delta) \delta_g(x_1,x_2), \forall x_1 \lt x_2\lt x_3\lt x_4\bigr\}. \end{align*}
These conditions are equivalent to
\begin{align*} 0 \leq g^{(1)}_{\max} \leq (1+\Delta)\, g^{(1)}_{\min}, \qquad 0 \leq g^{(1)}_{\max^-}(x) \leq (1+\Delta)\, g^{(1)}_{\min^+}(x), \qquad 0 \leq g^{(1)}_{\max^+}(x) \leq (1+\Delta)\, g^{(1)}_{\min^-}(x), \quad x \in \mathcal{X}, \end{align*}
respectively. It is clear that any function within these classes must be continuous. We further observe that the subclasses get larger with a larger
$\Delta$
. In particular, for
$\Delta \in (0,\infty)$
,
\begin{align*} \textit{ilin}&= \textit{inc}(0) \subset \textit{inc}(\Delta) \subset \textit{inc}(\infty) = \textit{inc}, \\[5pt] \textit{icx}&=\textit{inc}^{+}(0) \subset \textit{inc}^{+}(\Delta) \subset \textit{inc}^{+}(\infty)=\textit{inc}, \\[5pt] \textit{icv} &= \textit{inc}^{-}(0)\subset \textit{inc}^{-}(\Delta) \subset \textit{inc}^{-}(\infty) = \textit{inc}, \end{align*}
where
$\textit{ilin}$
is the class of linearly increasing functions. The parameter
$\Delta$
specifies the extent to which the increasingness of the function is restricted. The functions in
$inc^{+}(\Delta)$
(
$inc^{-}(\Delta)$
) are close to, but can deviate from, increasing convex (concave) functions. Thus the associated integral stochastic orders are weaker than the usual stochastic order, but can be stronger than the increasing convex (concave) order. We can relax the usual stochastic orders by evaluating the overall, left-side, and right-side violations, as described in the next theorem.
Theorem 1. (Parametric usual stochastic order.) For some
$\Delta\in[0,\infty)$
, two random variables
$X_1$
and
$X_2$
with support
$\mathcal{X}$
satisfy
$\mathbb{E}[g(X_1)] \leq \mathbb{E}[g(X_2)]$
for all
-
(i)
$g \in \textit{inc}(\Delta)$
if and only if (6)
\begin{align} v_{O}(X_1 \leq_{st} X_2) \leq \dfrac{1}{\Delta}(\mathbb{E}[X_2] - \mathbb{E}[X_1]); \end{align}
-
(ii)
$g \in \textit{inc}^+(\Delta)$
if and only if (7)
\begin{align} v_{\geq}(x;\ X_1 \leq_{st} X_2) \leq \dfrac{1}{\Delta}\int_x^{\overline{x}} \bigl(\bar F_{X_2}(\xi) - \bar F_{X_1}(\xi) \bigr)\,\mathrm{d} \xi, \quad x \in \mathcal{X}; \end{align}
-
(iii)
$g \in \textit{inc}^-(\Delta)$
if and only if (8)
\begin{align} v_{\leq}(x;\ X_1 \leq_{st} X_2) \leq \dfrac{1}{\Delta}\int_{\underline{x}}^x \bigl(\bar F_{X_2}(\xi) - \bar F_{X_1}(\xi) \bigr) \,\mathrm{d} \xi, \quad x \in \mathcal{X}. \end{align}
Proof of Theorem 1. Part (i) is proved by Leshno and Levy [Reference Leshno and Levy7] and Tsetlin et al. [Reference Tsetlin, Winkler, Huang and Tzeng23]. We first show part (iii). Take
$g\in \textit{inc}^- (\Delta)$
. We derive
\begin{align*} & \mathbb{E}[g(X_2)] - \mathbb{E}[g(X_1)] \\[5pt] &\quad = \int_{\xi\in\mathcal{X}} \bigl(\bar F_{X_2}(\xi)-\bar F_{X_1}(\xi)\bigr)^+\,\mathrm{d} g(\xi) - \int_{\xi\in\mathcal{X}} \bigl(\bar F_{X_1}(\xi)-\bar F_{X_2}(\xi)\bigr)^+\,\mathrm{d} g(\xi) \\[5pt] &\quad \geq \int_{\xi\in\mathcal{X}} g^{(1)}_{\min^-}(\xi) \bigl(\bar F_{X_2}(\xi)-\bar F_{X_1}(\xi)\bigr)^+\,\mathrm{d} \xi - (1+\Delta)\int_{\xi\in\mathcal{X}} g^{(1)}_{\min^-}(\xi) \bigl(\bar F_{X_1}(\xi)-\bar F_{X_2}(\xi)\bigr)^+\,\mathrm{d} \xi \\[5pt] &\quad = \int_{\xi\in\mathcal{X}} g^{(1)}_{\min^-}(\xi) \bigl(\bigl(\bar F_{X_2}(\xi)-\bar F_{X_1}(\xi)\bigr) - \Delta \bigl(\bar F_{X_1}(\xi)-\bar F_{X_2}(\xi)\bigr)^+\bigr) \,\mathrm{d} \xi. \end{align*}
Because
$g^{(1)}_{\min^-}$
is decreasing and bounded by definition, we further deduce
\begin{align*} & \mathbb{E}[g(X_2)] - \mathbb{E}[g(X_1)] \\[5pt] &\quad\geq \int_{\xi\in\mathcal{X}}\biggl(b_1 - \int_{\xi}^{\overline{x}} \,\mathrm{d} g^{(1)}_{\min^-}(x) \biggr) \bigl(\bigl(\bar F_{X_2}(\xi)-\bar F_{X_1}(\xi)\bigr) - \Delta \bigl(\bar F_{X_1}(\xi)-\bar F_{X_2}(\xi)\bigr)^+\bigr) \,\mathrm{d} \xi\\[5pt] &\quad= b_1 \int_{\xi \in \mathcal{X}} \bigl(\bigl(\bar F_{X_2}(\xi)-\bar F_{X_1}(\xi)\bigr) - \Delta \bigl(\bar F_{X_1}(\xi)-\bar F_{X_2}(\xi)\bigr)^+\bigr) \,\mathrm{d} \xi \\[5pt] & \quad\quad + \int_{x\in\mathcal{X}} \int_{\underline{x}}^x \bigl(\bigl(\bar F_{X_2}(\xi)-\bar F_{X_1}(\xi)\bigr) - \Delta \bigl(\bar F_{X_1}(\xi)-\bar F_{X_2}(\xi)\bigr)^+\bigr) \,\mathrm{d} \xi \,\mathrm{d} \bigl(-g^{(1)}_{\min^-}(x)\bigr), \end{align*}
where
$b_1 = g^{(1)}_{\min^-}(\overline{x}) \geq 0$
. When (8) holds, the right-hand side is non-negative.
Now take
$X_1$
and
$X_2$
such that
$\mathbb{E}[g(X_1)]\leq \mathbb{E}[g(X_2)]$
for any
$g\in \textit{inc}^-(\Delta)$
Let
\begin{align*} g_y(x) = \int_{\underline{x}}^{x} g^{(1)}_y(\xi) \,\mathrm{d} \xi, \quad x \in \mathcal{X}, \end{align*}
where
\begin{align*} g^{(1)}_y(x) = \bigl(1+\Delta {\mathbb{I}}_{\left\{\bar F_{X_1} (x) > \bar F_{X_2}(x) \right\}}\bigr) {\mathbb{I}}_{\{x \leq y\}}. \end{align*}
It is clear that
$g^{(1)}_y(x) \geq0$
for
$x\in\mathcal{X}$
, and thus
$g_y$
is increasing. Moreover, for
$x\leq y$
,
for
$x > y$
,
Thus
$g_y\in \textit{inc}^-(\Delta)$
. Moreover, for a random variable Z with distribution
$F_Z$
over
$\mathcal{X}$
,
\begin{align*} \mathbb{E}[g_y(Z)] &= -\int_{\xi\in\mathcal{X}}\int_{\underline{x}}^{\xi} g^{(1)}_y(x) \,\mathrm{d} x \,\mathrm{d} \bar F_Z(\xi)\\[5pt] & = -\int_{\xi\in\mathcal{X}}\int_{\underline{x}}^{\xi} \bigl(1+\Delta {\mathbb{I}}_{\left\{\bar F_{X_1} (x) > \bar F_{X_2}(x) \right\}}\bigr) {\mathbb{I}}_{\{x \leq y\}} \,\mathrm{d} x \,\mathrm{d} \bar F_Z(\xi) \\[5pt] &= \int_{x\in\mathcal{X}}{\mathbb{I}}_{\{x \leq y\}} \bar F_Z(x)\,\mathrm{d} x + \Delta \int_{x\in\mathcal{X}} {\mathbb{I}}_{\left\{\bar F_{X_1}(x)> \bar F_{X_2}(x)\right\}} {\mathbb{I}}_{\{x \leq y\}} \bar F_{Z}(x) \,\mathrm{d} x\\[5pt] &= \int_{\underline{x}}^y\bar F_Z(x)\,\mathrm{d} x + \Delta\int_{\underline{x}}^y {\mathbb{I}}_{\left\{\bar F_{X_1}(x)> \bar F_{X_2}(x)\right\}} \bar F_{Z}(x) \,\mathrm{d} x. \end{align*}
Because
$g_y \in \textit{inc}^-(\Delta)$
, we deduce
\begin{align*} 0&\leq \mathbb{E}[g_y(X_2)] - \mathbb{E}[g_y(X_1)] \\[5pt] &= \int_{\underline{x}}^y \bigl(\bar F_{X_2}(x) - \bar F_{X_1}(x)\bigr) \,\mathrm{d} x - \Delta \int_{\underline{x}}^y \bigl(\bar F_{X_1}(x) - \bar F_{X_2}(x)\bigr)^+ \,\mathrm{d} x, \end{align*}
yielding (8).
The proof of part (ii) can be carried out similarly. In particular, we can define
\begin{align*} g_y(x) = \int_{\underline{x}}^{x} \bigl(1+\Delta {\mathbb{I}}_{\left\{\bar F_{X_1} (\xi) > \bar F_{X_2}(\xi) \right\}}\bigr) {\mathbb{I}}_{\{\xi \geq y\}} \,\mathrm{d} \xi. \end{align*}
For
$x < y$
,
$g^{(1)}_y(x) = g^{(1)}_{y\min^+}(x) = 0$
. For
$x \geq y$
,
$ g^{(1)}_{y\min^+}(x) \geq 1$
and thus
\begin{align*} \delta_{g_y}(x_1,x_2) \leq 1+\Delta \leq (1+\Delta)\, g^{(1)}_{y\min^+}(x), \quad x_1 < x_2 \leq x, \end{align*}
suggesting
$g_y\in \textit{inc}^+(\Delta)$
. For any random variable Z with distribution
$F_Z$
, we derive
\begin{align*} \mathbb{E}[g_y(Z)] = \int_y^{\overline{x}}\bar F_Z(x)\,\mathrm{d} x + \Delta\int_y^{\overline{x}} {\mathbb{I}}_{\left\{\bar F_{X_1}(x)> \bar F_{X_2}(x)\right\}} \bar F_{Z}(x) \,\mathrm{d} x. \end{align*}
It follows that
\begin{align*} \mathbb{E}[g_y(X_2)] - \mathbb{E}[g_y(X_1)] = \int_y^{\overline{x}} \bigl(\bar F_{X_2}(x) - \bar F_{X_1}(x)\bigr) \,\mathrm{d} x - \Delta \int_y^{\overline{x}} \bigl(\bar F_{X_1}(x) - \bar F_{X_2}(x)\bigr)^+ \,\mathrm{d} x. \end{align*}
Because
$g_y\in \textit{inc}^+(\Delta)$
, we conclude that (7) holds.
The proof of Theorem 1 directly utilizes the upper and lower bounds for the slope of function g defined in (3)–(4) to evaluate, respectively, the positive part and the negative part of
$g(X_1)-g(X_2)$
, which gives rise to the distributional characterization of the relaxed usual stochastic orders.
The stochastic order associated with
$inc(\Delta)$
is the
$\epsilon$
-AFSD, first introduced in [Reference Leshno and Levy7] with
$\Delta = 1/\epsilon -2 \in [0,\infty)$
. Leshno and Levy [Reference Leshno and Levy7] derive Theorem 1(i) for differentiable g and Tsetlin et al. [Reference Tsetlin, Winkler, Huang and Tzeng23] analyze this order for strictly increasing functions. Müller et al. [Reference Müller, Scarsini, Tsetlin and Winkler18] analyze differentiable functions within
$inc^-(\Delta)$
and define the
$(1+\gamma)$
-SD order with
$1+\Delta=1/\gamma$
interpreted as the index of greediness in the utility theory. This order is sometimes referred to as the ‘fractional stochastic dominance’ [Reference Huang, Tzeng and Zhao5, Reference Mao, Wu and Hu16]. Müller et al. [Reference Müller, Scarsini, Tsetlin and Winkler18] recognize that this order is in between the usual stochastic order and the increasing concave order, and show that two discrete random variables are
$(1+\gamma)$
-SD ordered if and only if one can be obtained through a sequence of mean-preserving contraction mass transports from the other one. Klar and Müller [Reference Klar and Müller6] connect the
$(1+\gamma)$
-SD order to the
$\Omega$
ratio,
$\Omega(\xi) = \mathbb{E}[(X-\xi)^+]/\mathbb{E}[(X-\xi)^-]$
, used for fund performance. Yang et al. [Reference Yang, Zhou and Zhuang25], among others, explore useful properties of this parametric order.
The characterization in (6) for
$\epsilon$
-AFSD associated with
$inc(\Delta)$
functions suggests the possibility of violating the usual stochastic order with the amount of violation bounded by
$1/\Delta$
of the mean difference. It is easy to show that (6) implies
where
is the overall measure of
$X_1$
dominating
$X_2$
in the usual stochastic order. Thus
$(1+\Delta)$
is the minimum ratio of dominance violation. Similarly, (7) and (8), respectively, imply
3.2. Generalization with two-sided restrictions
We may generalize the discussion in the previous subsection by imposing asymmetric two-sided restrictions on increasing functions and defining the parametric order that allows for asymmetric two-sided violation of the usual stochastic order. Specifically, define
\begin{align}& \textit{inc}^\pm(\Delta_+,\Delta_-) \notag \\[5pt] &\quad = \Bigl\{ g \in \overline{\mathcal{G}}\colon 0 \leq \delta_g(x_1,x_2) \leq \min\{(1+\Delta_+) g^{(1)}_{\min^+}(x_2), (1+\Delta_-) g^{(1)}_{\min^-}(x_1)\}, \underline{x}\leq x_1 < x_2 \leq \overline{x} \Bigr\}, \notag \\[5pt] &\hspace{80mm} \Delta_+ \in [0,\infty), \Delta_-\in [0, \infty). \end{align}
It is clear that
$inc^\pm(\Delta_+,\Delta_-)= \textit{inc}^+(\Delta_+)\cap \textit{inc}^-(\Delta_-)$
. The stochastic order associated with this class of functions is named
$(1+\gamma_{\textit{cx}},1+\gamma_{\textit{cv}})$
-SD by Müller et al. [Reference Müller, Scarsini, Tsetlin and Winkler18, Reference Müller, Scarsini, Tsetlin and Winkler19] with
$\gamma_{\textit{cx}} = {{1}/{(1+\Delta_+)}}\in [0,1]$
and
$\gamma_{\textit{cv}} = {{1}/{(1+\Delta_-)}} \in [0,1]$
. The characterization in Theorem 2 below, using the left- and right-side violation measures, facilitates an intuitive proof.
Theorem 2. (Two-sided relaxation of usual stochastic order.) For some
$\Delta_+\in[0,\infty)$
and
$\Delta_-\in [0,\infty)$
, two random variables
$X_1$
and
$X_2$
defined on support
$\mathcal{X}$
satisfy
$\mathbb{E}[g(X_1)] \leq \mathbb{E}[g(X_2)]$
for all
$g\in \textit{inc}^\pm(\Delta_+,\Delta_-)$
, if and only if (where
$a\wedge b = \min\{a,b\}$
), for any
$x \in \mathcal{X}$
,
\begin{align} & \dfrac{1}{1+\Delta_-}\biggl(\int_{\underline{x}}^x \bigl(\bar F_{X_2}(\xi) - \bar F_{X_1}(\xi)\bigr) \,\mathrm{d} \xi - (\Delta_+\wedge \Delta_-) v_{\leq}(x;\ X_1 \leq_{st} X_2) \biggr) \nonumber\\[5pt] &\quad + \dfrac{1}{1+\Delta_+}\biggl(\int_x^{\overline{x}} \bigl(\bar F_{X_2}(\xi) - \bar F_{X_1}(\xi)\bigr) \,\mathrm{d} \xi - (\Delta_+\wedge \Delta_-) v_{\geq}(x;\ X_1 \leq_{st} X_2) \biggr) \geq 0 . \end{align}
Proof of Theorem 2. We first prove the ‘if’ part of the theorem by showing that (10) implies that
$\mathbb{E}[g(X_1)] \leq \mathbb{E}[g(X_2)]$
for all
$g\in \textit{inc}^\pm(\Delta_+,\Delta_-)$
. We derive the result for
$\Delta_+ \leq \Delta_-$
, as that for
$\Delta_+ \geq \Delta_-$
follows similarly. When
$\Delta_+ \leq \Delta_-$
, the condition in (10) becomes
\begin{align} \Xi(x)&\equiv \dfrac{1}{1+\Delta_-}\int_{\underline{x}}^x \bigl(\bar F_{X_2}(\xi) - \bar F_{X_1}(\xi)\bigr)^+ \,\mathrm{d} \xi - \dfrac{1+\Delta_+}{1+\Delta_-}\int_{\underline{x}}^x \bigl(\bar F_{X_1}(\xi) - \bar F_{X_2}(\xi)\bigr)^+ \,\mathrm{d} \xi \nonumber\\[5pt] &\quad + \dfrac{1}{1+\Delta_+}\int_x^{\overline{x}} \bigl(\bar F_{X_2}(\xi) - \bar F_{X_1}(\xi)\bigr)^+ \,\mathrm{d} \xi - \int_x^{\overline{x}} \bigl(\bar F_{X_1}(\xi) - \bar F_{X_2}(\xi)\bigr)^+ \,\mathrm{d} \xi \geq 0. \end{align}
Take
$g \in \textit{inc}^\pm(\Delta_+,\Delta_-) = \textit{inc}^+(\Delta_+) \cap \textit{inc}^-(\Delta_-)$
. Then,
Because
$g^{(1)}_{\max^+}(x)$
is decreasing and
$g^{(1)}_{\max^-}(x)$
is increasing, and both attain
$g^{(1)}_{\max}$
within
$[\underline{x},\overline{x}]$
, we can define
and
We have
$x_0 \leq x_{\max}$
as
${{1}/{(1+\Delta_-)}} \leq {{1}/{(1+\Delta_+)}}$
. We treat the situation where
The situation for
can be easily deduced from the argument below.
Note that
$g^{(1)}_{\max^-}$
must be strictly increasing at
$x_0$
. Because
$x_0 \leq x_{\max}$
, we have
$g^{(1)}_{\max^+}(x_0) = g^{(1)}_{\max}$
. It follows that
Define
\begin{align} \rho(x) = \begin{cases} g^{(1)}_0, & x \leq x_0, \nonumber\\[5pt] g^{(1)}_{\max^-}(x), & x_0 < x < x_{\max} ,\\[5pt] g^{(1)}_{\max}, & x \geq x_{\max}. \end{cases} \end{align}
We have two cases to consider. In the following derivation, we choose a sufficiently small
$\epsilon>0$
to compute the left- or right-side deviation of g at a given point x.
-
(i) For $x \leq x_0$, $\delta_g(x-\epsilon,x)$ is lower than $g^{(1)}_0$, as $g^{(1)}_{\max^-}(x)\leq g^{(1)}_{\max^-}(x_0)= g^{(1)}_0$ because $ g^{(1)}_{\max^-}$ is increasing. Moreover, $\delta_g(x-\epsilon,x)$ is higher than $\frac{1}{1+\Delta_-} g^{(1)}_{\max^+}(x) =\frac{1}{1+\Delta_-}g^{(1)}_{\max} = \frac{1}{1+\Delta_+} g^{(1)}_0$. Therefore,
\begin{align*} \dfrac{1}{1+\Delta_+} g^{(1)}_0 \leq \delta_{g} (x-\epsilon, x) \leq g^{(1)}_0. \end{align*}
-
(ii) For $x> x_0$, because $g\in \textit{inc}^+(\Delta_+)$ and $g^{(1)}_{\max^-}$ is strictly increasing at $x_0$, we deduce
\begin{align*} \dfrac{1}{1+\Delta_+} g^{(1)}_{\max^-}(x) \leq \delta_{g} (x, x+\epsilon) \leq g^{(1)}_{\max^-}(x) . \end{align*}
In particular, $g^{(1)}_{\max^-}(x) = g^{(1)}_{\max}$ for $x \geq x_{\max}$.
Together with (12), we obtain
\begin{align*} \mathbb{E}[g(X_2)] - \mathbb{E}[g(X_1)] &= \int_{x\in\mathcal{X}} \bigl(\bar F_{X_2}(\xi) - \bar F_{X_1}(\xi)\bigr)^+ \,\mathrm{d} g(\xi) - \int_{x\in\mathcal{X}} \bigl(\bar F_{X_1}(\xi) - \bar F_{X_2}(\xi)\bigr)^+ \,\mathrm{d} g(\xi) \nonumber\\[5pt] &\geq \int_{x\in\mathcal{X}} \Gamma (\xi)\rho(\xi) \,\mathrm{d} \xi, \end{align*}
where
\begin{align*} \Gamma(\xi) = \dfrac{1}{1+\Delta_+}\bigl(\bar F_{X_2}(\xi) - \bar F_{X_1}(\xi)\bigr)^+ - \bigl(\bar F_{X_1}(\xi) - \bar F_{X_2}(\xi)\bigr)^+ . \end{align*}
Now define
\begin{align*} \mu(x) = \dfrac{g^{(1)}_{\max^-}(x) - g^{(1)}_0}{g^{(1)}_{\max}- g^{(1)}_0}\quad \text{for $ x_0 \leq x \leq x_{\max}$.} \end{align*}
It is clear that
$\mu$
is a probability distribution that increases from zero to one. From (11), we have
\begin{align*} 0 &\leq g^{(1)}_{\max} \int_{x_0}^{x_{\max}} \Xi(x) \,\mathrm{d} \mu(x) \\[5pt] &= \int_{x_0}^{x_{\max}} \biggl(\dfrac{1}{1+\Delta_+} g^{(1)}_0\int_{\underline{x}}^x \bigl(\bar F_{X_2}(\xi) - \bar F_{X_1}(\xi)\bigr)^+ \,\mathrm{d} \xi - g^{(1)}_0\int_{\underline{x}}^x \bigl(\bar F_{X_1}(\xi) - \bar F_{X_2}(\xi)\bigr)^+ \,\mathrm{d} \xi \nonumber\\[5pt] &\quad + \dfrac{1}{1+\Delta_+}g^{(1)}_{\max}\int_x^{\overline{x}} \bigl(\bar F_{X_2}(\xi) - \bar F_{X_1}(\xi)\bigr)^+ \,\mathrm{d} \xi - g^{(1)}_{\max} \int_x^{\overline{x}} \bigl(\bar F_{X_1}(\xi) - \bar F_{X_2}(\xi)\bigr)^+ \,\mathrm{d} \xi \biggr) \,\mathrm{d}\mu(x) \\[5pt] &= \int_{x_0}^{x_{\max}} \biggl( g^{(1)}_0 \int_{\underline{x}}^x \Gamma(\xi)\,\mathrm{d}\xi + g^{(1)}_{\max} \int_x^{\overline{x}} \Gamma(\xi)\,\mathrm{d}\xi\biggr)\,\mathrm{d} \mu(x) \\[5pt] &= \int_{\underline{x}}^{\overline{x}} \Gamma(\xi) \rho (\xi) \,\mathrm{d}\xi \leq \mathbb{E}[g(X_2)]-\mathbb{E}[g(X_1)]. \end{align*}
Thus we obtain
$\mathbb{E}[g(X_1)]\leq \mathbb{E}[g(X_2)]$
.
Now we prove the ‘only if’ part of the theorem by showing that
$\mathbb{E}[g(X_1)]\leq \mathbb{E}[g(X_2)]$
for any
$g \in \textit{inc}^\pm(\Delta_+,\Delta_-)$
implies (10). Define
\begin{align*} g_y(x) = \int_{\underline{x}}^{x} g^{(1)}_y(\xi) \,\mathrm{d} \xi, \end{align*}
where
\begin{align*} g^{(1)}_y(x) = \bigl(1+ (\Delta_+\wedge\Delta_-) {\mathbb{I}}_{\left\{\bar F_{X_1}(x) > \bar F_{X_2}(x)\right\}}\bigr) \biggl(\dfrac{1}{1+\Delta_-}{\mathbb{I}}_{\{x \leq y\}} + \dfrac{1}{1+\Delta_+}{\mathbb{I}}_{\{x > y\}}\biggr). \end{align*}
We can easily verify that
$g_y(x) \in \textit{inc}^\pm(\Delta_+,\Delta_-)$
. We derive (following the proof of Theorem 1)
\begin{align*} 0 &\leq \mathbb{E}[g_y(X_2)] - \mathbb{E}[g_y(X_1)]\\[5pt] &= \dfrac{1}{1+\Delta_-} \biggl(\int_{\underline{x}}^y \bigl(\bar F_{X_2}(x) - \bar F_{X_1}(x)\bigr) \,\mathrm{d} x - (\Delta_+\wedge\Delta_-)\int_{\underline{x}}^y \bigl(\bar F_{X_1}(x) - \bar F_{X_2}(x)\bigr)^+ \,\mathrm{d} x \biggr)\\[5pt] &\quad +\dfrac{1}{1+\Delta_+} \biggl(\int_y^{\overline{x}} \bigl(\bar F_{X_2}(x) - \bar F_{X_1}(x)\bigr) \,\mathrm{d} x - (\Delta_+\wedge\Delta_-)\int_y^{\overline{x}} \bigl(\bar F_{X_1}(x) - \bar F_{X_2}(x)\bigr)^+ \,\mathrm{d} x \biggr), \end{align*}
which gives rise to (10).
It is clear that Theorem 2 generalizes Theorem 1. We recognize that the integral comparison
$\mathbb{E}[g(X_1)]-\mathbb{E}[g(X_2)]$
is determined by both the slope
$\delta_g$
and the distributional difference
$\bar F_{X_1}-\bar F_{X_2}$
over the entire support
$\mathcal{X}$
. The functional property described in (9) allows us to bound the positive and the negative part of
$g(X_1)-g(X_2)$
with a monotone upper bound of
$\delta_g$
. When we convert this upper bound into a probability measure for the distributional differences, the characterization in (10) can be easily obtained.
We name several special cases of the two-sided parametric functions.
-
• When
$\Delta_+ = \Delta_-=\Delta$
, we obtain
$\epsilon$
-AFSD, which corresponds to the symmetric two-sided restriction of increasing functions:
\begin{align*} \textit{inc}^{\pm}(\Delta,\Delta) = \textit{inc}(\Delta). \end{align*}
-
• When
$\Delta_- = \infty$
or
$\Delta_+ =\infty$
, the restriction on the increasing functions is only one-sided:
\begin{align*} \textit{inc}^{\pm}(\Delta,\infty) = \textit{inc}^+(\Delta) \quad \text{and} \quad \textit{inc}^{\pm}(\infty,\Delta) = \textit{inc}^-(\Delta). \end{align*}
-
• When
$\Delta_+ = 0$
or
$\Delta_-=0$
, the functions are increasing convex or increasing concave with the two-sided restriction on the increasingness (see our later discussion in Section 4):
\begin{align*} \textit{inc}^{\pm}(0,\Delta) = \textit{inc}(\Delta)\textit{-cx} \quad \text{and} \quad \textit{inc}^{\pm}(\Delta,0)= \textit{inc}(\Delta)\textit{-cv}. \end{align*}
4. Parametric convex and concave functions
In addition to the usual stochastic order, the convex and the concave orders are among the most studied and applied. These orders are related to the second-order properties of the functions (i.e. convexity and concavity). To define the associated parametric function classes, let
where
The parametric subclasses of functions are defined as
We note that
$g \in \textit{cv}(\Delta)$
if and only if
$-g \in \textit{cx}(\Delta)$
. Moreover, for
$\Delta\in(0,\infty)$
,
\begin{align*} \textit{quadcx} = \textit{cx}(0) \subset \textit{cx}(\Delta) \subset \textit{cx}(\infty) = \textit{cx}, \end{align*}
where
$\textit{quadcx}$
is the set of quadratically convex functions.
For two random variables
$X_1$
and
$X_2$
with support
$\mathcal{X}$
and
$\mathbb{E}[X_1]=\mathbb{E}[X_2]$
, define
\begin{align*} & v(x;\ X_1 \leq_{\textit{cx}} X_2) = \biggl(\int_x^{\overline{x}}\bar F_{X_1}(\xi)\,\mathrm{d} \xi - \int_x^{\overline{x}}\bar F_{X_2}(\xi) \,\mathrm{d} \xi \biggr)^+, \quad x \in \mathcal{X}, \\[5pt] & v(x;\ X_1 \leq_{\textit{cv}} X_2) = \biggl(\int_{\underline{x}}^x F_{X_2}(\xi)\,\mathrm{d} \xi - \int_{\underline{x}}^x F_{X_1}(\xi)\,\mathrm{d} \xi \biggr)^+, \quad x \in \mathcal{X}, \end{align*}
as the violations to the convex order and the concave order, respectively, evaluated at x. We first characterize the comparisons between
$X_1$
and
$X_2$
with bounded overall violations.
Theorem 3. Two random variables
$X_1$
and
$X_2$
with support
$\mathcal{X}$
satisfy
$\mathbb{E}[g(X_1)]\leq \mathbb{E}[g(X_2)]$
for all
-
(i)
$g \in \textit{cx}(\Delta)$
if and only if
$\mathbb{E}[X_1]=\mathbb{E}[X_2]$
and (15)
\begin{align} v_{O}( X_1 \leq_{\textit{cx}} X_2) = \int_{x\in\mathcal{X}}\biggl( \int_x^{\overline{x}} \bar F_{X_1}(\xi)\,\mathrm{d} \xi - \int_x^{\overline{x}} \bar F_{X_2}(\xi)\,\mathrm{d} \xi\biggr)^+ \,\mathrm{d} x \leq \dfrac{1}{2\Delta}(\mathbb{E}[X_2^2] - \mathbb{E}[X_1^2]); \end{align}
-
(ii)
$g \in \textit{cv}(\Delta)$
if and only if
$\mathbb{E}[X_1]=\mathbb{E}[X_2]$
and
\begin{align*} v_{O}(X_1 \leq_{\textit{cv}} X_2) = \int_{x\in\mathcal{X}}\biggl( \int_{\underline{x}}^x F_{X_2}(\xi)\,\mathrm{d} \xi - \int_{\underline{x}}^x F_{X_1}(\xi)\,\mathrm{d} \xi\biggr)^+ \,\mathrm{d} x \leq \dfrac{1}{2\Delta} \bigl(\mathbb{E}[X_1^2] - \mathbb{E}[X_2^2]\bigr). \end{align*}
Proof of Theorem
3
. To show part (i), we take
$g \in \textit{cx}(\Delta)$
. Because g is convex and thus absolutely continuous, there exists
$g^{(1)} \in \textit{inc}$
such that
where
$a_0 = g(\underline{x})$
. Also define
$a_1 = g^{(1)}(\underline{x})$
. By (13), we must have
For any random variable X with support
$\mathcal{X}$
,
\begin{align*} \mathbb{E}[g(X)] &= a_0 + \int_{\xi \in \mathcal{X}} g^{(1)}(\xi) \bar F_{X}(\xi) \,\mathrm{d} \xi \\[5pt] & = a_0 + \int_{\xi \in \mathcal{X}}\biggl(a_1 + \int_{\underline{x}}^{\xi} \,\mathrm{d} g^{(1)}(\xi_0) \biggr) \bar F_{X}(\xi) \,\mathrm{d} \xi \\[5pt] &= a_0 + a_1(\mathbb{E}[X]-\underline{x}) + \int_{\xi_0\in\mathcal{X}} \biggl( \int_{\xi_0}^{\overline{x}}\bar F_X(\xi) \,\mathrm{d} \xi\biggr) \,\mathrm{d} g^{(1)}(\xi_0) . \end{align*}
We further note the relation
\begin{align*} \int_{\xi \in \mathcal{X}} \int_{\xi}^{\overline{x}} \bar F_X(x) \,\mathrm{d} x \,\mathrm{d} \xi &= \int_{x\in\mathcal{X}} (x-\underline{x}) \bar F_X(x) \,\mathrm{d} x\\[5pt] & = \dfrac{1}{2}\int_{x\in\mathcal{X}}\bar F_X(x) \,\mathrm{d} (x-\underline{x})^2\\[5pt] &= \dfrac{1}{2} \int_{x\in\mathcal{X}} (x-\underline{x})^2 f_X(x) \,\mathrm{d} x\\[5pt] &= \dfrac{1}{2}\mathbb{E}[(X-\underline{x})^2]. \end{align*}
Then, for
$X_1$
and
$X_2$
defined on support
$\mathcal{X}$
, we have
\begin{align*} \mathbb{E}[g(X_2)] - \mathbb{E}[g(X_1)] &= a_1\bigl(\mathbb{E}[X_2]-\mathbb{E}[X_1]\bigr) + \int_{\xi\in\mathcal{X}} \biggl( \int_{\xi}^{\overline{x}}\bigl(\bar F_{X_2}(x)-\bar F_{X_1}(x)\bigr)\,\mathrm{d} x \biggr) \,\mathrm{d} g^{(1)}(\xi) \\[3pt] &= a_1\bigl(\mathbb{E}[X_2]-\mathbb{E}[X_1]\bigr) +\int_{\xi\in\mathcal{X}} \biggl(\int_{\xi}^{\overline{x}}\bigl(\bar F_{X_2}(x)-\bar F_{X_1}(x)\bigr)\,\mathrm{d} x \biggr)^+ \,\mathrm{d} g^{(1)}(\xi) \\[3pt] & \quad -\int_{\xi\in\mathcal{X}} \biggl(\int_{\xi}^{\overline{x}}\bigl(\bar F_{X_1}(x)-\bar F_{X_2}(x)\bigr)\,\mathrm{d} x \biggr)^+ \,\mathrm{d} g^{(1)}(\xi) \\[3pt] &\geq a_1\bigl(\mathbb{E}[X_2]-\mathbb{E}[X_1]\bigr) + \int_{\xi\in\mathcal{X}} g^{(2)}_{\min} \biggl( \int_{\xi}^{\overline{x}}\bigl(\bar F_{X_2}(x)-\bar F_{X_1}(x)\bigr)\,\mathrm{d} x \biggr)^+ \,\mathrm{d} \xi \\[3pt] & \quad\ -\int_{\xi\in\mathcal{X}} (1+\Delta) g^{(2)}_{\min} \biggl(\int_{\xi}^{\overline{x}}\bigl(\bar F_{X_1}(x)-\bar F_{X_2}(x)\bigr)\,\mathrm{d} x \biggr)^+ \,\mathrm{d} \xi \\[3pt] &= a_1\bigl(\mathbb{E}[X_2]-\mathbb{E}[X_1]\bigr) + g^{(2)}_{\min} \biggl( \int_{x\in\mathcal{X}} (x-\underline{x}) \bigl(\bar F_{X_2}(x)- \bar F_{X_1}(x)\bigr) \,\mathrm{d} x \\[3pt] & \quad -\Delta \int_{\xi\in\mathcal{X}}\biggl( \int_{\xi}^{\overline{x}} \bigl(\bar F_{X_1}(x)- \bar F_{X_2}(x)\bigr) \,\mathrm{d} x \biggr)^+ \,\mathrm{d} \xi \biggr) \\[3pt] &= a_1\bigl(\mathbb{E}[X_2]-\mathbb{E}[X_1]\bigr) + g^{(2)}_{\min} \biggl(\dfrac{1}{2}\bigl(\mathbb{E}[(X_2-\underline{x})^2] -\mathbb{E}[(X_1-\underline{x})^2] \bigr) \\[3pt] & \quad -\Delta \int_{\xi\in\mathcal{X}}\biggl( \int_{\xi}^{\overline{x}} \bigl(\bar F_{X_1}(x)- \bar F_{X_2}(x)\bigr) \,\mathrm{d} x \biggr)^+ \,\mathrm{d} \xi \biggr). \end{align*}
When
$\mathbb{E}[X_1]=\mathbb{E}[X_2]$
and (15) holds, the right-hand side is non-negative.
To show the reverse relation, define
It is easy to verify that
as defined in (13). We deduce
\begin{align*} 0&\leq \mathbb{E}[g(X_2)] - \mathbb{E}[g(X_1)] \\[3pt] &= \int_{\xi\in\mathcal{X}} \int_{\xi}^{\overline{x}} \bigl(\bar F_{X_2}(\xi_0)-\bar F_{X_1}(\xi_0)\bigr) \,\mathrm{d} \xi_0\, g^{(2)}(\xi) \,\mathrm{d} \xi \\[3pt] &= \int_{\xi\in\mathcal{X}} \Bigl(1 + \Delta {\mathbb{I}}_{\bigl\{\int_{\xi}^{\overline{x}}\bar F_{X_1}(x) \,\mathrm{d} x >\int_{\xi}^{\overline{x}} \bar F_{X_2}(x)\,\mathrm{d} x\bigr\} }\Bigr) \int_{\xi}^{\overline{x}}\bigl(\bar F_{X_2}(\xi_0)-\bar F_{X_1}(\xi_0)\bigr) \,\mathrm{d} \xi_0 \,\mathrm{d} \xi \\[3pt] &= \dfrac{1}{2}\bigl(\mathbb{E}[(X_2-\underline{x})^2] -\mathbb{E}[(X_1-\underline{x})^2] \bigr)-\Delta \int_{\xi\in\mathcal{X}}\biggl( \int_{\xi}^{\overline{x}} \bigl(\bar F_{X_1}(\xi_0)- \bar F_{X_2}(\xi_0)\bigr) \,\mathrm{d} \xi_0 \biggr)^+ \,\mathrm{d} \xi . \end{align*}
Because
$\mathbb{E}[g(X_1)] \leq \mathbb{E}[g(X_2)]$
for all
$g \in \textit{cx}(\Delta)$
, we have
$\mathbb{E}[X_1]=\mathbb{E}[X_2]$
, and the inequality above then yields (15).
The proof of part (ii) follows a similar argument.
By Theorem 3, the parametric convex (concave) functions lead to a bounded violation to the convex (concave) order. The bound is determined by the difference in the second moment or the variance (given the equal mean). Tsetlin et al. [Reference Tsetlin, Winkler, Huang and Tzeng23] consider
$\textit{cv}(\Delta)$
functions that are strictly concave and name the associated order as the ‘almost second degree risk’ (
$\epsilon$
-ASR) with
$\Delta = 1/\epsilon - 2$
.
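As a numerical sanity check of Theorem 3(i), consider a pair of our own construction: $X_1$ with atoms $\{0, 0.5, 1\}$ and probabilities $(0.1, 0.8, 0.1)$, and $X_2$ uniform on $[0,1]$, so the means coincide. The sketch below (the grid and helper names are ours) computes the overall violation $v_O$ and the largest $\Delta$ permitted by (15):

```python
import numpy as np

# Grid on the common support [0, 1]; fine enough that trapezoidal error
# at the jump points of the discrete distribution is negligible.
x = np.linspace(0.0, 1.0, 200_001)
dx = x[1] - x[0]

def trap(y):
    """Trapezoidal integral of y over the grid x."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * dx))

# Survival functions: X1 has atoms {0, 0.5, 1} with probs (0.1, 0.8, 0.1);
# X2 ~ Uniform(0, 1).  Both have mean 1/2.
sf1 = np.where(x < 0.5, 0.9, np.where(x < 1.0, 0.1, 0.0))
sf2 = 1.0 - x

def stop_loss(sf):
    """SL(x) = int_x^1 sf(xi) dxi, computed right to left."""
    inc = 0.5 * (sf[1:] + sf[:-1]) * dx
    return np.concatenate([np.cumsum(inc[::-1])[::-1], [0.0]])

sl1, sl2 = stop_loss(sf1), stop_loss(sf2)

# Second moments via E[X^2] = 2 * int x * sf(x) dx for support in [0, 1].
m2_1, m2_2 = 2 * trap(x * sf1), 2 * trap(x * sf2)

# Overall violation of the convex order and the largest Delta allowed by (15):
v_O = trap(np.maximum(sl1 - sl2, 0.0))
delta_max = (m2_2 - m2_1) / (2 * v_O)
print(v_O, delta_max)  # v_O = 1/750 and delta_max = 12.5, up to grid error
```

Here the convex order fails only near the two tails, and the computed quantities say that $\mathbb{E}[g(X_1)] \leq \mathbb{E}[g(X_2)]$ for every $g \in \textit{cx}(\Delta)$ precisely when $\Delta \leq 12.5$.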
We can also define the left-side and right-side violations to the convex and concave orders, and show a result similar to Theorem 2. We will not state this result, but postpone the discussion to the general mth-order functions in Section 5.
4.1. Parametric increasing convex or concave functions
In Section 3 we discussed two subclasses of increasing functions,
$\textit{inc}^+(\Delta)$
and
$\textit{inc}^-(\Delta)$
, which are superclasses of increasing convex (
$\textit{icx}$
) and increasing concave (
$\textit{icv}$
) functions, respectively. In this subsection we analyze subclasses of
$\textit{icx}$
and
$\textit{icv}$
, and characterize the corresponding orders. For that, we can parameterize either the increasingness or the convexity (concavity) of the functions. Specifically, define
\begin{align*} \textit{inc-cx} (\Delta) &= \textit{inc} \cap \textit{cx}(\Delta)\\[5pt] &= \bigl\{ g\in\overline{\mathcal{G}}\colon \delta_g(x_1,x_2) \geq 0, 0 \leq \delta^2_g(x_1,x_2) \leq (1+\Delta) g^{(2)}_{\min}, \ \underline{x}\leq x_1\lt x_2\leq\overline{x} \bigr\},\\[5pt] \textit{inc-cv} (\Delta)&= \textit{inc} \cap \textit{cv}(\Delta)\\[5pt] &= \bigl\{ g\in \overline{\mathcal{G}}\colon \delta_g(x_1,x_2) \geq 0,\ 0 \geq \delta^2_g(x_1,x_2) \geq (1+\Delta) g^{(2)}_{\max}, \ \underline{x}\leq x_1\lt x_2\leq\overline{x} \bigr\}, \\[5pt] \textit{inc}(\Delta)\textit{-cx} &= \textit{inc}(\Delta) \cap \textit{cx}\\[5pt] &= \bigl\{ g\in \overline{\mathcal{G}}\colon 0\leq \delta_g(x_1,x_2) \leq (1+\Delta) g^{(1)}_{\min},\, \delta^2_g(x_1,x_2)\geq 0 ,\ \underline{x}\leq x_1\lt x_2\leq\overline{x} \bigr\},\\[5pt] \textit{inc}(\Delta)\textit{-cv} &= \textit{inc}(\Delta) \cap \textit{cv}\\[5pt] &= \bigl\{ g\in \overline{\mathcal{G}}\colon 0\leq \delta_g(x_1,x_2) \leq (1+\Delta) g^{(1)}_{\min},\, \delta^2_g(x_1,x_2) \leq 0, \ \underline{x}\leq x_1\lt x_2\leq\overline{x}\bigr\}. \end{align*}
Certainly
$\textit{inc-cx}(\Delta)$
(
$\textit{inc-cv}(\Delta)$
) and
$\textit{inc}(\Delta)$
$\textit{-cx}$
(
$\textit{inc}(\Delta)$
$\textit{-cv}$
) are different, though their intersection is non-empty. In particular, for
$\Delta\in(0,\infty)$
,
\begin{align*} \begin{array}{r} \textit{iquadcx} = \textit{inc-cx}(0) \subset \textit{inc-cx}(\Delta) \subset \textit{inc-cx}(\infty)\\[5pt] \textit{ilin} = \textit{inc(0)-cx} \subset \textit{inc}(\Delta)\textit{-cx} \subset \textit{inc}(\infty)\textit{-cx} \end{array} = \textit{icx},\\[5pt] \begin{array}{r} \textit{iquadcv} = \textit{inc-cv}(0) \subset \textit{inc-cv}(\Delta) \subset \textit{inc-cv}(\infty)\\[5pt] \textit{ilin}= \textit{inc(0)-cv} \subset \textit{inc}(\Delta)\textit{-cv} \subset \textit{inc}(\infty)\textit{-cv} \end{array} = \textit{icv}, \end{align*}
where
$\textit{iquadcx}$
and
$\textit{iquadcv}$
are the sets of increasing functions that are quadratically convex and concave, respectively. Theorem 4 below provides the distributional characterizations for the orders associated with the above function classes.
Theorem 4. Two random variables
$X_1$
and
$X_2$
on support
$\mathcal{X}$
satisfy
$\mathbb{E}[g(X_1)]\leq \mathbb{E}[g(X_2)]$
for all
-
(i) (increasing and
$\Delta$
-convex (concave) function)-
(i-a)
$g\in \textit{inc-cx}(\Delta)$
if and only if
$\mathbb{E}[X_1]\leq \mathbb{E}[X_2]$
and
\begin{align} v_{O}( X_1 \leq_{\textit{cx}} X_2) \leq \dfrac{1}{2\Delta}(\mathbb{E}[(X_2-\underline{x})^2] - \mathbb{E}[(X_1-\underline{x})^2]), \end{align}
-
(i-b)
$g\in \textit{inc-cv}(\Delta)$
if and only if
$\mathbb{E}[X_1]\leq \mathbb{E}[X_2]$
and
\begin{align*} v_{O}( X_1 \leq_{\textit{cv}} X_2) \leq \dfrac{1}{2\Delta}(\mathbb{E}[(\overline{x}-X_1)^2] - \mathbb{E}[(\overline{x}-X_2)^2]); \end{align*}
-
-
(ii) (
$\Delta$
-increasing and convex (concave) function)-
(ii-a)
$g\in \textit{inc}(\Delta)$
$\textit{-cx}$
if and only if
\begin{align*} \int_x^{\overline{x}} \bigl(\bar F_{X_1}(\xi) - \bar F_{X_2}(\xi)\bigr)\,\mathrm{d} \xi \leq \dfrac{1}{\Delta} \bigl(\mathbb{E}[X_2] - \mathbb{E}[X_1]\bigr) \quad \textit{{for all} $ x \in \mathcal{X}$,} \end{align*}
-
(ii-b)
$g\in \textit{inc}(\Delta)$
$\textit{-cv}$
if and only if
\begin{align*} \int_{\underline{x}}^x \bigl(\bar F_{X_1}(\xi) - \bar F_{X_2}(\xi)\bigr)\,\mathrm{d} \xi \leq \dfrac{1}{\Delta} \bigl(\mathbb{E}[X_2] - \mathbb{E}[X_1]\bigr) \quad\textit{{for all} $ x \in \mathcal{X}$.} \end{align*}
-
Proof of Theorem
4
. Part (i-a) follows from the proof of Theorem 3(i) by imposing the condition
$\mathbb{E}[X_1]\leq \mathbb{E}[X_2]$
and using the fact that
is increasing. Part (i-b) can be derived similarly, and it is shown for twice-differentiable g by Tzeng et al. [Reference Tzeng, Huang and Shih24]. We remark that the result can be generalized when the supports of
$X_1$
and
$X_2$
are different in view of the relations
and
where
$\underline{x}_i$
and
$\overline{x}_i$
, respectively, are the minimum and maximum realizations of
$X_i,i=1,2$
.
To see part (ii-a), we note from (10) that
$ \textit{inc}(\Delta)$
$\textit{-cx}=\textit{inc}(\Delta) \cap \textit{cx} = \textit{inc}^\pm(0,\Delta)$
. The relation in (16) can be derived from Theorem 2, which also implies
$\mathbb{E}[X_2] \geq \mathbb{E}[X_1]$
. Part (ii-b) follows from an argument similar to that of part (ii-a).
The
$\textit{inc-cv}(\Delta)$
functions lead to the ‘almost second-degree stochastic dominance’ (
$\epsilon$
-ASSD) introduced in [Reference Leshno and Levy7] and further discussed in [Reference Levy, Leshno and Leibovitch10] with the relation
$\Delta=1/\epsilon-2$
.
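The condition in Theorem 4(ii-a) can be checked numerically by computing its left-hand side for every $x$ and backing out the largest admissible $\Delta$. A sketch with a pair of our own choosing ($X_1$ uniform on $[0,1]$; $X_2$ a $0.9/0.1$ mixture of that uniform and a point mass at $0.9$, so $\mathbb{E}[X_2]-\mathbb{E}[X_1]=0.04$):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200_001)
dx = x[1] - x[0]

def tail_int(y):
    """T(x) = int_x^1 y(xi) dxi via right-to-left trapezoidal sums."""
    inc = 0.5 * (y[1:] + y[:-1]) * dx
    return np.concatenate([np.cumsum(inc[::-1])[::-1], [0.0]])

# X1 ~ Uniform(0, 1); X2 = 0.9*Uniform(0, 1) + 0.1*(atom at 0.9).
sf1 = 1.0 - x
sf2 = 0.9 * (1.0 - x) + 0.1 * (x < 0.9)

mean1, mean2 = tail_int(sf1)[0], tail_int(sf2)[0]   # 0.5 and 0.54

# Left-hand side of the condition in Theorem 4(ii-a), as a function of x:
lhs = tail_int(sf1 - sf2)          # int_x^1 (bar F_{X1} - bar F_{X2})
sup_lhs = float(lhs.max())         # attained at x = 0.9

# Largest Delta for which the inc(Delta)-cx comparison holds:
delta_max = (mean2 - mean1) / sup_lhs
print(sup_lhs, delta_max)          # ~0.0005 and ~80, up to grid error
```

The usual stochastic order fails on $(0.9, 1)$ for this pair, yet the $\textit{inc}(\Delta)\textit{-cx}$ comparison holds for every $\Delta \leq 80$.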
4.2. Other integral orders
There are other ways of defining integral stochastic orders and parameterizing the orders, depending on the applications. For example, Tsetlin et al. [Reference Tsetlin, Winkler, Huang and Tzeng23] consider restrictions on both the first-order and the second-order properties by considering the following function class:
They name the corresponding order as the ‘generalized almost second-degree stochastic dominance’ (GASSD). They show that
$X_1 \leq_{\rm GASSD} X_2$
if and only if
$\mathbb{E}[(\overline{x} -X_1)^2]\geq \mathbb{E}[(\overline{x}-X_2)^2]$
and
where
This characterization can be derived by modifying the proof of Theorem 3(ii). Specifically, we recognize that the function class is closed under scaling, i.e.
$g\in \textit{inc}(\Delta_1)$
$\textit{-cv}(\Delta_2)$
implies
$\alpha g \in \textit{inc}(\Delta_1)$
$\textit{-cv}(\Delta_2)$ for any $\alpha>0$
. Moreover, any increasing concave function with bounded first- and second-order deviations can be approximated by twice-differentiable functions. For twice-differentiable
$g\in \textit{inc}(\Delta_1)$
$\textit{-cv}(\Delta_2)$
with appropriate scaling such that
$g^{(1)}_{\max}-g^{(1)}_{\min}=1$
, we have
implying
Moreover,
One may similarly define the
$\textit{inc}(\Delta_1)$
$\textit{-cx}(\Delta_2)$
class, and obtain, by modifying the proof of Theorem 3(i), the characterization that
$\mathbb{E}[(X_1-\underline{x})^2]\leq \mathbb{E}[(X_2-\underline{x})^2]$
and
where
As we can see, the distributional characterizations for
$\textit{inc}(\Delta_1)$
$\textit{-cv}(\Delta_2)$
and
$\textit{inc}(\Delta_1)$
$\textit{-cx}(\Delta_2)$
, involving maximization over a functional space, may not be easy to verify in general.
A similar analysis can be used to characterize the ‘almost risk-averse stochastic dominance’ (ARSD) defined by Luo and Tan [Reference Luo and Tan14]. They consider the following subclass of increasing concave (twice-differentiable) functions parameterized by
$\kappa \geq 0$
:
and obtain the characterization of
$\mathbb{E}[X_1] \leq \mathbb{E}[X_2]$
and
In a related discussion, Huang et al. [Reference Huang, Tzeng and Zhao5] define the ‘fractional degree stochastic dominance’, denoted by
$(1+c)$
-SD, for the following class of functions:
They reach the characterization
where
\begin{align*} h_c(\xi) = \begin{cases} {\mathrm{e}}^{(1/c-1)\xi}, & 0 \lt c\lt 1,\\[5pt] \xi, & c = 1. \end{cases} \end{align*}
They show that
$\epsilon$
-AFSD implies
$(1+c)$
-SD order for appropriately chosen c (as a function of
$\epsilon$
). Thus this order is more general than
$\epsilon$
-AFSD.
5. Parametric higher-order functions
In this section we generalize the discussion to higher orders. We refer the reader to Table 3 for the relationships between the orders analyzed in the previous sections and the higher orders introduced below. To analyze the higher orders, we consider the class of functions
$\overline{\mathcal{G}}^m$
with well-defined mth derivative over the support
$\mathcal{X}$
. Specifically, we have
with
$g^{(0)} = g$
. For any function
$h\colon \mathcal{X} \rightarrow \mathbb{R}$
, define
\begin{align*} \begin{aligned} h_{\min} &= \inf\{ h(\xi )\colon \xi \in\mathcal{X}\}, & h_{\max} &= \sup\{ h(\xi )\colon \xi \in\mathcal{X}\}, \\[5pt] h_{\min^+}(x) &= \inf\{ h(\xi)\colon \xi \geq x\}, & h_{\max^+}(x) &= \sup\{ h(\xi)\colon \xi \geq x\},\\[5pt] h_{\min^-}(x) &= \inf\{ h(\xi )\colon \xi \leq x\}, & h_{\max^-}(x) &= \sup\{ h(\xi )\colon \xi \leq x\}.\end{aligned} \end{align*}
In our discussion below, h may be the mth-order local deviation of some function g of interest.
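For a function h sampled on a grid over the support, the one-sided extrema above are running minima and maxima and can be computed in a single pass; a minimal sketch (the test function h is arbitrary and of our own choosing):

```python
import numpy as np

# A function h sampled on a grid over the support [0, 1]; h is an
# arbitrary test function used only to exercise the definitions.
x = np.linspace(0.0, 1.0, 1001)
h = np.cos(6.0 * x) + x

h_min_left  = np.minimum.accumulate(h)              # h_{min^-}(x): inf over xi <= x
h_max_left  = np.maximum.accumulate(h)              # h_{max^-}(x): sup over xi <= x
h_min_right = np.minimum.accumulate(h[::-1])[::-1]  # h_{min^+}(x): inf over xi >= x
h_max_right = np.maximum.accumulate(h[::-1])[::-1]  # h_{max^+}(x): sup over xi >= x

# The endpoints of the running versions recover the global extrema.
print(h_min_left[-1] == h.min(), h_max_right[0] == h.max())  # True True
```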
Table 3. Relationship to higher-order functions.
5.1. Parametric m-convex and m-concave functions
The m-convex and m-concave functions, respectively, are defined by (e.g. [Reference Shaked and Shanthikumar21])
\begin{align*} \textit{m-cx} &= \bigl\{g\in\overline{\mathcal{G}}^{m} \colon g^{(m)}(x) \geq 0,\, x\in\mathcal{X}\bigr\} , \\[5pt] \textit{m-cv} &= \bigl\{g\in\overline{\mathcal{G}}^{m} \colon (\!-\!1)^{m-1} g^{(m)}(x) \geq 0, \, x\in\mathcal{X}\bigr\}. \end{align*}
A random variable
$X_1$
is smaller than another random variable
$X_2$
in the m-convex (m-concave) order, i.e.
$X_1 \leq_{\textit{m-cx}}\ (\leq_{\textit{m-cv}})\ X_2$
, if and only if
$\mathbb{E}[g(X_1)] \leq \mathbb{E}[g(X_2)]$
for all
$g \in \textit{m-cx}$
(
$\textit{m-cv}$
). An equivalent condition is that
$\mathbb{E}[X_1^n] = \mathbb{E}[X_2^n],n=1,2,\ldots, m-1$
, and
where
with
It is straightforward to verify that
We can define the restricted function classes as
\begin{align*} \textit{m-cx}(\Delta) &= \bigl\{g \in \overline{\mathcal{G}}^m\colon 0 \leq g^{(m)}(x) \leq (1+\Delta) g_{\min}^{(m)}, x\in\mathcal{X}\bigr\} , \\[5pt] \textit{m-cv}(\Delta) &= \bigl\{g \in \overline{\mathcal{G}}^m\colon g= (\!-\!1)^{m-1}\hat g,\, 0 \leq \hat g^{(m)}(x) \leq (1+\Delta) \hat g_{\min}^{(m)},\, x\in\mathcal{X}\bigr\}. \end{align*}
For an odd m,
$\textit{m-cx}(\Delta)=\textit{m-cv}(\Delta)$
, e.g. $\textit{1-cx}(\Delta)=\textit{1-cv}(\Delta)=\textit{inc}(\Delta)$
. For an even m,
$g \in \textit{m-cx}(\Delta)$
is equivalent to
$-g \in \textit{m-cv}(\Delta)$
, e.g.
$g \in \textit{2-cx}(\Delta)=\textit{cx}(\Delta)$
, if and only if
$-g \in \textit{2-cv}(\Delta)=\textit{cv}(\Delta)$
.
The violations to the
$\textit{m-cx}$
and
$\textit{m-cv}$
orders, respectively, are defined as
\begin{align*} v(x;\ X_1 \leq_{\textit{m-cx}} X_2)&= \bigl(\bar F^{(m-1)}_{X_1}(x) - \bar F^{(m-1)}_{X_2}(x)\bigr)^+ = \bigl((\!-\!1)^m \bigl( F^{(m-1)}_{X_1}(x) - F^{(m-1)}_{X_2}(x)\bigr)\bigr)^+ , \\[5pt] v(x;\ X_1 \leq_{\textit{m-cv}} X_2) &= \bigl(F^{(m-1)}_{X_2}(x) - F^{(m-1)}_{X_1}(x)\bigr)^+ . \end{align*}
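These violation measures are straightforward to evaluate numerically, assuming the iterated tail-integral convention $\bar F^{(k)}_X(x) = \int_x^{\overline{x}} \bar F^{(k-1)}_X(\xi)\,\mathrm{d}\xi$ used in the proofs below; the helper names and the test distributions in this sketch are our own:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 100_001)
dx = x[1] - x[0]

def iterated_sf(sf, m):
    """bar F^{(m-1)}: the (m-1)-fold tail integral of the survival function."""
    g = np.asarray(sf, dtype=float)
    for _ in range(m - 1):
        inc = 0.5 * (g[1:] + g[:-1]) * dx          # trapezoidal cells
        g = np.concatenate([np.cumsum(inc[::-1])[::-1], [0.0]])
    return g

def violation_m_cx(sf1, sf2, m):
    """Pointwise violation v(x; X1 <=_{m-cx} X2) on the grid."""
    return np.maximum(iterated_sf(sf1, m) - iterated_sf(sf2, m), 0.0)

# Example with m = 2: the two-point law on {0.25, 0.75} is dominated by
# Uniform(0, 1) in the convex order, so its violation vanishes; reversing
# the roles yields a strictly positive violation (max 1/32, at x = 0.25).
sf_u = 1.0 - x
sf_2pt = np.where(x < 0.25, 1.0, np.where(x < 0.75, 0.5, 0.0))
v_fwd = violation_m_cx(sf_2pt, sf_u, 2)
v_rev = violation_m_cx(sf_u, sf_2pt, 2)
print(v_fwd.max(), v_rev.max())  # ~0 and ~1/32, up to grid error
```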
With these measures, we characterize the stochastic orders associated with
$\textit{m-cx}(\Delta)$
and
$\textit{m-cv}(\Delta)$
functions in the next theorem.
Theorem 5. (Higher-order functions.) Two random variables
$X_1$
and
$X_2$
with support
$\mathcal{X}$
satisfy
$\mathbb{E}[g(X_1)] \leq \mathbb{E}[g(X_2)]\,$
for all
-
(i)
$g \in \textit{m-cx}(\Delta)$
if and only if
$\mathbb{E}[X_1^k]=\mathbb{E}[X_2^k],\quad k=1,2,\ldots, m-1,$
and
\begin{align} v_{O}( X_1 \leq_{\textit{m-cx}} X_2)=\int_{x\in\mathcal{X}}\bigl(\bar F_{X_1}^{(m-1)}(x) - \bar F_{X_2}^{(m-1)}(x)\bigr)^+\,\mathrm{d} x \leq \dfrac{1}{\Delta m!}(\mathbb{E}[X_2^m] - \mathbb{E}[X_1^m]); \end{align}
-
(ii)
$g \in \textit{m-cv}(\Delta)$
if and only if
$\mathbb{E}[X_1^k]=\mathbb{E}[X_2^k],\quad k=1,2,\ldots, m-1,$
and
\begin{align} v_{O}( X_1 \leq_{\textit{m-cv}} X_2) = \int_{x\in\mathcal{X}}\bigl(F^{(m-1)}_{X_2}(x) - F^{(m-1)}_{X_1}(x)\bigr)^+ \,\mathrm{d} x \leq \dfrac{(\!-\!1)^{m}}{\Delta m!}(\mathbb{E}[X_1^m] - \mathbb{E}[X_2^m]). \end{align}
Proof of Theorem 5. We prove part (i), and the proof of part (ii) follows similarly. Take an integer n and an n-times differentiable function g. We have
\begin{align*} \mathbb{E}[g(X)] &= a_0 - \int_{x\in \mathcal{X}} \int_{\underline{x}}^x g^{(1)} (\xi) \,\mathrm{d} \xi \,\mathrm{d} \bar F_X(x)\\[5pt] & = a_0 + \int_{\xi\in\mathcal{X}} g^{(1)}(\xi) \bar F_X(\xi) \,\mathrm{d} \xi \\[5pt] &= a_0 + a_1\bigl(\mathbb{E}[X] - \underline{x}\bigr) + \int_{\xi \in\mathcal{X}} \bar F_X^{(1)}(\xi) g^{(2)}(\xi) \,\mathrm{d} \xi \\[5pt] &= a_0 + a_1\bigl(\mathbb{E}[X] - \underline{x}\bigr) + \int_{x \in\mathcal{X}} \bar F_X^{(1)}(x)\biggl(a_2+\int_{\underline{x}}^x g^{(3)}(\xi) \,\mathrm{d} \xi \biggr) \,\mathrm{d} x\\[5pt] &= a_0 + a_1\bigl(\mathbb{E}[X] - \underline{x}\bigr) + \dfrac{a_2}{2}\mathbb{E}[(X-\underline{x})^2] + \int_{x\in\mathcal{X}} \bar F_X^{(2)}(x) g^{(3)}(x) \,\mathrm{d} x\\[5pt] &= \cdots \\[5pt] &= \sum_{k=0}^{n} \dfrac{a_k}{k!} \mathbb{E}[(X-\underline{x})^k] + \int_{x \in \mathcal{X}} \bar F_X^{(n)}(x) g^{(n+1)}(x) \,\mathrm{d} x, \end{align*}
where
$a_k = g^{(k)}(\underline{x})$
for
$k = 1,2, \ldots, n$
. We deduce
\begin{align} & \mathbb{E}[g(X_2)] - \mathbb{E}[g(X_1)] \notag \\[5pt] &\quad = \sum_{k=0}^n \dfrac{a_k}{k!} \mathbb{E}[(X_2-\underline{x})^k - (X_1-\underline{x})^k ] + \int_{x \in \mathcal{X}} \bigl(\bar F_{X_2}^{(n)}(x) - \bar F_{X_1}^{(n)}(x) \bigr) g^{(n+1)}(x) \,\mathrm{d} x. \end{align}
For any
$g \in \textit{m-cx}(\Delta)$
, and two random variables
$X_1$
and
$X_2$
with
we can rewrite (19) as
\begin{align*} & \mathbb{E}[g(X_2)]-\mathbb{E}[g(X_1)]\\[5pt] &\quad = \int_{x \in \mathcal{X}}\bigl(\bar F_{X_2}^{(m-1)}(x) - \bar F_{X_1}^{(m-1)}(x) \bigr)^+ g^{(m)}(x) \,\mathrm{d} x - \int_{x \in \mathcal{X}}\bigl(\bar F_{X_1}^{(m-1)}(x) - \bar F_{X_2}^{(m-1)}(x) \bigr)^+ g^{(m)}(x) \,\mathrm{d} x \\[5pt] &\quad \geq g^{(m)}_{\min} \int_{x \in \mathcal{X}}\bigl(\bar F_{X_2}^{(m-1)}(x) - \bar F_{X_1}^{(m-1)}(x) \bigr)^+ \,\mathrm{d} x - (1+\Delta) g^{(m)}_{\min} \int_{x \in \mathcal{X}}\bigl(\bar F_{X_1}^{(m-1)}(x) - \bar F_{X_2}^{(m-1)}(x) \bigr)^+ \,\mathrm{d} x \\[5pt] &\quad = \dfrac{g^{(m)}_{\min}}{m!}(\mathbb{E}[(X_2-\underline{x})^{m}] - \mathbb{E}[(X_1-\underline{x})^{m}]) - g^{(m)}_{\min} \Delta v_O(X_1 \leq_{\textit{m-cx}} X_2). \end{align*}
Thus, when
$X_1$
and
$X_2$
satisfy (17), we must have
$\mathbb{E}[g(X_2)] \geq \mathbb{E}[g(X_1)]$
.
Now take
$X_1$
and
$X_2$
such that
$\mathbb{E}[g(X_1)] \leq \mathbb{E}[g(X_2)]$
for all
$g\in \textit{m-cx}(\Delta)$
. We first note that
$\mathbb{E}[X_1^k] = \mathbb{E}[X_2^k]$
for
$k=1,2,\ldots,m-1$
because
$g(x) = x^k$
and
$g(x) = - x^k$
both have
$g^{(m)}(x) =0$
and thus both belong to
$ \textit{m-cx}(\Delta)$
. Now take
and
It is easy to verify
$g \in \textit{m-cx}(\Delta)$
. We deduce from (19) that
\begin{align*} 0 &\leq \mathbb{E}[g(X_2)] - \mathbb{E}[g(X_1)]\\[5pt] & = \int_{x \in \mathcal{X}}\bigl(\bar F_{X_2}^{(m-1)}(x) - \bar F_{X_1}^{(m-1)}(x) \bigr) g^{(m)}(x) \,\mathrm{d} x \\[5pt] &= \int_{x \in \mathcal{X}}\bigl(\bar F_{X_2}^{(m-1)}(x) - \bar F_{X_1}^{(m-1)}(x) \bigr) \Bigl(1 + \Delta {\mathbb{I}}_{\{\bar F_{X_1}^{(m-1)}(x) > \bar F_{X_2}^{(m-1)}(x)\}}\Bigr) \,\mathrm{d} x \\[5pt] &= \dfrac{1}{m!}(\mathbb{E}[X_2^m] - \mathbb{E}[X_1^m] ) - \Delta \int_{x \in \mathcal{X}} \bigl(\bar F_{X_1}^{(m-1)}(x) - \bar F_{X_2}^{(m-1)}(x)\bigr)^+ \,\mathrm{d} x, \end{align*}
which gives (17).
Tsetlin et al. [Reference Tsetlin, Winkler, Huang and Tzeng23] name the ‘almost nth-degree risk’ (
$\epsilon$
-AnR) order corresponding to
$\textit{m-cv}(\Delta)$
functions. They derive (18) and
$F_{X_1}^{(k)}(\overline{x}) = F_{X_2}^{(k)}(\overline{x})$
for
$k=1,2,\ldots, m-1$
. The latter condition leads to equal kth moments as the random variables have the same upper support. Mao et al. [Reference Mao, Wang and Zhao15] identify several invariant properties of the order associated with
$\textit{m-cv}$
functions and view the
$\textit{m-cv}(\Delta)$
functions as an interpolation between the non-parametric function classes.
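To illustrate Theorem 5(i) with $m=3$, take $X_1$ uniform on $[0,1]$ and $X_2$ with atoms $\{0, 0.4, 1\}$ and probabilities $(1/12, 25/36, 2/9)$; this pair (our own construction) shares the first two moments while $\mathbb{E}[X_2^3] - \mathbb{E}[X_1^3] = 1/60$. A numerical sketch:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200_001)
dx = x[1] - x[0]

def tail_int(y):
    """T(x) = int_x^1 y(xi) dxi via right-to-left trapezoidal sums."""
    inc = 0.5 * (y[1:] + y[:-1]) * dx
    return np.concatenate([np.cumsum(inc[::-1])[::-1], [0.0]])

def trap(y):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * dx))

def moment(sf, k):
    """E[X^k] = k * int_0^1 x^{k-1} bar F(x) dx for support in [0, 1]."""
    return trap(k * x ** (k - 1) * sf)

# X1 ~ Uniform(0, 1); X2 has atoms {0, 0.4, 1} with probs (1/12, 25/36, 2/9).
sf1 = 1.0 - x
sf2 = np.where(x < 0.4, 11.0 / 12.0, np.where(x < 1.0, 2.0 / 9.0, 0.0))

# First two moments coincide (1/2 and 1/3); the third moments differ by 1/60.
assert abs(moment(sf1, 1) - moment(sf2, 1)) < 1e-4
assert abs(moment(sf1, 2) - moment(sf2, 2)) < 1e-4

# bar F^{(2)} is the twofold tail integral of the survival function.
F1_2, F2_2 = tail_int(tail_int(sf1)), tail_int(tail_int(sf2))

# Overall violation of the 3-convex order and the largest Delta in (17):
pos = np.maximum(F1_2 - F2_2, 0.0)
v_O = trap(pos)
delta_max = (moment(sf2, 3) - moment(sf1, 3)) / (6.0 * v_O)
print(v_O, delta_max)  # v_O = 1/18432 and delta_max = 51.2, up to grid error
```

The 3-convex order is violated on a small region near the lower end of the support, and the comparison under $\textit{3-cx}(\Delta)$ holds exactly for $\Delta \leq 51.2$.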
To extend the analysis to functions with asymmetric two-sided restrictions, we define
\begin{align*} & \textit{m-cx}^\pm(\Delta_+, \Delta_-) \\[5pt] &\quad = \bigl\{ g \in \overline{\mathcal{G}}^m\colon 0 \leq g^{(m)}(x) \leq \min\bigl\{(1+\Delta_+) g^{(m)}_{\min^+}(x), (1+\Delta_-) g^{(m)}_{\min^-}(x)\bigr\}, \, x \in \mathcal{X} \bigr\}, \nonumber\\[5pt] & \textit{m-cv}^\pm(\Delta_+, \Delta_-) \\[5pt] &\quad = \bigl\{ g \in \overline{\mathcal{G}}^m\colon \hat g = (\!-\!1)^{m-1} g, \, 0 \leq \hat g^{(m)}(x) \leq \min\bigl\{(1+\Delta_+) \hat g^{(m)}_{\min^+}(x), (1+\Delta_-) \hat g^{(m)}_{\min^-}(x)\bigr\}, \, x \in \mathcal{X} \bigr\}. \nonumber \end{align*}
We can characterize the distributional properties of orders associated with the
$(\Delta_+,\Delta_-)$
m-convex (concave) functions using an argument similar to that of Theorem 2.
Theorem 6. (Two-sided relaxation of m-convex and m-concave order.) Given
$\Delta_+>0$
and
$\Delta_->0$
, two random variables
$X_1$
and
$X_2$
defined on support
$\mathcal{X}$
satisfy
$\mathbb{E}[g(X_1)] \leq \mathbb{E}[g(X_2)]$
for all
-
(i)
$g \in \textit{m-cx}^\pm(\Delta_+,\Delta_-)$
if and only if $\mathbb{E}[X_1^k] = \mathbb{E}[X_2^k]$ for $k=1,2, \ldots, m-1$, and for any $x \in \mathcal{X}$,
\begin{align} & \dfrac{1}{1+\Delta_-}\biggl(\int_{\underline{x}}^x \bigl(\bar F^{(m-1)}_{X_2}(\xi) - \bar F^{(m-1)}_{X_1}(\xi)\bigr) \,\mathrm{d} \xi - (\Delta_-\wedge \Delta_+) v_{\leq}(x;\ X_1 \leq_{\textit{m-cx}} X_2) \biggr) \nonumber\\[5pt] &\quad + \dfrac{1}{1+\Delta_+}\biggl(\int_x^{\overline{x}} \bigl(\bar F^{(m-1)}_{X_2}(\xi) - \bar F^{(m-1)}_{X_1}(\xi)\bigr) \,\mathrm{d} \xi - (\Delta_-\wedge \Delta_+) v_{\geq}(x;\ X_1 \leq_{\textit{m-cx}} X_2) \biggr) \geq 0 ; \end{align}
-
(ii)
$g\in \textit{m-cv}^\pm(\Delta_+,\Delta_-)$
if and only if $\mathbb{E}[X_1^k] = \mathbb{E}[X_2^k]$ for $k=1,2, \ldots, m-1$, and for any $x \in \mathcal{X}$,
\begin{align} & \dfrac{1}{1+\Delta_-}\biggl(\int_{\underline{x}}^x \bigl( F^{(m-1)}_{X_1}(\xi) - F^{(m-1)}_{X_2}(\xi)\bigr) \,\mathrm{d} \xi - (\Delta_-\wedge \Delta_+) v_{\leq}(x;\ X_1 \leq_{\textit{m-cv}} X_2) \biggr) \nonumber\\[5pt] & \quad + \dfrac{1}{1+\Delta_+}\biggl(\int_x^{\overline{x}} \bigl( F^{(m-1)}_{X_1}(\xi) - F^{(m-1)}_{X_2}(\xi)\bigr) \,\mathrm{d} \xi - (\Delta_-\wedge \Delta_+) v_{\geq}(x;\ X_1 \leq_{\textit{m-cv}} X_2) \biggr) \geq 0 . \end{align}
With Theorem 6, we can easily deduce the violation conditions for functions with one-sided restrictions that lead to the
$\textit{m-cx}^+(\Delta)$
,
$\textit{m-cx}^-(\Delta)$
,
$\textit{m-cv}^+(\Delta)$
, and
$\textit{m-cv}^-(\Delta)$
functions. We note two other special cases (see Table 3).
-
• When
$\Delta_+ = 0$
for m-convex functions, we obtain the function class
$\textit{m-cx}^\pm(0,\Delta) = \textit{m-cx} (\Delta)\ \cap\ $
(m+1)
$\textit{-cx}$
. The characterization in (20) becomes
\begin{align*} \int_x^{\overline{x}} \bigl(\bar F^{(m-1)}_{X_1}(\xi) - \bar F^{(m-1)}_{X_2}(\xi) \bigr)\,\mathrm{d} \xi \leq \dfrac{1}{m! \Delta}\bigl( \mathbb{E}[X_2^m] - \mathbb{E}[X_1^m] \bigr), \quad x \in \mathcal{X}. \end{align*}
For $m=1$
, we obtain the
$\textit{inc}(\Delta)$
$\textit{-cx}$
order discussed in Theorem 4(ii-a).
-
• When
$\Delta_- = 0$
for m-concave functions, we obtain the function class
$\textit{m-cv}^\pm(\Delta,0) = \textit{m-cv} (\Delta)\ \cap\ $
(m+1)
$\textit{-cv}$
. The characterization in (21) becomes
\begin{align*} \int_{\underline{x}}^x \bigl( F^{(m-1)}_{X_2}(\xi) - F^{(m-1)}_{X_1}(\xi)\bigr) \,\mathrm{d} \xi \leq \dfrac{(\!-\!1)^{m-1}}{m! \Delta} \bigl(\mathbb{E}[X_2^m] - \mathbb{E}[X_1^m]\bigr), \quad x \in \mathcal{X}. \end{align*}
For $m=1$
, we obtain the
$\textit{inc}(\Delta)$
$\textit{-cv}$
order discussed in Theorem 4(ii-b).
5.2. Parametric monotone higher orders
The monotone higher-order function classes are defined as
and
A random variable
$X_1$
is smaller than another random variable
$X_2$
in the m-increasing convex (concave) order, i.e.
$X_1 \leq_{\textit{m-icx}} [\!\leq_{\textit{m-icv}}] X_2$
, if and only if
We define the subclasses of functions
\begin{align*} \textit{m-icx}(\Delta) &= \left\{g\in\overline{\mathcal{G}}^{m}\colon \begin{array}{l} g^{(k)}(\underline{x})=a_k \geq 0, k=1,2,\ldots, m-1; \\[5pt] 0\leq g^{(m)}(x)\leq (1+\Delta) g^{(m)}_{\min}, \ x \in\mathcal{X} \end{array}\right\} , \\[5pt] \textit{m-icv}(\Delta) &= \left\{ g\in\overline{\mathcal{G}}^{m}\colon \begin{array}{l} (\!-\!1)^{k-1} g^{(k)}(\overline{x})=(\!-\!1)^{k-1} b_k \geq 0, k=1,2,\ldots, m-1; \\[5pt] \hat g (x) = (\!-\!1)^{m-1} g(x),\\[5pt] 0 \leq \hat g^{(m)}(x) \leq (1+\Delta) \hat g^{(m)}_{\min}, \ x\in\mathcal{X} \end{array}\right\}. \end{align*}
Theorem 7. (Symmetric relaxation of
$\textit{m-icx}$
and
$\textit{m-icv}$
order.) Two random variables
$X_1$
and
$X_2$
on support
$\mathcal{X}$
satisfy
$\mathbb{E}[g(X_1)]\leq \mathbb{E}[g(X_2)]$
for all
-
(i)
$g \in \textit{m-icx}(\Delta)$
if and only if
$\mathbb{E}[(X_1-\underline{x})^k]\leq \mathbb{E}[(X_2-\underline{x})^k],\quad k=1,2,\ldots, m-1,$
and
\begin{align*} v_{O}( X_1 \leq_{\textit{m-cx}} X_2) &= \int_{x\in\mathcal{X}}\bigl(\bar F_{X_1}^{(m-1)}(x) - \bar F_{X_2}^{(m-1)}(x)\bigr)^+\,\mathrm{d} x \nonumber\\[5pt] &\leq \dfrac{1}{\Delta m!}(\mathbb{E}[(X_2-\underline{x})^m] - \mathbb{E}[(X_1-\underline{x})^m]); \end{align*}
-
(ii)
$g \in \textit{m-icv}(\Delta)$
if and only if
$\mathbb{E}[(\overline{x}-X_1)^k]\geq \mathbb{E}[(\overline{x}-X_2)^k],\quad k=1,2,\ldots, m-1,$
and
\begin{align*} v_{O}( X_1 \leq_{\textit{m-cv}} X_2) &= \int_{x\in\mathcal{X}}\bigl(F^{(m-1)}_{X_2}(x) - F^{(m-1)}_{X_1}(x)\bigr)^+ \,\mathrm{d} x \nonumber\\[5pt] &\leq \dfrac{1}{\Delta m!}(\mathbb{E}[(\overline{x}-X_1)^m] - \mathbb{E}[(\overline{x}-X_2)^m]). \end{align*}
Proof of Theorem
7
. The result follows with a slight modification of the proof of Theorem 5 by including the condition
$\mathbb{E}[(X_1-\underline{x})^k] \leq \mathbb{E}[(X_2-\underline{x})^k]$
in part (i) and
$\mathbb{E}[(\overline{x}-X_1)^k] \geq \mathbb{E}[(\overline{x}-X_2)^k]$
in part (ii) for
$k=1,2,\ldots, m-1$
.
The
$\textit{m-icv}(\Delta)$
function leads to the ‘almost Nth-degree stochastic dominance’ (ANSD) named by Tzeng et al. [Reference Tzeng, Huang and Shih24] and the ‘generalized almost nth-degree stochastic dominance’ (
$\epsilon$
-GAnSD) named by Tsetlin et al. [Reference Tsetlin, Winkler, Huang and Tzeng23]. Lu et al. [Reference Lu, Zhang, Xiao and Dhesi13] show the correspondence of this order to the higher-order
$\Omega$
ratio.
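As a numerical check of Theorem 7(i) with $m=2$, the sketch below (distributions our own: $X_1$ uniform on $[0,1]$, $X_2$ a $0.9/0.1$ mixture of that uniform and a point mass at $0.9$, so the means are ordered) verifies the violation bound at, say, $\Delta = 80$:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200_001)
dx = x[1] - x[0]

def tail_int(y):
    """T(x) = int_x^1 y(xi) dxi via right-to-left trapezoidal sums."""
    inc = 0.5 * (y[1:] + y[:-1]) * dx
    return np.concatenate([np.cumsum(inc[::-1])[::-1], [0.0]])

def trap(y):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * dx))

# X1 ~ Uniform(0, 1); X2 = 0.9*Uniform(0, 1) + 0.1*(atom at 0.9):
# E[X1] = 0.5 <= E[X2] = 0.54, as Theorem 7(i) requires.
sf1 = 1.0 - x
sf2 = 0.9 * (1.0 - x) + 0.1 * (x < 0.9)

sl1, sl2 = tail_int(sf1), tail_int(sf2)
mean1, mean2 = sl1[0], sl2[0]
m2_1, m2_2 = 2 * trap(x * sf1), 2 * trap(x * sf2)   # second moments (xlow = 0)

v_O = trap(np.maximum(sl1 - sl2, 0.0))   # overall violation of the cx order
delta = 80.0
print(mean1 <= mean2, v_O <= (m2_2 - m2_1) / (2 * delta))  # True True
```

The violation of the convex order here is tiny (of order $10^{-5}$), so the $\textit{2-icx}(\Delta)$ comparison holds comfortably at $\Delta = 80$ despite the ordered, unequal means.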
It is clear that the monotone orders have the same violation measure as the non-monotone orders. The main difference lies in the additional conditions on the ordered moments of the random variables. The moment conditions may or may not be needed for functions with one-sided restrictions, depending on whether the restriction is on the left or the right side. Specifically, for m-convex functions, we define the subclasses with one-sided restrictions as
\begin{align*} \textit{m-icx}^+(\Delta) &= \left\{g\in\overline{\mathcal{G}}^{m}\colon \begin{array}{l} g^{(k)}(\underline{x})=a_k \geq 0, k=1,2,\ldots, m-1;\\[5pt] 0 \leq g^{(m)}(x)\leq (1+\Delta) g^{(m)}_{\min^+}(x), x \in\mathcal{X} \end{array}\right\} ,\\[5pt] \textit{m-icx}^-(\Delta) &= \left\{g\in\overline{\mathcal{G}}^{m}\colon \begin{array}{l} g^{(k)}(\underline{x})=a_k \geq 0, k=1,2,\ldots, m-1;\\[5pt] 0 \leq g^{(m)}(x)\leq (1+\Delta) g^{(m)}_{\min^-}(x), x \in\mathcal{X} \end{array}\right\}. \end{align*}
We also have
$ \textit{m-icx}^\pm(\Delta_+,\Delta_-) = \textit{m-icx}^+(\Delta_+) \cap \textit{m-icx}^-(\Delta_-)$
.
Theorem 8. (Asymmetric relaxations of
$\textit{m-icx}$
order.) Two random variables
$X_1$
and
$X_2$
defined on support
$\mathcal{X}$
satisfy
$\mathbb{E}[g(X_1)] \leq \mathbb{E}[g(X_2)]$
for all
-
(i)
$g \in \textit{m-icx}^+(\Delta)$
for some
$\Delta\in[0,\infty)$
if and only if
\begin{align} v_{\geq}(x;\ X_1 \leq_{\textit{m-cx}} X_2) \leq \dfrac{1}{\Delta}\int_x^{\overline{x}} \bigl(\bar F_{X_2}^{(m-1)}(\xi) - \bar F^{(m-1)}_{X_1}(\xi) \bigr)\,\mathrm{d} \xi, \quad x \in \mathcal{X}; \end{align}
-
(ii)
$g \in \textit{m-icx}^-(\Delta)$
for some
$\Delta\in[0,\infty)$
if and only if
$\mathbb{E}[(X_1-\underline{x})^k] \leq \mathbb{E}[(X_2-\underline{x})^k]\quad\textit{for $k=1,2, \ldots, m-1$,}$
and
\begin{align} v_{\leq}(x;\ X_1 \leq_{\textit{m-cx}} X_2) \leq \dfrac{1}{\Delta}\int_{\underline{x}}^x \bigl(\bar F_{X_2}^{(m-1)}(\xi) - \bar F^{(m-1)}_{X_1}(\xi) \bigr) \,\mathrm{d} \xi, \quad x \in \mathcal{X}; \end{align}
-
(iii)
$g \in \textit{m-icx}^\pm(\Delta_+,\Delta_-)$
for some
$\Delta_+,\Delta_-\in[0,\infty)$
if and only if $\mathbb{E}[(X_1-\underline{x})^k] \leq \mathbb{E}[(X_2-\underline{x})^k]$ for $k=1,2, \ldots, m-1$, and for any $x \in \mathcal{X}$,
\begin{align} & \dfrac{1}{1+\Delta_-}\biggl(\int_{\underline{x}}^x \bigl(\bar F^{(m-1)}_{X_2}(\xi) - \bar F^{(m-1)}_{X_1}(\xi)\bigr) \,\mathrm{d} \xi - (\Delta_-\wedge \Delta_+) v_{\leq}(x;\ X_1 \leq_{\textit{m-cx}} X_2) \biggr) \nonumber\\[5pt] & \quad + \dfrac{1}{1+\Delta_+}\biggl(\int_x^{\overline{x}} \bigl(\bar F^{(m-1)}_{X_2}(\xi) - \bar F^{(m-1)}_{X_1}(\xi)\bigr) \,\mathrm{d} \xi - (\Delta_-\wedge \Delta_+) v_{\geq}(x;\ X_1 \leq_{\textit{m-cx}} X_2) \biggr) \geq 0 . \end{align}
Proof of Theorem
8
. To show part (i), we first note that the left-hand side of (22) is non-negative, implying
$\bar F^{(m)}_{X_1}(x) \leq \bar F^{(m)}_{X_2}(x)$
for all
$x \in \mathcal{X}$
. We deduce
$X_1 \leq_{(m+1)\textit{-icx}} X_2$
. It follows that
$\mathbb{E}[g(X_1)] \leq \mathbb{E}[g(X_2)]$
for any
$g\in $
(m+1)
$\textit{-icx}$
, and since
$g(x)=(x -\underline{x})^k \in $
(m+1)
$\textit{-icx}$
for
$k=1,2,\ldots, m$
, we must have
$\mathbb{E}[(X_1-\underline{x})^k] \leq \mathbb{E}[(X_2-\underline{x})^k]$
.
Since
$g^{(m)}$
has bounded variation, we derive, using (19) in the proof of Theorem 5,
\begin{align*} & \mathbb{E}[g(X_2)]-\mathbb{E}[g(X_1)]\\[6pt] &\quad = \sum_{k=0}^{m-1} \dfrac{a_k}{k!} \mathbb{E}[(X_2-\underline{x})^k - (X_1-\underline{x})^k] + \int_{x \in \mathcal{X}}\bigl(\bar F_{X_2}^{(m-1)}(x) - \bar F_{X_1}^{(m-1)}(x) \bigr) g^{(m)}(x) \,\mathrm{d} x \\[6pt] &\quad \geq \int_{x \in \mathcal{X}} g^{(m)}_{\min^+} (x) \Bigl(\bigl(\bar F_{X_2}^{(m-1)}(x) - \bar F_{X_1}^{(m-1)}(x) \bigr) -\Delta \bigl(\bar F_{X_1}^{(m-1)}(x) - \bar F_{X_2}^{(m-1)}(x) \bigr)^+\Bigr)\,\mathrm{d} x \\[6pt] &\quad = \int_{x \in \mathcal{X}} \biggl(g^{(m)}_{\min^+}(\underline{x}) + \int_{\underline{x}}^x \,\mathrm{d} g^{(m)}_{\min^+}(\xi) \biggr) \\[6pt] &\quad \quad \times \Bigl(\bigl(\bar F_{X_2}^{(m-1)}(x) - \bar F_{X_1}^{(m-1)}(x) \bigr) -\Delta \bigl(\bar F_{X_1}^{(m-1)}(x) - \bar F_{X_2}^{(m-1)}(x) \bigr)^+\Bigr)\,\mathrm{d} x \\[6pt] &\quad = g^{(m)}_{\min^+}(\underline{x}) \int_{x \in \mathcal{X}} \Bigl(\bigl(\bar F_{X_2}^{(m-1)}(x) - \bar F_{X_1}^{(m-1)}(x) \bigr) - \Delta \bigl(\bar F_{X_1}^{(m-1)}(x) - \bar F_{X_2}^{(m-1)}(x) \bigr)^+\Bigr) \,\mathrm{d} x \\[6pt] &\quad \quad + \int_{\xi \in \mathcal{X}} \int_{\xi}^{\overline{x}} \Bigl(\bigl(\bar F_{X_2}^{(m-1)}(x) - \bar F_{X_1}^{(m-1)}(x) \bigr) -\Delta \bigl(\bar F_{X_1}^{(m-1)}(x) - \bar F_{X_2}^{(m-1)}(x) \bigr)^+\Bigr) \,\mathrm{d} x \,\mathrm{d} g^{(m)}_{\min^+} (\xi) . \end{align*}
Because
$g^{(m)}_{\min^+}$
is increasing and
$g_{\min^+}^{(m)}(\underline{x}) \geq 0$
, the right-hand side is non-negative when (22) holds.
Next we show that
$\mathbb{E}[g(X_1)] \leq \mathbb{E}[g(X_2)]$
for all
$g \in \textit{m-icx}^+(\Delta)$
implies (22). Take
$g_y$
such that
\begin{align*} g_y^{(m)}(x) = \Bigl(1+\Delta\, {\mathbb{I}}_{\bigl\{\bar F_{X_1}^{(m-1)}(x) > \bar F_{X_2}^{(m-1)}(x)\bigr\}}\Bigr) {\mathbb{I}}_{\{x \geq y\}} \end{align*}
and
$g_y^{(k)}(\underline{x}) = 0$
for
$k=0,1,\ldots, m-1$
. It is clear that for
$x \geq y$
,
$g^{(m)}_{y\min^+} (x) \geq 1$
and thus
$g_y^{(m)}(x) \leq 1+\Delta \leq (1+\Delta)\, g^{(m)}_{y\min^+}(x)$
. For
$x < y$
,
$g^{(m)}_{y\min^+} (x)=g_y^{(m)}(x) = 0$
. Thus
$g_y \in \textit{m-icx}^+(\Delta)$
. Using (19) in the proof of Theorem 5 with
$a_k = g^{(k)}_y(\underline{x}) =0$
for
$k=0,1,\ldots, m-1$
, we derive
\begin{align*}& \mathbb{E}[g_y(X_2)] - \mathbb{E}[g_y(X_1)]\\[5pt] &\quad = \int_{x\in \mathcal{X}}\bigl(\bar F_{X_2}^{(m-1)}(x) - \bar F_{X_1}^{(m-1)}(x)\bigr) g_y^{(m)}(x) \,\mathrm{d} x \\[5pt] &\quad = \int_y^{\overline{x}} \bigl(\bar F_{X_2}^{(m-1)}(x) - \bar F_{X_1}^{(m-1)}(x)\bigr)\,\mathrm{d} x + \Delta \int_y^{\overline{x}}\bigl(\bar F^{(m-1)}_{X_2}(x) - \bar F^{(m-1)}_{X_1}(x)\bigr) {\mathbb{I}}_{\bigl\{\bar F_{X_1}^{(m-1)}(x)> \bar F_{X_2}^{(m-1)}(x)\bigr\}} \,\mathrm{d} x\\[5pt] &\quad = \int_y^{\overline{x}}\bigl(\bar F_{X_2}^{(m-1)}(x) - \bar F^{(m-1)}_{X_1}(x)\bigr) \,\mathrm{d} x - \Delta\int_y^{\overline{x}} \bigl(\bar F^{(m-1)}_{X_1}(x) - \bar F^{(m-1)}_{X_2}(x)\bigr)^+ \,\mathrm{d} x. \end{align*}
We deduce that
$\mathbb{E}[g_y(X_1)] \leq \mathbb{E}[g_y(X_2)]$
implies (22). We obtain part (i).
To see part (ii), we note that the left-hand side of (23) is non-negative, which implies (by setting
$x=\overline{x}$
)
$\mathbb{E}[(X_1-\underline{x})^m] \leq \mathbb{E}[(X_2-\underline{x})^m]$
. Using an argument similar to that for part (i), we can show that (23), together with the condition
$\mathbb{E}[(X_1-\underline{x})^k] \leq \mathbb{E}[(X_2-\underline{x})^k]$
for
$k=1,2, \ldots, m-1$
, implies
$\mathbb{E}[g(X_1)]\leq\mathbb{E}[g(X_2)]$
for all
$g \in \textit{m-icx}^-(\Delta)$
. To show the reverse relation, we note that
$g(x)=(x-\underline{x})^k \in \textit{m-icx}^-(\Delta)$
for any
$k =1,2,\ldots, m-1$
, leading to
$\mathbb{E}[(X_1-\underline{x})^k] \leq \mathbb{E}[(X_2-\underline{x})^k]$
for
$k=1,2,\ldots, m-1$
. Then we take a test function
$g_y$
constructed analogously to that in part (i) and apply a similar argument.
Next we prove part (iii). For the ‘if’ part, we show that (24) and the condition
$\mathbb{E}[(X_1-\underline{x})^k] \leq \mathbb{E}[(X_2-\underline{x})^k]$
for
$k=1,2, \ldots, m-1,$
imply
$\mathbb{E}[g(X_1)] \leq \mathbb{E}[g(X_2)]$
for all
$g \in \textit{m-icx}^\pm(\Delta_+,\Delta_-)$
. We focus on the case where
$\Delta_+ \leq \Delta_-$
. Then (24) becomes
\begin{align*} \Xi(x)&\equiv \dfrac{1}{1+\Delta_-}\int_{\underline{x}}^x \bigl(\bar F^{(m-1)}_{X_2}(\xi) - \bar F^{(m-1)}_{X_1}(\xi)\bigr)^+ \,\mathrm{d} \xi - \dfrac{1+\Delta_+}{1+\Delta_-}\int_{\underline{x}}^x \bigl(\bar F^{(m-1)}_{X_1}(\xi) - \bar F^{(m-1)}_{X_2}(\xi)\bigr)^+ \,\mathrm{d} \xi \nonumber\\[5pt] & \quad + \dfrac{1}{1+\Delta_+}\int_x^{\overline{x}} \bigl(\bar F^{(m-1)}_{X_2}(\xi) - \bar F^{(m-1)}_{X_1}(\xi)\bigr)^+ \,\mathrm{d} \xi - \int_x^{\overline{x}} \bigl(\bar F^{(m-1)}_{X_1}(\xi) - \bar F^{(m-1)}_{X_2}(\xi)\bigr)^+ \,\mathrm{d} \xi \geq 0. \end{align*}
We derive
\begin{align*} & \mathbb{E}[g(X_2)] - \mathbb{E}[g(X_1)]\\[5pt] &\quad = \sum_{k=0}^{m-1} \dfrac{a_k}{k!}\bigl(\mathbb{E}[(X_2-\underline{x})^k] - \mathbb{E}[(X_1-\underline{x})^k]\bigr) + \int_{\underline{x}}^{\overline{x}}\bigl(\bar F^{(m-1)}_{X_2}(\xi) - \bar F^{(m-1)}_{X_1}(\xi)\bigr) g^{(m)}(\xi)\,\mathrm{d} \xi \\[5pt] &\quad \geq \int_{\underline{x}}^{\overline{x}}\bigl(\bar F^{(m-1)}_{X_2}(\xi) - \bar F^{(m-1)}_{X_1}(\xi)\bigr) g^{(m)}(\xi)\,\mathrm{d} \xi \\[5pt] &\quad \geq \int_{\xi\in\mathcal{X}} \Gamma (\xi)\rho(\xi) \,\mathrm{d} \xi\\[5pt] &\quad = g^{(m)}_{\max} \int_{x_0}^{x_{\max}} \Xi(x) \,\mathrm{d} \mu(x)\\[5pt] &\quad \geq 0, \end{align*}
where
$\Gamma$
and
$\rho$
are constructed as in the proof of Theorem 6,
$\mu(x) = \frac{g^{(m)}_{\max^-}(x) - g^{(m)}_0}{g^{(m)}_{\max}- g^{(m)}_0}\quad\text{for $ x_0 \leq x \leq x_{\max}$,}$
and
$x_{\max}$
and
$x_0$
are defined similarly to those in the proof of Theorem 6.
To prove the ‘only if’ part, we first note that
$g(x)=(x-\underline{x})^k \in \textit{m-icx}^\pm(\Delta_+,\Delta_-)$
and thus
$\mathbb{E}[(X_1-\underline{x})^k] \leq \mathbb{E}[(X_2-\underline{x})^k]$
for
$k=1,2,\ldots, m-1$
. Define
\begin{align*} g_y^{(m)}(x) &= \dfrac{\bigg(1+(\Delta_+\wedge\Delta_-){\mathbb{I}}_{\big\{\bar F^{(m-1)}_{X_1}(x) \geq \bar F^{(m-1)}_{X_2}(x)\big\}}\bigg){\mathbb{I}}_{\{x \leq y\}}}{1+\Delta_-}\\[5pt] & \quad + \dfrac{\bigg(1+(\Delta_+\wedge\Delta_-){\mathbb{I}}_{\big\{\bar F^{(m-1)}_{X_1}(x) \geq \bar F^{(m-1)}_{X_2}(x)\big\}}\bigg){\mathbb{I}}_{\{x \geq y\}}}{1+\Delta_+}, \end{align*}
and
$g_y^{(k)}(\underline{x}) = 0$
for
$k=0,1,\ldots, m-1$
. We can easily verify that
$g_y \in \textit{m-icx}^\pm(\Delta_+,\Delta_-)$
. We can derive (24) with the relation
$\mathbb{E}[g_y(X_2)] \geq \mathbb{E}[g_y(X_1)]$
.
By Theorem 8, the condition of ordered moments, that is,
$\mathbb{E}[(X_1-\underline{x})^k] \leq \mathbb{E}[(X_2-\underline{x})^k]$
for
$k=1,2,\ldots, m-1$
, is not needed for the parametric order associated with the right-side violation. This is because the violation measure in (22) is non-negative, and thus
$X_1\leq_{(m+1)\textit{-icx}} X_2$
, which implies the order of the moments.
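The quantities in Theorem 8 are straightforward to approximate numerically. As an illustration only (not part of the original argument), the sketch below computes the iterated survival functions $\bar F^{(m)}$ by repeated right-tail integration on a grid and checks a condition of the form (22); the function names are ours, and we read the right-side violation measure as $v_{\geq}(x) = \int_x^{\overline{x}} \bigl(\bar F^{(m-1)}_{X_1}(\xi) - \bar F^{(m-1)}_{X_2}(\xi)\bigr)^+ \,\mathrm{d}\xi$.

```python
import numpy as np

def iterated_survival(surv_vals, grid, m):
    """Iterate bar F^(k)(x) = int_x^{x_bar} bar F^(k-1)(xi) d xi, starting
    from bar F^(0) = the survival function sampled on the grid."""
    vals = np.asarray(surv_vals, dtype=float)
    for _ in range(m):
        incr = 0.5 * (vals[1:] + vals[:-1]) * np.diff(grid)  # trapezoidal pieces
        vals = np.concatenate(([0.0], np.cumsum(incr[::-1])))[::-1]  # right tails
    return vals

def check_condition_22(sf1, sf2, grid, m, delta):
    """Grid check of v_>=(x) <= (1/Delta) int_x^{x_bar} (bar F2^(m-1) - bar F1^(m-1)),
    where v_>= integrates the positive part of bar F1^(m-1) - bar F2^(m-1)
    (our reading of the right-side violation measure)."""
    f1 = iterated_survival(sf1(grid), grid, m - 1)
    f2 = iterated_survival(sf2(grid), grid, m - 1)
    diff = f2 - f1

    def right_tail(v):
        incr = 0.5 * (v[1:] + v[:-1]) * np.diff(grid)
        return np.concatenate(([0.0], np.cumsum(incr[::-1])))[::-1]

    violation = right_tail(np.maximum(-diff, 0.0))
    slack = right_tail(diff)
    return bool(np.all(violation <= slack / delta + 1e-9))
```

For instance, with $X_1$ uniform on $[0,1]$ and $X_2$ stochastically larger, the violation measure vanishes and the check passes for any $\Delta>0$; reversing the roles fails it.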
In the special case of
$\Delta_- = 0$
, we obtain the function class
$\textit{m-icx}^\pm(\Delta,0)=\textit{m-icx}(\Delta) \cap (m+1)\textit{-icx}$
. By (24), the distributional characterization of this order is
\begin{align*} \int_{\underline{x}}^x \bigl(\bar F^{(m-1)}_{X_2}(\xi) - \bar F^{(m-1)}_{X_1}(\xi)\bigr) \,\mathrm{d} \xi + \dfrac{1}{1+\Delta}\int_x^{\overline{x}} \bigl(\bar F^{(m-1)}_{X_2}(\xi) - \bar F^{(m-1)}_{X_1}(\xi)\bigr) \,\mathrm{d} \xi \geq 0, \quad x \in \mathcal{X}. \end{align*}
For
$m=1$
,
$1\textit{-icx}^\pm(\Delta,0) = 1\textit{-cx}^\pm(\Delta,0)$
, and we recover the
$\textit{inc}(\Delta)\textit{-cx}$
function discussed in Theorem 4(i-a). For higher orders, however,
$\textit{m-icx}^\pm(\Delta,0) \neq \textit{m-cx}^\pm(\Delta,0)$
in general due to the different conditions on the moments.
For m-concave functions, we define the subclasses with one-sided restrictions as
\begin{align*} \textit{m-icv}^-(\Delta) = \left\{ g\in\overline{\mathcal{G}}^{m}\colon \begin{array}{l} (\!-\!1)^{k-1} g^{(k)}(\overline{x})=(\!-\!1)^{k-1} b_k \geq 0, k=1,2,\ldots, m-1, \\[5pt] \hat g (x) = (\!-\!1)^{m-1} g(x),\\[5pt] 0 \leq \hat g^{(m)}_{\min^-}(x_1) \leq \hat g^{(m)}(x_2) \leq (1+\Delta) \hat g^{(m)}_{\min^-}(x_1), \underline{x}\leq x_1\lt x_2\leq \overline{x} \end{array}\right\},\\[5pt] \textit{m-icv}^+(\Delta) = \left\{ g\in\overline{\mathcal{G}}^{m}\colon \begin{array}{l} (\!-\!1)^{k-1} g^{(k)}(\overline{x})=(\!-\!1)^{k-1} b_k \geq 0, k=1,2,\ldots, m-1, \\[5pt] \hat g (x) = (\!-\!1)^{m-1} g(x),\\[5pt] 0 \leq \hat g^{(m)}_{\min^+}(x_2) \leq \hat g^{(m)}(x_1) \leq (1+\Delta) \hat g^{(m)}_{\min^+}(x_2), \underline{x}\leq x_1\lt x_2\leq \overline{x} \end{array}\right\}. \end{align*}
We also define
$ \textit{m-icv}^\pm(\Delta_+,\Delta_-) = \textit{m-icv}^+(\Delta_+) \cap \textit{m-icv}^-(\Delta_-)$
. We can obtain the following characterization using an argument similar to Theorem 8.
Theorem 9. (Asymmetric relaxation of
$\textit{m-icv}$
order.) Two random variables
$X_1$
and
$X_2$
defined on support
$\mathcal{X}$
satisfy
$\mathbb{E}[g(X_1)]\leq \mathbb{E}[g(X_2)]$
for all
-
(i)
$g \in \textit{m-icv}^+(\Delta)$
for some
$\Delta\in[0,\infty)$
if and only if
$\mathbb{E}[(\overline{x}-X_1)^k] \geq \mathbb{E}[(\overline{x}-X_2)^k]$
for
$k=1,2,\ldots, m-1$
, and
\begin{align*} v_{\geq}(x;\ X_1 \leq_{\textit{m-cv}} X_2) \leq \dfrac{1}{\Delta}\int_x^{\overline{x}} \bigl( F_{X_1}^{(m-1)}(\xi) - F^{(m-1)}_{X_2}(\xi) \bigr)\,\mathrm{d} \xi, \quad x \in \mathcal{X}; \end{align*}
-
(ii)
$g \in \textit{m-icv}^-(\Delta)$
for some
$\Delta\in[0,\infty)$
if and only if
\begin{align*} v_{\leq}(x;\ X_1 \leq_{\textit{m-cv}} X_2) \leq \dfrac{1}{\Delta}\int_{\underline{x}}^x \bigl( F_{X_1}^{(m-1)}(\xi) - F^{(m-1)}_{X_2}(\xi) \bigr) \,\mathrm{d} \xi, \quad x \in \mathcal{X}; \end{align*}
-
(iii)
$g \in \textit{m-icv}^\pm(\Delta_+,\Delta_-)$
for some
$\Delta_+, \Delta_- \in[0,\infty)$
if and only if
$\mathbb{E}[(\overline{x}-X_1)^k] \geq \mathbb{E}[(\overline{x}-X_2)^k]$
for
$k=1,2, \ldots, m-1$
, and, for any
$x \in \mathcal{X}$
, (25)
\begin{align} & \dfrac{1}{1+\Delta_-}\biggl(\int_{\underline{x}}^x \bigl(F^{(m-1)}_{X_2}(\xi) - F^{(m-1)}_{X_1}(\xi)\bigr) \,\mathrm{d} \xi - (\Delta_-\wedge \Delta_+) v_{\leq}(x;\ X_1 \leq_{\textit{m-cv}} X_2) \biggr) \nonumber\\[5pt] & \quad + \dfrac{1}{1+\Delta_+}\biggl(\int_x^{\overline{x}} \bigl( F^{(m-1)}_{X_2}(\xi) - F^{(m-1)}_{X_1}(\xi)\bigr) \,\mathrm{d} \xi - (\Delta_-\wedge \Delta_+) v_{\geq}(x;\ X_1 \leq_{\textit{m-cv}} X_2) \biggr) \geq 0 . \end{align}
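Analogously to the convex case, the conditions of Theorem 9 can be verified on a grid using iterated distribution functions $F^{(m)}$, built by repeated left-tail integration. The sketch below, again only an illustration with names of our own choosing, checks the part (ii) condition; we read the left-side violation measure as $v_{\leq}(x) = \int_{\underline{x}}^x \bigl(F^{(m-1)}_{X_2}(\xi) - F^{(m-1)}_{X_1}(\xi)\bigr)^+ \,\mathrm{d}\xi$.

```python
import numpy as np

def iterated_cdf(cdf_vals, grid, m):
    """Iterate F^(k)(x) = int_{x_lower}^x F^(k-1)(xi) d xi, starting from
    F^(0) = the distribution function sampled on the grid."""
    vals = np.asarray(cdf_vals, dtype=float)
    for _ in range(m):
        incr = 0.5 * (vals[1:] + vals[:-1]) * np.diff(grid)  # trapezoidal pieces
        vals = np.concatenate(([0.0], np.cumsum(incr)))  # left tails
    return vals

def check_theorem9_ii(cdf1, cdf2, grid, m, delta):
    """Grid check of v_<=(x) <= (1/Delta) int_{x_lower}^x (F1^(m-1) - F2^(m-1)),
    where v_<= integrates the positive part of F2^(m-1) - F1^(m-1)
    (our reading of the left-side violation measure)."""
    f1 = iterated_cdf(cdf1(grid), grid, m - 1)
    f2 = iterated_cdf(cdf2(grid), grid, m - 1)
    diff = f1 - f2

    def left_tail(v):
        incr = 0.5 * (v[1:] + v[:-1]) * np.diff(grid)
        return np.concatenate(([0.0], np.cumsum(incr)))

    violation = left_tail(np.maximum(-diff, 0.0))
    slack = left_tail(diff)
    return bool(np.all(violation <= slack / delta + 1e-9))
```

With $F_{X_1}(x)=x$ and $F_{X_2}(x)=x^2$ on $[0,1]$ (so $X_2$ first-order dominates $X_1$), the violation vanishes and the check passes; reversing the arguments fails it.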
6. Concluding remark
We present a unified approach to parameterizing functions to characterize parametric integral stochastic orders. The analysis of first-order (i.e. increasingness) and second-order (i.e. convexity and concavity) functional properties is generalized to higher-order ones to characterize various stochastic orders. The parameter used to restrict functional properties also quantifies the violation of the distributional properties of the associated non-parametric orders. Among the functions discussed,
$\textit{inc}(\Delta)$
and
$\textit{inc}^-(\Delta)$
are the most analyzed, with applications to utility functions in decision analysis. Closure properties and applications of many parametric stochastic orders are yet to be explored.
Although we compare random variables
$X_1$
and
$X_2$
with a common support, this assumption is not essential to our analysis. In particular, Theorems 4, 7, and 8 can be easily modified for random variables with different supports using the relations
$\underline{x} = \underline{x}_1 \wedge \underline{x}_2$
and
$\overline{x} = \overline{x}_1 \vee \overline{x}_2$
, where
$\underline{x}_i$
and
$\overline{x}_i$
, respectively, are the minimum and maximum realizations of
$X_i,i=1,2$
.
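Computationally, working with different supports amounts to evaluating each survival function as $1$ below its own minimum and $0$ above its own maximum, after which the comparisons can be carried out on one grid; a natural common support is $[\underline{x}_1 \wedge \underline{x}_2,\ \overline{x}_1 \vee \overline{x}_2]$. A minimal sketch (the function name is ours):

```python
import numpy as np

def extend_survival(sf, lo, hi):
    """Extend a survival function given on [lo, hi] to the real line:
    bar F(x) = 1 below the support and bar F(x) = 0 above it."""
    def extended(x):
        x = np.asarray(x, dtype=float)
        inside = sf(np.clip(x, lo, hi))  # evaluate sf only on its own support
        return np.where(x < lo, 1.0, np.where(x > hi, 0.0, inside))
    return extended
```

The extended functions can then be fed to any grid-based comparison over the enlarged common support.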
There are certainly other ways of defining integral stochastic orders as discussed in Section 4.2. For example, Azmoodeh and Hür [Reference Azmoodeh and Hür1] consider a form of
$\textit{icv}^-(\Delta)$
with a state-dependent parameter
$\Delta(x)$
,
$x\in\mathcal{X}$
. Light and Perlroth [Reference Light and Perlroth11] name the
$\alpha,[\underline{x},\overline{x}]$
-concave order by considering
$g \in \textit{inc}$
with
$(g(\overline{x}) - g(x))^{1/\alpha}$
being convex. One may also look into alternative ways of parameterizing other function classes (e.g. interpolation between linear and log-concave functions).
There can be ways of parameterizing an order that do not result in an integral form. For example, Stoyanov et al. [Reference Stoyanov, Rachev and Fabozzi22] propose a metric (distance) representation to measure the degree of dominance or violation of the usual stochastic orders and the increasing concave order. Lizyayev and Ruszczyński [Reference Lizyayev and Ruszczyński12] analyze almost ordered random variables in which one is the sum of the other and a random variable with bounded mean. Methods for systematically parameterizing non-integral orders are important directions for future research.
Funding information
There are no funding bodies to thank relating to the creation of this article.
Competing interests
There were no competing interests to declare which arose during the preparation or publication process of this article.


