On the dynamic residual measure of inaccuracy based on extropy in order statistics

In this paper, we introduce a novel way to quantify the remaining inaccuracy of order statistics by utilizing the concept of extropy. We explore various properties and characteristics of this new measure, extend the notion of inaccuracy for ordered random variables to a dynamic version, and demonstrate that this dynamic information measure determines the distribution function uniquely. Moreover, we investigate specific lifetime distributions by analyzing the residual inaccuracy of the first order statistic. A nonparametric kernel estimator of the proposed measure is suggested. Simulation results show that the kernel estimator with bandwidth selected by the cross-validation method has the best performance. Finally, an application of the proposed measure to model selection is provided.


Introduction
Consider a set of nonnegative continuous random variables X_1, X_2, ..., X_n. These random variables are independent and identically distributed (i.i.d.) and follow a distribution characterized by the cumulative distribution function (CDF) F_X(x), the probability density function (PDF) f_X(x), and the survival function (sf) F̄_X(x) = 1 − F_X(x). The support interval, denoted S_X, represents the range of values for which this distribution is defined. The order statistics (OS) arise from arranging the random variables X_i in nondecreasing order; we denote this arrangement by X_{1:n} ≤ X_{2:n} ≤ ... ≤ X_{n:n}. For additional information and more detailed explanations, please refer to references such as [2, 5]. OS have found applications in various fields, such as the strength of materials, statistical inference, reliability theory, goodness-of-fit tests, quality control, outlier detection, and the characterization of probability distributions. These statistics have been utilized in diverse problem domains to address a range of issues and provide valuable insights. For example, in reliability theory, OS are used for statistical modeling: the ith OS in a sample of size n represents the lifetime of an (n − i + 1)-out-of-n system.
Suppose that X and Y are two nonnegative random variables representing the times to failure of two systems, with PDFs f(x) and g(x), respectively. Let F(x) and G(x) be the failure distributions, and let F̄(x) and Ḡ(x) be the sfs of X and Y, respectively. Shannon's [31] measure of uncertainty associated with the random variable X and Kerridge's [14] measure of inaccuracy are given by:

H(X) = −∫_0^∞ f(x) log f(x) dx, (1)

and

H(X, Y) = −∫_0^∞ f(x) log g(x) dx, (2)

respectively, where "log" denotes the natural logarithm. In the case where g(x) = f(x), Equation (2) reduces to Equation (1).
Recently, many researchers have considered the importance of the inaccuracy measure in information theory, and several generalizations of this measure have been introduced. According to Kerridge [14], the measure of inaccuracy matters for several reasons. When an experimenter provides probabilities for different outcomes, the statement can lack precision in two ways: it may be vague due to insufficient information, or some of the provided probabilities may be incorrect. Statistical estimation and inference problems involve making statements that can be inaccurate in either or both of these ways. The communication theory of Shannon and Weaver [30] offers a framework for dealing with the vagueness aspect of inaccuracy, as demonstrated by authors such as Kullback and Leibler [16] and Lindley [19]; however, that theory has been limited in its ability to address inaccuracy in a broader sense. Kerridge [14] argues that the introduction of an inaccuracy measure removes this limitation. He also highlights the duality between information and entropy in communication theory, where uncertainty can be measured by the amount of knowledge needed to achieve certainty. Inaccuracy, therefore, can be seen as a measure of missing information. For more details, refer to [11-13].
The measures of information and inaccuracy are associated as H(X, Y) = H(X) + H(X|Y), where H(X|Y) represents the Kullback [15] relative information measure of X about Y, defined as:

H(X|Y) = ∫_0^∞ f(x) log( f(x)/g(x) ) dx. (3)

In the fields of reliability, life testing, and survival analysis, it is important to consider the current age of the system under study. Therefore, when determining the remaining uncertainty of a system that has survived up to a specific time point, the measures described in Equations (1) and (2) are not appropriate. Ebrahimi [6] considered a random variable X_t = (X − t | X > t), t ≥ 0, and defined the uncertainty and discrimination of such a system by:

H(X; t) = −∫_t^∞ (f(x)/F̄(t)) log( f(x)/F̄(t) ) dx, (4)

and

H(X|Y; t) = ∫_t^∞ (f(x)/F̄(t)) log( (f(x)/F̄(t)) / (g(x)/Ḡ(t)) ) dx, (5)

respectively. Clearly, when t = 0, Equations (4) and (5) reduce, respectively, to Equations (1) and (3). Taneja et al. [33] defined a dynamic measure of inaccuracy associated with two residual lifetime distributions F and G, corresponding to the Kerridge measure of inaccuracy, given by:

H(X, Y; t) = −∫_t^∞ (f(x)/F̄(t)) log( g(x)/Ḡ(t) ) dx.

Clearly, for t = 0, it reduces to Equation (2). Shannon's measure of uncertainty associated with the ith OS X_{i:n} is given by:

H(X_{i:n}) = −∫_0^∞ f_{i:n}(x) log f_{i:n}(x) dx, (6)

https://doi.org/10.1017/S0269964823000268 Published online by Cambridge University Press

where

f_{i:n}(x) = (1/B(i, n − i + 1)) [F(x)]^{i−1} [F̄(x)]^{n−i} f(x) (7)

is the PDF of the ith OS, for i = 1, 2, ..., n. Here, B(i, n − i + 1) = ∫_0^1 u^{i−1}(1 − u)^{n−i} du is the beta function with parameters i and n − i + 1; we refer the interested reader to [2]. Note that for n = 1, Equation (6) reduces to Equation (1). Recently, Lad et al. [17] proposed an alternative measure of the uncertainty of a random variable called extropy. The extropy of the random variable X is defined by Lad et al.
[17] to be:

J(X) = −(1/2) ∫_0^∞ f^2(x) dx.

Extropy is a term coined to represent the opposite of entropy. It refers to the extent of order, organization, and complexity in a system, and is associated with the tendency of systems to increase in complexity, organization, and information. While entropy represents the natural tendency toward disorder and randomness, extropy represents the drive toward order, complexity, and organization. These concepts are used in different fields, such as physics, information theory, and philosophy, to describe and understand the behavior of systems. The relationship between entropy and extropy can be compared to the positive and negative images on photographic film: they are related but opposite. As with entropy, the maximum extropy occurs when the distribution is uniform; however, the two measures evaluate the refinement of a distribution differently.
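As a quick numerical illustration of this definition (ours, not from the paper): for X ~ Exp(λ), the closed form is J(X) = −λ/4, since ∫ λ²e^{−2λx} dx = λ/2, and a direct quadrature of −(1/2)∫f² reproduces it.

```python
import math

def extropy_exponential(lam, upper=50.0, steps=200_000):
    """Approximate J(X) = -(1/2) * integral of f(x)^2 dx for the
    Exponential(lam) density f(x) = lam * exp(-lam * x), using the
    midpoint rule on [0, upper] (the tail beyond is negligible)."""
    h = upper / steps
    total = 0.0
    for k in range(steps):
        x = (k + 0.5) * h          # midpoint of the k-th subinterval
        f = lam * math.exp(-lam * x)
        total += f * f * h
    return -0.5 * total

lam = 2.0
print(round(extropy_exponential(lam), 4), round(-lam / 4.0, 4))
```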
Extropy is utilized in scoring forecasting distributions and in speech recognition. One major advantage of extropy is its ease of computation, making it highly valuable for exploring potential applications such as goodness-of-fit tests and inferential methods. Extropy can also be employed to compare the uncertainties associated with two random variables: if J(X) ≤ J(Y), then Y has a greater degree of uncertainty than X; in other words, if the extropy of X is lower than the extropy of Y, then X contains more information than Y.
Qiu [24] derived characterization results and symmetric properties of the extropy of OS and record values. Kullback [15] presented properties of this measure, including the maximum extropy distribution and its statistical applications. Two estimators of the extropy of a continuous random variable were introduced by Qiu and Jia [26]. Qiu, Wang, and Wang [27] explored an expression for the extropy of a mixed system's lifetime. For more details, see [8-10, 20, 22].
The organization of the paper is as follows. In Section 2, we introduce a new way to quantify the discrepancy between the distribution of the ith OS and the parent random variable X, based on a residual measure of inaccuracy built on extropy. We also investigate a dynamic residual measure that captures the discrepancy between the distributions of the ith OS and the parent random variable X, and we establish bounds for these inaccuracy measures. In Section 3, we analyze the residual inaccuracy of the OS and its implications in terms of characterization results. In Section 4, a nonparametric estimator of the proposed measure is obtained. We evaluate the proposed estimator through a simulation study in Section 5. In Section 6, we consider a real data set to show the behavior of the estimators in real cases.

Dynamic residual measure of inaccuracy
In this section, we introduce some new measures of uncertainty based on extropy. Suppose that X and Y are two nonnegative continuous random variables with PDFs f and g, respectively. The measure of uncertainty associated with X and the measure of discrimination of X about Y are, respectively, given by:

J(X) = −(1/2) ∫_0^∞ f^2(x) dx, (8)

and, according to Equation (3),

J(X|Y) = −(1/2) ∫_0^∞ f(x) [g(x) − f(x)] dx. (9)

Adding Equations (8) and (9), we obtain:

J(X, Y) = −(1/2) ∫_0^∞ f(x) g(x) dx. (10)

If we consider F as the actual CDF, then G can be interpreted as a reference CDF. For calculating the remaining uncertainty of a system that has survived up to time t, the measures defined in Equations (8)-(10) are not suitable. Qiu and Jia [25] considered a random variable X_t = [X − t | X > t], t ≥ 0, and defined the uncertainty of such a system based on extropy as:

J(X; t) = −(1/(2 F̄^2(t))) ∫_t^∞ f^2(x) dx. (11)

We define the dynamic measure of inaccuracy associated with two residual lifetime distributions F and G, corresponding to the measure of inaccuracy, as:

J(X, Y; t) = −(1/(2 F̄(t) Ḡ(t))) ∫_t^∞ f(x) g(x) dx. (12)

Also, the dynamic uncertainty discrimination of X about Y is given by:

J(X|Y; t) = J(X, Y; t) − J(X; t). (13)

Clearly, when t = 0, Equations (11)-(13) reduce, respectively, to Equations (8)-(10). In the following, we study some information-theoretic measures based on OS using the probability integral transformation and define extropy and relative information measures.

Theorem 2.1. Suppose that X is a nonnegative continuous random variable with PDF f(x) and CDF F(x). Then, the measure of inaccuracy of the distribution of X_{i:n} based on extropy is given by:

J(X_{i:n}, X) = −(1/2) E[ f(F^{−1}(W_{i:n})) ],

where W_{i:n} is the ith OS of uniformly distributed random variables U_1, ..., U_n.
In the following, we define the measure of inaccuracy between the ith OS and the parent random variable.
Definition 2.2. Let X be a nonnegative continuous random variable with PDF f(x) and CDF F(x). Then, we define the measure of inaccuracy for the ith OS and the parent random variable as:

J(X_{i:n}, X) = −(1/2) ∫_0^∞ f_{i:n}(x) f(x) dx. (14)

Using Equation (7), we have

J(X_{i:n}, X) = −(1/2) E[ f(F^{−1}(W_i)) ],

where g_i(w) = w^{i−1}(1 − w)^{n−i} / B(i, n − i + 1), 0 < w < 1, is the density function of W_i. Also, we obtain the measure of uncertainty discrimination for the distributions X_{i:n} and X based on extropy as:

J(X_{i:n}|X) = J(X_{i:n}, X) − J(X_{i:n}).
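A minimal numeric sketch of Definition 2.2 (ours, assuming the extropy-inaccuracy form J(X_{i:n}, X) = −(1/2)∫ f_{i:n}(x) f(x) dx): for a Uniform(0,1) parent, f ≡ 1 and f_{i:n} integrates to one, so the measure equals −1/2 for every i and n.

```python
import math

def beta_fn(a, b):
    """Beta function B(a, b) via the gamma function."""
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def inaccuracy_os_uniform(i, n, steps=100_000):
    """Approximate J(X_{i:n}, X) = -(1/2) * int f_{i:n}(x) f(x) dx for
    X ~ Uniform(0,1), where f_{i:n}(x) = x^(i-1)(1-x)^(n-i)/B(i, n-i+1)."""
    c = 1.0 / beta_fn(i, n - i + 1)
    h = 1.0 / steps
    total = 0.0
    for k in range(steps):
        x = (k + 0.5) * h
        f_os = c * x ** (i - 1) * (1.0 - x) ** (n - i)
        total += f_os * 1.0 * h   # parent density f(x) = 1 on (0, 1)
    return -0.5 * total

# The integral of f_{i:n} is 1, so the result is -1/2 regardless of i, n
print(round(inaccuracy_os_uniform(3, 10), 4))
```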

Dynamic residual measure of inaccuracy for OS
In this section, we propose the dynamic version of the inaccuracy measure in Equation (14).
Definition 2.3. The dynamic residual measure of inaccuracy associated with two residual lifetime distributions F_{i:n} and F based on extropy (the DRJOS-inaccuracy measure) is defined as:

J(X_{i:n}, X; t) = −(1/(2 F̄_{i:n}(t) F̄(t))) ∫_t^∞ f_{i:n}(x) f(x) dx, (15)

where F̄_{i:n}(t) = 1 − F_{i:n}(t) is the sf corresponding to X_{i:n}, given by:

F̄_{i:n}(t) = Σ_{k=0}^{i−1} C(n, k) [F(t)]^k [F̄(t)]^{n−k} = 1 − B_{F(t)}(i, n − i + 1) / B(i, n − i + 1), (16)

where B_x(a, b) = ∫_0^x u^{a−1}(1 − u)^{b−1} du is the incomplete beta function; for more details, see [5].
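The survival function of X_{i:n} can be evaluated directly as the binomial tail sum P(X_{i:n} > t) = Σ_{k<i} C(n,k) F(t)^k F̄(t)^{n−k}, which equals one minus the regularized incomplete beta function at F(t). A small stdlib check of this identity at the closed-form extremes (the helper name is ours):

```python
import math

def os_survival(i, n, p):
    """Survival of the i-th order statistic at a point t where F(t) = p:
    P(X_{i:n} > t) = sum_{k=0}^{i-1} C(n,k) p^k (1-p)^(n-k),
    i.e. 1 minus the regularized incomplete beta I_p(i, n-i+1)."""
    return sum(math.comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(i))

# Sanity checks against the closed forms for the sample minimum and maximum
p = 0.3
print(os_survival(1, 5, p), (1 - p) ** 5)   # minimum: (1 - F)^n
print(os_survival(5, 5, p), 1 - p ** 5)     # maximum: 1 - F^n
```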
Note that, when t = 0, Equation (15) reduces to the measure of inaccuracy defined in Equation (14).
The "DRJOS-inaccuracy" measure can be viewed as a generalization of the idea of extropy and is a useful tool for measuring error in experimental results. In fact, the extropy inaccuracy measure can be expressed as the sum of an uncertainty measure and a discrimination measure between two distributions. When an experimenter states the probabilities of various events in an experiment, the statement can lack precision in two ways: one results from incorrect information (e.g., mis-specifying the model) and the other from vagueness in the statement (e.g., missing observations or insufficient data). All estimation and inference problems are concerned with making statements that may be inaccurate in either or both of these ways. The DRJOS-inaccuracy measure can account for both types of error.
This measure has applications in statistical inference and estimation. Some concepts in reliability studies for modeling lifetime data, such as the failure rate and the weighted mean past life function, can be described using the DRJOS-inaccuracy measure. In lifetime studies, the data are generally truncated; hence, there is scope for extending information-theoretic concepts to ordered situations and record values. Motivated by this, we extend the definition of inaccuracy to the DRJOS-inaccuracy measure. Further, we also look into the problem of characterizing probability distributions using the functional form of these measures. The identification of an appropriate probability distribution for lifetimes is one of the basic problems encountered in reliability theory. Although several methods, such as goodness-of-fit procedures and probability plots, are available in the literature to find an appropriate model for the observations, they fail to provide an exact model. A way to attain this goal is to utilize a DRJOS-inaccuracy measure.
The DROS-inaccuracy and DRJOS-inaccuracy measures are not competing but rather complementary. The properties of symmetry, finiteness, and simplicity of calculation can be considered advantages of the DRJOS-inaccuracy measure over the DROS-inaccuracy measure. The most important advantage of extropy is that it is easy to compute, so it is of great interest to explore its potential applications in developing goodness-of-fit tests and inferential methods.
Likewise, the inaccuracy and extropy-inaccuracy measures are complementary: the proposed measure is symmetric, bounded above by zero, and easy to calculate, and reliability concepts such as the failure rate and the mean residual life function can be described through it.
In the following, we evaluate the residual inaccuracy measure for X_{1:n} for some specific lifetime distributions that are applied widely in survival analysis, life testing, and system reliability.

Corollary 2.4. In general, for i = 1, we have f_{1:n}(x) = n [F̄(x)]^{n−1} f(x) and F̄_{1:n}(t) = [F̄(t)]^n, so after some algebraic manipulations, we have:

J(X_{1:n}, X; t) = −(n/(2 [F̄(t)]^{n+1})) ∫_t^∞ [F̄(x)]^{n−1} f^2(x) dx.
The left panel of Figure 1 shows that J(X_{1:n}, X; t) is constant over time (t) for a fixed value of n. The right panel of Figure 1 shows that J(X_{1:n}, X; t) tends to −λ/2 with increasing sample size n. Also, we can observe that the inaccuracy of the sample minimum is decreasing with respect to the parameter λ.
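Taking the residual form for the sample minimum as J(X_{1:n}, X; t) = −n/(2 F̄(t)^{n+1}) ∫_t^∞ F̄(x)^{n−1} f(x)² dx (our reading of the lost display), the exponential case can be checked numerically: the measure is constant in t, equals −nλ/(2(n+1)), and tends to −λ/2 as n grows.

```python
import math

def j_min_exponential(n, lam, t, steps=200_000, upper=60.0):
    """Numeric J(X_{1:n}, X; t) for Exp(lam), assuming
    J = -n / (2 * S(t)**(n+1)) * int_t^inf S(x)**(n-1) * f(x)**2 dx,
    with S(x) = exp(-lam*x) and f(x) = lam * exp(-lam*x)."""
    h = (upper - t) / steps
    acc = 0.0
    for k in range(steps):
        x = t + (k + 0.5) * h      # midpoint rule on [t, upper]
        s = math.exp(-lam * x)
        f = lam * s
        acc += s ** (n - 1) * f * f * h
    return -n * acc / (2.0 * math.exp(-lam * t) ** (n + 1))

# Constant in t and equal to -n*lam/(2*(n+1)); tends to -lam/2 as n grows
n, lam = 5, 2.0
print(round(j_min_exponential(n, lam, 0.3), 4), round(-n * lam / (2 * (n + 1)), 4))
```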
Example 2.6. Assume that X is a random variable from the beta distribution with PDF f(x) = a(1 − x)^{a−1}, 0 < x < 1, a > 0. Figure 2 shows a decrease in inaccuracy for different values of a. The left panel of Figure 2 shows that J(X_{1:n}, X; t) decreases with increasing time (t) for a fixed value of n. The right panel of Figure 2 shows that J(X_{1:n}, X; t) tends to a/(2(t − 1)) with increasing sample size n.
Example 2.7 considers the uniform distribution on (0, b), for which J(X_{1:n}, X; t) = −1/(2(b − t)). The left panel of Figure 3 shows that J(X_{1:n}, X; t) is nonincreasing with respect to time (t). The right panel of Figure 3 shows that J(X_{1:n}, X; t) is constant for different values of the sample size n. Also, we can observe that the inaccuracy of the sample minimum is increasing with respect to the parameter b.
Proof. From Equation (15), we have the stated expression after direct calculation. The proof is completed.
In the following, we express a lower bound for J(X_{i:n}, X; t) in terms of the extropy.

Proposition 2.9. A lower bound for the dynamic measure of inaccuracy between the distributions X_{i:n} and X based on extropy is obtained by:

Proof. We have

In what follows, we will investigate the relationship between J(X_{i:n}, X; t) and J(X_{i:n}, X).
Corollary 2.10. Suppose that X is a nonnegative continuous random variable with PDF f(x) and CDF F(x). Then,

where

Proof. The proof is obtained from the following equation:

Stochastic order
We want to prove a property of the dynamic inaccuracy measure using some properties of stochastic ordering. We present the following definitions: (I) A random variable X is said to be less than Y in the stochastic ordering (denoted by X ≤_st Y) if F̄(x) ≤ Ḡ(x) for all x, where F̄(x) and Ḡ(x) are the reliability functions of X and Y, respectively.
(II) A random variable X is said to be less than Y in the likelihood ratio ordering (denoted by X ≤_lr Y) if g(x)/f(x) is increasing in x over the union of the supports of X and Y.

Theorem 2.11. Suppose that X_1, ..., X_n are i.i.d. nonnegative random variables representing the life length of a series system. If f(·) is decreasing on its support, then the corresponding inaccuracy J(X_{1:n}, X; t) is an increasing function of n.

Proof. Let the random variable
As f is decreasing on its support, for i = 1 (that is, for a series system), the corresponding ratio is a decreasing function. This implies that X_{n+1} ≤_lr X_n, which implies X_{n+1} ≤_st X_n; for more details, see [29]. Also, it is given that f(F^{−1}(x)) is a decreasing function of x. It follows from Equation (15), using the probability integral transformation F(X) = U, that the dynamic residual inaccuracy of the ith OS can be rewritten accordingly. Hence, for i = 1 and n ≥ 1, we have:
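The stochastic-ordering step can be seen concretely for exponential components (an illustration of ours, not the paper's proof): the series-system survival (1 − F)^n decreases in n at every t, so X_{1:(n+1)} ≤_st X_{1:n}.

```python
import math

def series_survival(n, lam, t):
    """Survival function of a series system of n i.i.d. Exp(lam)
    components: P(X_{1:n} > t) = (exp(-lam*t))**n = exp(-n*lam*t)."""
    return math.exp(-n * lam * t)

# Survival decreases in n at every fixed t, illustrating X_{1:(n+1)} <=_st X_{1:n}
vals = [series_survival(n, 1.0, 0.5) for n in (1, 2, 3, 4)]
print([round(v, 3) for v in vals])
```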

Some results on characterization
In this section, we demonstrate that the measure of dynamic residual inaccuracy of OS can also determine the underlying distribution uniquely. The subject of characterizing the underlying distribution of a sample based on measures such as extropy or its generalized versions for OS has been explored by a number of authors in recent studies. The characterization property of the measure of dynamic residual inaccuracy between the ith OS and the parent random variable is studied by using the sufficient condition for the uniqueness of the solution of the initial value problem dy/dx = f(x, y), y(x_0) = y_0, where f is a function of two variables whose domain is a region S ⊂ R², (x_0, y_0) is a point in S, and y is an unknown function.
By a solution of the initial value problem on an interval L ⊂ R, we mean a function φ(x) such that: (i) φ is differentiable on L, (ii) the graph of φ lies in S, (iii) φ(x_0) = y_0, and (iv) φ'(x) = f(x, φ(x)) for all x ∈ L.
The following proposition together with other results will help in proving our characterization result.
Proposition 3.1. Let f be a continuous function defined in a domain S ⊂ R², and let |f(x, y_1) − f(x, y_2)| ≤ k|y_1 − y_2|, k > 0, for every pair of points (x, y_1) and (x, y_2) in S; that is, f satisfies the Lipschitz condition in S.
Then, the function y = φ(x) satisfying the initial value problem y' = f(x, y) and φ(x_0) = y_0, x ∈ L, is unique.
We will utilize the lemma provided by Gupta and Kirmani [7] to present a condition that is sufficient to guarantee the fulfillment of the Lipschitz condition within the set S.

Lemma 3.2. Suppose that f is a continuous function in a convex region S ⊂ R². Assume that ∂f/∂y exists and is continuous in S. Then, f satisfies the Lipschitz condition in S.

Theorem 3.3. Assume that X is a nonnegative continuous random variable with CDF F. Suppose that J(X_{i:n}, X; t) is the dynamic residual inaccuracy of the ith OS based on a random sample of size n. Then J(X_{i:n}, X; t) characterizes the distribution.
Proof. Taking the derivative of both sides with respect to t, we have: where r_F(t) and r_{F_{i:n}}(t) are the hazard rates (HRs) of X and X_{i:n}, respectively. Again taking the derivative with respect to t and using the relation r_{F_{i:n}}(t) = k(t) r_F(t), we have: After some algebraic manipulations, we have: Suppose that there are two distribution functions F and F* such that J(X_{i:n}, X; t) = J(X*_{i:n}, X*; t) for all t. Then, from Equation (17), we get r'_F(t) = ψ(t, r_F(t)) and r'_{F*}(t) = ψ(t, r_{F*}(t)), where ψ denotes the resulting right-hand side.
By using Lemma 3.2 and Proposition 3.1, we have r F * (t) = r F (t), for all t.Using the fact that the HRF characterizes the distribution function uniquely, we get the desired result.
In the following, we characterize some specific life length distributions.
Theorem 3.4. Suppose that X is a nonnegative continuous random variable with CDF F, and that the relation between the dynamic residual inaccuracy of X_{1:n} and the HRF is given by:

J(X_{1:n}, X; t) = −k r_F(t), (18)

where k is a positive constant. Then X has (I) an exponential distribution, (II) a finite range distribution, or (III) a Pareto distribution, according to the value of k below.

Proof. We consider sufficiency. Let us assume that Equation (18) holds. Taking the derivative with respect to t on both sides of the above equation, we have: where r_{F_{1:n}}(t) and r_F(t) are the HRs of X_{1:n} and X, respectively. It is easy to see that r_{F_{1:n}}(t) = n r_F(t).
(I) If k = n/(2(n+1)), then p = 0, and r_F(t) turns out to be a constant, which is exactly the condition under which X has an exponential distribution. (II) If k > n/(2(n+1)), then p < 0, and Equation (20) becomes the HRF of the finite range distribution. (III) If k < n/(2(n+1)), then p > 0, which is exactly the condition under which X has a Pareto distribution.
In the following, the necessity of parts (I)-(III) can be verified by using the examples in Section 2. This completes the proof, noting that the CDF is determined uniquely by its failure rate.

Corollary 3.5. It follows from Equation (19) that J(X_{1:n}, X; t) is decreasing (increasing) in t if and only if:

Nonparametric estimation
In this section, we propose a nonparametric estimator of J(X_{i:n}, X; t). Assume that X_1, ..., X_n is a random sample from a population with PDF f(·) and CDF F(·). A nonparametric estimator of the dynamic extropy-based measure of residual inaccuracy between the distributions X_{i:n} and X can then be obtained by: (21) where the estimates of f_{i:n}(·) and F̄_{i:n}(·) are obtained by replacing F(·) and f(·) with their estimates in Equations (7) and (16). Now, we consider kernel methods for the estimation of the PDF f(·) and the CDF F(·) to use in Equation (21). The kernel estimator of the PDF f(·) was defined by Silverman [32] as:

f̂_{h_f}(x) = (1/(n h_f)) Σ_{i=1}^n K((x − X_i)/h_f), (22)

where h_f is a bandwidth or smoothing parameter and K(·) is a kernel function. Some commonly used kernels are the normal (Gaussian), Epanechnikov, and tricube kernels. The asymptotic relative efficiencies reported by Silverman [32, p. 43] show that there is not much difference among the kernels if the mean integrated squared error criterion is used; estimates obtained using different kernels are usually numerically very similar. The choice of kernel K is therefore not too important, and the standard normal density function is used as the kernel K(·) in Equation (22). What matters much more is the choice of the bandwidth, which controls the amount of smoothing: small bandwidths give very rough estimates, while larger bandwidths give smoother estimates. Therefore, we focus only on the selection of the bandwidth parameter.
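As a concrete sketch of Equation (22) with a rule-of-thumb bandwidth (a stdlib-only illustration of ours; a real analysis would use a statistics package):

```python
import math, random, statistics

def nr_bandwidth(data):
    """Normal-reference (rule-of-thumb) bandwidth:
    h = 0.9 * min(S, IQR/1.34) * n^(-1/5)."""
    n = len(data)
    s = statistics.stdev(data)
    xs = sorted(data)
    q1 = xs[int(0.25 * (n - 1))]   # crude quartiles, adequate for a sketch
    q3 = xs[int(0.75 * (n - 1))]
    return 0.9 * min(s, (q3 - q1) / 1.34) * n ** (-0.2)

def kde(data, x, h):
    """Gaussian-kernel density estimate at the point x."""
    n = len(data)
    return sum(math.exp(-0.5 * ((x - xi) / h) ** 2) /
               math.sqrt(2 * math.pi) for xi in data) / (n * h)

random.seed(1)
sample = [random.gauss(0.0, 1.0) for _ in range(500)]
h = nr_bandwidth(sample)
est = kde(sample, 0.0, h)
true = 1.0 / math.sqrt(2 * math.pi)   # standard normal density at 0
print(round(h, 3), round(est, 3), round(true, 3))
```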
Minimizing the mean integrated squared error (MISE), defined as MISE(h_f) = E ∫ ( f̂_{h_f}(x) − f(x) )² dx, is a common approach to bandwidth selection. The normal reference (NR) and cross-validation (CV) methods are two common bandwidth selection methods in kernel PDF estimation based on minimizing the MISE. Under the assumption that the data density is normal, the best h_f obtained by minimizing the MISE, called the NR or rule-of-thumb bandwidth, is:

h_f^{NR} = 0.9 σ̂ n^{−1/5},

where σ̂ is estimated by min{S, Q/1.34}, in which S is the sample standard deviation and Q is the interquartile range. The CV approach is another method for bandwidth selection. The leave-one-out CV criterion can be written as:

CV(h_f) = ∫ f̂²_{h_f}(x) dx − (2/n) Σ_{i=1}^n f̂_{h_f,−i}(X_i),

where f̂_{h_f,−i}(X_i) denotes the kernel estimator obtained by omitting X_i. The NR method is derived under the assumption that the underlying density is normal; when the data are not normal, it can still provide reasonable bandwidth choices. The CV method, however, is data-driven rather than dependent on the normality assumption. There is no simple, universal answer to the question of which bandwidth selector is the most adequate for a given dataset; trying several selectors and inspecting the results may help determine which one estimates the density better. Nevertheless, some useful guidelines exist. The NR method is a quick, simple, and inexpensive bandwidth selector, but it tends to give bandwidths that are too large for non-normal-like data. The CV method may be better suited to highly non-normal and rough densities, for which the NR method may end up over-smoothing.

Table 1. Estimation of AB and RMSE of J(X_{i:n}, X; t) for the exponential distribution with mean 1/λ on sample size n = 50.
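The leave-one-out CV criterion can be implemented directly; for a Gaussian kernel the ∫f̂² term has a closed form through the N(0, 2) convolution kernel. A sketch of ours (grid search rather than a numerical optimizer):

```python
import math, random

def cv_score(data, h):
    """Least-squares cross-validation score for a Gaussian-kernel KDE:
    CV(h) = int fhat^2 dx - (2/n) * sum_i fhat_{-i}(X_i)."""
    n = len(data)
    term1 = 0.0   # int fhat^2 via the convolution K*K ~ N(0, 2)
    term2 = 0.0   # leave-one-out density at the data points
    for i in range(n):
        for j in range(n):
            u = (data[i] - data[j]) / h
            term1 += math.exp(-u * u / 4.0) / (2.0 * math.sqrt(math.pi))
            if i != j:
                term2 += math.exp(-u * u / 2.0) / math.sqrt(2 * math.pi)
    term1 /= n * n * h
    term2 = 2.0 * term2 / (n * (n - 1) * h)
    return term1 - term2

def cv_bandwidth(data, grid):
    """Pick the bandwidth in `grid` minimizing the CV score."""
    return min(grid, key=lambda h: cv_score(data, h))

random.seed(2)
sample = [random.expovariate(1.0) for _ in range(200)]
grid = [0.05 * k for k in range(1, 21)]   # candidate bandwidths 0.05 .. 1.0
h_cv = cv_bandwidth(sample, grid)
print(round(h_cv, 2))
```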
For CDF estimation, kernel and empirical methods are the two main approaches. The empirical estimator is a step function even when the CDF is continuous, and so it is less accurate than the kernel method; see [21]. The kernel estimator of the CDF was proposed by Nadaraya [21] as:

F̂_{h_F}(x) = (1/n) Σ_{i=1}^n W((x − X_i)/h_F),

where h_F is a bandwidth or smoothness parameter and W(x) = ∫_{−∞}^x K(t) dt is the CDF of a positive kernel function K(·). When applying F̂_{h_F}, one needs to choose the kernel and the bandwidth. Lejeune and Sarda [18] showed that the choice of the kernel is less important than the choice of the bandwidth for the performance of the CDF estimator. In general, the idea underlying bandwidth selection is the minimization of the MISE, defined as:

MISE(h_F) = E ∫ ( F̂_{h_F}(x) − F(x) )² dx.

We focus on bandwidth selection based on the plug-in (PI) and CV approaches. In the PI approach, the bandwidth is selected by minimizing an asymptotic approximation of the MISE. In this paper, we use the PI approach provided by Polansky and Baker [23], which developed earlier ideas of Altman and Leger [1]. They showed that h_F^{PI} = C n^{−1/3}, where C is estimated from the data sample. A well-known bandwidth selection method for CDF estimation is the CV method initially proposed by Sarda [28]. Altman and Leger [1] showed that this method essentially requires large sample sizes to ensure good results. Therefore, Bowman et al. [3] proposed a modified version of the CV method that is asymptotically optimal and works well in simulation studies and real cases. Here, we use the CV approach proposed by Bowman et al. [3]. They considered the CV bandwidth selection criterion:

CV(h_F) = (1/n) Σ_{i=1}^n ∫ ( 1{X_i ≤ x} − F̂_{h_F,−i}(x) )² dx,

where F̂_{h_F,−i}(·) denotes the kernel estimator constructed from the data with observation X_i omitted. Bowman et al.
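Nadaraya's kernel CDF estimator with a Gaussian kernel uses the normal CDF as the integrated kernel W. A minimal sketch of ours, with the plug-in rate h ∝ n^(−1/3) and an arbitrary constant C = 1 (our choice, not a fitted one):

```python
import math, random

def kernel_cdf(data, x, h):
    """Nadaraya kernel CDF estimator: (1/n) * sum_i Phi((x - X_i)/h),
    where Phi is the standard normal CDF (the integrated Gaussian kernel)."""
    phi = lambda u: 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))
    return sum(phi((x - xi) / h) for xi in data) / len(data)

random.seed(3)
sample = [random.expovariate(1.0) for _ in range(1000)]
h = len(sample) ** (-1 / 3)        # h = C * n^(-1/3) with C = 1 (illustrative)
est = kernel_cdf(sample, 1.0, h)
true = 1.0 - math.exp(-1.0)        # Exp(1) CDF at x = 1
print(round(est, 3), round(true, 3))
```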
[3] performed a simulation study comparing the CV method with the PI method. Their study showed that the PI method did not behave well in simulations, and better results were generally obtained with the CV method. A drawback of the CV method is its weak computational performance in simulation studies, because it involves minimizing a function of n² terms over a sufficiently large grid of bandwidths. This is not really a drawback in a real data situation, because the minimization is carried out only once.

Simulation study
In this section, we evaluate the accuracy of the nonparametric estimator of J(X_{i:n}, X; t) in Equation (21) using a simulation study. We use Monte Carlo simulation to compare the proposed estimators in terms of the absolute bias (AB) and root mean square error (RMSE). For the estimation of J(X_{i:n}, X; t), we generate random samples from the exponential distribution in Example 2.5 with parameter λ = 0.1, 0.2, 0.5, the beta distribution in Example 2.6 with parameter a = 2, 3, 5, and the uniform distribution in Example 2.7 with parameter b = 5, 10, 20. We also consider different times (t), orders (k = 1, 5, 10), and sample sizes (n = 50, 200) for each of these distributions. The kernel estimates of the PDF with bandwidth selected by the NR and CV methods are denoted by f̂^NR_h and f̂^CV_h, respectively. Similarly, the kernel estimates of the CDF with bandwidth selected by the PI and CV methods are denoted by F̂^PI_h and F̂^CV_h, respectively. The estimated values of AB and RMSE are reported in Tables 1-6.
We consider four methods to estimate J(X_{i:n}, X; t) based on the type of bandwidth selection, as follows: (1) bandwidth selection for the estimation of the PDF with the NR method and of the CDF with the PI method, denoted by (f̂^NR_h, F̂^PI_h); (2) the PDF with the CV method and the CDF with the PI method, denoted by (f̂^CV_h, F̂^PI_h); (3) the PDF with the NR method and the CDF with the CV method, denoted by (f̂^NR_h, F̂^CV_h); and (4) both the PDF and the CDF with the CV method, denoted by (f̂^CV_h, F̂^CV_h). The simulation results in Tables 1-6 show that the estimation of J(X_{i:n}, X; t) with bandwidth selection using the CV method for both the kernel PDF and CDF estimates, that is, (f̂^CV_h, F̂^CV_h), has the best performance. In general, the kernel estimation of the PDF in J(X_{i:n}, X; t) using the CV method is more accurate than with the NR method. The estimated AB and RMSE of the proposed estimators decrease as the sample size increases. In general, the estimated AB and RMSE of J(X_{i:n}, X; t) decrease with increasing time (t) or order (k). Also, the estimated AB and RMSE of J(X_{i:n}, X; t) increase with increasing values of the considered parameters for the three distributions. The CV method for bandwidth selection is data-driven rather than dependent on the assumption of normality. Since the lifetime distributions considered in the simulation study are not normal, the CV method was expected to perform well in estimating the PDF and CDF; this is confirmed by the comparison of the AB and RMSE of the proposed estimators of J(X_{i:n}, X; t) in Tables 1-6.

Real data
In this section, we consider a real data set to show the behavior of the estimators in real cases and to illustrate the application of the suggested measure for model selection. We consider the data set from [4]. In Table 7, the values of the log-likelihood, the Akaike information criterion (AIC), and the Bayesian information criterion (BIC), as well as the Kolmogorov-Smirnov (K-S) goodness-of-fit test, are presented for choosing the best model among the exponential, Weibull, log-normal, gamma, and log-logistic distributions. The results of this table show that the log-normal distribution is closest to the real data distribution. The maximum likelihood estimates of the location and scale parameters of the log-normal distribution are 2.30 and 1.12, respectively.
In Figure 4, the nonparametric estimate of J(X_{1:n}, X; t) using Equation (21) is plotted for different values of time (t). For this estimate, we use the CV method for bandwidth selection in the kernel estimation of both the PDF and the CDF. Also, the theoretical values of J(X_{1:n}, X; t) are drawn based on the exponential, Weibull, log-normal, gamma, and log-logistic distributions. The nonparametric estimate of J(X_{1:n}, X; t) is close to its theoretical value under the log-normal distribution. Therefore, the log-normal distribution is a better choice than the other distributions, which is consistent with the results of Table 7 based on the AIC and BIC criteria.

Conclusion
This paper introduced a fresh approach to measuring the residual inaccuracy of OS based on extropy. Additionally, we investigated several lifetime distributions by analyzing the residual inaccuracy of the X_{1:n} statistic, and we explored various properties of the new measure. Our study also examined the dynamic measure of inaccuracy for both the first and the ith OS, demonstrating that it determines the distribution function uniquely. A nonparametric kernel estimator of J(X_{1:n}, X; t) was provided, with the NR and CV methods used to select the bandwidth in the PDF kernel estimation and the PI and CV methods in the CDF kernel estimation. The simulation results showed that the estimation of J(X_{i:n}, X; t) with bandwidth selection by the CV method for both the kernel PDF and CDF estimates has the best performance. Finally, an application was given to demonstrate how the suggested measure can be applied to model selection.

Figure 1. Graphs of J(X_{1:n}, X; t) for different values of time (left panel) and sample size (right panel) for several values of the parameter λ in Example 2.5.

Figure 2. Graphs of J(X_{1:n}, X; t) for different values of time (left panel) and sample size (right panel) for several values of the parameter a in Example 2.6.
Figure 3. Graphs of J(X_{1:n}, X; t) for different values of time (left panel) and sample size (right panel) for several values of the parameter b in Example 2.7.

Table 2 .
Estimation of AB and RMSE of J(X_{i:n}, X; t) for the exponential distribution with mean 1/λ on sample size n = 200

Table 3 .
Estimation of AB and RMSE of J(X_{i:n}, X; t) for the beta distribution in Example 2.6 on sample size n = 50

Table 4 .
Estimation of AB and RMSE of J(X_{i:n}, X; t) for the beta distribution in Example 2.6 on sample size n = 200

Table 5 .
Estimation of AB and RMSE of J(X_{i:n}, X; t) for the uniform distribution in Example 2.7 on sample size n = 50

Table 6 .
Estimation of AB and RMSE of J(X_{i:n}, X; t) for the uniform distribution in Example 2.7 on sample size n = 200