In this paper, we define weighted failure rates and their means from the standpoint of an application. We begin by emphasizing that a series system of n independent components having weighted failure rates, with the weight functions summing to unity, is the same as a mixture of n distributions. We derive some parametric and non-parametric characterization results. We discuss the form-invariance property of the baseline failure rate for a specific choice of weight function. Some bounds on the means of aging functions are obtained. We establish that, unlike the increasing failure rate average (IFRA) class, the weighted IFRA class is not closed under the formation of coherent systems. An interesting application of the present work is that the quantile version of the means of the failure rate is obtained as a special case of the weighted means of the failure rate.
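As a rough illustration of the role of the unit-sum condition on the weights, written under our own assumed convention (that component i has failure rate w_i(t)λ(t) for a common baseline rate λ(t), which need not be the paper's exact definition), the series-system failure rate of n independent components collapses to the baseline rate:

```latex
% Assumed convention: component i has failure rate w_i(t)\lambda(t).
\lambda_{\mathrm{sys}}(t) \;=\; \sum_{i=1}^{n} w_i(t)\,\lambda(t)
\;=\; \lambda(t) \qquad \text{whenever } \sum_{i=1}^{n} w_i(t) = 1 .
```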
This paper investigates precise large deviations of the net loss process in a two-dimensional risk model with consistently varying tails and dependence structures, and gives asymptotic formulas that hold uniformly for all x varying in t-intervals. The study is among the initial efforts to analyze potential risk via large deviation results for the net loss process of a two-dimensional risk model, and it offers a novel way to assess the risk of the insurance operation over the long run by fully taking the insurer's premium income into account.
In this paper, a new multivariate counting process model, called the Multivariate Poisson Generalized Gamma Process, is developed and its main properties are studied. Some basic stochastic properties of the number of events in the new multivariate counting process are first derived. It is shown that this new model includes the multivariate generalized Pólya process as a special case. The dependence structure of the multivariate counting process is discussed, and some results on multivariate stochastic comparisons are also obtained.
Aggregate implements an efficient fast Fourier transform (FFT)-based algorithm to approximate compound probability distributions. Leveraging FFT-based methods offers advantages over recursion and simulation-based approaches, providing speed and accuracy to otherwise time-consuming calculations. Combining user-friendly features and an expressive domain-specific language called DecL, Aggregate enables practitioners and nonprogrammers to work with complex distributions effortlessly. The software verifies the accuracy of its FFT-based numerical approximations by comparing their first three moments to those calculated analytically from the specified frequency and severity. This moment-based validation, combined with carefully chosen default parameters, allows users without in-depth knowledge of the underlying algorithm to be confident in the results. Aggregate supports a wide range of frequency and severity distributions, policy limits and deductibles, and reinsurance structures and has applications in pricing, reserving, risk management, teaching, and research. It is written in Python.
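To make the FFT idea concrete, here is a minimal sketch, not taken from the Aggregate library itself, of how a compound Poisson distribution can be approximated by exponentiating the severity's discrete Fourier transform and then checked against the analytic mean, in the spirit of the moment-based validation described above; the claim-count mean and toy severity below are assumed example values.

```python
# A minimal sketch (not the library's own code) of the FFT approach described
# above: a compound Poisson aggregate distribution obtained by exponentiating
# the severity's discrete Fourier transform, then validated against the
# analytic mean. The claim-count mean and the toy severity are assumed values.
import numpy as np

lam = 3.0                        # Poisson frequency mean (assumed example value)
n = 2 ** 12                      # grid length; large enough that aliasing is negligible
severity = np.zeros(n)
severity[1:6] = 0.2              # toy severity: uniform on {1, 2, 3, 4, 5}

phi = np.fft.fft(severity)                              # severity transform on the grid
agg = np.real(np.fft.ifft(np.exp(lam * (phi - 1.0))))   # compound Poisson PMF

k = np.arange(n)
print("FFT mean     :", (k * agg).sum())                       # ~ 9.0
print("Analytic mean:", lam * (np.arange(1, 6) * 0.2).sum())   # lambda * E[severity] = 9.0
```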
Applied econometrics uses the tools of theoretical econometrics and real-world data to develop predictive models and assess economic theories. Because of the complex nature of such analysis, its underlying assumptions are often not understood by those who rely on it. The danger is that economic policies can be assessed favourably to suit a particular political agenda and forecasts can be generated to match the needs of a particular customer. Ethics in Econometrics argues that econometricians need to be aware of potential ethical pitfalls when carrying out their analysis and should be encouraged to avoid them. Using a range of empirical examples and detailed discussions of real cases, this book provides a guide to research practice in econometrics, illustrating why it is imperative that econometricians act ethically in the way they conduct their analysis and treat their data.
In Chapter 3 we learned how to do basic probability calculations and even put them to use solving some fairly complicated probability problems. In this chapter and the next two, we generalize how we do probability calculations, transitioning from working with sets and events to working with random variables.
To do statistics you must first be able to “speak probability.” In this chapter we are going to concentrate on the basic ideas of probability. In probability, the mechanism that generates outcomes is assumed known and the problems focus on calculating the chance of observing particular types or sets of outcomes. Classical problems include flipping “fair” coins (where fair means that on one flip of the coin the chance it comes up heads is equal to the chance it comes up tails) and “fair” dice (where fair now means the chance of landing on any side of the die is equal to that of landing on any other side).
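A small simulation of our own (not from the chapter) illustrates what “fair” means here: every outcome of the coin or the die is equally likely, so the observed relative frequencies settle near 1/2 and 1/6.

```python
# A small simulation (ours, not the chapter's): with a fair coin and a fair die,
# observed relative frequencies settle near the equal chances 1/2 and 1/6.
import random

random.seed(1)
flips = [random.choice(["H", "T"]) for _ in range(100_000)]
rolls = [random.randint(1, 6) for _ in range(100_000)]

print("relative frequency of heads   :", flips.count("H") / len(flips))  # ~ 0.5
print("relative frequency of a three :", rolls.count(3) / len(rolls))    # ~ 0.167
```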
In Chapter 5 we learned about a number of discrete distributions. In this chapter we focus on continuous distributions, which are useful as models of various real-world processes and phenomena. By the end of this chapter you will know nine continuous and eight discrete distributions. There are many more continuous distributions, but these nine will suffice for our purposes.
Sampling joke: “If you don’t believe in random sampling, the next time you have a blood test, tell the doctor to take it all.” At the beginning of Chapter 7 we introduced the ideas of population vs. sample and parameter vs. statistic. We build on this in the current chapter. The key concept in this chapter is that if we were to take different samples from a distribution and compute some statistic, such as the sample mean, then we would get different results.
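A brief sketch of our own, with made-up parameter values, illustrates the key concept: repeated samples from the same distribution yield different sample means.

```python
# A small sketch with made-up parameter values: different samples from the
# same distribution give different sample means.
import numpy as np

rng = np.random.default_rng(0)
sample_means = [rng.normal(loc=10, scale=2, size=25).mean() for _ in range(5)]
print(sample_means)   # five different values, all scattered around the true mean of 10
```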
The last two chapters have covered the basic concepts of estimation. In Chapter 9 we studied the problem of giving a single number to estimate a parameter. In Chapter 10 we looked at ways to give an interval that we believe will include the true parameter. In many applications, we want to ask some very specific questions about the parameter(s).
We begin this chapter with a review of hypothesis testing from Chapter 12. A hypothesis is a statement about one or more parameters of a model. The null hypothesis is usually a specific statement that encapsulates “no effect.” For example, if we apply one of two treatments, A or B, to volunteers, we may be interested in testing whether the population mean outcomes are equal.
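As a hedged illustration of the treatment example (the outcome data below are simulated, not from the text), a two-sample t-test of the null hypothesis that the population mean outcomes under A and B are equal might look like this:

```python
# A hedged illustration of the treatment example above; the outcome data are
# simulated here, not taken from the text.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
outcome_a = rng.normal(loc=5.0, scale=1.0, size=30)   # treatment A outcomes (made up)
outcome_b = rng.normal(loc=5.4, scale=1.0, size=30)   # treatment B outcomes (made up)

# Null hypothesis ("no effect"): the two population means are equal.
t_stat, p_value = stats.ttest_ind(outcome_a, outcome_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```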
Up to this point we have been talking about what are often called frequentist methods, because these statistical methods are evaluated in terms of their long-run relative frequency properties. With this approach, the probability of an event is defined as the proportion of times the event occurs in the long run. Parameters, that is, values that characterize a distribution, such as the mean and variance of a normal distribution, are considered fixed but unknown.
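A small simulation of our own illustrates the frequentist definition just stated: the probability of “heads” is taken as the long-run relative frequency of heads over repeated flips.

```python
# A small simulation (ours) of the frequentist definition above: the probability
# of heads as the long-run relative frequency of heads over repeated flips.
import numpy as np

rng = np.random.default_rng(7)
flips = rng.integers(0, 2, size=1_000_000)                 # 1 = heads, 0 = tails
running_freq = np.cumsum(flips) / np.arange(1, flips.size + 1)
print(running_freq[[9, 99, 9_999, 999_999]])               # drifts toward 0.5
```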
Generalized additive models (GAMs) are a leading model class for interpretable machine learning. GAMs were originally defined with smooth shape functions of the predictor variables and trained using smoothing splines. Recently, tree-based GAMs, in which the shape functions are gradient-boosted ensembles of bagged trees, were proposed, leaving the door open to the estimation of a broader class of shape functions (e.g. the Explainable Boosting Machine (EBM)). In this paper, we introduce a competing three-step GAM learning approach that combines (i) the knowledge of how to split the covariate space brought by an additive tree model (ATM), (ii) an ensemble of predictive linear scores derived from generalized linear models (GLMs) using a binning strategy based on the ATM, and (iii) a final GLM that yields a prediction model ensuring auto-calibration. Numerical experiments illustrate the competitive performance of our approach on several datasets compared to GAMs with splines, EBM, and GLMs with binarsity penalization. A case study in trade credit insurance is also provided.
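A rough, simplified sketch of the three-step idea, written under our own assumptions with scikit-learn stand-ins (ordinary least squares in place of a general GLM, one shallow boosted ensemble per covariate in place of the authors' ATM), not the authors' implementation:

```python
# A simplified sketch (our assumptions, not the authors' code) of the three-step
# approach: (i) learn split points per covariate from a tree model, (ii) bin each
# covariate at those splits and fit a (Gaussian) GLM on the binned design,
# (iii) refit a final GLM on the resulting score as a stand-in for auto-calibration.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=2000, n_features=3, noise=5.0, random_state=0)

def tree_splits(xj, y):
    """(i) Fit a shallow boosted ensemble on one covariate and collect its split thresholds."""
    gb = GradientBoostingRegressor(max_depth=2, n_estimators=50, random_state=0)
    gb.fit(xj.reshape(-1, 1), y)
    thresholds = np.concatenate(
        [t.tree_.threshold[t.tree_.feature >= 0] for t in gb.estimators_.ravel()]
    )
    return np.unique(thresholds)

def binned_design(xj, splits):
    """(ii) One-hot encode the covariate according to the learned split points."""
    return np.eye(len(splits) + 1)[np.searchsorted(splits, xj)]

design = np.hstack(
    [binned_design(X[:, j], tree_splits(X[:, j], y)) for j in range(X.shape[1])]
)
glm = LinearRegression().fit(design, y)            # additive model on the binned covariates

# (iii) Final one-dimensional GLM on the score, mimicking the auto-calibration step.
score = glm.predict(design).reshape(-1, 1)
final = LinearRegression().fit(score, y)
print("in-sample R^2:", round(final.score(score, y), 3))
```

In the paper itself the GLM family, the binning derived from the ATM, and the calibration step follow the authors' specification; the sketch above only mirrors the overall pipeline.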
We investigate bottom-up risk aggregation applied by insurance companies facing reserve risk from multiple lines of business. Since risk capital must be calculated over different time horizons and calendar years, depending on the regulatory or reporting regime (Solvency II vs IFRS 17), we study correlations of ultimate losses and correlations of one-year losses in future calendar years across lines of business. We consider a multivariate version of Hertig's lognormal model and derive analytical formulas for the ultimate correlation and the one-year correlations in future calendar years. Our main conclusion is that the correlation coefficients used in a bottom-up aggregation formula should depend on the time horizon and the future calendar year in which the risk emerges. We investigate, analytically and numerically, the properties of the ultimate and one-year correlations, the values they may take in practice, and the impact of misspecified correlations on the diversified risk capital.