The main focus of this chapter is the estimation of the distribution function and probability (density) function of duration and loss variables. The methods used depend on whether the data are for individual or grouped observations, and whether the observations are complete or incomplete.
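For complete individual data, the simplest estimator of the distribution function is the empirical distribution function, which assigns mass 1/n to each observation. The sketch below is illustrative (the function name is ours, not from the chapter), and it applies only to complete data; censored or grouped observations require the other estimators the chapter discusses.

```python
def ecdf(sample, x):
    """Empirical distribution function: the fraction of observations <= x.

    Valid only for complete individual data; incomplete (censored)
    observations require estimators such as Kaplan-Meier.
    """
    if not sample:
        raise ValueError("need at least one observation")
    return sum(1 for s in sample if s <= x) / len(sample)
```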
Some problems arising from loss modeling may be analytically intractable. Many of these problems, however, can be formulated in a stochastic framework, with a solution that can be estimated empirically. This approach is called Monte Carlo simulation. It involves drawing samples of observations randomly according to the distribution required, in a manner determined by the analytic problem.
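As a minimal sketch of the Monte Carlo idea, the function below estimates a tail probability of a loss variable by repeated random sampling. The lognormal loss distribution and all parameter values are illustrative assumptions, not taken from the chapter.

```python
import math
import random

def mc_tail_probability(threshold, n=100_000, mu=0.0, sigma=1.0, seed=42):
    """Monte Carlo estimate of P(X > threshold) for a lognormal loss X.

    Illustrative only: the lognormal choice and parameters are assumptions.
    Draws n samples and returns the fraction exceeding the threshold.
    """
    rng = random.Random(seed)
    exceed = 0
    for _ in range(n):
        x = math.exp(rng.gauss(mu, sigma))  # one simulated loss
        if x > threshold:
            exceed += 1
    return exceed / n
```

The estimate converges to the true probability as n grows, with sampling error shrinking at the usual rate of 1/sqrt(n).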
In this chapter, we discuss some applications of Monte Carlo methods to the analysis of actuarial and financial data. We first revisit the tests of model misspecification introduced in Chapter 13.
Some models assume that the failure-time or loss variables follow a certain family of distributions, specified up to a number of unknown parameters. To compute quantities such as average loss or VaR, the parameters of the distributions have to be estimated. This chapter discusses various methods of estimating the parameters of a failure-time or loss distribution.
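As a concrete instance of parameter estimation, maximum likelihood for the exponential distribution has a closed form: the rate estimate is the reciprocal of the sample mean. The exponential family is used here purely because its MLE is explicit; the chapter covers more general methods.

```python
def exponential_mle(losses):
    """Closed-form maximum likelihood estimate for an exponential rate.

    For i.i.d. exponential losses, lambda_hat = n / sum(x_i),
    i.e. the reciprocal of the sample mean (illustrative example only).
    """
    if not losses:
        raise ValueError("need at least one observation")
    return len(losses) / sum(losses)
```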
As insurance companies hold portfolios of insurance policies that may result in claims, it is good management practice to assess the company's exposure to such risks. A risk measure, which summarizes the overall risk exposure of the company, helps it evaluate whether there is sufficient capital to withstand adverse events.
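One widely used risk measure is Value-at-Risk, which for a loss sample can be estimated by an order statistic of the sorted losses. The sketch below uses the simple estimator that takes the ceil(n * level)-th smallest loss; other sample-quantile conventions exist.

```python
import math

def empirical_var(losses, level=0.95):
    """Empirical Value-at-Risk at the given confidence level.

    Illustrative order-statistic estimator: returns the
    ceil(n * level)-th smallest loss (other conventions differ slightly).
    """
    xs = sorted(losses)
    k = math.ceil(level * len(xs))  # 1-based index of the order statistic
    return xs[k - 1]
```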
Credibility models were first proposed at the beginning of the twentieth century to update predictions of insurance losses in light of recently available insurance claims data. The oldest approach is the limited-fluctuation credibility method, also called the classical approach, which updates the loss prediction as a weighted average of the prediction based purely on the recent data and the rate in the insurance manual. Full credibility is achieved if the amount of recent data is sufficient, in which case the updated prediction is based on the recent data only. If, however, the amount of recent data is insufficient, only partial credibility is attributed to the data, and the updated prediction depends on the manual rate as well.
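The weighted-average update can be sketched as follows. The square-root rule for the partial-credibility factor Z is one common choice in the limited-fluctuation approach; the full-credibility standard n_full and all inputs below are illustrative assumptions.

```python
import math

def credibility_premium(recent_mean, manual_rate, n, n_full):
    """Limited-fluctuation credibility update (illustrative sketch).

    Z = min(1, sqrt(n / n_full)) is the classical square-root rule for
    partial credibility; n_full is the assumed full-credibility standard.
    Returns Z * recent_mean + (1 - Z) * manual_rate.
    """
    z = min(1.0, math.sqrt(n / n_full))
    return z * recent_mean + (1.0 - z) * manual_rate
```

With n >= n_full, full credibility (Z = 1) is attained and the prediction is based on the recent data alone.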
Survival analysis studies the time-to-event for various subjects. In the biological and medical sciences, interest can focus on a patient's time to death due to various (competing) causes. In engineering reliability, one may study the time to component failure due to analogous factors or stimuli. Cure rate models are of particular interest because, with advancements in associated disciplines, subjects can be viewed as “cured,” meaning that they do not show any recurrence of a disease (in biomedical studies) or subsequent manufacturing error (in engineering) following a treatment. This chapter generalizes two classical cure rate models via the development of a COM–Poisson cure rate model. It first describes the COM–Poisson cure rate model framework and general notation, and then details the model framework under right and interval censoring, respectively. The chapter then describes the broader destructive COM–Poisson cure rate model, which allows the number of competing risks to diminish via damage or eradication. Finally, it details the various lifetime distributions considered in the literature to date for COM–Poisson-based cure rate modeling.
This chapter defines the COM–Poisson distribution in greater detail, discussing its attributes and the computing tools available for analysis. It first details how the COM–Poisson distribution was derived, describes the probability distribution, and introduces computing functions available in R for determining various probabilistic quantities of interest, including the normalizing constant, probability and cumulative distribution functions, random number generation, mean, and variance. The chapter then outlines the distributional and statistical properties associated with this model, and discusses parameter estimation and statistical inference for the COM–Poisson model. Various processes for generating random data are then discussed, along with the associated R computing tools. The discussion continues with reparametrizations of the density function that serve as alternative forms for statistical analyses and model development, considers the COM–Poisson as a weighted Poisson distribution, and describes the various ways to approximate the COM–Poisson normalizing function.
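The COM–Poisson pmf is P(Y = y) = λ^y / (y!)^ν / Z(λ, ν), where the normalizing constant Z(λ, ν) = Σ_{j≥0} λ^j / (j!)^ν has no closed form in general. The sketch below approximates Z by truncating the series (the truncation point is our assumption; the chapter's R tools handle this more carefully). Terms are built up by ratios to avoid overflowing the factorial.

```python
def com_poisson_pmf(y, lam, nu, terms=200):
    """COM-Poisson pmf P(Y = y) = lam^y / (y!)^nu / Z(lam, nu).

    Z is approximated by truncating its infinite series at `terms`
    (an illustrative choice; adequate for moderate lam and nu >= 1).
    Successive terms are computed by the ratio lam / (j+1)^nu, which
    avoids evaluating large factorials directly.
    """
    z = 0.0
    term = 1.0  # j = 0 term: lam^0 / (0!)^nu = 1
    for j in range(terms):
        z += term
        term *= lam / (j + 1) ** nu
    num = 1.0  # numerator lam^y / (y!)^nu, built the same way
    for j in range(y):
        num *= lam / (j + 1) ** nu
    return num / z
```

Setting ν = 1 recovers the ordinary Poisson pmf, while ν > 1 (underdispersion) and ν < 1 (overdispersion) tilt the distribution accordingly.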