Now in its second edition, this accessible text presents a unified Bayesian treatment of state-of-the-art filtering, smoothing, and parameter estimation algorithms for non-linear state space models. The book focuses on discrete-time state space models and carefully introduces fundamental aspects related to optimal filtering and smoothing. In particular, it covers a range of efficient non-linear Gaussian filtering and smoothing algorithms, as well as Monte Carlo-based algorithms. This updated edition features new chapters on constructing state space models of practical systems, the discretization of continuous-time state space models, Gaussian filtering by enabling approximations, posterior linearization filtering, and the corresponding smoothers. Coverage of key topics is expanded, including extended Kalman filtering and smoothing, and parameter estimation. The book's practical, algorithmic approach assumes only modest mathematical prerequisites, making it suitable for graduate and advanced undergraduate students. Many examples are included, with Matlab and Python code available online, enabling readers to implement the algorithms in their own projects.
Given the assumption that a loss random variable has a certain parametric distribution, the empirical analysis of the properties of the loss requires the parameters to be estimated. In this chapter, we review the theory of parametric estimation, including the properties of an estimator and the concepts of point estimation, interval estimation, unbiasedness, consistency and efficiency. Apart from the parametric approach, we may also estimate the distribution functions and the probability (density) functions of the loss random variables directly without assuming a certain parametric form.
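As a rough illustration of the two routes (not drawn from the chapter itself), the sketch below fits a parametric gamma model to hypothetical loss data by maximum likelihood and compares it with the empirical distribution function; the gamma family, the sample, and the evaluation point 1000 are all assumptions made purely for the example.

```python
# A minimal sketch contrasting parametric and nonparametric estimation of a
# loss distribution; the data and the gamma model are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
losses = rng.gamma(shape=2.0, scale=500.0, size=200)   # hypothetical loss data

# Parametric approach: fit a gamma distribution by maximum likelihood.
shape, loc, scale = stats.gamma.fit(losses, floc=0.0)

# Nonparametric approach: empirical distribution function at a point x.
x = 1000.0
edf_at_x = np.mean(losses <= x)

print(f"MLE: shape={shape:.3f}, scale={scale:.3f}")
print(f"Parametric F(1000) = {stats.gamma.cdf(x, shape, loc=0.0, scale=scale):.3f}")
print(f"Empirical  F(1000) = {edf_at_x:.3f}")
```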
Ratemaking refers to the determination of the premium rates to cover the potential loss payments incurred under an insurance policy. In addition to the losses, the premium should also cover all the expenses as well as the profit margin. As past losses are used to project future losses, care must be taken to adjust for potential increases in the loss costs. There are two methods to determine the premium rates: the loss cost method and the loss ratio method.
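A common textbook form of the loss cost (pure premium) method expresses the gross premium rate in terms of the pure premium, fixed expenses, variable expense ratio and profit loading; the notation and the illustrative numbers below are ours, not necessarily the chapter's:

\[
P \;=\; \frac{\bar{L} + \bar{E}_F}{1 - V - Q},
\]

where $\bar{L}$ is the pure premium (expected loss cost per exposure), $\bar{E}_F$ the fixed expense per exposure, $V$ the variable expense ratio and $Q$ the profit loading. For example, $\bar{L} = 300$, $\bar{E}_F = 30$, $V = 0.25$ and $Q = 0.05$ give $P = 330/0.70 \approx 471.43$.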
Having discussed models for claim frequency and claim severity separately, we now turn our attention to modeling the aggregate loss of a block of insurance policies. Much of the time we shall use the terms aggregate loss and aggregate claim interchangeably, although we recognize the difference between them as discussed in the last chapter. There are two major approaches in modeling aggregate loss: the individual risk model and the collective risk model.
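For instance, under the collective risk model the aggregate loss is the random sum $S = X_1 + \cdots + X_N$. A minimal Monte Carlo sketch with an assumed Poisson claim count and gamma claim severities (all parameters made up for illustration) is:

```python
# A minimal Monte Carlo sketch of the collective risk model S = X_1 + ... + X_N,
# with a hypothetical Poisson claim count and gamma claim severities.
import numpy as np

rng = np.random.default_rng(1)
n_sim = 100_000
lam, shape, scale = 5.0, 2.0, 1000.0      # assumed frequency/severity parameters

counts = rng.poisson(lam, size=n_sim)
aggregate = np.array([rng.gamma(shape, scale, size=n).sum() for n in counts])

print("mean of S:", aggregate.mean())     # close to lam * shape * scale = 10,000
print("std  of S:", aggregate.std())
print("Pr(S > 20,000):", np.mean(aggregate > 20_000))
```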
After a model has been estimated, we have to evaluate it to ascertain that the assumptions applied are acceptable and supported by the data. This should be done prior to using the model for prediction and pricing. Model evaluation can be done using graphical methods, as well as formal misspecification tests and diagnostic checks.
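As one illustration of both kinds of check (not the chapter's own example), the sketch below draws a Q-Q plot of hypothetical loss data against a fitted exponential model and applies a Kolmogorov-Smirnov test; the data and the fitted family are assumptions for the example, and with estimated parameters the nominal KS p-value is only approximate.

```python
# A minimal sketch of a graphical check and a formal misspecification test
# for a fitted severity model; the data and the exponential fit are illustrative.
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
losses = rng.lognormal(mean=7.0, sigma=1.0, size=300)   # hypothetical data

# Fit an exponential model (deliberately misspecified here).
loc, scale = stats.expon.fit(losses, floc=0.0)

# Graphical check: Q-Q plot of the data against an exponential reference.
stats.probplot(losses, dist="expon", plot=plt)
plt.savefig("qq_plot.png")

# Formal check: Kolmogorov-Smirnov test against the fitted distribution
# (p-value is approximate because the parameters were estimated).
ks_stat, p_value = stats.kstest(losses, "expon", args=(0.0, scale))
print(f"KS statistic = {ks_stat:.3f}, p-value = {p_value:.4f}")
```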
Claim severity refers to the monetary loss of an insurance claim. Unlike claim frequency, which is a nonnegative integer-valued random variable, claim severity is usually modeled as a nonnegative continuous random variable. Depending on the definition of loss, however, it may also be modeled as a mixed distribution, i.e., a random variable with probability masses at some points and a continuous distribution elsewhere.
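A simple source of such a mixed distribution is a policy with a deductible and a maximum covered amount: the insurer's payment then has probability masses at zero and at the maximum payment, and is continuous in between. The sketch below simulates this with an assumed exponential ground-up loss; all figures are illustrative only.

```python
# A minimal sketch of how a deductible and a policy limit turn a continuous
# ground-up loss into a mixed random variable (the insurer's payment).
import numpy as np

rng = np.random.default_rng(3)
ground_up = rng.exponential(scale=2000.0, size=100_000)   # hypothetical losses

deductible, limit = 500.0, 5000.0
payment = np.clip(ground_up - deductible, 0.0, limit - deductible)

print("Pr(payment = 0):", np.mean(payment == 0.0))                   # mass at 0
print("Pr(payment = max):", np.mean(payment == limit - deductible))  # mass at cap
print("mean payment:", payment.mean())
```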
In this chapter we consider the Bayesian approach to updating the prediction for future losses. We derive the posterior distribution of the risk parameters from their prior distribution and the likelihood function of the data. The Bayesian estimate of the risk parameter under the squared-error loss function is the mean of the posterior distribution. Likewise, the Bayesian estimate of the mean of the random loss is the posterior mean of the loss conditional on the data.
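A standard conjugate example, written in our notation (not necessarily the chapter's): if the claim counts are Poisson with mean $\lambda$ and the prior of $\lambda$ is gamma with shape $\alpha$ and rate $\beta$, then

\[
X_i \mid \lambda \;\sim\; \text{Poisson}(\lambda), \qquad
\lambda \;\sim\; \text{Gamma}(\alpha, \beta)
\;\Longrightarrow\;
\lambda \mid x_1,\dots,x_n \;\sim\; \text{Gamma}\!\Big(\alpha + \textstyle\sum_{i=1}^n x_i,\; \beta + n\Big),
\]

so the Bayesian estimate under squared-error loss is the posterior mean

\[
E[\lambda \mid \text{data}] \;=\; \frac{\alpha + \sum_i x_i}{\beta + n}
\;=\; Z\,\bar{x} + (1 - Z)\,\frac{\alpha}{\beta}, \qquad Z = \frac{n}{n + \beta},
\]

a weighted average of the sample mean and the prior mean.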
This book is about modeling the claim losses of insurance policies. Our main interest is nonlife insurance policies covering a fixed period of time, such as vehicle insurance, workers compensation insurance and health insurance. An important measure of claim losses is the claim frequency, which is the number of claims in a block of insurance policies over a period of time. Though claim frequency does not directly show the monetary losses of insurance claims, it is an important variable in modeling the losses.
We consider models for analyzing the surplus of an insurance portfolio. Suppose an insurance business begins with a start-up capital, called the initial surplus. The insurance company receives premium payments and pays claim losses. Premium payments are assumed to come in at a constant rate. When there are claims, losses are paid out to policyholders. Unlike the constant premium payments, losses are random and uncertain, in both timing and amount. The net surplus through time is the excess of the initial capital and aggregate premiums received over the losses paid out. The insurance business is in ruin if the surplus falls to or below zero. The main purpose of this chapter is to consider the probability of ruin as a function of time, the initial surplus and the claim distribution. Ultimate ruin refers to the situation in which ruin occurs at some finite time, irrespective of the time of occurrence.
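In commonly used notation (which may differ from the chapter's), the surplus process and the ruin probabilities can be written as

\[
U(t) \;=\; u + ct - S(t), \qquad S(t) \;=\; \sum_{i=1}^{N(t)} X_i,
\]

where $u$ is the initial surplus, $c$ the constant premium rate, $N(t)$ the number of claims up to time $t$ and $X_i$ the claim amounts. Writing $T = \inf\{t \ge 0 : U(t) \le 0\}$ for the time of ruin, the finite-time and ultimate ruin probabilities are

\[
\psi(u; t) \;=\; \Pr(T \le t), \qquad \psi(u) \;=\; \Pr(T < \infty).
\]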
The final settlement of all losses incurred under short-term insurance policies often takes multiple years to complete. This is especially true for liability insurance policies, which may drag on for a long time due to legal proceedings. To set the premiums of insurance policies appropriately, so that they are competitive and yet sufficient to cover the losses and expenses with a reasonable profit margin, the importance of accurately projecting losses cannot be overemphasized. Loss reserving refers to the techniques used to project future payments of insurance losses based on policies written in the past.
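One widely used reserving technique is the chain-ladder method; whether or not it is the method emphasized in this chapter, the sketch below, with a made-up run-off triangle, illustrates the general idea of projecting future payments from past development patterns.

```python
# A minimal sketch of the chain-ladder method; the cumulative run-off
# triangle (accident year by development year) is made up for illustration.
import numpy as np

tri = np.array([
    [1000.0, 1800.0, 2100.0, 2200.0],
    [1100.0, 1900.0, 2300.0, np.nan],
    [1200.0, 2100.0, np.nan, np.nan],
    [1300.0, np.nan, np.nan, np.nan],
])            # np.nan marks future cells that are not yet observed

n = tri.shape[1]
factors = []
for j in range(n - 1):
    obs = ~np.isnan(tri[:, j + 1])
    factors.append(tri[obs, j + 1].sum() / tri[obs, j].sum())   # age-to-age factor

# Project the unobserved cells forward with the age-to-age factors.
proj = tri.copy()
for j in range(n - 1):
    missing = np.isnan(proj[:, j + 1])
    proj[missing, j + 1] = proj[missing, j] * factors[j]

latest = sum(row[~np.isnan(row)][-1] for row in tri)   # latest observed diagonal
reserve = proj[:, -1].sum() - latest
print("age-to-age factors:", np.round(factors, 3))
print("estimated outstanding reserve:", round(reserve, 1))
```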
We have discussed the limited-fluctuation credibility method, the Bühlmann and Bühlmann–Straub credibility methods, as well as the Bayesian method for future loss prediction. The implementation of these methods requires knowledge of, or assumptions about, some unknown parameters of the model. For the limited-fluctuation credibility method, a Poisson distribution is usually assumed for the claim frequency.
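For the Bühlmann method, the unknown structural parameters can be estimated nonparametrically by the method of moments; the sketch below, with made-up data for balanced risk groups and our own notation, shows one such calculation.

```python
# A minimal sketch of nonparametric (method-of-moments) estimation of the
# Buhlmann credibility parameters; the data (risk groups x periods) are made up.
import numpy as np

X = np.array([
    [8.0, 10.0, 9.0, 11.0],
    [15.0, 13.0, 16.0, 14.0],
    [5.0, 7.0, 6.0, 6.0],
])
r, n = X.shape

group_means = X.mean(axis=1)
grand_mean = X.mean()

# Expected process variance and variance of hypothetical means.
epv = ((X - group_means[:, None]) ** 2).sum() / (r * (n - 1))
vhm = ((group_means - grand_mean) ** 2).sum() / (r - 1) - epv / n

k = epv / vhm
Z = n / (n + k)                      # Buhlmann credibility factor

premiums = Z * group_means + (1 - Z) * grand_mean
print(f"EPV={epv:.3f}, VHM={vhm:.3f}, k={k:.3f}, Z={Z:.3f}")
print("credibility premiums:", np.round(premiums, 2))
```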
While the classical credibility theory addresses the important problem of combining claim experience and prior information to update the prediction for loss, it does not provide a very satisfactory solution. The method is based on an arbitrary selection of the coverage probability and the accuracy parameter. Furthermore, for tractability, some restrictive assumptions about the loss distribution have to be imposed.
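In one commonly used formulation (notation ours), the limited-fluctuation method requires the claim experience to lie within a fraction $k$ of its mean with coverage probability at least $1 - \alpha$; for a Poisson claim frequency $N$ with mean $\lambda$, a normal approximation gives the full-credibility standard

\[
\Pr\big(|N - \lambda| \le k\,\lambda\big) \;\ge\; 1 - \alpha
\quad\Longrightarrow\quad
\lambda \;\ge\; \lambda_F = \left(\frac{z_{1-\alpha/2}}{k}\right)^{\!2},
\]

so that, for example, $1 - \alpha = 0.90$ ($z_{0.95} = 1.645$) and $k = 0.05$ give $\lambda_F = (1.645/0.05)^2 \approx 1082.41$ expected claims. The coverage probability and the accuracy parameter $k$ are exactly the quantities whose selection is arbitrary.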