The superposition of data sets with internal parametric self-similarity is a longstanding and widespread technique for the analysis of many types of experimental data across the physical sciences. Typically, this superposition is performed manually, or recently through the application of one of a few automated algorithms. However, these methods are often heuristic in nature, are prone to user bias via manual data shifting or parameterization, and lack a native framework for handling uncertainty in both the data and the resulting model of the superposed data. In this work, we develop a data-driven, nonparametric method for superposing experimental data with arbitrary coordinate transformations, which employs Gaussian process regression to learn statistical models that describe the data, and then uses maximum a posteriori estimation to optimally superpose the data sets. This statistical framework is robust to experimental noise and automatically produces uncertainty estimates for the learned coordinate transformations. Moreover, it is distinguished from black-box machine learning in its interpretability—specifically, it produces a model that may itself be interrogated to gain insight into the system under study. We demonstrate these salient features of our method through its application to four representative data sets characterizing the mechanics of soft materials. In every case, our method replicates results obtained using other approaches, but with reduced bias and the addition of uncertainty estimates. This method enables a standardized, statistical treatment of self-similar data across many fields, producing interpretable data-driven models that may inform applications such as materials classification, design, and discovery.
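The idea of superposing self-similar data sets can be illustrated with a deliberately simplified sketch: here a smooth interpolant stands in for the Gaussian process posterior, and a grid search stands in for maximum a posteriori estimation. All data are synthetic, and this is not the authors' implementation.

```python
import numpy as np

# Two synthetic data sets that differ by a horizontal shift; we recover
# the shift by minimizing squared residuals against an interpolant of
# the reference set. A full implementation would replace np.interp with
# a Gaussian process posterior and the grid search with MAP estimation,
# which also yields uncertainty estimates for the shift.
rng = np.random.default_rng(0)
true_shift = 1.5

x_ref = np.linspace(0.0, 5.0, 50)
y_ref = np.tanh(x_ref) + 0.01 * rng.standard_normal(50)

x_new = np.linspace(0.0, 5.0, 50)
y_new = np.tanh(x_new + true_shift) + 0.01 * rng.standard_normal(50)

def misfit(shift):
    # Map the new set onto the reference axis and compare overlapping points.
    x_shifted = x_new + shift
    mask = (x_shifted >= x_ref.min()) & (x_shifted <= x_ref.max())
    y_pred = np.interp(x_shifted[mask], x_ref, y_ref)
    return np.mean((y_pred - y_new[mask]) ** 2)

shifts = np.linspace(0.0, 3.0, 301)
best_shift = shifts[np.argmin([misfit(s) for s in shifts])]
```

The recovered shift should land near the true value of 1.5 despite the added noise.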
Claim severity refers to the monetary loss of an insurance claim. Unlike claim frequency, which is a nonnegative integer-valued random variable, claim severity is usually modeled as a nonnegative continuous random variable. Depending on the definition of loss, however, it may also be modeled as a mixed distribution, i.e., a random variable with probability masses at some points and a continuous distribution elsewhere.
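A mixed severity distribution of the kind described can be sketched as follows; the point-mass probability and mean loss are invented for illustration.

```python
import numpy as np

# Mixed severity distribution: a point mass at zero (e.g., claims
# settled at no cost) with probability p0, and an exponential loss
# otherwise. All parameter values are hypothetical.
rng = np.random.default_rng(42)
p0, mean_loss = 0.3, 1000.0

n = 100_000
is_zero = rng.random(n) < p0
losses = np.where(is_zero, 0.0, rng.exponential(mean_loss, n))

expected = (1 - p0) * mean_loss      # theoretical mean: 0.7 * 1000 = 700
sample_mean = losses.mean()
```

The simulated mean should agree closely with the theoretical value of 700.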
People gathered in large numbers on cruise ships and warships are at high risk of COVID-19 infection. To assess the transmissibility of SARS-CoV-2 on warships and cruise ships and to quantify the effectiveness of containment measures, the transmission coefficient (β), basic reproductive number (R0), and time to deploy containment measures were estimated using a Bayesian Susceptible-Exposed-Infected-Recovered (SEIR) model. A meta-analysis was conducted to predict vaccine protection with or without non-pharmaceutical interventions (NPIs). The analysis showed that implementing NPIs during voyages could reduce the transmission coefficient of SARS-CoV-2 by 50%. Two weeks into the voyage of a cruise that begins with 1 infected passenger out of a total of 3,711 passengers, we estimate there would be 45 (95% CI: 25-71), 33 (95% CI: 20-52), 18 (95% CI: 11-26), 9 (95% CI: 6-12), 4 (95% CI: 3-5), and 2 (95% CI: 2-2) final cases under 0%, 10%, 30%, 50%, 70%, and 90% vaccine protection, respectively, without NPIs. Timely implementation of strict NPIs, together with strict quarantine and isolation measures, is imperative to contain COVID-19 outbreaks on cruise ships. The spread of COVID-19 on ships was predicted to be limited in scenarios corresponding to at least 70% protection from prior vaccination across all passengers and crew.
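The qualitative effect of halving the transmission coefficient can be seen in a minimal deterministic SEIR sketch. This is not the paper's Bayesian model, and the rate parameters here are illustrative assumptions, not the paper's estimates.

```python
# Minimal deterministic SEIR sketch: compare cumulative infections on a
# 3,711-person ship after a 14-day voyage, with and without a 50%
# reduction in the transmission coefficient beta from NPIs.
# sigma = 1/incubation period, gamma = 1/infectious period (assumed).
def seir_final_cases(beta, n=3711, sigma=1 / 4, gamma=1 / 7, days=14):
    s, e, i, r = n - 1.0, 0.0, 1.0, 0.0
    dt = 0.1                                 # Euler step of 0.1 day
    for _ in range(int(days / dt)):
        ds = -beta * s * i / n
        de = beta * s * i / n - sigma * e
        di = sigma * e - gamma * i
        dr = gamma * i
        s, e, i, r = s + ds * dt, e + de * dt, i + di * dt, r + dr * dt
    return n - s                             # cumulative infections

cases_no_npi = seir_final_cases(beta=0.8)    # illustrative beta
cases_npi = seir_final_cases(beta=0.4)       # 50% reduction under NPIs
```

As expected, halving β substantially reduces the cumulative case count over the two-week voyage.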
In this chapter we consider the Bayesian approach in updating the prediction for future losses. We consider the derivation of the posterior distribution of the risk parameters based on the prior distribution of the risk parameters and the likelihood function of the data. The Bayesian estimate of the risk parameter under the squared-error loss function is the mean of the posterior distribution. Likewise, the Bayesian estimate of the mean of the random loss is the posterior mean of the loss conditional on the data.
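The Bayesian updating described above is easiest to see in a conjugate example. The following sketch uses a Poisson likelihood with a gamma prior (rate parameterization); all numbers are invented for illustration.

```python
# Conjugate Poisson-gamma updating: claim counts X_i ~ Poisson(lambda)
# with a gamma(alpha, beta) prior on lambda. The posterior is
# gamma(alpha + sum(x), beta + n), and the Bayesian estimate of lambda
# under the squared-error loss function is the posterior mean.
alpha, beta = 2.0, 1.0          # prior: mean alpha/beta = 2 claims/period
claims = [3, 1, 4, 2, 2]        # hypothetical observed claim counts

n, total = len(claims), sum(claims)
post_alpha, post_beta = alpha + total, beta + n
posterior_mean = post_alpha / post_beta     # (2 + 12) / (1 + 6 - 1) = 14/6
```

The posterior mean, 14/6 ≈ 2.33, lies between the prior mean (2) and the sample mean (2.4), as credibility-weighted estimates should.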
This book is about modeling the claim losses of insurance policies. Our main interest is nonlife insurance policies covering a fixed period of time, such as vehicle insurance, workers' compensation insurance, and health insurance. An important measure of claim losses is the claim frequency, which is the number of claims in a block of insurance policies over a period of time. Though claim frequency does not directly show the monetary losses of insurance claims, it is an important variable in modeling the losses.
We consider models for analyzing the surplus of an insurance portfolio. Suppose an insurance business begins with a start-up capital, called the initial surplus. The insurance company receives premium payments and pays claim losses. The premium payments are assumed to come in at a constant rate. When there are claims, losses are paid out to policyholders. Unlike the constant premium payments, losses are random and uncertain in both timing and amount. The net surplus through time is the excess of the initial capital and aggregate premiums received over the losses paid out. The insurance business is in ruin if the surplus falls to or below zero. The main purpose of this chapter is to consider the probability of ruin as a function of time, the initial surplus, and the claim distribution. Ultimate ruin refers to the situation in which ruin occurs at some finite time, irrespective of when it occurs.
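The surplus process just described can be simulated directly. The sketch below estimates a finite-horizon ruin probability by Monte Carlo with illustrative parameters (initial surplus, premium rate, Poisson claim arrivals, exponential claim sizes); it is not a formula from the chapter.

```python
import numpy as np

# Surplus at time t is u + c*t - (claims paid so far); ruin can only
# occur at claim epochs, since premiums accrue continuously.
rng = np.random.default_rng(7)

def ruined(u=10.0, c=1.2, lam=1.0, mean_claim=1.0, horizon=100.0):
    t, claims_paid = 0.0, 0.0
    while True:
        t += rng.exponential(1 / lam)        # next claim arrival
        if t > horizon:
            return False                     # survived the horizon
        claims_paid += rng.exponential(mean_claim)
        if u + c * t - claims_paid <= 0:     # surplus falls to or below 0
            return True

n_paths = 2000
ruin_prob = sum(ruined() for _ in range(n_paths)) / n_paths
```

With a 20% premium loading (c = 1.2 versus expected claims of 1.0 per unit time), the estimated ruin probability is well below one half but clearly positive.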
Short-term insurance policies often take multiple years before the final settlement of all losses incurred is completed. This is especially true for liability insurance policies, which may drag on for a long time due to legal proceedings. To set the premiums of insurance policies appropriately, so that they are competitive and yet sufficient to cover the losses and expenses with a reasonable profit margin, the importance of accurate projection of losses cannot be overemphasized. Loss reserving refers to techniques for projecting future payments of insurance losses based on policies written in the past.
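One common loss-reserving technique, the chain-ladder method, can be sketched on a small run-off triangle. The triangle values below are invented, and the chapter's specific methods may differ.

```python
import numpy as np

# Run-off triangle of cumulative paid losses: rows are accident years,
# columns are development years; NaN marks future (unobserved) payments.
tri = np.array([
    [100.0, 150.0, 165.0],
    [110.0, 160.0, np.nan],
    [120.0, np.nan, np.nan],
])

# Volume-weighted development factors from the observed columns.
f = []
for j in range(tri.shape[1] - 1):
    mask = ~np.isnan(tri[:, j + 1])
    f.append(tri[mask, j + 1].sum() / tri[mask, j].sum())

# Project each accident year to its ultimate loss.
proj = tri.copy()
for i in range(proj.shape[0]):
    for j in range(proj.shape[1] - 1):
        if np.isnan(proj[i, j + 1]):
            proj[i, j + 1] = proj[i, j] * f[j]

ultimates = proj[:, -1]
latest = np.array([tri[i, tri.shape[1] - 1 - i] for i in range(tri.shape[0])])
reserve = ultimates.sum() - latest.sum()     # required loss reserve
```

The reserve is the projected ultimate losses minus the losses already observed on the latest diagonal.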
We have discussed the limited-fluctuation credibility method, the Bühlmann and Bühlmann–Straub credibility methods, as well as the Bayesian method for future loss prediction. The implementation of these methods requires knowledge of, or assumptions about, some unknown parameters of the model. For the limited-fluctuation credibility method, a Poisson distribution is usually assumed for claim frequency.
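The role of those structural parameters is visible in a small Bühlmann credibility sketch; the claim data and structural parameters below are invented for illustration.

```python
# Buhlmann credibility premium: Z * sample mean + (1 - Z) * collective
# mean, where Z = n / (n + k) and k = (expected process variance) /
# (variance of the hypothetical means). In practice k must be known or
# estimated, which is the issue raised above.
claims = [2.0, 3.0, 1.0, 4.0]      # hypothetical annual losses
collective_mean = 2.0              # assumed overall (collective) mean
epv, vhm = 4.0, 1.0                # assumed structural parameters

n = len(claims)
k = epv / vhm                      # k = 4
Z = n / (n + k)                    # Z = 4 / 8 = 0.5
premium = Z * (sum(claims) / n) + (1 - Z) * collective_mean
```

Here the credibility factor Z = 0.5 weights the sample mean (2.5) and the collective mean (2.0) equally, giving a premium of 2.25.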
While the classical credibility theory addresses the important problem of combining claim experience and prior information to update the prediction for loss, it does not provide a very satisfactory solution. The method is based on arbitrary selection of the coverage probability and the accuracy parameter. Furthermore, for tractability some restrictive assumptions about the loss distribution have to be imposed.
The main focus of this chapter is the estimation of the distribution function and probability (density) function of duration and loss variables. The methods used depend on whether the data are for individual or grouped observations, and whether the observations are complete or incomplete.
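For complete individual data, the simplest such estimator is the empirical distribution function, sketched below with invented loss values; for incomplete (censored or truncated) observations a product-limit estimator such as Kaplan-Meier would be used instead.

```python
import numpy as np

# Empirical distribution function for complete individual loss data:
# F_n(x) is the proportion of observations less than or equal to x.
losses = np.array([3.0, 7.0, 7.0, 12.0, 20.0])

def ecdf(x):
    return np.mean(losses <= x)

F_7 = ecdf(7.0)      # 3 of 5 observations are <= 7
```

The step function jumps by 1/n at each distinct observation (2/n at the tied value 7).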
In this paper, we explore potential surplus modelling improvements by investigating how well the available models describe an insurance risk process. To this end, we obtain and analyse a real-life data set provided by an anonymous insurer. Based on our analysis, we discover that both the purchasing process and the corresponding claim process have seasonal fluctuations. Some special events, such as public holidays, also have an impact on these processes. In the existing literature, seasonality is often stressed in the claim process, while the cash inflow is usually assumed to take simple forms. We further suggest a possible way of modelling the dependence between these two processes. A preliminary analysis of the impact of these patterns on the surplus process is also conducted. As a result, we propose a surplus process model which utilises a non-homogeneous Poisson process for premium counts and a Cox process for claim counts, reflecting the specific features of the data.
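A non-homogeneous Poisson process with seasonal intensity, of the kind proposed for premium counts, can be simulated by thinning (the Lewis-Shedler method). The intensity function below is invented to mimic seasonal fluctuation and is not fitted to the paper's data.

```python
import numpy as np

# Thinning: simulate a homogeneous Poisson process at the maximum rate
# lam_max, then accept each candidate event at time t with probability
# intensity(t) / lam_max.
rng = np.random.default_rng(5)

def intensity(t):
    # Illustrative seasonal rate with a one-year period (in days).
    return 10.0 + 5.0 * np.sin(2 * np.pi * t / 365.0)

lam_max, horizon = 15.0, 365.0
t, events = 0.0, []
while True:
    t += rng.exponential(1 / lam_max)
    if t > horizon:
        break
    if rng.random() < intensity(t) / lam_max:
        events.append(t)

n_events = len(events)   # expected count = integral of intensity = 3650
```

Over a full period the sine term integrates to zero, so the expected event count is 10 × 365 = 3650.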
Some problems arising from loss modeling may be analytically intractable. Many of these problems, however, can be formulated in a stochastic framework, with a solution that can be estimated empirically. This approach is called Monte Carlo simulation. It involves drawing samples of observations randomly according to the distribution required, in a manner determined by the analytic problem.
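A minimal example in the spirit described above: the mean of an aggregate (compound Poisson) loss is estimated by drawing random samples, and checked against the closed-form value. Parameters are illustrative.

```python
import numpy as np

# Monte Carlo estimate of E[S] for a compound Poisson aggregate loss
# S = X_1 + ... + X_N, with N ~ Poisson(lam) and X_i ~ exponential.
# Theory: E[S] = lam * E[X] = 2 * 500 = 1000.
rng = np.random.default_rng(123)
lam, mean_sev = 2.0, 500.0
n_sims = 50_000

counts = rng.poisson(lam, n_sims)
agg = np.array([rng.exponential(mean_sev, k).sum() for k in counts])

mc_mean = agg.mean()
```

The same machinery extends to quantities with no closed form, such as tail probabilities of the aggregate loss, which is where simulation earns its keep.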
In this chapter, we discuss some applications of Monte Carlo methods to the analysis of actuarial and financial data. We first revisit the tests of model misspecification introduced in Chapter 13.
Some models assume that the failure-time or loss variables follow a certain family of distributions, specified up to a number of unknown parameters. To compute quantities such as average loss or VaR, the parameters of the distributions have to be estimated. This chapter discusses various methods of estimating the parameters of a failure-time or loss distribution.
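As a minimal illustration of parametric estimation, the sketch below applies maximum likelihood to simulated exponential losses, where the MLE of the mean is simply the sample mean; the true parameter is known, so the estimate can be checked.

```python
import numpy as np

# Maximum likelihood for an exponential loss distribution: maximizing
# the log-likelihood sum(-log(m) - x_i / m) over the mean m gives
# m_hat = sample mean. Data are simulated with an invented true mean.
rng = np.random.default_rng(11)
true_mean = 250.0
sample = rng.exponential(true_mean, 20_000)

mle_mean = sample.mean()     # MLE of the exponential mean
```

With 20,000 observations the standard error of the estimate is about 250/√20000 ≈ 1.8, so the MLE lands close to the true value.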