The evidence for the incubation period of Legionnaires’ disease is based on data from a small number of outbreaks. An incubation period of 2–10 days is commonly used for the definition and investigation of cases. In the German LeTriWa study, we collaborated with public health departments to identify evidence-based sources of exposure among cases of Legionnaires’ disease within 1–14 days before symptom onset. For each individual, we assigned weights to the numbered days of exposure before symptom onset, giving the highest weight to exposure days of cases with only one possible day of exposure. We then calculated an incubation period distribution where the median was 5 days and the mode was 6 days. The cumulative distribution reached 89% by the 10th day before symptom onset. One case-patient with immunosuppression had a single day of exposure to the likely infection source only 1 day before symptom onset. Overall, our results support the 2- to 10-day incubation period used in case definition, investigation, and surveillance of cases with Legionnaires’ disease.
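The weighting scheme can be illustrated with a small numerical sketch. The Python snippet below is a hypothetical reconstruction with made-up exposure windows, not the LeTriWa analysis code: each case contributes a total weight of 1, split evenly over its possible exposure days, so a case with a single possible exposure day puts full weight on that day.

```python
import numpy as np

# Hypothetical exposure windows (days before symptom onset) for five cases;
# the real LeTriWa data are not reproduced here.
exposure_days = [
    [6],                      # single possible exposure day: full weight on day 6
    [4, 5, 6],                # three possible days: weight 1/3 each
    [2, 3, 4, 5],             # four possible days: weight 1/4 each
    [7, 8, 9, 10, 11, 12],    # window extending beyond day 10
    [1],                      # e.g. an immunosuppressed case with a 1-day window
]

weights = np.zeros(15)        # index d = d days before symptom onset (1..14)
for days in exposure_days:
    for d in days:
        weights[d] += 1.0 / len(days)

dist = weights / weights.sum()    # weighted incubation period distribution
cdf = np.cumsum(dist)
print(f"P(incubation <= 10 days) = {cdf[10]:.2f}")
```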
The myopic strategy is one of the most important strategies when studying bandit problems. In 2018, Nouiehed and Ross put forward a conjecture about Feldman’s bandit problem (J. Appl. Prob. (2018) 55, 318–324). They proposed that for Bernoulli two-armed bandit problems, the myopic strategy stochastically maximizes the number of wins. In this paper we consider the two-armed bandit problem with more general distributions and utility functions. We confirm this conjecture by proving a stronger result: if the agent playing the bandit has a general utility function, the myopic strategy is still optimal if and only if this utility function satisfies reasonable conditions.
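For concreteness, the myopic (greedy) strategy for a Bernoulli two-armed bandit can be sketched as follows. This is an illustrative simulation with independent uniform priors on the two success probabilities; it is not the construction used in the proof.

```python
import random

def myopic_bandit(p1, p2, horizon=100, seed=0):
    """Pull the arm with the higher current posterior mean success probability.

    Each arm gets an independent Beta(1, 1) (uniform) prior; after every pull
    the corresponding Beta posterior is updated, and the next pull goes to the
    arm whose posterior mean is larger (the myopic choice).
    """
    rng = random.Random(seed)
    post = [[1, 1], [1, 1]]          # Beta parameters (successes+1, failures+1)
    true_p = [p1, p2]
    wins = 0
    for _ in range(horizon):
        means = [a / (a + b) for a, b in post]
        arm = 0 if means[0] >= means[1] else 1
        win = rng.random() < true_p[arm]
        wins += win
        post[arm][0 if win else 1] += 1
    return wins

print(myopic_bandit(0.6, 0.4))       # number of wins over the horizon
```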
We study competing first passage percolation on graphs generated by the configuration model with infinite-mean degrees. Initially, two uniformly chosen vertices are infected with a type 1 and type 2 infection, respectively, and the infection then spreads via nearest neighbors in the graph. The time it takes for the type 1 (resp. 2) infection to traverse an edge e is given by a random variable $X_1(e)$ (resp. $X_2(e)$) and, if the vertex at the other end of the edge is still uninfected, it then becomes type 1 (resp. 2) infected and immune to the other type. Assuming that the degrees follow a power-law distribution with exponent $\tau \in (1,2)$, we show that with high probability as the number of vertices tends to infinity, one of the infection types occupies all vertices except for the starting point of the other type. Moreover, both infections have a positive probability of winning regardless of the passage-time distribution. The result is also shown to hold for the erased configuration model, where self-loops are erased and multiple edges are merged, and when the degrees are conditioned to be smaller than $n^\alpha$ for some $\alpha > 0$.
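A toy simulation illustrates the competition mechanism: two first-passage infections spreading on a configuration-model graph with heavy-tailed degrees, where each vertex is claimed by whichever type reaches it first. The parameter choices (exponential passage times, 200 vertices) are illustrative and far from the asymptotic regime analysed in the paper.

```python
import heapq
import random

def competing_fpp(n=200, tau=1.5, seed=1):
    rng = random.Random(seed)
    # Heavy-tailed degrees with P(D >= k) roughly k^{-(tau-1)}, capped at n.
    degrees = [min(n, int((1 - rng.random()) ** (-1 / (tau - 1)))) for _ in range(n)]
    if sum(degrees) % 2:                 # the configuration model needs an even stub count
        degrees[0] += 1
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    rng.shuffle(stubs)
    adj = [[] for _ in range(n)]
    for a, b in zip(stubs[::2], stubs[1::2]):
        if a != b:                       # erase self-loops, as in the erased model
            adj[a].append(b)
            adj[b].append(a)

    s1, s2 = rng.sample(range(n), 2)     # uniformly chosen starting points
    typ = [0] * n
    typ[s1], typ[s2] = 1, 2
    # Dijkstra-style spread: entries are (infection time, vertex, infection type).
    pq = [(rng.expovariate(1.0), w, t) for s, t in ((s1, 1), (s2, 2)) for w in adj[s]]
    heapq.heapify(pq)
    while pq:
        t_inf, v, inf_type = heapq.heappop(pq)
        if typ[v]:
            continue                     # already infected, hence immune to the other type
        typ[v] = inf_type
        for w in adj[v]:
            if not typ[w]:
                heapq.heappush(pq, (t_inf + rng.expovariate(1.0), w, inf_type))
    return typ.count(1), typ.count(2)

print(competing_fpp())                   # final number of type-1 and type-2 vertices
```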
Given the assumption that a loss random variable has a certain parametric distribution, the empirical analysis of the properties of the loss requires the parameters to be estimated. In this chapter, we review the theory of parametric estimation, including the properties of an estimator and the concepts of point estimation, interval estimation, unbiasedness, consistency and efficiency. Apart from the parametric approach, we may also estimate the distribution functions and the probability (density) functions of the loss random variables directly without assuming a certain parametric form.
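As a concrete illustration (not taken from the chapter), the sketch below fits an exponential distribution to simulated loss data by maximum likelihood and contrasts the resulting point and interval estimates with a direct nonparametric estimate of the distribution function.

```python
import numpy as np

rng = np.random.default_rng(42)
losses = rng.exponential(scale=1000.0, size=500)   # simulated losses, true mean 1000

# Parametric approach: MLE of the exponential mean and a 95% interval estimate.
theta_hat = losses.mean()                          # MLE of the mean parameter
se = theta_hat / np.sqrt(len(losses))              # asymptotic standard error
ci = (theta_hat - 1.96 * se, theta_hat + 1.96 * se)
print(f"point estimate: {theta_hat:.1f}, 95% CI: ({ci[0]:.1f}, {ci[1]:.1f})")

# Nonparametric approach: the empirical distribution function, no parametric form assumed.
edf_1000 = (losses <= 1000).mean()
print(f"empirical distribution function at 1000: {edf_1000:.3f}")
```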
Ratemaking refers to the determination of the premium rates to cover the potential loss payments incurred under an insurance policy. In addition to the losses, the premium should also cover all the expenses as well as the profit margin. As past losses are used to project future losses, care must be taken to adjust for potential increases in the loss costs. There are two methods to determine the premium rates: the loss cost method and the loss ratio method.
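The two methods can be made concrete with standard ratemaking formulas; the figures below are invented for illustration. Under the loss cost (pure premium) method, the gross rate loads the projected pure premium for expenses and profit; under the loss ratio method, the current rate is adjusted by the ratio of the experience loss ratio to the target loss ratio.

```python
# Loss cost (pure premium) method: gross rate per exposure unit.
pure_premium = 300.0              # projected losses per exposure unit (illustrative)
fixed_expense = 25.0              # fixed expense per exposure unit
variable_expense_ratio = 0.20     # expenses that vary with premium
profit_loading = 0.05             # target profit margin
gross_rate = (pure_premium + fixed_expense) / (1 - variable_expense_ratio - profit_loading)
print(f"loss cost method gross rate: {gross_rate:.2f}")

# Loss ratio method: adjust the current rate by experience versus target loss ratio.
current_rate = 420.0
experience_loss_ratio = 0.80      # trended and developed losses / earned premium
target_loss_ratio = 0.75          # permissible loss ratio
indicated_rate = current_rate * experience_loss_ratio / target_loss_ratio
print(f"loss ratio method indicated rate: {indicated_rate:.2f}")
```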
Having discussed models for claim frequency and claim severity separately, we now turn our attention to modeling the aggregate loss of a block of insurance policies. Much of the time we shall use the terms aggregate loss and aggregate claim interchangeably, although we recognize the difference between them as discussed in the last chapter. There are two major approaches in modeling aggregate loss: the individual risk model and the collective risk model.
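A minimal simulation of the collective risk model, assuming for illustration Poisson claim counts and gamma claim amounts, shows how the aggregate-loss distribution is built from the frequency and severity components.

```python
import numpy as np

rng = np.random.default_rng(7)

def aggregate_losses(n_sims=50_000, lam=2.0, sev_shape=2.0, sev_scale=500.0):
    """Collective risk model: S = X_1 + ... + X_N with N ~ Poisson(lam) and
    i.i.d. gamma claim severities (illustrative distributional choices)."""
    counts = rng.poisson(lam, size=n_sims)
    return np.array([rng.gamma(sev_shape, sev_scale, size=k).sum() for k in counts])

S = aggregate_losses()
print(f"mean aggregate loss: {S.mean():.0f}")          # expect about 2 * 1000 = 2000
print(f"99th percentile:     {np.percentile(S, 99):.0f}")
```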
After a model has been estimated, we have to evaluate it to ascertain that the assumptions applied are acceptable and supported by the data. This should be done prior to using the model for prediction and pricing. Model evaluation can be done using graphical methods, as well as formal misspecification tests and diagnostic checks.
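As one example of such a diagnostic (not specific to the chapter's own examples), the Kolmogorov-Smirnov test compares the fitted and empirical distribution functions of the losses.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
losses = rng.lognormal(mean=7.0, sigma=1.0, size=400)   # simulated loss data

# Candidate 1: exponential model fitted by maximum likelihood.
theta_hat = losses.mean()
ks1, p1 = stats.kstest(losses, 'expon', args=(0, theta_hat))

# Candidate 2: lognormal model (the true model here).
shape, loc, scale = stats.lognorm.fit(losses, floc=0)
ks2, p2 = stats.kstest(losses, 'lognorm', args=(shape, loc, scale))

# Note: KS p-values are only approximate when parameters are estimated from the data.
print(f"exponential fit: stat={ks1:.3f}, p={p1:.4f}")
print(f"lognormal fit:   stat={ks2:.3f}, p={p2:.4f}")
```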
The superposition of data sets with internal parametric self-similarity is a longstanding and widespread technique for the analysis of many types of experimental data across the physical sciences. Typically, this superposition is performed manually, or recently through the application of one of a few automated algorithms. However, these methods are often heuristic in nature, are prone to user bias via manual data shifting or parameterization, and lack a native framework for handling uncertainty in both the data and the resulting model of the superposed data. In this work, we develop a data-driven, nonparametric method for superposing experimental data with arbitrary coordinate transformations, which employs Gaussian process regression to learn statistical models that describe the data, and then uses maximum a posteriori estimation to optimally superpose the data sets. This statistical framework is robust to experimental noise and automatically produces uncertainty estimates for the learned coordinate transformations. Moreover, it is distinguished from black-box machine learning in its interpretability—specifically, it produces a model that may itself be interrogated to gain insight into the system under study. We demonstrate these salient features of our method through its application to four representative data sets characterizing the mechanics of soft materials. In every case, our method replicates results obtained using other approaches, but with reduced bias and the addition of uncertainty estimates. This method enables a standardized, statistical treatment of self-similar data across many fields, producing interpretable data-driven models that may inform applications such as materials classification, design, and discovery.
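A simplified sketch of the core idea, using an assumed synthetic example rather than the authors' code or data: fit a Gaussian process to a reference data set, then choose the horizontal shift of a second data set that maximizes its likelihood under the GP posterior.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)

# Two synthetic data sets related by a horizontal shift of 1.5 (illustrative).
x1 = np.linspace(0, 5, 40)
y1 = np.sin(x1) + 0.05 * rng.standard_normal(40)
x2 = np.linspace(0, 5, 40)
y2 = np.sin(x2 + 1.5) + 0.05 * rng.standard_normal(40)

# Learn a statistical model of the reference data set with GP regression.
gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(0.01), normalize_y=True)
gp.fit(x1.reshape(-1, 1), y1)

def shift_score(delta):
    """Gaussian predictive log-likelihood of data set 2 shifted by delta."""
    mu, sd = gp.predict((x2 + delta).reshape(-1, 1), return_std=True)
    return -0.5 * np.sum(((y2 - mu) / sd) ** 2 + np.log(2 * np.pi * sd ** 2))

shifts = np.linspace(0, 3, 301)
best = shifts[np.argmax([shift_score(d) for d in shifts])]
print(f"estimated shift: {best:.2f} (true value 1.5)")
```

The grid search stands in for the maximum a posteriori optimization described above; the predictive standard deviation is what carries the uncertainty information into the superposition.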
Claim severity refers to the monetary loss of an insurance claim. Unlike claim frequency, which is a nonnegative integer-valued random variable, claim severity is usually modeled as a nonnegative continuous random variable. Depending on the definition of loss, however, it may also be modeled as a mixed distribution, i.e., a random variable consisting of probability masses at some points and continuous otherwise.
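A mixed severity distribution can be made concrete with a short sketch; the example below (an illustrative choice, not taken from the chapter) censors losses at a policy limit, producing a continuous density below the limit and a probability mass at the limit.

```python
import numpy as np

rng = np.random.default_rng(11)

ground_up = rng.lognormal(mean=7.0, sigma=1.2, size=100_000)   # underlying losses
policy_limit = 5000.0
paid = np.minimum(ground_up, policy_limit)      # censoring creates a point mass

mass_at_limit = (paid == policy_limit).mean()   # discrete probability mass
print(f"P(paid loss = limit) = {mass_at_limit:.3f}")
print(f"mean paid loss       = {paid.mean():.0f}")
```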
People gathered in large numbers on cruise ships and warships are often at high risk of COVID-19 infection. To assess the transmissibility of SARS-CoV-2 on warships and cruise ships and to quantify the effectiveness of the containment measures, the transmission coefficient (β), basic reproductive number (R0), and time to deploy containment measures were estimated by the Bayesian Susceptible-Exposed-Infected-Recovered model. A meta-analysis was conducted to predict vaccine protection with or without non-pharmaceutical interventions (NPIs). The analysis showed that implementing NPIs during voyages could reduce the transmission coefficients of SARS-CoV-2 by 50%. Two weeks into a cruise ship voyage that begins with 1 infected passenger out of a total of 3,711 passengers, we estimate there would be 45 (95% CI: 25–71), 33 (95% CI: 20–52), 18 (95% CI: 11–26), 9 (95% CI: 6–12), 4 (95% CI: 3–5), and 2 (95% CI: 2–2) final cases under 0%, 10%, 30%, 50%, 70%, and 90% vaccine protection, respectively, without NPIs. The timely deployment of strict NPIs, along with strict quarantine and isolation measures, is imperative to contain COVID-19 cases on cruise ships. The spread of COVID-19 on ships was predicted to be limited in scenarios corresponding to at least 70% protection from prior vaccination, across all passengers and crew.
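A deterministic SEIR sketch with illustrative parameter values (not the study's fitted Bayesian estimates) shows how halving the transmission coefficient changes the two-week outbreak size on a closed ship population of 3,711.

```python
def seir_cases(beta, n=3711, incubation=5.2, infectious=10.0, days=14, dt=0.05):
    """Deterministic SEIR model on a closed population starting from one infected
    person. All parameter values are illustrative assumptions."""
    sigma, gamma = 1.0 / incubation, 1.0 / infectious
    s, e, i, r = n - 1.0, 0.0, 1.0, 0.0
    for _ in range(int(days / dt)):
        se_flow = beta * s * i / n * dt      # new exposures
        ei_flow = sigma * e * dt             # exposed -> infectious
        ir_flow = gamma * i * dt             # infectious -> recovered
        s, e, i, r = s - se_flow, e + se_flow - ei_flow, i + ei_flow - ir_flow, r + ir_flow
    return n - s                             # cumulative infections after `days` days

beta = 0.8   # illustrative per-day transmission coefficient
print(f"without NPIs:       {seir_cases(beta):.0f} cumulative cases in 14 days")
print(f"with NPIs (beta/2): {seir_cases(beta / 2):.0f} cumulative cases in 14 days")
```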
In this chapter we consider the Bayesian approach in updating the prediction for future losses. We consider the derivation of the posterior distribution of the risk parameters based on the prior distribution of the risk parameters and the likelihood function of the data. The Bayesian estimate of the risk parameter under the squared-error loss function is the mean of the posterior distribution. Likewise, the Bayesian estimate of the mean of the random loss is the posterior mean of the loss conditional on the data.
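A standard illustration of these ideas (not necessarily the chapter's own example) is the Poisson-gamma model for claim frequency: the gamma prior is conjugate, the posterior is again gamma, and the Bayesian estimate under squared-error loss is the posterior mean.

```python
# Poisson claim counts with a conjugate gamma prior (shape alpha, rate beta) on the mean.
alpha, beta = 3.0, 2.0               # prior: mean claim rate 1.5 per policy-year
claims = [1, 0, 2, 1, 3]             # observed claim counts (illustrative data)

alpha_post = alpha + sum(claims)     # gamma posterior parameters
beta_post = beta + len(claims)

posterior_mean = alpha_post / beta_post   # Bayes estimate under squared-error loss
print(f"posterior mean of the claim rate: {posterior_mean:.3f}")
```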
This book is about modeling the claim losses of insurance policies. Our main interest is nonlife insurance policies covering a fixed period of time, such as vehicle insurance, workers' compensation insurance and health insurance. An important measure of claim losses is the claim frequency, which is the number of claims in a block of insurance policies over a period of time. Though claim frequency does not directly show the monetary losses of insurance claims, it is an important variable in modeling the losses.
We consider models for analyzing the surplus of an insurance portfolio. Suppose an insurance business begins with a start-up capital, called the initial surplus. The insurance company receives premium payments and pays claim losses. The premium payments are assumed to come in at a constant rate. When there are claims, losses are paid out to policyholders. Unlike the constant premium payments, losses are random and uncertain, in both timing and amount. The net surplus through time is the excess of the initial capital and aggregate premiums received over the losses paid out. The insurance business is in ruin if the surplus falls to or below zero. The main purpose of this chapter is to consider the probability of ruin as a function of time, the initial surplus and the claim distribution. Ultimate ruin refers to the situation in which ruin occurs at some finite time, irrespective of the time of occurrence.
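A Monte Carlo sketch of the surplus process, with illustrative Poisson claim arrivals and exponential claim amounts rather than the chapter's analytical treatment, estimates the probability of ruin within a finite horizon.

```python
import numpy as np

rng = np.random.default_rng(5)

def ruin_probability(u0=10.0, premium_rate=1.2, lam=1.0, mean_claim=1.0,
                     horizon=100.0, n_sims=5_000):
    """Estimate P(ruin before `horizon`) for U(t) = u0 + premium_rate * t - S(t),
    with Poisson claim arrivals and exponential amounts (illustrative choices)."""
    ruined = 0
    for _ in range(n_sims):
        t, total_claims = 0.0, 0.0
        while True:
            t += rng.exponential(1 / lam)            # waiting time to the next claim
            if t > horizon:
                break
            total_claims += rng.exponential(mean_claim)
            if u0 + premium_rate * t - total_claims <= 0:   # ruin at a claim instant
                ruined += 1
                break
    return ruined / n_sims

print(f"estimated finite-horizon ruin probability: {ruin_probability():.3f}")
```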
For short-term insurance policies, it often takes multiple years before all losses incurred are finally settled. This is especially true for liability insurance policies, which may drag on for a long time due to legal proceedings. To set the premiums of insurance policies appropriately, so that they are competitive and yet sufficient to cover the losses and expenses with a reasonable profit margin, the importance of accurately projecting losses cannot be overemphasized. Loss reserving refers to the techniques used to project future payments of insurance losses based on past policies.
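One widely used reserving technique, shown here purely as an illustration (the chapter may cover it or others), is the chain-ladder method, which projects future payments from the development pattern of a cumulative loss triangle.

```python
import numpy as np

# Cumulative paid losses by accident year (rows) and development year (columns);
# NaN marks development periods not yet observed (illustrative figures).
triangle = np.array([
    [1000., 1800., 2100., 2200.],
    [1100., 2000., 2300., np.nan],
    [1200., 2200., np.nan, np.nan],
    [1300., np.nan, np.nan, np.nan],
])

n = triangle.shape[1]
factors = []
for j in range(n - 1):                     # volume-weighted development factors
    mask = ~np.isnan(triangle[:, j + 1])
    factors.append(triangle[mask, j + 1].sum() / triangle[mask, j].sum())

completed = triangle.copy()                # fill the unobserved lower-right part
for j in range(n - 1):
    missing = np.isnan(completed[:, j + 1])
    completed[missing, j + 1] = completed[missing, j] * factors[j]

latest = np.array([triangle[i, n - 1 - i] for i in range(n)])   # latest diagonal
reserve = completed[:, -1].sum() - latest.sum()
print(f"development factors: {np.round(factors, 3)}")
print(f"estimated outstanding reserve: {reserve:.0f}")
```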
We have discussed the limited-fluctuation credibility method, the Bühlmann and Bühlmann–Straub credibility methods, as well as the Bayesian method for future loss prediction. The implementation of these methods requires the knowledge or assumptions of some unknown parameters of the model. For the limited-fluctuation credibility method, Poisson distribution is usually assumed for claim frequency.
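Under the Poisson frequency assumption, the limited-fluctuation full-credibility standard has a simple closed form; the sketch below uses the common choices of 90% coverage probability and 5% accuracy, which are illustrative rather than prescribed values.

```python
from scipy.stats import norm

def full_credibility_standard(coverage=0.90, accuracy=0.05):
    """Expected number of claims needed for full credibility when the claim
    frequency is Poisson: lambda_F = (z_{(1+coverage)/2} / accuracy)**2."""
    z = norm.ppf((1 + coverage) / 2)
    return (z / accuracy) ** 2

print(f"full-credibility standard: {full_credibility_standard():.1f} expected claims")
# With coverage 0.90 and accuracy 0.05 this gives roughly 1082 expected claims.
```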
While the classical credibility theory addresses the important problem of combining claim experience and prior information to update the prediction for loss, it does not provide a very satisfactory solution. The method is based on arbitrary selection of the coverage probability and the accuracy parameter. Furthermore, for tractability some restrictive assumptions about the loss distribution have to be imposed.