
Including parameter uncertainty in loss distributions using calibrating priors

Published online by Cambridge University Press:  09 January 2026

Stephen Jewson*
Affiliation:
Lambda Climate Research Ltd, London, UK
Trevor Sweeting
Affiliation:
UCL, London, UK
*
Corresponding author: Stephen Jewson; Email: stephen.jewson@gmail.com

Abstract

We revisit the question of how to include parameter uncertainty in univariate parametric models of losses and loss ratios. We first review the statistical theory for including parameter uncertainty based on right Haar priors (RHPs), which applies to many commonly used models. In this theory, the prior is chosen in such a way as to ensure matching between predicted probabilities and the relative frequencies of future outcomes in repeated tests. This property is known as reliability, or calibration. We then test priors for including parameter uncertainty in a number of models not covered by RHP theory. For these models, we find priors that generate predictions that are more reliable than predictions based on maximum likelihood, although they are not perfectly reliable. We discuss numerical schemes that can be used to generate Bayesian predictions, including a novel use of asymptotic expansions, and we include an example in which we show the impact of including parameter uncertainty in the modeling of extreme hurricane losses. The tail loss estimates show material increases due to the inclusion of parameter uncertainty. Finally, we describe a new software library that makes it straightforward to apply the methods we describe.
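For the models where the Bayesian prediction equation has a closed form (the exponential, Pareto, and log-normal, per the figure captions below), the right Haar prior predictive can be written down directly. As a minimal sketch of the idea, not the paper's library: for i.i.d. exponential data with rate $\lambda$ and the RHP $\pi(\lambda) \propto 1/\lambda$, the posterior is Gamma$(n, S)$ with $S = \sum x_i$, and integrating the exponential density against it gives the predictive survival function $(S/(S+y))^n$, which inverts to a closed-form quantile. The function name below is illustrative, not from the paper's software.

```python
import numpy as np

def rhp_exponential_quantile(x, p):
    """Predictive p-quantile for i.i.d. exponential data under the
    right Haar prior pi(lambda) ~ 1/lambda.  Integrating the
    exponential likelihood against the Gamma(n, S) posterior gives
    the predictive survival function (S/(S+y))^n, with n = len(x)
    and S = sum(x); inverting it yields the quantile below."""
    n = len(x)
    S = float(np.sum(x))
    return S * ((1.0 - p) ** (-1.0 / n) - 1.0)

# Sanity check against the maximum-likelihood plug-in quantile,
# -mean(x)*log(1-p): the Bayesian quantile is always larger,
# reflecting the extra parameter uncertainty in the tail.
x = np.array([0.5, 1.2, 0.8, 2.1, 0.3])
p = 0.99
q_ml = -np.mean(x) * np.log(1.0 - p)
q_rhp = rhp_exponential_quantile(x, p)
print(q_rhp > q_ml)  # True
```

The inequality holds for every dataset, since $e^{a} - 1 > a$ for $a > 0$; this is the mechanism behind the "material increases" in tail loss estimates the abstract refers to.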

Information

Type
Original Research Paper
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2026. Published by Cambridge University Press on behalf of The Institute and Faculty of Actuaries

Figure 1 Monte Carlo estimates of predictive coverage probabilities (PCPs) versus nominal probabilities for five commonly used distributions. The axes are linearly spaced in inverse exceedance probability. The Pareto is a one-parameter distribution with known scale parameter. The Fréchet is a two-parameter distribution with known location parameter. In each case, predictions are made using maximum likelihood (3 gray lines) and right Haar prior predictions (3 black lines). For exponential, Pareto, and log-normal distributions, the RHP predictions are made using closed-form evaluation of the Bayesian prediction equation. For Fréchet and Weibull distributions, the RHP predictions are made using the DMGS asymptotic approximation for predictive quantiles. The training data consists of 20 values in each case, repeated 10,000 times to estimate the PCPs. Each test is then repeated 3 times to assess convergence. The sets of 3 lines look like a single line in most cases, indicating adequate convergence.
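The Monte Carlo procedure behind these PCP estimates can be sketched for the exponential case, where both quantiles are available in closed form. This is a simplified illustration under assumed settings (unit rate, one nominal probability), not the paper's experimental code: repeatedly draw a training sample, form the maximum-likelihood plug-in quantile and the RHP predictive quantile, and count how often a fresh observation falls below each.

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo estimate of predictive coverage probabilities (PCPs)
# for the exponential model: the maximum-likelihood plug-in quantile
# versus the right-Haar-prior (RHP) predictive quantile,
# S*((1-p)**(-1/n) - 1) with S = sum of the training data.
n, reps = 20, 20000
p = 0.99  # nominal probability

ml_hits = rhp_hits = 0
for _ in range(reps):
    train = rng.exponential(1.0, size=n)
    S = train.sum()
    q_ml = -(S / n) * np.log(1.0 - p)             # plug-in: rate_hat = n/S
    q_rhp = S * ((1.0 - p) ** (-1.0 / n) - 1.0)   # RHP predictive quantile
    y = rng.exponential(1.0)                      # fresh test observation
    ml_hits += y <= q_ml
    rhp_hits += y <= q_rhp

print(ml_hits / reps, rhp_hits / reps)
```

For $n = 20$ and $p = 0.99$ the maximum-likelihood coverage can be computed exactly as $1 - (1 + c)^{-n}$ with $c = -\log(1-p)/n$, about 0.984, while the RHP coverage equals the nominal 0.99 for any true rate; this exact matching is the reliability property the figure illustrates.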


Figure 2 As Figure 1, but now for training data sample sizes of 50.


Figure 3 As Figure 1, but now for the gamma distribution, for predictions based on maximum likelihood (3 gray lines) and test priors $\pi _1$ (red) and $\pi _2$ (blue). For the test priors, predictions are evaluated using the DMGS approximation for predictive quantiles. Each column represents one sample size, for sample sizes of 20, 50, and 80 from left to right. Each row represents one value of the true gamma shape parameter $k$, with values from top to bottom of $k =0.1, 0.5, 1, 5, 10$.


Figure 4 As Figure 1, but now for the generalized Pareto distribution (GPD) with known location parameter, for predictions based on maximum likelihood (3 gray lines) and the prior given in the text (3 black lines). The Bayesian predictions are evaluated using the DMGS approximation for predictive quantiles. Each column represents one sample size, for sample sizes of 20, 50, and 80 from left to right. Each row represents one value of the true GPD shape parameter $\xi$, with values from top to bottom of $\xi =0, 0.2, 0.4, 0.6, 0.8$.


Figure 5 QQ plots showing the goodness of fit of maximum-likelihood fitted distributions to 51 hurricane losses. The Pareto has known scale parameter, the Fréchet has known location parameter, and the GPD has known location parameter. The values on the axes are losses in USD on a log scale.


Figure 6 Predictions of future hurricane losses, based on statistical models trained on 51 historical hurricane losses. The training data are shown as black crosses for those values that fall within the range shown in each plot. The top row shows predictions made using a log-normal distribution, and the bottom row shows predictions made using a GPD. For both distributions, two predictions are shown. For the log-normal, the two predictions are maximum likelihood (gray) and RHP prediction (black). For the GPD, the two predictions are maximum likelihood (gray) and a calibrating prior prediction. The left column shows the exceedance probabilities of the two predictions. The middle column shows the ratio of the exceedance probabilities from the Bayesian prediction to those from the maximum likelihood prediction. The right column shows the loss quantiles versus return period.


Table A.1. Inverse coverage probabilities from a simulation study of maximum likelihood and RHP predictions, using 100,000 training datasets each of a sample size of 20. The exponential, Pareto, and log-normal distribution predictions were calculated using exact expressions, while the Fréchet and Weibull predictions were calculated using the DMGS asymptotic approximation


Table A.2. As Table A.1, but for a sample size of 50


Figure B.1 As Figure 3, but now for the inverse gamma distribution.


Figure C.1 As Figure 3, but now for the inverse Gaussian distribution.