Consider the problem of determining the Bayesian credibility mean when the random claims, given the parameter vector, are sampled from a K-component mixture family of distributions whose members come from different families of distributions. This article begins by deriving a recursive formula for such a Bayesian credibility mean. Moreover, under the assumption that additional information allows one to determine, probabilistically, the population (or distribution) to which a random claim belongs, the recursive formula simplifies to an exact Bayesian credibility mean whenever all components of the mixture belong to exponential families of distributions. For situations where a 2-component mixture family is an appropriate choice for data modelling, we show how such additional information can be employed, via a logistic regression model, to derive a Bayesian credibility model for a finite mixture of distributions, which we call the Logistic Regression Credibility (LRC) model. We compare the LRC model with its competitor, the Regression Tree Credibility (RTC) model. More precisely, we show that, under the squared error loss function, the LRC's risk function dominates the RTC's risk function at least on an interval. Several examples illustrate the practical application of our findings.
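As background to the exact-credibility simplification mentioned above (this is the classical result the paper builds on, not its recursive formula), the Bayesian credibility mean for an exponential-family likelihood with conjugate prior takes the familiar linear form:

```latex
\hat{\mu}_{\text{Bayes}}
  = \mathbb{E}\left[\mu(\Theta) \mid X_1,\dots,X_n\right]
  = Z\,\bar{X} + (1-Z)\,\mu_0,
\qquad
Z = \frac{n}{n+k},
\qquad
k = \frac{\mathbb{E}\!\left[\operatorname{Var}(X\mid\Theta)\right]}
         {\operatorname{Var}\!\left(\mathbb{E}[X\mid\Theta]\right)},
```

where $\bar{X}$ is the mean of the $n$ observed claims and $\mu_0 = \mathbb{E}[\mu(\Theta)]$ is the collective mean; exactness of this linear form for exponential families with natural conjugate priors is Jewell's theorem.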
In this paper, we discuss the estimation of conditional quantiles of aggregate claim amounts for non-life insurance, embedding the problem in a quantile regression framework using the neural network approach. As a first step, we consider the quantile regression neural network (QRNN) procedure to compute quantiles for the insurance ratemaking framework. As a second step, we propose a new quantile regression combined actuarial neural network (Quantile-CANN), combining the traditional quantile regression approach with a QRNN. In both cases, we adopt a two-part model scheme: a logistic regression estimates the probability of positive claims, and the QRNN or Quantile-CANN model handles the positive outcomes. Through a case study based on a health insurance dataset, we highlight the overall better performance of the proposed models with respect to classical quantile regression. We then use the estimated quantiles to calculate a loaded premium following the quantile premium principle, showing that the proposed models provide a better risk differentiation.
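For concreteness, the objective that any quantile regression model (neural or classical) minimizes is the standard pinball loss. The following minimal sketch is generic, not the paper's implementation:

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Pinball (quantile) loss at quantile level tau in (0, 1).

    Under-predictions are weighted by tau and over-predictions by
    (1 - tau), so the minimizer is the tau-quantile of y given x.
    """
    diff = y_true - y_pred
    return np.mean(np.maximum(tau * diff, (tau - 1) * diff))

# At tau = 0.9, under-predicting a claim of 10 by 2 costs far more
# than over-predicting it by 2.
print(pinball_loss(np.array([10.0]), np.array([12.0]), 0.9))  # 0.2
print(pinball_loss(np.array([10.0]), np.array([8.0]), 0.9))   # 1.8
```

Training a QRNN amounts to minimizing this loss over the network weights instead of over a linear coefficient vector.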
We investigate jointly modelling age–year-specific rates of various causes of death in a multinational setting. We apply multi-output Gaussian processes (MOGPs), a spatial machine learning method, to smooth and extrapolate multiple cause-of-death mortality rates across several countries and both genders. To maintain flexibility and scalability, we investigate MOGPs with Kronecker-structured kernels and latent factors. In particular, we develop a custom multi-level MOGP that leverages the gridded structure of mortality tables to efficiently capture heterogeneity and dependence across different factor inputs. Results are illustrated with datasets from the Human Cause-of-Death Database (HCD). We discuss a case study involving cancer variations in three European nations, and a US-based study that considers eight top-level causes and includes a comparison to all-cause analysis. Our models provide insights into the commonality of cause-specific mortality trends and demonstrate the opportunities for data fusion across causes, countries, and genders.
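The scalability benefit of Kronecker-structured kernels on a gridded mortality table can be illustrated with a minimal sketch (generic, not the paper's multi-level model; the squared-exponential kernels, lengthscales, and grid below are hypothetical):

```python
import numpy as np

def rbf_kernel(x, lengthscale):
    """Squared-exponential kernel matrix over a 1-D input grid."""
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

ages = np.arange(50, 60, dtype=float)        # 10 ages
years = np.arange(2000, 2005, dtype=float)   # 5 calendar years
K_age = rbf_kernel(ages, lengthscale=10.0)
K_year = rbf_kernel(years, lengthscale=3.0)

# On the full 50-point age-year grid the covariance factorizes as a
# Kronecker product, so decompositions can be done factor-by-factor
# (10x10 and 5x5) instead of on the full 50x50 matrix.
K_full = np.kron(K_age, K_year)
print(K_full.shape)  # (50, 50)
```

The same factorization extends to further grid dimensions (country, gender, cause) by adding Kronecker factors, which is what keeps the multi-output model tractable.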
Insurers and pension funds face the challenges of historically low interest rates and high volatility in equity markets, which have been accentuated by the COVID-19 pandemic. Recent advances in equity portfolio management with a volatility target have been shown to deliver improved risk-adjusted returns on average, after transaction costs. This paper studies these target volatility portfolios in applications to equity, balanced, and target-date funds with varying constraints on leverage. Conservative leverage constraints are particularly relevant to pension funds and insurance companies, while more aggressive leverage levels are appropriate for alternative investments. We show substantial improvements in fund performance for differing leverage levels and, of most interest to insurers and pension funds, we show that the highest Sharpe ratios and smallest drawdowns are achieved by target volatility balanced portfolios with equity and bond allocations. Furthermore, we demonstrate the outperformance of target volatility portfolios during major stock market crashes, including the COVID-19 crash.
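The core volatility-targeting rule can be sketched as follows (a stylized illustration, not the paper's strategy; the 10% target, 20-day realized-volatility window, and 252-day annualization are common conventions assumed here, not taken from the source):

```python
import numpy as np

def target_vol_weight(returns, target_vol=0.10, max_leverage=1.5, window=20):
    """Equity weight under a volatility-targeting rule with a leverage cap.

    Exposure scales as target_vol / realized_vol (annualized from daily
    returns with sqrt(252)) and is capped at max_leverage; the remainder
    is assumed to sit in a risk-free asset.
    """
    realized = np.std(returns[-window:], ddof=1) * np.sqrt(252)
    return min(max_leverage, target_vol / realized)

calm = np.tile([0.004, -0.004], 30)     # low-volatility regime
stressed = np.tile([0.03, -0.03], 30)   # high-volatility (crash-like) regime
print(target_vol_weight(calm))       # 1.5: pinned at the leverage cap
print(target_vol_weight(stressed))   # ~0.205: sharply de-levered
```

The automatic de-levering in the stressed regime is what drives the drawdown reduction during crashes, while the leverage cap is where the conservative versus aggressive constraints discussed above enter.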
Traditional techniques for calculating outstanding claim liabilities, such as the chain-ladder, are notoriously at risk of being distorted by outliers in past claims data. Unfortunately, the literature on robust reserving methods is scant, with notable exceptions such as Verdonck & Debruyne (2011, Insurance: Mathematics and Economics, 48, 85–98) and Verdonck & Van Wouwe (2011, Insurance: Mathematics and Economics, 49, 188–193). In this paper, we put forward two alternative robust bivariate chain-ladder techniques to extend the approach of Verdonck & Van Wouwe (2011). The first technique is based on Adjusted Outlyingness (Hubert & Van der Veeken, 2008, Journal of Chemometrics, 22, 235–246) and explicitly incorporates skewness into the analysis while providing a unique measure of outlyingness for each observation. The second technique is based on the bagdistance (Hubert et al., 2016, Statistics: Methodology, 1–23), which is derived from the bagplot; however, it additionally provides a unique measure of outlyingness and a means to adjust outlying observations based on this measure.
Furthermore, we extend our robust bivariate chain-ladder approach to an N-dimensional framework. The implementation of the methods, especially beyond the bivariate case, is not trivial. This is illustrated on a trivariate data set from Australian general insurers, and results under the different outlier detection and treatment mechanisms are compared.
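The sensitivity of the classical chain-ladder to a single outlier, which motivates the robust techniques above, shows up even in a toy example (illustrative data, not the Australian data set; the median link ratio is a naive robust alternative used here for contrast, not the Adjusted Outlyingness or bagdistance methods):

```python
import numpy as np

# Cumulative claims for four accident years at development periods 0 and 1;
# the third accident year has an outlying value at period 1.
dev0 = np.array([100.0, 110.0, 105.0, 98.0])
dev1 = np.array([150.0, 168.0, 420.0, 149.0])  # 420 is the outlier

# Classical volume-weighted chain-ladder development factor: distorted.
f_classical = dev1.sum() / dev0.sum()

# A naive robust alternative: the median of the individual link ratios.
f_robust = np.median(dev1 / dev0)

print(round(f_classical, 3))  # 2.148: dragged up by the single outlier
print(round(f_robust, 3))     # 1.524: close to the ~1.5 of the clean years
```

A distorted development factor propagates through every projected diagonal, which is why outlier detection and treatment, rather than deletion, matter for reserving.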
This paper focuses on modeling surrender time for policyholders in the context of life insurance. In this setup, a large lapse rate is often observed in the first months of a contract, with a decrease in this rate after some months. The modeling of the time to cancelation must account for this specific behavior. Another stylized fact is that policies which are not canceled during the study period are considered censored. To account for both censoring and heterogeneous lapse rates, this work assumes a Bayesian survival model with a mixture of regressions. The inference is based on data augmentation, allowing for fast computations even for datasets with millions of clients. Moreover, frequentist point estimation based on the Expectation–Maximization algorithm is also presented. An illustrative example emulates a typical behavior for life insurance contracts, and a simulation study investigates the properties of the proposed model. A case study illustrates the flexibility of our proposed model, allowing different specifications of the mixture components. In particular, the observed censoring in the insurance context can amount to a very large proportion of the data, which is very unusual for survival models in other fields such as epidemiology. This aspect is exploited in our simulation study.
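To make the interplay of mixtures and censoring concrete, here is a minimal sketch of the likelihood for a 2-component mixture with right censoring, using exponential components for simplicity (the paper's mixture-of-regressions model is richer; the weights and rates below are hypothetical):

```python
import numpy as np

def mixture_exp_loglik(t, delta, w, rate_early, rate_late):
    """Log-likelihood of a 2-component exponential mixture with censoring.

    t: observed times; delta: 1 if the lapse was observed, 0 if censored.
    Observed lapses contribute the mixture density; censored policies
    contribute the mixture survival function.
    """
    dens = (w * rate_early * np.exp(-rate_early * t)
            + (1 - w) * rate_late * np.exp(-rate_late * t))
    surv = w * np.exp(-rate_early * t) + (1 - w) * np.exp(-rate_late * t)
    return np.sum(delta * np.log(dens) + (1 - delta) * np.log(surv))

# Early lapses from a fast component mixed with long-lived, censored contracts.
t = np.array([1.0, 2.0, 2.0, 30.0, 45.0, 60.0])
delta = np.array([1, 1, 1, 0, 0, 0])
ll = mixture_exp_loglik(t, delta, w=0.4, rate_early=0.5, rate_late=0.01)
print(ll)
```

Both the EM algorithm and Bayesian data augmentation work by introducing latent component labels so that this mixed observed/censored likelihood separates over components.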
The coronavirus pandemic has created a new awareness of epidemics, and insurance companies have been reminded to consider the risk related to infectious diseases. This paper extends traditional multi-state models to include epidemic effects. The main idea is to specify the transition intensities in a Markov model such that the impact of contagion is explicitly present, in the same way as in epidemiological models. Since we can study the Markov model with contagion effects at an individual level, we consider individual risks and reserves relating to insurance products, conforming with the standard multi-state approach in life insurance mathematics. We compare our notions with related notions in the literature and provide numerical illustrations.
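A minimal sketch of the idea of contagion-dependent transition intensities (generic SIR dynamics, not the paper's model; all parameters are hypothetical):

```python
import numpy as np

def sir_infection_intensity(beta, gamma, i0, days, dt=0.1):
    """Infection intensity beta * I(t)/N along an Euler-discretized SIR path.

    In a multi-state insurance model with contagion, the healthy -> infected
    transition intensity is driven by the current prevalence I(t)/N, rather
    than being a fixed deterministic function of time and age.
    """
    s, i = 1.0 - i0, i0  # susceptible and infected population fractions
    intensities = []
    for _ in range(int(days / dt)):
        new_infections = beta * s * i * dt
        recoveries = gamma * i * dt
        s, i = s - new_infections, i + new_infections - recoveries
        intensities.append(beta * i)
    return np.array(intensities)

mu = sir_infection_intensity(beta=0.3, gamma=0.1, i0=0.001, days=200)
print(mu.max())  # the intensity peaks with prevalence, then wanes
```

Feeding such a path into the standard Thiele-type reserve calculations is what couples the individual-level multi-state model to the population-level epidemic.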
The calculation of loss scenarios is a fundamental requirement of simulation-based capital models, and these scenarios are commonly approximated. Within a life insurance setting, a loss scenario may involve an asset–liability optimization. When cashflows and asset values depend on only a small number of risk factor components, low-dimensional approximations may be used as inputs to the optimization, resulting in an approximation of the loss. By considering these loss approximations as perturbations of linear optimization problems, approximation errors in loss scenarios can be bounded to first order and attributed to specific proxies. This attribution creates a mechanism for improving approximations and for eventually eliminating approximation errors in capital estimates through targeted exact computation. The results are demonstrated through a stylized worked example and a corresponding numerical study. Advances in the error analysis of proxy models enhance confidence in capital estimates. Beyond error analysis, the presented methods can be applied to general sensitivity analysis and to the calculation of risk.
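The flavour of such first-order bounds can be illustrated with the standard envelope-theorem sensitivity result for a linear program over a fixed feasible set (a generic result stated under the assumption of a stable optimal solution, not the paper's specific attribution formula). Writing the loss as $L(c) = \min_{x \in \mathcal{X}} c^{\top} x$ and the proxy-perturbed cost vector as $\hat{c}$,

```latex
L(\hat{c}) - L(c) \;=\; (\hat{c} - c)^{\top} x^{\ast}(c) \;+\; o\!\left(\lVert \hat{c} - c \rVert\right),
```

so to leading order the error in the approximated loss is a linear functional of the proxy errors $\hat{c} - c$ evaluated at the true optimizer $x^{\ast}(c)$; this linearity is what allows the total error to be split into contributions attributable to individual proxies.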
For some time now, Solvency II has required that insurance companies calculate minimum capital requirements to face the risk of insolvency, either in accordance with the Standard Formula or using a full or partial Internal Model. An Internal Model must be based on a market-consistent valuation of assets and liabilities over a 1-year time horizon, where a real-world probabilistic structure is used for the first year of projection. In this paper, we describe the major risks of a non-life insurance company, i.e. non-life underwriting risk and market risk, and their interactions, focusing on non-life premium risk, equity risk, and interest rate risk. This analysis is carried out using some well-known stochastic models from the financial-actuarial literature and practical insurance business, i.e. the Collective Risk Model for non-life premium risk, Geometric Brownian Motion for equity risk, and a real-world version of the G2++ Model for interest rate risk, with parameters calibrated on current and real market data. Finally, we illustrate a case study on a single-line and a multi-line insurance company in order to see how the risk drivers behave in both a stand-alone and an aggregate framework.
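A minimal Monte Carlo sketch of two of the ingredients named above, the Collective Risk Model and Geometric Brownian Motion (all parameters are hypothetical and the aggregation is deliberately crude; the paper's calibration and dependence structure are more involved):

```python
import numpy as np

rng = np.random.default_rng(42)

def collective_risk(n_sims, lam, sev_mu, sev_sigma):
    """Aggregate annual claims under the Collective Risk Model:
    Poisson claim counts with lognormal severities."""
    counts = rng.poisson(lam, n_sims)
    return np.array([rng.lognormal(sev_mu, sev_sigma, n).sum() for n in counts])

def gbm_terminal(n_sims, s0, mu, sigma, t=1.0):
    """Terminal asset value under Geometric Brownian Motion."""
    z = rng.standard_normal(n_sims)
    return s0 * np.exp((mu - 0.5 * sigma ** 2) * t + sigma * np.sqrt(t) * z)

losses = collective_risk(10_000, lam=100, sev_mu=8.0, sev_sigma=1.0)
equity = gbm_terminal(10_000, s0=1e6, mu=0.04, sigma=0.2)

# Crude 1-year profit-and-loss: equity gain minus unexpected claims
# (independence between the two risk drivers is assumed here).
pnl = (equity - 1e6) - (losses - losses.mean())
print(np.quantile(pnl, 0.005))  # an SCR-style 99.5% loss quantile
```

In a stand-alone versus aggregate comparison of the kind described above, the capital figure is computed per risk driver and then on the combined profit-and-loss, with the difference quantifying the diversification benefit.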