In this study, we propose a nonlinear Bayesian extension of the Lee–Carter (LC) model using a single-stage procedure with a dimensionality-reduction neural network (NN). LC was originally estimated using a two-stage procedure: dimensionality reduction of the data by singular value decomposition, followed by the fitting of a time series model. To address the limitations of LC, which stem from the two-stage estimation and an insufficient fit to the data, single-stage procedures using Bayesian state-space (BSS) approaches and more flexible NN-based extensions have been proposed. As a fusion of these two approaches, we propose an NN extension of LC with a variational autoencoder that performs variational Bayesian estimation of a state-space model and dimensionality reduction by autoencoding. Despite being an NN model that performs single-stage estimation of its parameters, our model retains excellent interpretability and can forecast with confidence intervals, as with the BSS models, without using Markov chain Monte Carlo methods.
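To make the two-stage baseline concrete, here is a minimal numpy sketch of the classical LC estimation that the proposed single-stage VAE replaces: SVD for dimensionality reduction, then a random walk with drift fitted to the mortality index. The synthetic mortality surface and all numerical values are illustrative assumptions, not data or settings from the paper.

```python
import numpy as np

# Synthetic log-mortality surface (ages x years) -- illustrative only.
rng = np.random.default_rng(0)
ages, years = 40, 30
a_true = np.linspace(-8.0, -2.0, ages)             # age profile
k_true = np.cumsum(rng.normal(-0.3, 0.1, years))   # drifting period index
log_m = (a_true[:, None]
         + np.outer(np.linspace(0.5, 1.5, ages) / ages, k_true)
         + rng.normal(0, 0.02, (ages, years)))

# Stage 1: dimensionality reduction by SVD (classical Lee-Carter).
a_x = log_m.mean(axis=1)                           # age effect
U, s, Vt = np.linalg.svd(log_m - a_x[:, None], full_matrices=False)
b_x = U[:, 0] / U[:, 0].sum()                      # age sensitivity, sums to 1
k_t = s[0] * Vt[0] * U[:, 0].sum()                 # mortality index

# Stage 2: time series fit -- random walk with drift on k_t, then forecast.
drift = np.mean(np.diff(k_t))
k_fcst = k_t[-1] + drift * np.arange(1, 11)        # 10-year point forecast
log_m_fcst = a_x[:, None] + np.outer(b_x, k_fcst)
print(log_m_fcst.shape)                            # (40, 10)
```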
In the context of life insurance with profit participation, the future discretionary benefits (FDB), a central item for Solvency II reporting, are generally calculated by computationally expensive Monte Carlo algorithms. We derive analytic formulas for lower and upper bounds on the FDB. This yields an estimation interval for the FDB, and the average of the lower and upper bounds serves as a simple point estimator. These formulas are designed for real-world applications, and we compare the results to publicly available reporting data.
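As a small illustration of how such bounds are meant to be used, the snippet below forms the estimation interval and the midpoint estimator from assumed bound values; all figures are placeholders, not outputs of the paper's formulas.

```python
# Hypothetical figures (placeholders): analytic FDB bounds and a Monte
# Carlo reference value, with the midpoint used as a simple estimator.
fdb_lower, fdb_upper = 910.0, 1050.0     # assumed analytic bounds
fdb_mc = 968.0                           # assumed Monte Carlo estimate

fdb_mid = 0.5 * (fdb_lower + fdb_upper)          # simple point estimator
rel_width = (fdb_upper - fdb_lower) / fdb_mid    # tightness of the interval
assert fdb_lower <= fdb_mc <= fdb_upper          # interval covers reference
print(f"estimate {fdb_mid:.0f}, relative width {rel_width:.1%}")
```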
We propose a generalized Cramér–Lundberg model of the risk theory of non-life insurance and study its ruin probability. Our model extends that of Dubey (1977) to the case of multiple insureds, where the counting process is a mixed Poisson process and the continuously varying premium rate is determined by a Bayesian rule on the number of claims. We give two proofs that, for each fixed value of the safety loading, the ruin probability is the same as that of the classical Cramér–Lundberg model and depends neither on the distribution of the mixing variable of the driving mixed Poisson process nor on the number of claim contracts.
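Since the result states that the ruin probability coincides with that of the classical Cramér–Lundberg model, a minimal Monte Carlo sketch of the classical benchmark is a natural companion; the parameter values and finite horizon below are illustrative assumptions.

```python
import numpy as np

# Monte Carlo finite-horizon ruin probability for the classical
# Cramer-Lundberg model; all parameter values are illustrative.
rng = np.random.default_rng(1)
u0, lam, mean_claim, loading, horizon = 10.0, 1.0, 1.0, 0.2, 100.0
c = (1 + loading) * lam * mean_claim     # premium rate with safety loading

def ruined(rng):
    t, surplus = 0.0, u0
    while t < horizon:
        dt = rng.exponential(1.0 / lam)  # time to next claim
        t += dt
        if t > horizon:
            return False
        surplus += c * dt - rng.exponential(mean_claim)  # premium in, claim out
        if surplus < 0:                  # ruin can only occur at claim epochs
            return True
    return False

n = 10_000
psi = sum(ruined(rng) for _ in range(n)) / n
print(f"finite-horizon ruin probability ~ {psi:.3f}")
```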
Customer churn, the term insurance companies use to describe the non-renewal of existing customers, is a widespread and expensive problem in general insurance, particularly because contracts are usually short-term and renewed periodically. Traditionally, customer churn analyses have employed models which utilise only a binary outcome (churn or not churn) in one period. However, real business relationships span multiple periods, and throughout this relationship policyholders may reside in, and transition between, a wider range of states than the simple churn/not-churn dichotomy. To better encapsulate the richness of policyholder behaviours through time, we propose multi-state customer churn analysis, which aims to model behaviour over a larger number of states (defined by different combinations of insurance coverage taken) and across multiple periods (thereby making use of readily available longitudinal data). Using multinomial logistic regression (MLR) with a second-order Markov assumption, we demonstrate how multi-state customer churn analysis offers deeper insights into how a policyholder's transition history is associated with their decision making, whether that be to retain the current set of policies, churn, or add or drop a coverage. Applying this model to commercial insurance data from the Wisconsin Local Government Property Insurance Fund, we illustrate how transition probabilities between states are affected by differing sets of explanatory variables, and that a multi-state analysis can potentially offer stronger predictive performance and more accurate calculations of, for example, customer lifetime value, compared to traditional customer churn analysis techniques.
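A minimal sketch of the modelling step, assuming simulated placeholder data: a multinomial logistic regression in scikit-learn where the outcome state is regressed on the two previous states (the second-order Markov assumption) plus a covariate. The state labels and covariate are hypothetical, not the Wisconsin fund's variables.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Simulated placeholder data: hypothetical states and covariate.
rng = np.random.default_rng(2)
states = ["full_cover", "partial_cover", "churn"]
n = 5000
df = pd.DataFrame({
    "state_tm2": rng.choice(states, n),   # state two periods ago
    "state_tm1": rng.choice(states, n),   # state one period ago
    "log_premium": rng.normal(8, 1, n),   # example covariate
    "state_t": rng.choice(states, n),     # outcome state (placeholder)
})

# Second-order Markov assumption: regress state_t on the two lagged states.
X = pd.get_dummies(df[["state_tm2", "state_tm1"]]).assign(
    log_premium=df["log_premium"])
model = LogisticRegression(max_iter=1000).fit(X, df["state_t"])

# Estimated transition probabilities P(state_t | history, covariate).
print(dict(zip(model.classes_, model.predict_proba(X)[0].round(3))))
```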
Modelling loss reserve data in run-off triangles is challenging due to the complex but unknown dynamics of the claim/loss process. Popular loss reserve models describe the mean process through development year, accident year, and calendar year effects using analysis of variance and covariance (ANCOVA) models. We propose to include in the mean function the persistence terms of the conditional autoregressive range model, to capture the persistence of claims across development years. In the ANCOVA model, we adopt linear trends for the accident and calendar year effects and a quadratic trend for the development year effect. We investigate linear and log-transformed mean functions and four distributions with positive support, namely generalised beta type 2, generalised gamma, Weibull, and exponential extension, to enhance model flexibility. The proposed models are implemented using the user-friendly Bayesian package Stan running in the R environment. Results show that the models with a log-transformed mean function and persistence terms provide better fits. Lastly, the best model is applied to forecast the partial loss reserve and the calendar year reserve for three years.
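The following schematic numpy sketch illustrates the shape of such a log-transformed mean function: linear accident- and calendar-year trends, a quadratic development-year trend, and a persistence term feeding back the previous development-year loss. The coefficients and the exact persistence specification are assumptions for illustration, not the paper's fitted Stan model.

```python
import numpy as np

# Schematic log-mean for an incremental loss Y[i, j] in a run-off triangle
# (accident year i, development year j, calendar year t = i + j).
beta_acc, beta_cal = 0.03, 0.01      # assumed linear accident/calendar trends
b1, b2 = 0.50, -0.04                 # assumed quadratic development trend
phi = 0.30                           # assumed persistence coefficient

def log_mean(i, j, y_prev):
    out = beta_acc * i + beta_cal * (i + j) + b1 * j + b2 * j**2
    if j > 0:
        out += phi * np.log(y_prev)  # persistence across development years
    return out

print(log_mean(i=3, j=2, y_prev=120.0))
```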
Motivated by recent studies of big samples, this work aims to construct a parametric model which is characterized by the following features: (i) a ‘local’ reinforcement, i.e. a reinforcement mechanism mainly based on the last observations, (ii) a random persistent fluctuation of the predictive mean, and (iii) a long-term almost sure convergence of the empirical mean to a deterministic limit, together with a chi-squared goodness-of-fit result for the limit probabilities. This triple purpose is achieved by the introduction of a new variant of the Eggenberger–Pólya urn, which we call the rescaled Pólya urn. We provide a complete asymptotic characterization of this model, pointing out that, for a certain choice of the parameters, it has properties different from the ones typically exhibited by the other urn models in the literature. Therefore, beyond the possible statistical application, this work could be interesting for those who are concerned with stochastic processes with reinforcement.
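For orientation, the sketch below simulates the classical Eggenberger–Pólya urn that the rescaled variant modifies; the paper's rescaled update rule (which localises reinforcement on recent observations and yields a deterministic limit) is not reproduced here, and all parameters are illustrative.

```python
import numpy as np

# Classical Eggenberger-Polya urn: draw a ball, return it together with
# `reinforce` extra balls of the same colour. Note the contrast with the
# abstract: here the empirical mean converges a.s. to a RANDOM (Beta)
# limit, whereas the rescaled urn has a deterministic limit.
rng = np.random.default_rng(3)
white, black, reinforce, n_draws = 1.0, 1.0, 1.0, 10_000
draws = np.empty(n_draws)
for n in range(n_draws):
    draws[n] = rng.random() < white / (white + black)  # 1 = white drawn
    if draws[n]:
        white += reinforce
    else:
        black += reinforce

print(draws.mean())   # empirical mean of the colour indicators
```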
We study a stochastic compartmental susceptible–infected (SI) epidemic process on a configuration model random graph with a given degree distribution over a finite time interval. We split the population of graph vertices into two compartments, namely, S and I, denoting susceptible and infected vertices, respectively. In addition to the sizes of these two compartments, we keep track of the counts of SI-edges (those connecting a susceptible and an infected vertex) and SS-edges (those connecting two susceptible vertices). We describe the dynamical process in terms of these counts and present a functional central limit theorem (FCLT) for them as the number of vertices in the random graph grows to infinity. The FCLT asserts that the counts, when appropriately scaled, converge weakly to a continuous Gaussian vector semimartingale process in the space of vector-valued càdlàg functions endowed with the Skorokhod topology. We discuss applications of the FCLT in percolation theory and in modelling the spread of computer viruses. We also provide simulation results illustrating the FCLT for some common degree distributions.
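A minimal discrete-time sketch of the setting, assuming a Poisson degree sequence and illustrative rates: an SI process on a networkx configuration-model graph, with the SI- and SS-edge counts that the FCLT concerns computed at the end.

```python
import numpy as np
import networkx as nx

# SI process on a configuration-model graph, discrete-time approximation:
# each SI-edge transmits with probability beta * dt per step. Degree
# distribution, rates, and time step are illustrative assumptions.
rng = np.random.default_rng(4)
n, beta, dt, steps = 2000, 0.3, 0.1, 100
degrees = rng.poisson(4, n)
if degrees.sum() % 2:                                 # degree sum must be even
    degrees[0] += 1
G = nx.Graph(nx.configuration_model(degrees.tolist(), seed=4))
G.remove_edges_from(nx.selfloop_edges(G))             # keep a simple graph

infected = set(rng.choice(n, 10, replace=False))      # initial I compartment
for _ in range(steps):
    infected |= {v for u in infected for v in G[u]
                 if v not in infected and rng.random() < beta * dt}

# Edge counts tracked by the FCLT: SI-edges and SS-edges.
si = sum((u in infected) != (v in infected) for u, v in G.edges)
ss = sum(u not in infected and v not in infected for u, v in G.edges)
print(len(infected), si, ss)
```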
We analyse an additive-increase and multiplicative-decrease (also known as growth–collapse) process that grows linearly in time and that, at Poisson epochs, experiences downward jumps that are (deterministically) proportional to its current position. For this process, and also for its reflected versions, we consider one- and two-sided exit problems concerning the laws of exit times from fixed intervals and half-lines. All proofs are based on a unified first-step analysis at the first jump epoch, which allows us to give explicit, yet involved, formulas for the Laplace transforms of these exit times. All eight Laplace transforms can be described in terms of two so-called scale functions, associated with the upward one-sided exit time and with the upward two-sided exit time; the remaining Laplace transforms are obtained from these scale functions by taking limits, derivatives, integrals, and combinations thereof.
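The process itself is simple to simulate, which gives a quick numerical check on quantities such as the mean upward one-sided exit time; the sketch below uses illustrative parameter values.

```python
import numpy as np

# Growth-collapse process: linear growth at unit rate; at Poisson(lam)
# epochs the position drops to delta times its current value. We estimate
# the upward one-sided exit time tau_b = inf{t : X_t >= b}. All parameter
# values are illustrative.
rng = np.random.default_rng(5)
lam, delta, x0, b = 1.0, 0.5, 0.0, 3.0

def exit_time(rng):
    t, x = 0.0, x0
    while True:
        w = rng.exponential(1.0 / lam)    # time until the next collapse
        if x + w >= b:                    # the linear growth reaches b first
            return t + (b - x)
        t, x = t + w, delta * (x + w)     # collapse to delta * position

taus = np.array([exit_time(rng) for _ in range(20_000)])
print(f"E[tau_b] ~ {taus.mean():.2f}")
```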
We consider the simultaneous propagation of two contagions over a social network. We assume a threshold model for the propagation of the two contagions and use the formal framework of discrete dynamical systems. In particular, we study an optimization problem where the goal is to minimize the total number of new infections subject to a budget constraint on the total number of available vaccinations for the contagions. While this problem has been considered in the literature for a single contagion, our work considers the simultaneous propagation of two contagions. This optimization problem is NP-hard. We present two main solution approaches for the problem, namely an integer linear programming (ILP) formulation to obtain optimal solutions and a heuristic based on a generalization of the set cover problem. We carry out a comprehensive experimental evaluation of our solution approaches using many real-world networks. The experimental results show that our heuristic algorithm produces solutions that are close to the optimal solution and runs several orders of magnitude faster than the ILP-based approach for obtaining optimal solutions. We also carry out sensitivity studies of our heuristic algorithm.
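A minimal sketch of the underlying dynamics, assuming an illustrative random graph, thresholds, seeds, and a randomly chosen vaccinated set: a synchronous threshold-model fixed point for two contagions, where vaccinated nodes never activate. The ILP and set-cover heuristic from the paper, which would choose the vaccinated set, are not reproduced here.

```python
import numpy as np
import networkx as nx

# Synchronous threshold dynamics for two contagions on one graph: an
# unvaccinated node activates for contagion c once at least tau[c] of its
# neighbours are active for c. All parameters are illustrative assumptions.
rng = np.random.default_rng(6)
G = nx.erdos_renyi_graph(200, 0.05, seed=6)
tau = {0: 2, 1: 3}                                   # per-contagion thresholds
infected = {c: set(rng.choice(200, 5, replace=False)) for c in (0, 1)}
vaccinated = set(rng.choice(200, 10, replace=False))  # budget-limited set

changed = True
while changed:                                       # iterate to a fixed point
    changed = False
    for c in (0, 1):
        new = {v for v in G if v not in vaccinated and v not in infected[c]
               and sum(u in infected[c] for u in G[v]) >= tau[c]}
        if new:
            infected[c] |= new
            changed = True

print(len(infected[0]), len(infected[1]))            # final infection sizes
```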
Beyond quantifying the amount of association between two variables, as was the goal in a previous chapter, regression analysis aims at describing that association and/or at predicting one of the variables based on the other ones. Examples of applications where this is needed abound in engineering and a broad range of industries. For example, in the insurance industry, when pricing a policy, the predictor variable encapsulates the available information about what is being insured, and the response variable is a measure of the risk that the insurance company would take on by underwriting the policy. In this context, a procedure is evaluated solely on its performance at predicting that risk, and can otherwise be very complicated and have no simple interpretation. The chapter covers both local methods such as kernel regression (e.g., local averaging) and empirical risk minimization over a parametric model (e.g., linear models fitted by least squares). Cross-validation is introduced as a method for estimating the predictive power of a given regression or classification method.
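As a compact illustration of the chapter's two themes, here is a Nadaraya–Watson kernel regression (local averaging) with the bandwidth selected by K-fold cross-validation on synthetic data; the data-generating process and candidate bandwidths are illustrative assumptions.

```python
import numpy as np

# Nadaraya-Watson kernel regression with a Gaussian kernel, bandwidth
# chosen by 5-fold cross-validation on synthetic data.
rng = np.random.default_rng(7)
x = rng.uniform(0, 1, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 200)

def nw_predict(x_tr, y_tr, x_new, h):
    w = np.exp(-0.5 * ((x_new[:, None] - x_tr[None, :]) / h) ** 2)
    return (w * y_tr).sum(axis=1) / w.sum(axis=1)    # local weighted average

folds = np.array_split(rng.permutation(len(x)), 5)

def cv_score(h):
    err = 0.0
    for f in folds:
        mask = np.ones(len(x), bool)
        mask[f] = False                              # hold out fold f
        err += ((y[f] - nw_predict(x[mask], y[mask], x[f], h)) ** 2).sum()
    return err / len(x)

bandwidths = [0.02, 0.05, 0.1, 0.2]
print("CV-selected bandwidth:", min(bandwidths, key=cv_score))
```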