
On contemporary mortality models for actuarial use II: principles

Published online by Cambridge University Press:  23 June 2025

Angus S. Macdonald*
Affiliation:
Department of Actuarial Mathematics and Statistics, Heriot-Watt University, Edinburgh, UK The Maxwell Institute for Mathematical Sciences, Edinburgh, UK
Stephen J. Richards
Affiliation:
Longevitas Ltd., Edinburgh, UK
*
Corresponding author: Angus S. Macdonald; Email: A.S.Macdonald@hw.ac.uk.

Abstract

We reprise some common statistical models for actuarial mortality analysis using grouped counts. We then discuss the benefits of building mortality models from the most elementary items. This has two facets. First, models are better based on the mortality of individuals, rather than groups. Second, models are better defined in continuous time, rather than over fixed intervals like a year. We show how Poisson-like likelihoods at the “macro” level are built up by product integration of sequences of infinitesimal Bernoulli trials at the “micro” level. Observed data are represented through a stochastic mortality hazard rate, and counting processes provide the natural notation for left-truncated and right-censored actuarial data, individual or age-grouped. Together these explain the “pseudo-Poisson” behaviour of survival model likelihoods.
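The “pseudo-Poisson” behaviour mentioned above can be illustrated with a minimal numerical sketch (not taken from the paper). Under a constant hazard $\mu$, the log-likelihood contribution of one left-truncated, right-censored individual is $-\mu E + d\log \mu$, where $E$ is the time exposed and $d$ is the death indicator; this is formally identical to a Poisson log-likelihood term, even though no individual can die more than once. The hazard value and ages below are illustrative only:

```python
import math

def loglik_contribution(mu, entry_age, exit_age, died):
    """Log-likelihood contribution of one individual under a
    constant hazard mu, observed from entry_age (left-truncation)
    to exit_age (death or right-censoring).
    The term -mu * exposure + d * log(mu) has the same form as a
    Poisson log-likelihood, hence "pseudo-Poisson" behaviour."""
    exposure = exit_age - entry_age
    d = 1 if died else 0
    return -mu * exposure + d * math.log(mu)

# Two illustrative individuals: one censored at 50, one dying at 49.3.
ll = (loglik_contribution(0.01, 47.0, 50.0, died=False)
      + loglik_contribution(0.01, 47.6, 49.3, died=True))
```

Summing such contributions over a portfolio gives a likelihood proportional to that of independent Poisson counts with means $\mu E_x^c$, which is why grouped-count Poisson models fit individual survival data so well.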

Information

Type
Sessional Paper
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of The Institute and Faculty of Actuaries

Table 1. Definitions of quantities based on ${T_x}$, the random future lifetime at age $x$. The consistency condition assumes that ${x_0} = 0$, and ensures that calculations based on the distribution of ${T_x}$ will never contradict calculations based on the distribution of ${T_y}$ ($y \ne x$)


Table 2. Contributions to likelihoods of the $i$th individual, under observation from age 47 until right-censored at age 50, and the $j$th individual, under observation from age 47.6 until death at age 49.3, under three observational plans and assumptions: (a) annual rate interval ${{\rm{\Delta }}_k}$, piecewise-constant hazard rates; (b) annual rate interval ${{\rm{\Delta }}_k}$, smooth hazard rate parametrized by $\theta $; and (c) observation of complete lifetime age interval ${{\rm{\Delta }}_i}$, smooth hazard rate parametrized by $\theta $


Table 3. Three stages in adding structure to a hazard rate that is a function $\mu (x,{{\rm z}^i})$ of age $x$ and a vector ${{\rm z}^i}$ of covariates for the $i$th individual. Each stage is increasingly restrictive, from the most flexible model in Stage 0 to the most restrictive in Stage 3


Table 4. Contributions to the probability function of the infinitesimal Bernoulli trials (equation (29)) from elements of the observed life history, in the absence of left-truncation and right-censoring, and in their presence. Technical point: ${N^i}\left( s \right)$ has right-continuous sample paths and ${Y^i}\left( s \right)$ has left-continuous sample paths (Section 4.3), so at the time of an observed death ${N^i}\left( s \right) = {Y^i}\left( s \right) = 1$


Figure 1. Sample counting process representations of lifetimes. The first row is a life that enters observation at age 0 and is observed until dying at age 81. The second row is a life that enters observation at age 35 (left-truncation) and leaves at age 45 (right-censoring), enters observation again at age 55 (left-truncation) and is then observed until death at age 72.


Figure 2. Family tree of models (see Section 4.8) showing the derivation of individual models, and collective models for individual lives and age-grouped data (in bold), from the basic Bernoulli trial. $M$ is the number of individuals observed; “$M$ unrestricted” means that $M$ can be either random or deterministic. The tree has two branches, one at the top leading to continuous-time models, including pseudo-Poisson and Poisson models, and one at the bottom leading to the discrete-time binomial model. The dotted arrow indicates that the Poisson model requires the imposition of an observational plan that ensures $E_x^{c}$ is deterministic, which is unlikely to be realized in practice.


Figure 3. Multiple-decrement model. Note that states $2,3, \ldots, K$ are absorbing states with no transitions out once entered.


Figure A1. Rate intervals, individual observations and exposure intervals. Rate-interval boundaries, $\left\{ {{r_k}} \right\}$, are set by the analyst, and here are not necessarily integers, nor evenly spaced. The data are paired ages of the start and end of individual observations, $\left( {{x_i},{y_i}} \right)$. The set of exposure interval boundaries, $\left\{ {{w_j}} \right\}$, is defined as the ordered union of $\left\{ {{r_k}} \right\}$, $\left\{ {{x_i}} \right\}$ and $\left\{ {{y_i}} \right\}$. One consequence is that each exposure interval ${{\rm{\Delta }}_j}$ is always completely contained within a corresponding rate interval ${{\rm{\Delta }}_k}$.
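The construction described in the caption above can be sketched in a few lines of code. This is an illustrative implementation, not the paper's own: given rate-interval boundaries $\{r_k\}$ and observation pairs $(x_i, y_i)$, it forms the ordered union $\{w_j\}$ and splits each observation at those boundaries, so each resulting exposure interval lies wholly inside one rate interval:

```python
def exposure_intervals(rate_bounds, observations):
    """Split each observation (x_i, y_i) at the rate-interval
    boundaries {r_k}. The exposure boundary set {w_j} is the
    ordered union of {r_k}, {x_i} and {y_i}, so every exposure
    interval falls entirely within a single rate interval."""
    bounds = sorted(set(rate_bounds)
                    | {x for x, _ in observations}
                    | {y for _, y in observations})
    result = []
    for x, y in observations:
        cuts = [w for w in bounds if x <= w <= y]
        result.append(list(zip(cuts, cuts[1:])))
    return result

# One observation from age 47.6 to 49.3, integer rate boundaries:
exposure_intervals([47, 48, 49, 50], [(47.6, 49.3)])
# → [[(47.6, 48), (48, 49), (49, 49.3)]]
```

Note that the rate boundaries need not be integers or evenly spaced, matching the caption; integer boundaries are used here only to keep the example easy to check by hand.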