A scoping review was conducted to map the sources, types, and characteristics of evidence that substantiate the existence of a community dividend arising from testing and treating hepatitis C virus (HCV) infection in people living in detention, where the community dividend is defined as the benefit of prison-related intervention for general population health. Joanna Briggs Institute methodology guidance was followed. The literature search covered EMBASE, Scopus, ASSIA, the UWE library, CINAHL Plus, and Medline for studies published in any country and any language between January 1991 and June 2022. A PRISMA-ScR flow chart mapped the number of records identified and included and the reasons for exclusion. Data were extracted and charted in Excel, the findings were reported systematically under the charting table headings and then synthesized in the discussion, and a quality assessment was carried out. The descriptive analysis identified economic, clinical, and epidemiological domains of the community dividend: long-term health expenditure savings, a reduction in HCV-related disease sequelae, increased survival, improved quality of life, and reduced infection transmission, most of which are realized in the community following release. Targeting marginalized populations affected by HCV could therefore expedite the elimination effort, reduce inequalities, and benefit the wider population.
This paper proposes a nonparametric approach to identify and estimate the generalized additive model with a flexible additive structure and with possibly discrete variables when the link function is unknown. Our approach allows for a flexible additive structure, which gives applied researchers the flexibility to specify their model according to economic theory or practical experience. Motivated by concerns from empirical research, our method also allows for multiple discrete variables among the covariates. By transforming our model into a generalized additive model with univariate component functions, our identification and estimation thereby follow a procedure adapted from the case with univariate components. The estimators converge to normal distributions in large samples, with a one-dimensional convergence rate for the link function and a $d_k$-dimensional convergence rate for the component function $f_k(\cdot)$ defined on ${\mathbb R}^{d_k}$ for all $k$.
A common narrative among insurance actuaries and business economists is that national or regional pension systems can be fine-tuned, optimized, and improved simply by tinkering with demographic and financial parameters, all within the context of the “right” mathematical model. Indeed, recent papers in the actuarial literature have offered technical fixes around savings rates, retirement ages, and decumulation strategies, as well as more refined mortality and interest rate models. But alas, not everything in the world of pensions and retirement can be optimized, particularly as it relates to the history, background culture, or religion of the underlying population.
This paper documents a statistically significant relationship between a region’s pension plan “health status” and the fraction of the region’s population identifying as Protestant Christians (PC). We begin the analysis at the national level using a well-known pension quality index and then obtain similar results for the actuarial funded status of U.S. state pension plans.
Overall, this work is within the sphere of recent literature that indicates historical religious beliefs, values, and culture matter for financial economic outcomes; a factor which obviously can’t be optimized within a mathematical Hamilton–Jacobi–Bellman (HJB) equation. In other words, some things in retirement are truly beyond control.
We solve the non-discounted, finite-horizon optimal stopping problem of a Gauss–Markov bridge by using a time-space transformation approach. The associated optimal stopping boundary is proved to be Lipschitz continuous on any closed interval that excludes the horizon, and it is characterized by the unique solution of an integral equation. A Picard iteration algorithm is discussed and implemented to exemplify the numerical computation and geometry of the optimal stopping boundary for some illustrative cases.
Stochastic mortality models are important for a variety of actuarial tasks, from best-estimate forecasting to assessment of risk capital requirements. However, the mortality shock associated with the COVID-19 pandemic of 2020 distorts forecasts by (i) biasing parameter estimates, (ii) biasing starting points, and (iii) inflating variance. Stochastic mortality models therefore require outlier-robust methods for forecasting. Objective methods are required, as outliers are not always obvious on visual inspection. In this paper we look at the robustification of three broad classes of forecasts: univariate time indices (such as in the Lee-Carter and APC models); multivariate time indices (such as in the Cairns-Blake-Dowd and newer Tang-Li-Tickle model families); and penalty projections (such as with the 2D P-spline model). In each case we identify outliers using quantitative methods, then co-estimate outlier effects along with other parameters. Doing so removes the bias and distortion in the forecast caused by a mortality shock, while providing a robust starting point for projections. Illustrations are given for various models in common use.
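As a rough illustration of the co-estimation idea described above, the following sketch (not taken from the paper) fits a random walk with drift to a Lee-Carter-style period index while jointly estimating an additive outlier effect for a nominated shock year such as 2020. The function name, the regression-on-first-differences formulation, and the synthetic data are assumptions made purely for illustration.

```python
import numpy as np

def robust_drift_forecast(kappa, years, outlier_years, horizon=10):
    """Random walk with drift for a period index kappa_t, co-estimating
    additive outlier effects for the given years alongside the drift.

    An additive outlier of size theta in year y shifts the first difference
    entering y by +theta and the one leaving y by -theta, so the differences
    are regressed on a constant plus one such +1/-1 column per outlier year.
    """
    d = np.diff(kappa)                      # first differences of kappa_t
    yrs = np.asarray(years)[1:]             # year labelling each difference
    X = np.ones((len(d), 1))                # column for the drift
    for y in outlier_years:
        col = (yrs == y).astype(float) - (yrs == y + 1).astype(float)
        X = np.column_stack([X, col])
    coef, *_ = np.linalg.lstsq(X, d, rcond=None)
    drift, outlier_effects = coef[0], coef[1:]

    # Outlier-adjusted jump-off level: remove the shock from the last value
    # if the final observation itself falls in an outlier year.
    kappa_last = kappa[-1]
    for y, theta in zip(outlier_years, outlier_effects):
        if years[-1] == y:
            kappa_last -= theta

    # Residual spread from the cleaned differences drives the fan chart.
    resid = d - X @ coef
    sigma = resid.std(ddof=X.shape[1])
    steps = np.arange(1, horizon + 1)
    central = kappa_last + drift * steps
    return central, sigma * np.sqrt(steps), outlier_effects

# Example: co-estimate a 2020 shock in a synthetic annual period index.
years = np.arange(1980, 2023)
kappa = -0.8 * np.arange(len(years)) + np.random.default_rng(3).normal(0, 1, len(years))
kappa[years == 2020] += 12.0                 # injected mortality shock
central, se, effects = robust_drift_forecast(kappa, years, outlier_years=[2020])
print(effects)                               # recovers roughly +12
```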
Improved health data governance is urgently needed due to the increasing use of digital technologies that facilitate the collection of health data and growing demand to use that data in artificial intelligence (AI) models that contribute to improving health outcomes. While most of the discussion around health data governance is focused on policy and regulation, we present a practical perspective. We focus on the context of low-resource government health systems, using first-hand experience of the Zanzibar health system as a specific case study, and examine three aspects of data governance: informed consent, data access and security, and data quality. We discuss the barriers to obtaining meaningful informed consent, highlighting the need for more research to determine how to effectively communicate about data and AI and to design effective consent processes. We then report on the process of introducing data access management and information security guidelines into the Zanzibar health system, demonstrating the gaps in capacity and resources that must be addressed during the implementation of a health data governance policy in a low-resource government system. Finally, we discuss the quality of service delivery data in low-resource health systems such as Zanzibar’s, highlighting that a large quantity of data does not necessarily ensure its suitability for AI development. Poor data quality can be addressed to some extent through improved data governance, but the problem is inextricably linked to the weakness of a health system, and therefore AI-quality data cannot be obtained through technological or data governance measures alone.
In the literature, there are polarized views regarding the capability of technology to embed societal values. One side of the debate contends that technical artifacts are value-neutral, since values are not peculiar to inanimate objects. Scholars on the other side argue that technologies tend to be value-laden. With the call to embed ethical values in technology, this article explores how AI and other adjacent technologies are designed and developed to foster social justice. Drawing insights from prior studies, this paper identifies seven African moral values considered central to actualizing social justice; of these, two stand out: respect for diversity and ethnic neutrality. By introducing use case analysis along with the Discovery, Translation, and Verification (DTV) framework and validating via Focus Group Discussion, this study revealed novel findings. First, ethical value analysis is best carried out alongside software system analysis. Second, embedding ethics in technology requires interdisciplinary expertise. Third, the DTV approach combined with software engineering methodology provides a promising way to embed moral values in technology. Against this backdrop, the two highlighted ethical values, respect for diversity and ethnic neutrality, help ground the pursuit of social justice.
The paper introduces a method for creating a categorical generalized linear model (GLM) based on information extracted from a given black-box predictor. The procedure for creating the guided GLM is as follows: for each covariate, including interactions, a covariate partition is created using partial dependence functions calculated from the given black-box predictor. To enhance the predictive performance, an auto-calibration step is used to determine which parts of each covariate partition should be kept and which parts should be merged. Given the covariate and interaction partitions, a standard categorical GLM is fitted using a lasso penalty. The performance of the proposed method is illustrated using a number of real insurance data sets where gradient boosting machine (GBM) models are used as black-box reference models. From these examples, it is seen that the predictive performance of the guided GLMs is very close to that of the corresponding reference GBMs. Further, in the examples, the guided GLMs have few parameters, making the resulting models easy to interpret. In the numerical illustrations, techniques are used to, for example, identify important interactions both locally and globally, which is essential when constructing a tariff.
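The following sketch illustrates the general idea of such a guided-GLM pipeline in a simplified form (not the authors' implementation): a GBM is fitted as the black-box reference, each covariate is partitioned by the level of its partial dependence curve, and a lasso-penalised linear model on the resulting categories plays the role of the categorical GLM. The auto-calibration and interaction-partitioning steps are omitted, and all function names, data, and parameter choices are illustrative.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)

# Toy data: two covariates, nonlinear signal in x0.
X = rng.uniform(0, 1, size=(5000, 2))
y = np.sin(6 * X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.3, 5000)

# 1) Black-box reference model (here a GBM).
gbm = GradientBoostingRegressor().fit(X, y)

def partial_dependence_1d(model, X, j, grid):
    """Average model prediction when column j is forced to each grid value."""
    return np.array([model.predict(np.column_stack(
        [X[:, :j], np.full(len(X), g), X[:, j + 1:]])).mean() for g in grid])

def pd_partition(model, X, j, n_grid=50, n_cats=6):
    """Partition covariate j: grid points whose partial dependence values fall
    in the same quantile bin of the PD curve share a category."""
    grid = np.quantile(X[:, j], np.linspace(0, 1, n_grid))
    pd_vals = partial_dependence_1d(model, X, j, grid)
    edges = np.quantile(pd_vals, np.linspace(0, 1, n_cats + 1))
    cat_of_grid = np.clip(np.searchsorted(edges, pd_vals) - 1, 0, n_cats - 1)
    # Map each observation to the category of the nearest grid point above it.
    idx = np.searchsorted(grid, X[:, j]).clip(0, n_grid - 1)
    return cat_of_grid[idx]

cats = np.column_stack([pd_partition(gbm, X, j) for j in range(X.shape[1])])

def one_hot(codes, n_levels):
    """Dummy-encode integer category codes, dropping the first level."""
    return np.eye(n_levels)[codes][:, 1:]

# 2) Lasso-penalised linear model on the categories, standing in for the
#    categorical insurance GLM described in the abstract.
Z = np.column_stack([one_hot(cats[:, j], 6) for j in range(cats.shape[1])])
glm = LassoCV(cv=5).fit(Z, y)
print("non-zero coefficients:", np.sum(glm.coef_ != 0))
```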
The EUMigraTool (EMT) provides short-term and mid-term predictions of asylum seekers arriving in the European Union, drawing on multiple sources of public information and with a focus on human rights. After 3 years of development, it has been tested in real environments by 17 NGOs working with migrants in Spain, Italy, and Greece.
This paper will first describe the functionalities, models, and features of the EMT. It will then analyze the main challenges and limitations of developing a tool for non-profit organizations, focusing on issues such as (1) the validation process and accuracy, and (2) the main ethical concerns, including the challenging exploitation plan when the main target group is NGOs.
The overall purpose of this paper is to share the results and lessons learned from the creation of the EMT, and to reflect on the main elements that need to be considered when developing a predictive tool for assisting NGOs in the field of migration.
A common statistical modelling paradigm used in actuarial pricing is (a) assuming that the possible loss model can be chosen from a dictionary of standard models; (b) selecting the model that provides the best trade-off between goodness of fit and complexity. Machine learning provides a rigorous framework for this selection/validation process. An alternative modelling paradigm, common in the sciences, is to prove the adequacy of a statistical model from first principles: for example, Planck’s distribution, which describes the spectral distribution of blackbody radiation empirically, was explained by Einstein by assuming that radiation is made of quantised harmonic oscillators (photons). In this working party we have been exploring the extent to which loss models, too, can be derived from first principles. Traditionally, the Poisson, negative binomial, and binomial distributions are used as loss count models because they are familiar and easy to work with. We show how reasoning from first principles naturally leads to non-stationary Poisson processes, Lévy processes, and multivariate Bernoulli processes depending on the context. For modelling severities, we build on previous research that shows how graph theory can be used to model property-like losses. We show how the methodology can be extended to deal with business interruption/supply chain risks by considering networks with higher-order dependencies. For liability business, we show the theoretical and practical limitations of traditional models such as the lognormal distribution. We explore the question of where the ubiquitous power-law behaviour comes from, finding a natural explanation in random growth models. We also address the derivation of severity curves in territories where compensation tables are used. This research is foundational in nature, but its results may prove useful to practitioners by guiding model selection and elucidating the relationship between the features of a risk and the model’s parameters.
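As a small illustration of one of the model families mentioned above, the sketch below simulates claim arrival times from a non-stationary Poisson process via Lewis–Shedler thinning. The seasonal intensity function and all parameter values are assumptions made for the example, not results from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def seasonal_intensity(t, base=10.0, amplitude=4.0):
    """Illustrative annual claim intensity (claims per year) with seasonality."""
    return base + amplitude * np.sin(2 * np.pi * t)

def simulate_nhpp(intensity, t_max, lam_max):
    """Lewis-Shedler thinning: simulate a homogeneous Poisson process with
    rate lam_max and keep each point t with probability intensity(t)/lam_max."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)
        if t > t_max:
            return np.array(times)
        if rng.uniform() < intensity(t) / lam_max:
            times.append(t)

# lam_max must dominate the intensity everywhere (here max = 10 + 4 = 14).
claim_times = simulate_nhpp(seasonal_intensity, t_max=5.0, lam_max=14.0)
counts_per_year = np.histogram(claim_times, bins=np.arange(0, 6))[0]
print(counts_per_year)   # non-stationary annual claim counts
```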
In the mid to late 19th century, much of Africa was under colonial rule, with the colonisers exercising power over the labour and territory of Africa. However, although Africa has largely gained independence from traditional colonial rule, another form of colonial rule still dominates the African landscape. The similarity between these forms of colonialism lies in the power dominance exhibited by Western technological corporations, much like that of the traditional colonialists. In this digital age, digital colonialism manifests in Africa through the control and ownership of critical digital infrastructure by foreign entities, leading to unequal data flows and asymmetrical power dynamics. This usually occurs under the guise of foreign corporations providing technological assistance to the continent.
Drawing on examples from the African continent, this article examines the manifestations of digital colonialism and the factors that aid its occurrence on the continent. It further explores the manifestations of digital colonialism in technologies such as Artificial Intelligence (AI), analysing the occurrence of data exploitation on the continent and the need for African ownership in cultivating the continent's digital future. The paper also recognises the benefits linked to the use of AI and advocates a cautious approach toward the deployment of AI tools in Africa. It concludes by recommending the implementation of laws, regulations, and policies that guarantee the inclusiveness, transparency, and ethical values of new technologies, with strategies toward achieving a decolonised digital future on the African continent.
We consider Markov processes that alternate between continuous motion and jumps in a general locally compact Polish space. Starting from a mechanistic construction, a first contribution of this article is to provide conditions on the dynamics so that the associated transition kernel forms a Feller semigroup, and to deduce the corresponding infinitesimal generator. As a second contribution, we investigate the ergodic properties in the special case where the jumps consist of births and deaths, a situation observed in several applications including epidemiology, ecology, and microbiology. Based on a coupling argument, we obtain conditions for convergence to a stationary measure with a geometric rate of convergence. Throughout the article, we illustrate our results using general examples of systems of interacting particles in $\mathbb{R}^d$ with births and deaths. We show that in some cases the stationary measure can be made explicit and corresponds to a Gibbs measure on a compact subset of $\mathbb{R}^d$. Our examples include in particular Gibbs measures associated with repulsive Lennard-Jones potentials and with Riesz potentials.
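For intuition only, the following sketch simulates a toy version of such a process: particles diffuse in a box between jumps, with a constant birth rate and a constant per-particle death rate so that jump times can be drawn exactly. The configuration-dependent Gibbs-type rates studied in the article are not reproduced, and all names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_birth_death_diffusion(t_max, d=2, box=1.0, birth_rate=5.0,
                                   death_rate=1.0, sigma=0.2, dt=1e-3):
    """Particles diffuse in [0, box]^d between jumps; new particles are born
    uniformly in the box at rate birth_rate, and each existing particle dies
    independently at rate death_rate.  Jump times are drawn exactly because
    the total jump rate only depends on the current number of particles."""
    particles = [rng.uniform(0, box, size=d) for _ in range(10)]
    t = 0.0
    while t < t_max:
        total_rate = birth_rate + death_rate * len(particles)
        t_jump = t + rng.exponential(1.0 / total_rate)
        # Continuous motion: Brownian increments until the next jump,
        # crudely confined to the box by clamping at the boundary.
        while t + dt < min(t_jump, t_max):
            for p in particles:
                p += sigma * np.sqrt(dt) * rng.standard_normal(d)
                np.clip(p, 0.0, box, out=p)
            t += dt
        t = t_jump
        if t >= t_max:
            break
        if rng.uniform() < birth_rate / total_rate:
            particles.append(rng.uniform(0, box, size=d))   # birth
        elif particles:
            particles.pop(rng.integers(len(particles)))     # uniform death
    return np.array(particles)

print(len(simulate_birth_death_diffusion(t_max=2.0)), "particles at t = 2")
```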
Persistent Betti numbers are a major tool in persistent homology, a subfield of topological data analysis. Many tools in persistent homology rely on the properties of persistent Betti numbers considered as a two-dimensional stochastic process $(r,s) \mapsto n^{-1/2} \big(\beta^{r,s}_q(\mathcal{K}(n^{1/d} \mathcal{X}_n)) - \mathbb{E}[\beta^{r,s}_q(\mathcal{K}(n^{1/d} \mathcal{X}_n))]\big)$. So far, pointwise limit theorems have been established in various settings. In particular, the pointwise asymptotic normality of (persistent) Betti numbers has been established for stationary Poisson processes and binomial processes with constant intensity function in the so-called critical (or thermodynamic) regime; see Yogeshwaran et al. (Prob. Theory Relat. Fields 167, 2017) and Hiraoka et al. (Ann. Appl. Prob. 28, 2018).
In this contribution, we derive a strong stabilization property (in the spirit of Penrose and Yukich, Ann. Appl. Prob. 11, 2001) of persistent Betti numbers, and we generalize the existing results on their asymptotic normality to the multivariate case and to a broader class of underlying Poisson and binomial processes. Most importantly, we show that multivariate asymptotic normality holds for all pairs (r, s), $0\le r\le s<\infty$, and that it is not affected by percolation effects in the underlying random geometric graph.
We present a closed-form solution to a discounted optimal stopping zero-sum game in a model based on a generalised geometric Brownian motion with coefficients depending on its running maximum and minimum processes. The optimal stopping times forming a Nash equilibrium are shown to be the first times at which the original process hits certain boundaries depending on the running values of the associated maximum and minimum processes. The proof is based on the reduction of the original game to the equivalent coupled free-boundary problem and the solution of the latter problem by means of the smooth-fit and normal-reflection conditions. We show that the optimal stopping boundaries are partially determined as either unique solutions to the appropriate system of arithmetic equations or unique solutions to the appropriate first-order nonlinear ordinary differential equations. The results obtained are related to the valuation of the perpetual lookback game options with floating strikes in the appropriate diffusion-type extension of the Black–Merton–Scholes model.
A graph $G$ is $q$-Ramsey for another graph $H$ if in any $q$-edge-colouring of $G$ there is a monochromatic copy of $H$, and the classic Ramsey problem asks for the minimum number of vertices in such a graph. This was broadened in the seminal work of Burr, Erdős, and Lovász to the investigation of other extremal parameters of Ramsey graphs, including the minimum degree.
It is not hard to see that if $G$ is minimally $q$-Ramsey for $H$ we must have $\delta (G) \ge q(\delta (H) - 1) + 1$, and we say that a graph $H$ is $q$-Ramsey simple if this bound can be attained. Grinshpun showed that this is typical of rather sparse graphs, proving that the random graph $G(n,p)$ is almost surely $2$-Ramsey simple when $\frac{\log n}{n} \ll p \ll n^{-2/3}$. In this paper, we explore this question further, asking for which pairs $p = p(n)$ and $q = q(n,p)$ we can expect $G(n,p)$ to be $q$-Ramsey simple.
We first extend Grinshpun’s result by showing that $G(n,p)$ is not just $2$-Ramsey simple, but is in fact $q$-Ramsey simple for any $q = q(n)$, provided $p \ll n^{-1}$ or $\frac{\log n}{n} \ll p \ll n^{-2/3}$. Next, when $p \gg \left ( \frac{\log n}{n} \right )^{1/2}$, we find that $G(n,p)$ is not $q$-Ramsey simple for any $q \ge 2$. Finally, we uncover some interesting behaviour for intermediate edge probabilities. When $n^{-2/3} \ll p \ll n^{-1/2}$, we find that there is some finite threshold $\tilde{q} = \tilde{q}(H)$, depending on the structure of the instance $H \sim G(n,p)$ of the random graph, such that $H$ is $q$-Ramsey simple if and only if $q \le \tilde{q}$. Aside from a couple of logarithmic factors, this resolves the qualitative nature of the Ramsey simplicity of the random graph over the full spectrum of edge probabilities.
We consider the constrained-degree percolation model in a random environment (CDPRE) on the square lattice. In this model, each vertex v has an independent random constraint $\kappa_v$ which takes the value $j\in \{0,1,2,3\}$ with probability $\rho_j$. The dynamics is as follows: at time $t=0$ all edges are closed; each edge e attempts to open at a random time $U(e)\sim \mathrm{U}(0,1]$, independently of all the other edges. It succeeds if at time U(e) both its end vertices have degrees strictly smaller than their respective constraints. We obtain exponential decay of the radius of the open cluster of the origin at all times when its expected size is finite. Since CDPRE is dominated by Bernoulli percolation, this result is meaningful only if the supremum of all values of t for which the expected size of the open cluster of the origin is finite is larger than $\frac12$. We prove this last fact by showing a sharp phase transition for an intermediate model.
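A direct simulation of the model just described is straightforward. The following sketch builds the CDPRE configuration at time t on a finite n×n box and reports the open cluster of a chosen central vertex; the values of ρ and t are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def cdpre_open_edges(n, rho, t):
    """Constrained-degree percolation in a random environment on an n x n box.

    Each vertex gets an independent constraint kappa_v in {0,1,2,3} with
    distribution rho.  Each edge e gets an independent opening time
    U(e) ~ U(0,1); processing edges in increasing order of U(e), an edge with
    U(e) <= t opens iff both endpoints still have degree < their constraint,
    and otherwise stays closed forever."""
    constraints = rng.choice(4, size=(n, n), p=rho)
    edges = []
    for i in range(n):
        for j in range(n):
            if i + 1 < n:
                edges.append(((i, j), (i + 1, j)))
            if j + 1 < n:
                edges.append(((i, j), (i, j + 1)))
    times = rng.uniform(0, 1, size=len(edges))
    degree = np.zeros((n, n), dtype=int)
    open_edges = []
    for k in np.argsort(times):
        if times[k] > t:
            break
        a, b = edges[k]
        if degree[a] < constraints[a] and degree[b] < constraints[b]:
            degree[a] += 1
            degree[b] += 1
            open_edges.append(edges[k])
    return open_edges

def cluster_of(v, open_edges):
    """Open cluster of vertex v via a simple graph search."""
    adj = {}
    for a, b in open_edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    seen, stack = {v}, [v]
    while stack:
        for w in adj.get(stack.pop(), []):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

open_edges = cdpre_open_edges(n=51, rho=[0.05, 0.05, 0.2, 0.7], t=0.6)
print("cluster size of the centre vertex:", len(cluster_of((25, 25), open_edges)))
```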
Paragraph 53(a) of the new insurance accounting standard IFRS 17 suggests there is a relationship between the liability for remaining coverage (“LFRC”) calculated under the general measurement model (“GMM”) and premium allocation approach (“PAA”), although it is not immediately obvious how the two are related or could result in a similar estimate for the LFRC. This paper explores the underlying relationship between the GMM and PAA through the equivalence principle and presents a set of sufficient mathematical conditions that result in an identical LFRC when calculated under the GMM and PAA. An illustrative example is included to demonstrate how the sufficient conditions can be applied in practice and the optimisation opportunities offered to actuaries and accountants when conducting PAA eligibility testing.
This paper studies a novel Brownian functional defined as the supremum of a weighted average of the running Brownian range and its running reversal from extrema on the unit interval. We derive the Laplace transform for the squared reciprocal of this functional, which leads to explicit moment expressions that are new to the literature. We show that the proposed Brownian functional can be used to estimate the spot volatility of financial returns based on high-frequency price observations.
We propose an individual claims reserving model based on the conditional Aalen–Johansen estimator, as developed in Bladt and Furrer (2023a, arXiv:2303.02119). In our approach, we formulate a multi-state problem, where the underlying variable is the individual claim size, rather than time. The states in this model represent development periods, and we estimate the cumulative distribution function of individual claim sizes using the conditional Aalen–Johansen method as transition probabilities to an absorbing state. Our methodology reinterprets the concept of multi-state models and offers a strategy for modeling the complete curve of individual claim sizes. To illustrate our approach, we apply our model to both simulated and real datasets. Having access to the entire dataset enables us to support the use of our approach by comparing the predicted total final cost with the actual amount, as well as evaluating it in terms of the continuous ranked probability score.
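For readers unfamiliar with the estimator, the following from-scratch sketch shows the (unconditional) Aalen–Johansen estimator in its simplest competing-risks form, where the transition probability to an absorbing state reduces to a cumulative incidence function. The conditional, covariate-dependent version of Bladt and Furrer and the paper's reinterpretation of claim size as the timeline are not reproduced, and the toy data are invented for illustration.

```python
import numpy as np

def aalen_johansen_cif(times, events, event_of_interest):
    """Aalen-Johansen estimator in the competing-risks special case: one
    transient state and several absorbing states, where the transition
    probability to absorbing state k is the cumulative incidence
        F_k(t) = sum_{s <= t} S(s-) * d_k(s) / n(s),
    with S the Kaplan-Meier estimate of remaining in the transient state,
    d_k(s) the number of type-k events at s and n(s) the number at risk.
    events uses 0 for right-censoring and positive integers for event types."""
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    at_risk, surv = len(times), 1.0
    grid, cif = [0.0], [0.0]
    for s in np.unique(times):
        here = times == s
        d_all = np.sum(here & (events > 0))
        d_k = np.sum(here & (events == event_of_interest))
        if at_risk > 0 and d_k > 0:
            grid.append(s)
            cif.append(cif[-1] + surv * d_k / at_risk)
        if at_risk > 0:
            surv *= 1.0 - d_all / at_risk
        at_risk -= np.sum(here)
    return np.array(grid), np.array(cif)

# Toy example: "claim sizes" at which a claim reaches one of two absorbing
# development outcomes (1 = settled, 2 = litigated); 0 = right-censored.
sizes  = [1.2, 0.8, 2.5, 3.1, 0.8, 4.0, 2.2, 1.7]
events = [1,   1,   2,   1,   0,   2,   1,   0  ]
grid, cif = aalen_johansen_cif(sizes, events, event_of_interest=1)
print(np.c_[grid, cif])   # estimated cumulative probability of outcome 1
```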
As the global population continues to age, effective management of longevity risk becomes increasingly critical for various stakeholders. Accurate mortality forecasting serves as a cornerstone for addressing this challenge. This study proposes to leverage Kernel Principal Component Analysis (KPCA) to enhance mortality rate predictions. By extending the traditional Lee-Carter model with KPCA, we capture nonlinear patterns and complex relationships in mortality data. The newly proposed KPCA Lee-Carter algorithm is empirically tested and demonstrates superior forecasting performance. Furthermore, the model’s robustness was tested during the COVID-19 pandemic, showing that the KPCA Lee-Carter algorithm effectively captures increased uncertainty during extreme events while maintaining narrower prediction intervals. This makes it a valuable tool for mortality forecasting and risk management. Our findings contribute to the growing body of literature where actuarial science intersects with statistical learning, offering practical solutions to the challenges posed by an aging world population.
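One plausible way to realise the KPCA Lee-Carter idea, sketched below under the assumption of a years × ages matrix of central death rates, is to replace the SVD step of the classical Lee-Carter model with kernel PCA, project the leading kernel score forward with a random walk with drift, and recover rates via the KPCA pre-image. The authors' exact algorithm, kernel choice, and interval construction may differ; the synthetic mortality surface is purely illustrative.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def kpca_lee_carter_forecast(m, horizon=10, gamma=None):
    """m: (n_years, n_ages) matrix of central death rates.

    Classical Lee-Carter sets log m_{x,t} = a_x + b_x k_t via SVD of the
    centred log-rates; here the linear PCA step is replaced by kernel PCA,
    the leading kernel score plays the role of k_t and is projected forward
    with a random walk with drift, and forecast rates are recovered through
    the KPCA pre-image (inverse_transform)."""
    log_m = np.log(m)
    a_x = log_m.mean(axis=0)                       # age profile
    centred = log_m - a_x                          # (years x ages)

    kpca = KernelPCA(n_components=1, kernel="rbf", gamma=gamma,
                     fit_inverse_transform=True)
    k_t = kpca.fit_transform(centred).ravel()      # nonlinear period index

    drift = np.diff(k_t).mean()
    sigma = np.diff(k_t).std(ddof=1)               # drives interval width
    steps = np.arange(1, horizon + 1)
    k_future = k_t[-1] + drift * steps             # central projection

    centred_future = kpca.inverse_transform(k_future.reshape(-1, 1))
    m_future = np.exp(a_x + centred_future)        # forecast death rates
    return m_future, k_t, sigma

# Usage with a synthetic mortality surface (illustrative numbers only).
years, ages = 40, 30
t = np.arange(years)[:, None]
x = np.arange(ages)[None, :]
m = np.exp(-8 + 0.09 * x - 0.015 * t * (1 + 0.01 * x))
forecast, k_t, sigma = kpca_lee_carter_forecast(m, horizon=5)
print(forecast.shape)   # (5, 30): five projected years of age-specific rates
```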