Providing comprehensive yet accessible coverage, this is the first graduate-level textbook dedicated to the mathematical theory of risk measures. It explains how economic and financial principles result in a profound mathematical theory that allows us to quantify risk in monetary terms, giving rise to risk measures. Each chapter is designed to match the length of one or two lectures, covering the core theory in a self-contained manner, with exercises included in every chapter. Additional material sections then provide further background and insights for those looking to delve deeper. This two-layer modular design makes the book suitable as the basis for diverse lecture courses of varying length and level, and a valuable resource for researchers.
In this work, considering coherent systems comprising independent components with discrete lifetimes, we introduce the notion of a discrete-time signature and discuss some of its properties. Using this signature, we establish a stochastic ordering result. We then present transformation formulas for the discrete-time signature that facilitate the comparison of systems of different sizes. Examples are provided to illustrate all the results developed here; the classical representation that this notion extends is recalled below for background.
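For context only, the standard Samaniego signature for a coherent system with i.i.d. continuous component lifetimes is stated below; the article's discrete-time signature modifies this idea to handle ties among discrete lifetimes, and its exact definition is not reproduced here.

```latex
% Classical system signature (background; not the article's discrete-time definition).
% T is the system lifetime, X_{i:n} the i-th order statistic of the component lifetimes.
s_i = \Pr\bigl(T = X_{i:n}\bigr), \qquad i = 1, \dots, n,
\qquad
\Pr(T > t) = \sum_{i=1}^{n} s_i \, \Pr\bigl(X_{i:n} > t\bigr).
```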
An increasing number of reports highlight the potential of machine learning (ML) methodologies over the conventional generalised linear model (GLM) for non-life insurance pricing. In parallel, national and international regulatory institutions are accentuating their focus on pricing fairness to quantify and mitigate algorithmic differences and discrimination. However, comprehensive studies that assess both pricing accuracy and fairness remain scarce. We propose a benchmark of the GLM against mainstream regularised linear models and tree-based ensemble models under two popular distribution modelling strategies (Poisson-gamma and Tweedie), with respect to key criteria including estimation bias, deviance, risk differentiation, competitiveness, loss ratios, discrimination and fairness. Pricing performance and fairness were assessed simultaneously on the same samples of premium estimates for the GLM and ML models. The models were compared on two open-access motor insurance datasets, each with a different type of cover (fully comprehensive and third-party liability). While no single ML model outperformed across both pricing and discrimination metrics, the GLM significantly underperformed on most. The results indicate that ML may be considered a realistic and reasonable alternative to current practices. We advocate that benchmarking exercises for risk prediction models should be carried out to assess both pricing accuracy and fairness for any given portfolio.
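A benchmark of this kind is straightforward to prototype. The sketch below is illustrative only: it uses a synthetic data-generating process and hypothetical feature names rather than the open-access datasets from the study, and compares a Poisson GLM against a tree-based ensemble on claim frequency (the frequency part of a Poisson-gamma strategy), scored by mean Poisson deviance.

```python
# Hedged sketch: GLM vs. tree-based ensemble on synthetic claim-frequency data.
# Features and the data-generating process are made up for illustration.
import numpy as np
from sklearn.linear_model import PoissonRegressor
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.metrics import mean_poisson_deviance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 20_000
X = np.column_stack([
    rng.integers(18, 80, n),   # driver age
    rng.integers(0, 30, n),    # vehicle age
    rng.integers(0, 3, n),     # region code (label-encoded, illustrative)
])
true_rate = np.exp(-2.0 + 0.01 * (60 - X[:, 0]) * (X[:, 0] < 60) - 0.02 * X[:, 1])
y = rng.poisson(true_rate)     # observed claim counts per policy-year

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

glm = PoissonRegressor(alpha=1e-4).fit(X_tr, y_tr)
gbm = HistGradientBoostingRegressor(loss="poisson", max_depth=3).fit(X_tr, y_tr)

for name, model in [("GLM", glm), ("GBM", gbm)]:
    pred = np.clip(model.predict(X_te), 1e-6, None)  # deviance needs positive predictions
    print(name, "mean Poisson deviance:", mean_poisson_deviance(y_te, pred))
```

In a full benchmark the same held-out premium estimates would also feed the fairness and discrimination metrics, so that accuracy and fairness are judged on identical samples.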
This paper presents an actuarially oriented approach for estimating health state utility values using an enhanced EQ-5D-5L framework that incorporates demographic heterogeneity directly into a Generalised Linear Model (GLM). Using data from 148 patients with Stage IV non-small cell lung cancer (NSCLC) in South Africa, an inverse Gaussian GLM was fitted with demographic variables and EQ-5D-5L domain responses to explain variation in visual analogue scale (VAS) scores. Model selection relied on Akaike Information Criterion, Bayesian Information Criterion, and residual deviance, and extensive diagnostic checks confirmed good calibration, no overdispersion, and strong robustness under bootstrap validation. The final model identified age, gender, home language, and financial dependency as significant predictors of perceived health, demonstrating that utility values differ meaningfully across demographic groups. By generating subgroup-specific estimates rather than relying on uniform value sets, the framework supports more context-sensitive cost-effectiveness modelling and fairer resource allocation. Although developed in the South African NSCLC setting, the methodology is generalisable and offers actuaries and health economists a replicable tool for integrating population heterogeneity into Health Technology Assessment, pricing analysis, and value-based care.
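A minimal sketch of the modelling step is given below, assuming hypothetical variable names and simulated data in place of the study's patient records; it fits an inverse Gaussian GLM with a log link to VAS scores using demographic covariates and two EQ-5D-5L domain levels.

```python
# Hedged sketch: inverse Gaussian GLM for VAS scores with demographic covariates.
# Data frame and variable names are hypothetical; study data are not reproduced.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 148
df = pd.DataFrame({
    "vas": rng.uniform(20, 95, n),                    # perceived health, 0-100
    "age": rng.integers(35, 85, n),
    "gender": rng.choice(["female", "male"], n),
    "home_language": rng.choice(["A", "B", "C"], n),  # placeholder categories
    "financially_dependent": rng.integers(0, 2, n),
    "mobility": rng.integers(1, 6, n),                # EQ-5D-5L domain levels 1-5
    "anxiety": rng.integers(1, 6, n),
})

model = smf.glm(
    "vas ~ age + gender + home_language + financially_dependent + mobility + anxiety",
    data=df,
    family=sm.families.InverseGaussian(link=sm.families.links.Log()),
)
res = model.fit()
print(res.summary())
print("AIC:", res.aic, "deviance:", res.deviance)
```

Model selection in the paper compares AIC, BIC, and residual deviance across candidate specifications; the printed statistics above are the quantities such a comparison would use.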
Longevity risk significantly impacts the reserve adequacy ratio of annuity issuers, thereby reducing product profitability. Effectively managing this risk has thus become a priority for insurance companies. A natural hedging strategy, which involves balancing longevity risk through an optimised portfolio of life insurance and annuity products, offers a promising solution and has attracted considerable academic attention in recent years. In this study, we construct a realistic portfolio scenario comprising annuities and life insurance policies across various ages and genders. By applying Cholesky decomposition, we transform the portfolio into an uncorrelated linear model. Our objective function minimises the variance in portfolio value changes, allowing us to explore the impact of mortality on longevity risk mitigation through natural hedging. Using actuarial mathematics and the Bayesian MCMC algorithm, we analyse the factors influencing the hedging effectiveness of a portfolio with minimised variance. Empirical findings indicate that the optimal life-to-annuity ratio is influenced by multiple factors, including gender, age, projection period, and forecast horizon. Based on these findings, we recommend that insurance companies adjust their business structures and actively pursue product innovation to enhance longevity risk management.
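The variance-minimisation step can be sketched compactly. The snippet below is an assumption-laden illustration: the covariance matrix of value changes is invented, whereas in the study it would be derived from Bayesian MCMC mortality projections; the Cholesky factor is used to solve the normal equations stably.

```python
# Hedged sketch: minimum-variance weights for a mixed life/annuity portfolio.
# The covariance matrix of value changes is illustrative only.
import numpy as np

# Hypothetical covariance for (term life, whole life, annuity) blocks
cov = np.array([
    [ 4.0,  1.5, -2.0],
    [ 1.5,  3.0, -1.8],
    [-2.0, -1.8,  5.0],
])

# Cholesky factorisation cov = L L^T lets us solve cov x = 1 stably;
# the minimum-variance weights (summing to one) are w = cov^{-1} 1 / (1' cov^{-1} 1).
L = np.linalg.cholesky(cov)
ones = np.ones(cov.shape[0])
x = np.linalg.solve(L.T, np.linalg.solve(L, ones))
w = x / x.sum()

print("minimum-variance weights:", np.round(w, 3))
print("portfolio variance:", float(w @ cov @ w))
```

The negative covariances between the life and annuity blocks are what drive the hedge: the optimiser allocates across products so that mortality shocks offset each other.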
This paper addresses the gap between theoretical modeling of cyber risk propagation and empirical analysis of loss characteristics by introducing a novel framework that integrates the two. We model the development of cyber loss counts over time using a discrete-time susceptible-infected-recovered process, link these counts to covariates, and model loss severity with regression models. By incorporating temporal and covariate-dependent transition rates, we eliminate the scaling effect of population size on infection counts, revealing the true underlying dynamics. Simulations show that this susceptible-infected-recovered framework significantly improves aggregate loss prediction accuracy, providing a more effective and practical tool for actuarial assessments and risk management in the cyber risk context.
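A minimal simulation of this kind of model is sketched below, under assumed (not the paper's) parameter values and a made-up time-varying covariate: a discrete-time chain-binomial SIR process generates weekly incident counts, and gamma-distributed severities are attached to produce an aggregate loss.

```python
# Hedged sketch: discrete-time SIR simulation of cyber-incident counts with a
# covariate-dependent infection rate, plus gamma loss severities. Illustrative only.
import numpy as np

rng = np.random.default_rng(42)
N = 10_000                    # portfolio size (number of firms)
T = 52                        # weekly time steps
beta0, beta1 = 0.25, 0.4      # baseline contact rate and covariate effect (assumed)
gamma = 0.3                   # recovery (remediation) rate (assumed)
x = np.sin(np.linspace(0, 2 * np.pi, T))   # hypothetical time-varying covariate

S, I, R = N - 10, 10, 0
new_infections = np.zeros(T, dtype=int)
for t in range(T):
    beta_t = beta0 * np.exp(beta1 * x[t])          # covariate-dependent rate
    p_inf = 1.0 - np.exp(-beta_t * I / N)          # per-susceptible infection probability
    n_inf = rng.binomial(S, p_inf)
    n_rec = rng.binomial(I, 1.0 - np.exp(-gamma))
    S, I, R = S - n_inf, I + n_inf - n_rec, R + n_rec
    new_infections[t] = n_inf

# Aggregate loss: a gamma severity drawn for each new infection
severities = rng.gamma(shape=2.0, scale=25_000.0, size=new_infections.sum())
print("total incidents:", new_infections.sum(), "aggregate loss:", round(severities.sum()))
```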
The generalized Gompertz distribution—an extension of the standard Gompertz distribution as well as the exponential distribution and the generalized exponential distribution—offers more flexibility in modeling survival or failure times as it introduces an additional parameter, which can account for different shapes of hazard functions. This enhances its applicability in various fields such as actuarial science, reliability engineering and survival analysis, where more complex survival models are needed to accurately capture the underlying processes. The effect of heterogeneity has generated increased interest in recent times. In this article, multivariate chain majorization methods are exploited to develop stochastic ordering results for extreme-order statistics arising from independent heterogeneous generalized Gompertz random variables with increased degree of heterogeneity.
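For reference, one commonly used parametrisation of the generalized Gompertz distribution is shown below (the article's exact parametrisation may differ); the additional shape parameter θ recovers the standard Gompertz case at θ = 1 and the generalized exponential case as c → 0.

```latex
% One common parametrisation of the generalized Gompertz distribution (assumed form).
F(x;\theta,\lambda,c)
  = \left[1 - \exp\!\left(-\tfrac{\lambda}{c}\bigl(e^{cx}-1\bigr)\right)\right]^{\theta},
  \qquad x > 0,\ \ \theta,\lambda,c > 0,
\qquad
f(x;\theta,\lambda,c)
  = \theta\lambda e^{cx}
    \exp\!\left(-\tfrac{\lambda}{c}\bigl(e^{cx}-1\bigr)\right)
    \left[1 - \exp\!\left(-\tfrac{\lambda}{c}\bigl(e^{cx}-1\bigr)\right)\right]^{\theta-1}.
```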
This chapter presents what we believe to be the apogee of the material that will form the basis for the success of quantum physics–like applications. Readers are invited to study carefully the two-slit interference experiment with agents (and the agent two-preference interference) for a variety of real potential functions.
This chapter starts with a discussion of models that inform probability versus the case where probability is inherent in the model. The chapter then argues in detail why a particular interpretation of quantum mechanics, Bohmian mechanics, can be useful in finance, and we provide an example of how such mechanics can be applied to daily returns on commodity prices. We also briefly look into the potential connection between Bohmian mechanics and a macroscopic fluid system.
The chapter begins by answering the question “how did physics, which originally represented the philosophy of nature, evolve into its modern phase of the philosophical as well as the scientific knowledge of nature” through a brief history of physics, presented in four chronological phases: ancient times, the scientific revolution, the birth of modern physics, and the modern version of the quest for the nature of reality. This is followed by the authors’ interpretation of the generic structure of physical theories of motion and by an application of that interpretation to the problem of “the motion of a particle under gravity”. We then introduce the reader to three key features that characterize financial and economic systems: markets, decision making, and the economic actor. The chapter discusses each of these three ingredients in some detail and ends by providing the abstracts of the remaining chapters.
We start this chapter by arguing why quantum probability is a good candidate for modelling purposes in decision-making contexts. The treatment of the quantum formalism here centres on the argument that it can accommodate paradoxical outcomes in decision making: quantum probability offers a response to decision-making contexts where a consistent violation of the law of total probability occurs. Strong results have been obtained in decision-making applications, and we discuss in some detail the so-called QQ equality and the Aumann theorem.
One of the main purposes of this chapter is to explain, albeit in an abstract manner, how quantum physics–like models of economics-finance contexts would differ from quantum math-like (or simply, quantum-like) models. To this end, the chapter begins by considering what may be called the “physical” foundations of quantum theory, which pertain to its theoretical, experimental, and interpretational aspects. With reference to these physical foundations, the chapter elaborates on the expectations that agent-centric economics-finance models must meet to qualify as “quantum physics–analogous”. Then, after briefly reviewing some of the prominent theories of analogical argument and reasoning from the philosophy of science (for instance, Aristotle’s theory, Hesse’s theory, Gentner’s structure-mapping theory and Bartha’s articulation model), the chapter ends by proposing a strategy for the systematic construction of quantum physics–analogous models of economics and finance.