The Nelson–Siegel model is widely used in fixed income markets to describe yield curve dynamics. Its multiple time-dependent parameters conveniently capture the level, slope, and curvature dynamics of yield curves. In this study, we present a novel state-space functional regression model that incorporates a dynamic Nelson–Siegel (DNS) model and functional regression formulations applied to a multi-economy setting. This framework offers distinct advantages in explaining the relative spreads in yields between a reference economy and a response economy. To address the inherent challenges of model calibration, a kernel principal component analysis is employed to transform the functional regression representation into a finite-dimensional, tractable estimation problem. A comprehensive empirical analysis is conducted to assess the efficacy of the functional regression approach, including an in-sample performance comparison with the DNS model. We conduct a stress-testing analysis of the yield curves’ term structure within a dual-economy framework, and examine a bond ladder portfolio in a case study focused on spread modeling using historical data for US Treasury and UK bonds.
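For readers unfamiliar with the factor structure being referenced, the static Nelson–Siegel yield curve is conventionally written as follows (the symbols $\beta_0, \beta_1, \beta_2, \lambda$ follow the standard literature rather than this paper’s own notation):

```latex
y(\tau) \;=\; \beta_0 \;+\; \beta_1\,\frac{1 - e^{-\lambda\tau}}{\lambda\tau}
\;+\; \beta_2\left(\frac{1 - e^{-\lambda\tau}}{\lambda\tau} - e^{-\lambda\tau}\right),
```

where $\tau$ is time to maturity and $\beta_0$, $\beta_1$, $\beta_2$ act as level, slope, and curvature factors; the DNS formulation lets these coefficients evolve over time.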
Failure extropy, introduced by Nair and Sathar [(2020). On dynamic failure extropy. J. Indian Soc. Probab. Stat. 21: 287–313], provides a complementary perspective to entropy for quantifying uncertainty in lifetime distributions. However, it becomes mathematically invalid for distributions with unbounded support. To overcome this limitation, Tahmasebi and Toomaj [(2022). On negative cumulative extropy with applications. Commun. Stat. Theory Methods 51(15): 5025–5047] proposed the concept of negative cumulative extropy (NCEx), offering a bounded and interpretable alternative. In this paper, we extend the notion of NCEx to the bivariate dynamic setting, where uncertainty is assessed for systems whose components have failed at specified times. The proposed formulation effectively captures the uncertainty associated with past lifetimes under dependence, which the existing NCEx cannot address. The measure is further generalized to a vector-valued form, and its fundamental properties are established, including monotonicity, invariance, bounds expressed in terms of the expected inactivity time, and key characterizations. A new stochastic ordering based on the proposed measure is also established. To facilitate practical implementation, a nonparametric estimator is developed and its performance is evaluated through extensive Monte Carlo simulations. The practical relevance of the proposed measure is demonstrated using a real dataset, and its superiority over existing entropy-based approaches is shown on an additional dataset.
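As background (standard definitions from the cited literature, not restated in the abstract), for an absolutely continuous lifetime $X$ with density $f$, distribution function $F$, and support $(0, l)$, extropy and failure extropy are conventionally given by

```latex
J(X) = -\frac{1}{2}\int_{-\infty}^{\infty} f^{2}(x)\,\mathrm{d}x,
\qquad
J_F(X) = -\frac{1}{2}\int_{0}^{l} F^{2}(x)\,\mathrm{d}x .
```

Since $F(x) \to 1$ in the right tail, the second integral diverges when $l = \infty$; this is the invalidity on unbounded supports that NCEx was designed to remedy.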
Federated Learning is a novel method of training machine learning models, pioneered by Google for use on smartphones. In contrast to traditional machine learning, where data is centralised and brought to the model, Federated Learning brings the algorithm to the data, ensuring privacy is preserved. This paper demonstrates how insurance companies in a market could use this technique to collectively build a claims frequency neural network prediction model by combining all of their customer data, without actually sharing or compromising any sensitive information with each other. A simulated car insurance market with 10 players was created using the freMTPL2freq dataset. It was found that if all insurers were permitted to share their confidential data with each other, they could collectively build a model that achieved 5.57% exposure-weighted Poisson Deviance Explained (% PDE) on an unseen sample. However, if they were not permitted to share their customer data, none of them could achieve more than 3.82% exposure-weighted PDE on the same unseen sample. With Federated Learning, they can retain all of their customer data privately and construct a model whose accuracy is close to that achieved by centralising all the data for model training, reaching 5.34% exposure-weighted PDE on the same unseen sample.
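The accuracy metric quoted above can be made concrete. Below is a minimal, self-contained sketch of exposure-weighted Poisson deviance explained; the function names and toy numbers are illustrative, not taken from the paper.

```python
import math

def poisson_deviance(y, mu):
    """Total Poisson deviance; y*log(y/mu) is taken as 0 when y == 0."""
    d = 0.0
    for yi, mi in zip(y, mu):
        term = yi * math.log(yi / mi) if yi > 0 else 0.0
        d += 2.0 * (term - (yi - mi))
    return d

def pde(y, mu, exposure):
    """Percentage of Poisson deviance explained: 1 - D(model) / D(null).

    The null benchmark predicts each policy's expected claim count as the
    portfolio-wide claim frequency times its exposure, so PDE measures the
    improvement over pricing every risk at the market average.
    """
    rate = sum(y) / sum(exposure)
    mu_null = [rate * e for e in exposure]
    return 1.0 - poisson_deviance(y, mu) / poisson_deviance(y, mu_null)

# Toy example: 4 policies, model predictions closer to the truth than the null.
y = [0, 1, 0, 2]            # observed claim counts
exposure = [1.0, 0.5, 1.0, 1.0]
mu = [0.1, 0.9, 0.2, 1.8]   # model-predicted expected counts
score = pde(y, mu, exposure)
```

Exposure enters through the null model (claims offset by policy-years), which is the usual way the benchmark is defined for frequency models on datasets like freMTPL2freq.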
We study the multiserver-job setting in the load-focused multilevel scaling limit, where system load approaches capacity much faster than the growth of the number of servers $n$. We consider the “1 and $n$” system, where each job requires either one server or all $n$. Within the multilevel scaling limit, we examine three regimes: load dominated by $n$-server jobs, 1-server jobs, or balanced. In each regime, we characterize the asymptotic growth rate of the boundary of the stability region and the scaled mean queue length. We demonstrate that mean queue length peaks near balanced load via theory, numerics, and simulation.
The InsurTech industry has undergone almost a decade of development. Despite its initial success, the industry now faces challenges from global uncertainties and regulatory adjustments, which raise concerns about sustainable profit growth and the ongoing development of InsurTech. This study provides an overview of the evolution of InsurTech development from both academic and practical perspectives. A bibliometric analysis of more than 20,000 published articles, including both practice articles and academic articles, is presented. Compared with other review articles in this field, which often focus on either the practice or the scholarly side of development, this article brings together a review of both academic and practice-based articles from fields relevant to InsurTech, including artificial intelligence, the Internet of Things, and high-performance computing. A keyword extraction framework is developed and applied. Using text analysis, this study reviews the prioritized topics, analyzes the robustness of publication growth, identifies emerging insurance business lines, and highlights the challenges and gaps in both academic and practice development. This study aims to motivate collaboration between academics and industry to face the challenges posed by the integration of InsurTech into insurance operations.
We evaluate the performance and level of intergenerational cross-subsidy in flat-accrual and dynamic-accrual collective defined contribution (CDC) schemes, which have been designed to be compatible with UK legislation. In the flat-accrual scheme, all members accrue benefits at the same rate, irrespective of age. This captures the most significant feature of the Royal Mail Collective Pension Plan, which is currently the only UK CDC scheme. The dynamic-accrual schemes seek to reduce intergenerational cross-subsidies by varying the rate of benefit accrual in accordance with the age of members and the current funding level. We find that these CDC schemes can often be successful in smoothing pension outcomes post-retirement while outperforming a defined contribution scheme followed by annuity purchase at the point of retirement. However, this outperformance is not guaranteed in a flat-accrual scheme, and there is little smoothing of projected pension outcomes before retirement. There are significant intergenerational cross-subsidies in the flat-accrual scheme, which qualitatively mirror the cross-subsidies seen in defined benefit schemes, but the magnitude of the cross-subsidies is much larger in flat-accrual CDC schemes. The dynamic-accrual scheme design seeks to reduce such cross-subsidies, but we find that significant cross-subsidies still arise due to the approximate pricing methodology used to determine the benefits.
The goal of the Paris Agreement is to prevent global temperatures from rising by more than 2°C above pre-industrial levels and pursue efforts to limit them to 1.5°C above pre-industrial levels. This requires a significant reduction in global greenhouse gas emissions and achieving net zero emissions by 2050. Portfolio alignment metrics are forward-looking metrics intended to help investors understand whether their investment portfolios are on track to meet the Paris Agreement goals. They also aim to encourage capital flows towards activities needed for a net zero transition. Since 2020, several metrics have been put forward by industry groups and explored in technical papers. Companies and actuaries have been exploring the practicalities of these metrics and starting to incorporate them into investment reporting and design. But this has not been without key challenges. The Net Zero and Implications for Investment Portfolios working party aims to help actuaries improve their understanding of what net zero means for an investment portfolio and what the key mechanisms are to achieve this, as well as key challenges to date and the outlook for development.
Extropy-based divergence measures offer distinct advantages over entropy-based counterparts, owing to their mathematical simplicity and enhanced interpretability. Relative extropy, introduced by Lad et al. [5], is a symmetric divergence measure between two probability distributions, and Mohammadi et al. [8] introduced an asymmetric extropy-based divergence between two distributions. In this article, we further study these measures, their properties, and their interrelationships. To address the divergence between truncated lifetime distributions, we define dynamic relative extropy for residual and past lifetime scenarios. Exploring the interrelationships of the dynamic cases of relative extropy, extropy divergence, and extropy inaccuracy, we derive some unique properties and characterizations of the exponential distribution. A nonparametric estimator for relative extropy is developed, and its performance is assessed through numerical simulation studies. Relative extropy is applied to analyze the divergence in the lifetime patterns of mice under a lifetime feeding experiment and in the shopping patterns of customers across age and income groups. It is further applied to quantify the dissimilarity between two images.
Consecutive $k$-type systems have become important in both reliability theory and applications; despite the large literature on them, three-dimensional consecutive $k$-type systems have not yet been studied in the multi-state case. In this paper, we introduce several different types of multi-state linear three-dimensional consecutive $k$-type systems for the first time, with due consideration to the possible overlapping of failure blocks. The finite Markov chain imbedding approach is then used to derive their reliability functions, with state spaces and transition matrices provided in a novel way, and the computational process involved is illustrated through several numerical examples. Finally, some possible applications of the work and potential extensions are pointed out.
Providing comprehensive yet accessible coverage, this is the first graduate-level textbook dedicated to the mathematical theory of risk measures. It explains how economic and financial principles result in a profound mathematical theory that allows us to quantify risk in monetary terms, giving rise to risk measures. Each chapter is designed to match the length of one or two lectures, covering the core theory in a self-contained manner, with exercises included in every chapter. Additional material sections then provide further background and insights for those looking to delve deeper. This two-layer modular design makes the book suitable as the basis for diverse lecture courses of varying length and level, and a valuable resource for researchers.
Gaussian Process (GP) modeling is a probabilistic, non-parametric framework for describing spatio-temporal dependence that is well-suited for fitting risk-related surfaces. I summarize the main emerging actuarial use cases of GPs, including their applications in longevity modeling, insurance contract valuation, and loss development. The editorial also discusses further contexts with potential for GP-based approaches.
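As a concrete illustration of the kind of surface fitting the editorial surveys, here is a minimal GP regression sketch with a squared-exponential kernel and fixed hyperparameters; all names and values are illustrative, not drawn from any cited paper.

```python
import numpy as np

def gp_posterior_mean(x_train, y_train, x_test, ell=1.0, sf=1.0, noise=1e-2):
    """Posterior mean of a zero-mean GP with a squared-exponential kernel.

    ell: length-scale, sf: signal standard deviation, noise: nugget variance.
    """
    k = lambda a, b: sf**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)
    K = k(x_train, x_train) + noise * np.eye(len(x_train))  # noisy Gram matrix
    # Predictive mean: K(x*, X) K(X, X)^{-1} y
    return k(x_test, x_train) @ np.linalg.solve(K, y_train)

# Toy 1-D surface: interpolate sin(x) between observed points.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.sin(x)
mean = gp_posterior_mean(x, y, np.array([1.5]))
```

In actuarial use the inputs would typically be age/year (mortality surfaces) or accident/development period (loss triangles), with the posterior variance, omitted here for brevity, supplying the uncertainty quantification that makes GPs attractive.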
The $q$-Weibull distribution, as a generalization of the Weibull distribution, plays an important role in reliability theory, survival analysis, finance, engineering, and medical science. In contrast to the Weibull distribution, which is limited to describing monotonic hazard rate functions, the $q$-Weibull distribution offers the flexibility to model various behaviors of the hazard rate function, including unimodal, bathtub-shaped, monotonic (both increasing and decreasing), and constant. In this article, we investigate the stochastic comparison of extreme order statistics derived from independent, heterogeneous $q$-Weibull random variables using various stochastic orderings, including the usual stochastic order, hazard rate order, reversed hazard rate order, and likelihood ratio order. Some of these results are further extended to dependent setups by incorporating Archimedean copulas to model the dependence structure. Finally, we explore the behavior of extreme order statistics when the random variables are subjected to random shocks.
This paper investigates the complexity of residual lifetimes of live components in coherent systems through the lens of cumulative residual extropy and its divergence-based extension, Jensen-cumulative residual extropy. Unlike classical reliability metrics that focus on system inactivity or mean residual life, our framework quantifies the hidden informational structure of components that remain alive at the system failure time. We derive closed-form expressions for the cumulative residual extropy of conditional residual lifetimes using system signatures and establish stochastic bounds and comparisons that highlight the impact of structural configuration. A novel divergence measure, the Jensen-cumulative residual extropy, is introduced to capture discrepancies between coherent systems and benchmark $k$-out-of-$n$ structures. Numerical illustrations with gamma-distributed lifetimes demonstrate the sensitivity of cumulative residual extropy and Jensen-cumulative residual extropy to redundancy patterns and dependence structures. Furthermore, by integrating cost considerations into the divergence framework, we provide a rigorous optimization scheme for selecting system signatures that jointly minimize informational complexity and economic expenditure. The proposed approach enriches the theoretical foundation of reliability analysis and offers practical guidelines for designing resilient, cost-effective, and information-efficient engineering systems.
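For reference, the base measure extended here is, in the usual convention, the cumulative residual extropy of a nonnegative random variable $X$ with survival function $\bar{F}$:

```latex
\xi J(X) \;=\; -\frac{1}{2}\int_{0}^{\infty} \bar{F}^{2}(x)\,\mathrm{d}x ,
```

and the Jensen-type extension studied in the paper builds a divergence by comparing such quantities across systems.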
While Value-at-Risk (V@R) often fails to capture the benefits of diversification, coherent and convex risk measures are developed to align with the financial intuition that diversification reduces risk.
On atomless probability spaces, all law-determined convex risk measures on $L^p$ spaces can be represented as a supremum of integrals of Average-Value-at-Risk (AV@R) measures, demonstrating AV@R’s role as a fundamental building block.
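In the usual convention, the building block referred to here is

```latex
\mathrm{AV@R}_{\alpha}(X) \;=\; \frac{1}{\alpha}\int_{0}^{\alpha} \mathrm{V@R}_{u}(X)\,\mathrm{d}u,
\qquad \alpha \in (0,1],
```

the average of Value-at-Risk over all confidence levels below $\alpha$, which explains why suprema of mixtures of AV@R can generate the whole class of law-determined convex risk measures.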
This chapter explores various constructions of risk measures, including spectral risk measures, distortion risk measures, and moment-based risk measures, as well as risk measures generated by expected losses.
This chapter demonstrates that coherent and comonotonic additive risk measures are characterized by Choquet integrals with respect to two-alternating (submodular or concave) non-additive measures.
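For a normalized capacity $c$ (so $c(\emptyset)=0$, $c(\Omega)=1$), the Choquet integral appearing in this characterization is conventionally defined by

```latex
\int X \,\mathrm{d}c \;=\; \int_{-\infty}^{0} \bigl( c(X > x) - 1 \bigr)\,\mathrm{d}x
\;+\; \int_{0}^{\infty} c(X > x)\,\mathrm{d}x ,
```

which reduces to the ordinary expectation when $c$ is a probability measure.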