We consider the estimation of rare-event probabilities using sample proportions output by naive Monte Carlo or collected data. Unlike estimators built on variance reduction techniques, this naive estimator does not carry an a priori relative efficiency guarantee. On the other hand, due to the recent surge of sophisticated rare-event problems arising in safety evaluations of intelligent systems, efficiency-guaranteed variance reduction may face implementation challenges which, coupled with the availability of computation or data collection power, motivate the use of such a naive estimator. In this paper we study the uncertainty quantification, namely the construction, coverage validity, and tightness of confidence intervals, for rare-event probabilities using only sample proportions. We investigate the known normality, Wilson, and exact intervals and compare them with two new intervals derived from Chernoff’s inequality and the Berry–Esseen theorem. Moreover, we generalize our results to the natural situation where sampling stops by reaching a target number of rare-event hits. Our findings show that the normality and Wilson intervals are not always valid, but they are close to the newly developed valid intervals in terms of half-width. In contrast, the exact interval is conservative, but safely guarantees the attainment of the nominal confidence level. Our new intervals, while being more conservative than the exact interval, provide useful insights into understanding the tightness of the considered intervals.
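For orientation on how such intervals are built from a sample proportion alone, the Python sketch below computes the textbook normality (Wald), Wilson, and exact (Clopper–Pearson) 95% intervals for a hypothetical rare-event count; the sample size and hit count are made-up values, and the constructions shown are the standard ones, not the paper's new Chernoff or Berry–Esseen intervals.

    # Minimal sketch under assumed values (n and k are hypothetical, not the paper's data):
    # textbook normality (Wald), Wilson, and exact (Clopper-Pearson) 95% intervals
    # computed from a sample proportion alone.
    import math
    from scipy import stats

    n, k = 10_000_000, 23            # hypothetical Monte Carlo sample with 23 rare-event hits
    p_hat = k / n
    z = stats.norm.ppf(0.975)        # two-sided 95% confidence level

    # Normality (Wald) interval: p_hat +/- z * sqrt(p_hat * (1 - p_hat) / n)
    half = z * math.sqrt(p_hat * (1 - p_hat) / n)
    wald = (p_hat - half, p_hat + half)

    # Wilson interval: obtained by inverting the score test
    centre = (p_hat + z**2 / (2 * n)) / (1 + z**2 / n)
    spread = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    wilson = (centre - spread, centre + spread)

    # Exact (Clopper-Pearson) interval via beta quantiles (valid here since 0 < k < n)
    exact = (stats.beta.ppf(0.025, k, n - k + 1), stats.beta.ppf(0.975, k + 1, n - k))

    print(wald, wilson, exact)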
Consider a company whose business carries the potential for investment losses and is additionally vulnerable to exogenous shocks. The unpredictability of the shocks makes it challenging for both the company and the regulator to accurately assess their impact, potentially leading to an underestimation of solvency capital when employing traditional approaches. In this paper, we utilize a stylized model to conduct an extreme value analysis of the tail risk of the company under a Fréchet-type and a Gumbel-type shock. Our main results explicitly demonstrate the different roles of investment risk and shock risk in driving large losses. Furthermore, we derive asymptotic estimates for the value at risk and expected shortfall of the total loss. Numerical studies are conducted to examine the accuracy of the obtained estimates.
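For readers unfamiliar with the risk measures estimated above, the short Python sketch below computes an empirical value at risk and expected shortfall for a simulated total loss; the loss model and all parameter values are illustrative assumptions, not the stylized model or the asymptotic estimates derived in the paper.

    # Minimal sketch with assumed inputs (the loss model below is illustrative only):
    # empirical value at risk and expected shortfall of a simulated total loss at level q.
    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical total loss: a lognormal investment component plus a heavy-tailed
    # Pareto-type shock standing in for a Frechet-type shock.
    investment = rng.lognormal(mean=0.0, sigma=1.0, size=1_000_000)
    shock = rng.pareto(a=3.0, size=1_000_000) + 1.0
    loss = investment + shock

    q = 0.995
    var_q = np.quantile(loss, q)            # value at risk: the q-quantile of the loss
    es_q = loss[loss >= var_q].mean()       # expected shortfall: mean loss beyond the VaR
    print(var_q, es_q)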
The decision about when and how much to annuitize is an important element of the retirement planning of most individuals. Optimal annuitization strategies depend on the individual’s exposure to annuity risk, meaning the possibility of facing unfavorable personal and market conditions at the time of the annuitization decision. This article studies optimal annuitization strategies within a life-cycle consumption and portfolio choice model, focusing on stochastic interest rates as an important source of annuity risk. Closing a gap in the existing literature, our numerical results across different model variants reveal several typical structural effects of interest rate risk on the annuitization decision, which may, however, vary depending on preference specifications and alternative investment opportunities: when allowing for gradual annuitization, annuity risk is temporally diversified by spreading annuity purchases over the whole pre-retirement period, with annuity market participation starting earlier in the life cycle and becoming more extensive with increasing interest rate risk. Ruling out this temporal diversification possibility, as embedded in many institutional settings, incurs significant welfare losses, which increase with higher interest rate risk, together with larger overall demand for annuitization.
Modeling multivariate dependence in high dimensions is challenging, with popular solutions constructing a multivariate copula as a composition of lower-dimensional copulas. Pair-copula constructions do so using bivariate linking copulas, but their parametrization, whose size is quadratic in the dimension, is not particularly parsimonious. Besides, the number of regular vines grows super-exponentially with the dimension. One parsimonious solution is factor copulas, and in particular, the one-factor copula is touted for its simplicity – with the number of parameters linear in the dimension – while being able to cater to asymmetric non-linear dependence in the tails. In this paper, we add nuance to this claim from the point of view of a popular measure of multivariate tail dependence, the tail dependence matrix (TDM). We focus on the one-factor copula model with the linking copula belonging to the BB1 family, pointing out later the applicability of our results to a wider class of linking copulas. For this model, we derive tail dependence coefficients and study their basic properties as functions of the parameters of the linking copulas. Based on this, we study the representativeness of the class of TDMs supported by this model with respect to the class of all possible TDMs. We establish that, since the parametrization is linear in the dimension, the relative volume is unsurprisingly zero for dimensions greater than three; hence, by necessity, we present a novel manner of evaluating the representativeness that has a combinatorial flavor. We formulate the problem of finding the best representative one-factor BB1 model given a target TDM and suggest an implementation along with a simulation study of its performance across dimensions. Finally, we illustrate the results of the paper by modeling rainfall data, which is relevant in the context of weather-related insurance.
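As background on the building block used here, the sketch below evaluates the standard bivariate BB1 tail dependence coefficients as functions of its two parameters; these are the textbook bivariate expressions offered only for orientation, not the paper's derivation for the one-factor model, and the example parameter values are hypothetical.

    # Minimal sketch: textbook tail dependence coefficients of a bivariate BB1 copula
    # with parameters theta > 0 and delta >= 1. Not the paper's one-factor derivation;
    # the example parameter values are hypothetical.
    def bb1_tail_dependence(theta: float, delta: float) -> tuple:
        """Return (lower, upper) tail dependence coefficients of a bivariate BB1 copula."""
        lam_lower = 2.0 ** (-1.0 / (theta * delta))
        lam_upper = 2.0 - 2.0 ** (1.0 / delta)
        return lam_lower, lam_upper

    print(bb1_tail_dependence(theta=0.8, delta=1.5))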
This manuscript introduces deep learning models that simultaneously describe the dynamics of several yield curves. We aim to learn the dependence structure among the different yield curves induced by the globalization of financial markets and exploit it to produce more accurate forecasts. By combining the self-attention mechanism and nonparametric quantile regression, our model generates both point and interval forecasts of future yields. The architecture is designed to avoid quantile crossing issues affecting multiple quantile regression models. Numerical experiments conducted on two different datasets confirm the effectiveness of our approach. Finally, we explore potential extensions and enhancements by incorporating deep ensemble methods and transfer learning mechanisms.
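To give a concrete sense of the quantile machinery involved, the sketch below shows the pinball loss that underlies nonparametric quantile regression together with a simple rearrangement that removes quantile crossing by sorting predictions across quantile levels; this is a generic illustration under assumed inputs, not the paper's self-attention architecture or its specific anti-crossing design.

    # Minimal sketch with assumed inputs (not the paper's architecture): the pinball loss
    # used in quantile regression, plus a sorting step that removes quantile crossing.
    import numpy as np

    def pinball_loss(pred: np.ndarray, target: np.ndarray, q: float) -> float:
        """Average pinball (quantile) loss for quantile level q in (0, 1)."""
        diff = target - pred
        return float(np.mean(np.maximum(q * diff, (q - 1.0) * diff)))

    def enforce_non_crossing(quantile_preds: np.ndarray) -> np.ndarray:
        """Sort along the quantile axis so lower quantile predictions never exceed upper ones."""
        return np.sort(quantile_preds, axis=-1)

    # Hypothetical predictions for quantile levels 0.05, 0.5, 0.95 of a single future yield
    preds = enforce_non_crossing(np.array([[1.2, 0.9, 1.5]]))   # crossing repaired by sorting
    target = np.array([[1.0, 1.0, 1.0]])
    levels = [0.05, 0.5, 0.95]
    loss = sum(pinball_loss(preds[:, i], target[:, i], q) for i, q in enumerate(levels))
    print(preds, loss)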
This study conducts an optimal surrender analysis of reverse mortgage (RM) loans offered to elderly homeowners as a financing option. Recent market evidence on borrower early surrenders has raised concerns about the marketability of RM products and their impact on the program viability. In this article, we derive the borrower optimal surrender strategy as a function of the underlying value of the home used as collateral for RM contracts with tenure payment option. Using a probabilistic approach to American option pricing, we present a decomposition result for the value of the contract as the sum of its European counterpart without the surrendering provision and an early exercise premium. The methodology allows policymakers to assess the financial incentive of their policy design, from which we explain the existing market evidence about borrower rational lapse by means of the resulting surrender boundary and reference probabilities.
In this paper, we question the traditional independence assumption between mortality risk and financial risk and model the correlation between these two risks, estimating its impact on the price of different life insurance products. The interest rate and the mortality intensity are modelled as two correlated Hull and White models in an affine set-up. We introduce two building blocks, namely the zero-coupon survival bond and the mortality density, calculate them in closed form, and investigate their dependence on the correlation between mortality and financial risk, both with theoretical results and numerical analysis. We also study the impact of correlation for more structured insurance products, such as pure endowment, annuity, term insurance, whole life insurance, and mixed endowment. We show that in some cases the inclusion of correlation can lead to a severe underestimation or overestimation of the best estimate. Finally, we illustrate that the results obtained using a traditional affine diffusive set-up can be generalised to affine jump diffusions by computing the price of the zero-coupon survival bond in the presence of jumps.
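As an illustration of what the zero-coupon survival bond represents, the Python sketch below approximates the expectation of exp(-integral of (r_t + mu_t) over [0, T]) by Euler Monte Carlo for correlated mean-reverting short-rate and mortality-intensity processes; the dynamics and all parameter values are assumptions for demonstration, not the paper's closed-form solution or calibration.

    # Minimal sketch with assumed dynamics and made-up parameters (not the paper's
    # closed-form result): Monte Carlo price of a zero-coupon survival bond, i.e. the
    # expectation of exp(-integral of (r_t + mu_t) dt over [0, T]), when the short rate r
    # and mortality intensity mu are correlated mean-reverting processes.
    import numpy as np

    rng = np.random.default_rng(1)
    T, steps, paths = 10.0, 200, 50_000
    dt = T / steps
    rho = -0.3                                  # hypothetical rate-mortality correlation

    # Hypothetical Vasicek/Hull-White-type dynamics: dx = a * (b - x) dt + sigma dW
    a_r, b_r, s_r, r0 = 0.10, 0.030, 0.010, 0.020
    a_m, b_m, s_m, m0 = 0.05, 0.015, 0.002, 0.010

    r = np.full(paths, r0)
    mu = np.full(paths, m0)
    integral = np.zeros(paths)
    for _ in range(steps):
        z1 = rng.standard_normal(paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(paths)
        integral += (r + mu) * dt               # accumulate the combined discount and hazard rate
        r = r + a_r * (b_r - r) * dt + s_r * np.sqrt(dt) * z1
        mu = mu + a_m * (b_m - mu) * dt + s_m * np.sqrt(dt) * z2

    print(np.exp(-integral).mean())             # zero-coupon survival bond price at time 0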
This paper studies optimal defined-contribution (DC) pension management under stochastic interest rates and expected inflation. In addition to financial risk, we consider the risk of pre-retirement death and introduce life insurance to the pension account as an option to manage this risk. We formulate this pension management problem as a random horizon utility maximization problem and derive its explicit solution under the assumption of constant relative risk aversion utility. We calibrate our model to the U.S. data and demonstrate that the pension member’s demand for life insurance has a hump-shaped pattern with age and a U-shaped pattern with the real interest rate and expected inflation. The optimal pension account balance in our model resembles a variable annuity, wherein the death benefits are endogenously determined and depend on various factors including age, mortality, account balance, future contributions, preferences, and market conditions. Our study suggests that offering variable annuities with more flexible death benefits within the DC account could better cater to the bequest demands of its members.
Reinsurers may default when they have to pay large claims to insurers but are unable to fulfill their obligations due to various reasons such as catastrophic events, underwriting losses, inadequate capitalization, or financial mismanagement. This paper studies the problem of optimal reinsurance design from the perspectives of both the insurer and reinsurer when the insurer faces the potential default risk of the reinsurer. If the insurer aims to minimize the convex distortion risk measure of his retained loss, we prove the optimality of a stop-loss treaty when the promised ceded loss function is charged by the expected value premium principle and the reinsurer offers partial recovery in the event of default. For any fixed premium loading set by the reinsurer, we then derive the explicit expressions of optimal deductible levels for three special distortion functions, including the TVaR, Gini, and PH transform distortion functions. Under these three explicit distortion risk measures adopted by the insurer, we seek the optimal safety loading for the reinsurer by maximizing her net profit where the reserve capital is determined by the TVaR measure and the cost is governed by the expectation. This procedure ultimately leads to the Bowley solution between the insurer and the reinsurer. We provide several numerical examples to illustrate the theoretical findings. Sensitivity analyses demonstrate how different settings of default probability, recovery rate, and safety loading affect the optimal deductible values. Simulation studies are also implemented to analyze the effects induced by the default probability and recovery rate on the Bowley solution.
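For intuition on the contract studied, the sketch below simulates an insurer's retained position under a stop-loss treaty when the reinsurer defaults with some probability and pays only a partial recovery of the promised ceded loss, with the premium set by the expected value principle and risk measured by TVaR; all distributions and parameter values are hypothetical, and the deductible is fixed rather than optimized as in the paper.

    # Minimal sketch with hypothetical distributions and parameters (the deductible is
    # fixed, not optimized): the insurer's retained position under a stop-loss treaty
    # when the reinsurer may default and pay only a partial recovery, with the premium
    # set by the expected value principle and risk measured by TVaR.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 1_000_000
    loss = rng.exponential(scale=100.0, size=n)   # hypothetical ground-up loss X

    d, theta = 150.0, 0.2                         # deductible and premium safety loading
    p_default, recovery = 0.05, 0.4               # default probability and recovery rate

    ceded = np.maximum(loss - d, 0.0)             # promised stop-loss indemnity (X - d)_+
    premium = (1.0 + theta) * ceded.mean()        # expected value premium principle
    default = rng.random(n) < p_default
    paid = np.where(default, recovery * ceded, ceded)

    retained = loss - paid + premium              # insurer's total retained position
    alpha = 0.95
    var_a = np.quantile(retained, alpha)
    tvar_a = retained[retained >= var_a].mean()   # TVaR of the retained position
    print(premium, tvar_a)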
Public agencies routinely collect administrative data that, when shared and integrated, can form a rich picture of the health and well-being of the communities they serve. One major challenge is that these datasets are often siloed within individual agencies or programs, and using them effectively presents legal, technical, and cultural obstacles. This article describes work led by the North Carolina Department of Health and Human Services (NCDHHS) with support from university-based researchers to establish enterprise-wide data governance and a legal framework for routine data sharing, toward the goal of increased capacity for integrated data analysis, improved policy and practice, and better health outcomes for North Carolinians. We relied on participatory action research (PAR) methods and Deliberative Dialogue to engage a diverse range of stakeholders in the co-creation of a data governance process and legal framework for routine data sharing in NCDHHS. Four key actions were taken as a result of the participatory research process: NCDHHS developed a data strategy road map, created a data sharing guidebook to operationalize legal and ethical review of requests, staffed the Data Office, and implemented a legal framework. In addition to describing how these ongoing streams of work support data use across a large state health and human services agency, we provide three use cases demonstrating the impact of this work. This research presents a successful, actionable, and replicable framework for developing and implementing processes to support intradepartmental data access, integration, and use.
In the era of the Industrial Revolution 4.0 (IR 4.0), the adequacy of training models for industrial needs is being challenged. Africa is a skills hub threatened by unemployment among young people, especially graduates, by competition, and by challenges to the sustainability of its industrial fabric. Through a systematic literature review, this article aims to highlight the aspects and outcomes of the educational revolution that must accompany IR 4.0. The results show that IR 4.0 offers new careers and that training is a key barrier to the successful digital transformation of industry. University 4.0 is the conversion needed to overcome this barrier. This article explains this new academic model for generating skills, understood here as the ability to perform activities effectively with strong technical, digital, and flexible management capacities. Given the low adoption of IR 4.0 and the lack of a systematic literature review on the topic, this article offers a significant platform for the research community, both academic and industrial.
Enabling private sector trust stands as a critical policy challenge for the success of the EU Data Governance Act and Data Act in promoting data sharing to address societal challenges. This paper attributes the widespread trust deficit to the unmanageable uncertainty that arises from businesses’ limited usage control to protect their interests in the face of unacceptable perceived risks. For example, a firm may hesitate to share its data with others for fear that it will be leaked and fall into the hands of business competitors. To illustrate this impasse, competition, privacy, and reputational risks are introduced, respectively, in the context of three suboptimal approaches to data sharing: data marketplaces, data collaboratives, and data philanthropy. The paper proceeds by analyzing seven trust-enabling mechanisms comprising technological, legal, and organizational elements to balance trust, risk, and control, and by assessing their capacity to operate in a fair, equitable, and transparent manner. Finally, the paper examines the regulatory context in the EU and the advantages and limitations of voluntary and mandatory data sharing, concluding that an approach that effectively balances the two should be pursued.
The momentum surrounding the use of data for the public good has grown over the past few years, resulting in several initiatives and rising interest from public bodies, intergovernmental organizations, and private organizations. The potential benefits of data collaboratives (DCs) have been demonstrated in several contexts, including health, migration, pandemics, and public transport. However, these cross-sectoral partnerships have frequently not progressed beyond the pilot level, a condition that hinders their ability to generate long-term societal benefits and scale their impact. Governance models play an important role in ensuring DCs’ stability over time; however, existing models do not address this issue. Our research investigates DCs’ governance settings to determine the design configurations of governance dimensions that enhance DCs’ long-term stability. Drawing on the literature on collaborative governance and DCs, the research identifies seven key governance dimensions for the long-term stability of DCs. Then, through the analysis of 16 heterogeneous case studies, it outlines the optimal design configurations for each dimension. The findings make a significant contribution to academic discourse by shedding light on the governance aspects that bolster the long-term stability of DCs. Additionally, this research offers practical insights and evidence-based guidelines for practitioners, aiding in the creation and maintenance of enduring DCs.
A number of data governance policies have recently been introduced or revised by the Indian Government with the stated goal of unlocking the developmental and economic potential of data. The policies seek to implement standardized frameworks for public data management and establish platforms for data exchange. However, India has a longstanding history of record-keeping and information transparency practices, which are crucial in the context of data management. These connections have not been explicitly addressed in recent policies like the Draft National Data Governance Framework, 2022. To understand if record management has a role to play in modern public data governance, we analyze the key new data governance framework and the associated Indian Urban Data Exchange platform as a case study. The study examines the exchange where public records serve as a potential source of data. It evaluates the coverage and the actors involved in the creation of this data to understand the impact of records management on government departments’ ability to publish datasets. We conclude that while India recognizes the importance of data as a public good, it needs to integrate digital records management practices more effectively into its policies to ensure accurate, up-to-date, and accessible data for public benefit.