This paper investigates the complexity of residual lifetimes of live components in coherent systems through the lens of cumulative residual extropy and its divergence-based extension, Jensen-cumulative residual extropy. Unlike classical reliability metrics that focus on system inactivity or mean residual life, our framework quantifies the hidden informational structure of components that remain alive at the system failure time. We derive closed-form expressions for the cumulative residual extropy of conditional residual lifetimes using system signatures and establish stochastic bounds and comparisons that highlight the impact of structural configuration. A novel divergence measure, the Jensen-cumulative residual extropy, is introduced to capture discrepancies between coherent systems and benchmark $k$-out-of-$n$ structures. Numerical illustrations with gamma-distributed lifetimes demonstrate the sensitivity of cumulative residual extropy and Jensen-cumulative residual extropy to redundancy patterns and dependence structures. Furthermore, by integrating cost considerations into the divergence framework, we provide a rigorous optimization scheme for selecting system signatures that jointly minimize informational complexity and economic expenditure. The proposed approach enriches the theoretical foundation of reliability analysis and offers practical guidelines for designing resilient, cost-effective, and information-efficient engineering systems.
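For orientation, the cumulative residual extropy of a nonnegative random variable $X$ with survival function $\bar F$ is commonly defined as below, and a Jensen-type divergence compares the measure of an equally weighted mixture against the average of the individual measures; the paper's exact weighting and notation may differ:

$$\mathcal{E}J(X) = -\frac{1}{2}\int_0^{\infty} \bar{F}^{2}(x)\,\mathrm{d}x, \qquad J\mathcal{E}(F,G) = \mathcal{E}J\!\left(\frac{F+G}{2}\right) - \frac{1}{2}\big[\mathcal{E}J(F) + \mathcal{E}J(G)\big].$$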
We propose a deep reinforcement learning (RL) framework designed to optimize the hedging of specific, user-defined risk factors, referred to as targeted risks, in financial instruments affected by multiple sources of uncertainty. Our methodology uses Shapley value decompositions to establish each risk-source grouping’s contribution to the projected contract cash flows, providing a clear attribution of profit and loss to distinct risk categories. Leveraging this decomposition, we apply deep RL to hedge only the targeted risks, while leaving non-targeted risks mostly unaffected. In addition, we introduce a joint neural network architecture in which the agent network utilizes risk estimates from a risk measurement neural network to stabilize the hedging strategy, taking local risk dynamics into account. Numerical experiments show that our approach outperforms benchmark methods such as delta hedging and standard deep hedging, significantly reducing targeted risks in variable annuities while maintaining flexibility for broader applications.
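A minimal sketch of the joint architecture described above, in PyTorch; all class names and dimensions are hypothetical, and the training loop, Shapley decomposition, and hedging loss are omitted:

```python
# Hypothetical sketch, not the paper's implementation.
import torch
import torch.nn as nn

class RiskNet(nn.Module):
    """Risk measurement network: estimates a local risk measure from the state."""
    def __init__(self, state_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, state):
        return self.net(state)  # scalar risk estimate per state

class AgentNet(nn.Module):
    """Agent network: maps (state, risk estimate) to a hedge for the targeted risk."""
    def __init__(self, state_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, state, risk):
        # Conditioning the action on the local risk estimate is what
        # the abstract credits with stabilizing the hedging strategy.
        return self.net(torch.cat([state, risk], dim=-1))
```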
This study analyses 19 years of weekly reported dengue cases (January 2002–December 2020; 988 weeks) from Costa Rica’s Central Valley to examine seasonal and multi-year patterns. To model the spatio-temporal dynamics of dengue, we employ three statistical approaches for case counts: the spatial hurdle integer-valued generalized autoregressive conditional heteroskedasticity (INGARCH) model, the spatial zero-inflated generalized Poisson (ZIGP)-INGARCH model, and the endemic–epidemic (EE) model. Covariates include rainfall and maximum temperature, or alternatively seasonal Fourier terms representing annual seasonality. Using a Bayesian framework, we fit the spatial INGARCH-family models to weekly dengue cases. The EE model and the ZIGP-INGARCH model, both with Fourier seasonal terms, show the best predictive accuracy and provide estimates of seasonal intensity and peak timing relevant for dengue surveillance. Incorporating annual seasonality improves modelling of multivariate weekly dengue cases in Costa Rica’s Central Valley, underscoring the importance of cyclical patterns for strengthening early warning systems and guiding targeted vector control.
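For reference, a univariate log-linear INGARCH(1,1) conditional mean with annual Fourier seasonality (52-week period) has the form below, with $y_t \mid \mathcal{F}_{t-1} \sim \mathrm{Poisson}(\lambda_t)$; the spatial hurdle and ZIGP variants used in the paper add zero-inflation, dispersion, and neighbourhood terms to this skeleton:

$$\log \lambda_t = \beta_0 + \alpha \log(1 + y_{t-1}) + \gamma \log \lambda_{t-1} + \sum_{k=1}^{K}\left[a_k \sin\!\left(\frac{2\pi k t}{52}\right) + b_k \cos\!\left(\frac{2\pi k t}{52}\right)\right].$$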
This paper studies an optimal reinsurance problem for a utility-maximizing insurer, subject to the reinsurer’s endogenous default and background risk. An endogenous default occurs when the insurer’s contractual indemnity exceeds the reinsurer’s available reserve, which is random due to the background risk. We obtain an analytical solution to the optimal contract for two types of reinsurance contracts, differentiated by whether their indemnity functions depend on the reinsurer’s background risk. The results shed light on the joint effect of the reinsurer’s default and background risk on the insurer’s reinsurance demand.
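Schematically, and consistent with the abstract (the paper's exact formulation may differ), the insurer with initial wealth $w$, insurable loss $X$, and premium $\pi(I)$ faces a reinsurer whose random reserve $R$ carries the background risk, and solves

$$\max_{I \in \mathcal{I}} \; \mathbb{E}\Big[u\big(w - \pi(I) - X + \min\{I(X), R\}\big)\Big],$$

where the cap $\min\{I(X), R\}$ encodes endogenous default: the reinsurer pays in full only when $I(X) \le R$. The two contract types then differ in whether $I$ may also depend on $R$.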
We investigate the limiting spectral distribution of a noncentral unified matrix model defined by $\boldsymbol{\Omega}(\mathbf{X}) = ({(\mathbf{X}\mathbf{P}_1+\mathbf{A})(\mathbf{X}\mathbf{P}_1+\mathbf{A})'}/{n_1}) ({\mathbf{X}\mathbf{P}_2\mathbf{X}'}/{n_2})^{-1}$, where $\mathbf{X}=(X_{ij})_{p\times n}$ is a random matrix with independent and identically distributed real entries having zero mean and finite second moment. $\mathbf{A}$ is a $p\times n$ nonrandom matrix. The matrices $\mathbf{P}_1$ and $\mathbf{P}_2$ are projection matrices satisfying $\mathrm{rank}(\mathbf{P}_1)=n_1$, $\mathrm{rank}(\mathbf{P}_2)=n_2$, and $\mathbf{P}_1\mathbf{P}_2=0$. When $\mathbf{P}_1$ and $\mathbf{P}_2$ are random, they are assumed to be independent of $\mathbf{X}$. When $p/n_1\to c_1\in(0,\infty)$ and $p/n_2\to c_2\in(0,1)$, we establish the almost sure convergence of the empirical spectral distribution of $\boldsymbol{\Omega}$ to a deterministic limiting distribution. Furthermore, we show that this limiting distribution coincides with that of the noncentral F-matrix, thus revealing a deep connection between the proposed model and classical multivariate analysis.
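A quick simulation sketch of the empirical spectral distribution, with illustrative dimensions and arbitrary (hypothetical) choices of $\mathbf{P}_1$, $\mathbf{P}_2$, and $\mathbf{A}$ satisfying the stated conditions:

```python
# Monte Carlo sketch of the ESD of Omega; all concrete choices are illustrative.
import numpy as np

rng = np.random.default_rng(0)
p, n, n1, n2 = 200, 1000, 400, 500           # c1 = p/n1 = 0.5, c2 = p/n2 = 0.4

P1 = np.zeros((n, n)); P1[:n1, :n1] = np.eye(n1)                # rank n1
P2 = np.zeros((n, n)); P2[n1:n1 + n2, n1:n1 + n2] = np.eye(n2)  # rank n2, P1 @ P2 = 0

X = rng.standard_normal((p, n))
A = np.outer(np.ones(p), np.ones(n)) / np.sqrt(n)  # nonrandom; scaled to O(1) noncentrality

S1 = (X @ P1 + A) @ (X @ P1 + A).T / n1
S2 = X @ P2 @ X.T / n2                       # invertible a.s. since p < n2
eigs = np.linalg.eigvals(np.linalg.solve(S2, S1)).real
# A histogram of 'eigs' approximates the limiting spectral distribution.
```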
As a direct consequence of liquid kerosene injection, aeroengine combustors may be categorized as non-premixed combustion systems, characterized by a swirl-stabilized and highly complex flow field. In addition to the flow of air through the fuel injector, there are a large number of other features through which the oxidizer can enter the heat release region. These can have an impact on local fuel–air mixing, inducing strong spatial and temporal variations in stoichiometry, thereby affecting emissions and combustion system performance. This article discusses a novel statistical methodology, based on principal component analysis (PCA) and K-means clustering, that aims to improve the understanding of fuel–air mixing in realistic aeroengine combustors. The method is applied in a post-processing step to data sampled from a large-eddy simulation, where every chamber inflow has been tagged with a unique passive scalar, which allows it to be traced across space and time. PCA is used to construct a low-dimensional, visually interpretable representation of a spatially localized fuel–air mixing process, while K-means clustering is employed to produce an unsupervised discretization of the flow field into regions of similar fuel–air mixing characteristics. The proposed methodology is computationally inexpensive, and the easily interpretable outputs can help the combustion engineer make better-informed decisions about combustor design.
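A compact sketch of the post-processing step, assuming the LES samples are arranged in a matrix whose rows are sample points and whose columns are the inflow-tagged passive scalars; the shapes, component count, and cluster count are illustrative:

```python
# Illustrative pipeline, not the authors' code.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

scalars = np.random.rand(50_000, 12)     # placeholder: 12 tagged inflow scalars

pca = PCA(n_components=2)
coords = pca.fit_transform(scalars)      # low-dimensional mixing representation

labels = KMeans(n_clusters=6, n_init=10).fit_predict(scalars)
# Scatter-plotting 'coords' coloured by 'labels' visualizes regions of
# similar fuel-air mixing characteristics.
```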
Combining simultaneous equations with latent variables and measurement models results in general latent variable SEMs, the subject of Chapter 6. The chapter covers model specifications, implied moments, identification, estimation, outliers and influential cases, model fit, and respecification in such models. It also explores higher-order factor analysis, longitudinal models, and Bayesian estimation.
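In the widely used LISREL-style notation (the chapter's own symbols may differ), the general model couples a structural equation among latent variables with measurement equations for the observed indicators:

$$\boldsymbol{\eta} = \mathbf{B}\boldsymbol{\eta} + \boldsymbol{\Gamma}\boldsymbol{\xi} + \boldsymbol{\zeta}, \qquad \mathbf{y} = \boldsymbol{\Lambda}_y \boldsymbol{\eta} + \boldsymbol{\varepsilon}, \qquad \mathbf{x} = \boldsymbol{\Lambda}_x \boldsymbol{\xi} + \boldsymbol{\delta}.$$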
While Value-at-Risk (V@R) often fails to capture the benefits of diversification, coherent and convex risk measures are developed to align with the financial intuition that diversification reduces risk.
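The diversification intuition is encoded axiomatically: coherent risk measures are subadditive (and positively homogeneous), while convex risk measures require only the weaker convexity property, and V@R can violate both:

$$\rho(X + Y) \le \rho(X) + \rho(Y), \qquad \rho\big(\lambda X + (1-\lambda)Y\big) \le \lambda\rho(X) + (1-\lambda)\rho(Y), \quad \lambda \in [0,1].$$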
This chapter presents the matrix deviation inequality, a uniform deviation bound for random matrices over general sets. Applications include two-sided bounds for random matrices, refined estimates for random projections, covariance estimation in low dimensions, and an extension of the Johnson–Lindenstrauss lemma to infinite sets. We prove two geometric results: the M* bound, which shows how random slicing shrinks high-dimensional sets, and the escape theorem, which shows how slicing can completely miss them. These tools are applied to a fundamental data science task – learning structured high-dimensional linear models. We extend the matrix deviation inequality to arbitrary norms and use it to strengthen the Chevet inequality and derive the Dvoretzky–Milman theorem, which states that random low-dimensional projections of high-dimensional sets appear nearly round. Exercises cover matrix and process-level deviation bounds, high-dimensional estimation techniques such as the Lasso for sparse regression, the Garnaev–Gluskin theorem on random slicing of the cross-polytope, and general-norm extensions of the Johnson–Lindenstrauss lemma.
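For reference, the headline inequality states that for an $m \times n$ matrix $A$ with independent, isotropic, sub-gaussian rows satisfying $\|A_i\|_{\psi_2} \le K$, and any bounded set $T \subset \mathbb{R}^n$ (the constant and the exact dependence on $K$ vary across versions of the result),

$$\mathbb{E}\sup_{x \in T}\Big|\,\|Ax\|_2 - \sqrt{m}\,\|x\|_2\,\Big| \;\le\; C K^2 \gamma(T), \qquad \gamma(T) = \mathbb{E}\sup_{x \in T}|\langle g, x\rangle|, \quad g \sim N(0, I_n).$$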
On atomless probability spaces, all law-determined convex risk measures on Lp spaces can be represented as a supremum of integrals of Average-Value-at-Risk (AV@R) measures, demonstrating AV@R’s role as a fundamental building block.
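Schematically, with $\mathrm{AV@R}_\alpha$ the tail average of $\mathrm{V@R}$ and $\beta$ a penalty on mixing measures $\mu$ over $(0,1]$ ($\beta \equiv 0$ recovers the coherent case); sign and quantile conventions vary across texts:

$$\mathrm{AV@R}_\alpha(X) = \frac{1}{\alpha}\int_0^{\alpha} \mathrm{V@R}_u(X)\,\mathrm{d}u, \qquad \rho(X) = \sup_{\mu}\left\{\int_{(0,1]} \mathrm{AV@R}_\alpha(X)\,\mu(\mathrm{d}\alpha) - \beta(\mu)\right\}.$$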
Chapter 7 covers models with categorical endogenous variables. It examines the consequences of treating such variables as continuous and how to modify SEMs to take account of categorical variables. It begins with single equation regression-like models for binary, ordinal, and count variables and builds to multiequation models. It includes a polychoric correlation approach, models with exogenous observed variables, the treatment of missing values, and alternative modeling approaches for categorical variables.
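The workhorse device in these models is a latent-variable threshold specification (notation illustrative): an ordinal outcome $y$ is viewed as a coarsened version of a continuous latent response $y^*$,

$$y^* = \mathbf{x}'\boldsymbol{\beta} + \varepsilon, \qquad y = c \iff \tau_{c-1} < y^* \le \tau_c, \qquad -\infty = \tau_0 < \tau_1 < \cdots < \tau_C = \infty,$$

and polychoric correlations are correlations among such latent responses, estimated from the observed categorical data.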
This chapter introduces structural equation models (SEMs). It defines SEMs and outlines their history. It also discusses several widespread misunderstandings about SEMs and presents their strengths and weaknesses. Finally, the chapter provides an outline of the remaining book chapters.
This chapter explores various constructions of risk measures, including spectral risk measures, distortion risk measures, and moment-based risk measures, as well as risk measures generated by expected losses.
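In one common convention (sign and tail conventions differ across texts), a spectral risk measure averages the quantile curve against a nonnegative spectrum $\phi$ with $\int_0^1 \phi(u)\,\mathrm{d}u = 1$, with AV@R as the special case of a uniform weight on the tail:

$$\rho_\phi(X) = \int_0^1 \phi(u)\,\mathrm{V@R}_u(X)\,\mathrm{d}u, \qquad \mathrm{AV@R}_\alpha = \rho_\phi \text{ for } \phi = \tfrac{1}{\alpha}\mathbf{1}_{(0,\alpha]}.$$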
This chapter introduces techniques for bounding random processes. We develop Gaussian interpolation to derive powerful comparison inequalities for Gaussian processes, including the Slepian, Sudakov–Fernique, and Gordon inequalities, and use them to obtain sharp bounds on the operator norm of Gaussian random matrices. We also prove the Sudakov lower bound using covering numbers. We introduce the concept of Gaussian width, which connects probabilistic and geometric perspectives, and apply it to analyze the size of random projections of high-dimensional sets. Exercises cover symmetrization and contraction inequalities for random processes, the Gordon min–max inequality, sharp bounds for Gaussian matrices, the nuclear norm, effective dimension, random projections, and matrix sketching.
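As an example of the comparison principle, the Sudakov–Fernique inequality: if $(X_t)_{t \in T}$ and $(Y_t)_{t \in T}$ are mean-zero Gaussian processes with

$$\mathbb{E}(X_t - X_s)^2 \le \mathbb{E}(Y_t - Y_s)^2 \quad \text{for all } s, t \in T, \qquad \text{then} \qquad \mathbb{E}\sup_{t \in T} X_t \le \mathbb{E}\sup_{t \in T} Y_t.$$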
This chapter demonstrates that coherent and comonotonic additive risk measures are characterized by Choquet integrals with respect to two-alternating (submodular or concave) non-additive measures.
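For reference, the Choquet integral of $X$ with respect to a normalized monotone set function $c$ (with $c(\emptyset) = 0$ and $c(\Omega) = 1$) is

$$\int X\,\mathrm{d}c = \int_0^{\infty} c(X > x)\,\mathrm{d}x + \int_{-\infty}^{0} \big[c(X > x) - 1\big]\,\mathrm{d}x,$$

and the two-alternating property of $c$ is what yields subadditivity of the resulting risk measure.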