This study examines spatiotemporal patterns of tetracycline- and trimethoprim–sulfamethoxazole (TMP–SMX)–resistant Staphylococcus aureus (S. aureus) among United States (US) Veterans Health Administration (VHA) outpatients. Prevalence of tetracycline and TMP–SMX resistance in methicillin-susceptible S. aureus (MSSA) and methicillin-resistant S. aureus (MRSA) was calculated for 2010–2023. MRSA cases from 2018 to 2022 were aggregated to commuting zones (CZs) in the eastern US, and CZ-specific relative risks and temporal trends were estimated using a hierarchical Bayesian Poisson model with a spatiotemporal interaction term. Results indicated that resistance in MRSA increased by 16.4% for tetracycline and 9.3% for TMP–SMX, while MSSA resistance remained stable. High-risk CZs were limited (3% for tetracycline, 4% for TMP–SMX) and distributed across the eastern US, with notable within-state variation in risk and trend. Most CZs exhibited stationary trends, although distinct patterns in the rate and timing of changes in resistance were observed in CZ-specific plots. These evolving and geographically variable patterns of antimicrobial resistance at finer spatial scales highlight the need for local surveillance and outpatient antibiotic stewardship strategies that consider place-based sociodemographic, ecologic, and clinical factors.
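As a rough illustration of the modelling approach described above, a hierarchical Bayesian Poisson model with a spatiotemporal interaction term is often written in the following generic form; the symbols below ($y_{it}$, $E_{it}$, $u_i$, $v_i$, $\delta_i$) are illustrative and not necessarily the study's exact specification:
$$ y_{it}\mid\theta_{it} \sim \mathrm{Poisson}(E_{it}\,\theta_{it}), \qquad \log\theta_{it} = \alpha + u_i + v_i + (\beta + \delta_i)\,t, $$
where $y_{it}$ is the resistant-case count in commuting zone $i$ and period $t$, $E_{it}$ is an expected count used as an offset, $u_i$ is a spatially structured random effect (e.g., of conditional autoregressive type), $v_i$ is an unstructured effect, $\beta$ is the overall temporal trend, and $\delta_i$ is a zone-specific deviation from that trend (the space–time interaction); $\exp(u_i+v_i)$ then plays the role of the CZ-specific relative risk.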
Quantum technologies have the potential to play a significant role in future technological and economic advancement. However, our understanding of the specific narratives and topics present in national quantum technology policies is limited, even though these policies are vital for shaping global strategies, progress, and responsible development in the field. In this study, we use narrative policy analysis together with computational topic modeling to examine 55 governmental documents from 24 countries, covering over a decade. Applying BERTopic modeling within the Narrative Policy Framework, we find that national initiatives primarily focus on technological leadership for security and economic prosperity, on assessing technological readiness, and, to a lesser extent, on commercialization and societal impacts. Over time, we see a trend toward greater alignment in the prevalence of these narratives, with different themes beginning to be considered more equally. Nevertheless, the narrative surrounding responsible quantum development and societal implications remains the least represented. The study shows the strategic priorities of the analyzed countries and introduces an innovative method for analyzing policy texts. Based on the results, we recommend a balanced regulatory approach for quantum technologies that promotes ethical innovation, supports inclusive technological ecosystems, and encourages global collaboration. Furthermore, we caution that an excessive emphasis on leadership and competition may lead to isolated innovation systems that could hinder progress, cooperation, and joint efforts.
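For readers unfamiliar with the topic-modelling step, a minimal BERTopic pipeline of the kind referred to above might look as follows; the function name, language setting, and minimum topic size are placeholders rather than the study's actual configuration:
```python
from bertopic import BERTopic

def extract_policy_topics(docs):
    """Fit a BERTopic model on a corpus of policy-document passages.

    `docs` should be a list of strings, e.g. paragraphs extracted from the
    national quantum-technology strategy documents.
    """
    topic_model = BERTopic(language="english", min_topic_size=10)
    topics, _ = topic_model.fit_transform(docs)
    # Topic -1 collects outlier passages; the remaining topics are themes
    # summarised by their most representative terms.
    print(topic_model.get_topic_info())
    return topic_model, topics
```
The returned topic assignments can then be tabulated per country and per year to track how the prevalence of each narrative evolves over time.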
The $q$-Weibull distribution, as a generalization of the Weibull distribution, plays an important role in fields such as reliability theory, survival analysis, finance, engineering, and medical science. In contrast to the Weibull distribution, which is limited to describing monotonic hazard rate functions, the $q$-Weibull distribution offers the flexibility to model various behaviors of the hazard rate function, including unimodal, bathtub-shaped, monotonic (both increasing and decreasing), and constant. In this article, we investigate the stochastic comparison of extreme order statistics derived from independent, heterogeneous $q$-Weibull random variables using various stochastic orderings, including the usual stochastic order, hazard rate order, reversed hazard rate order, and likelihood ratio order. Some of these results are further extended to dependent setups by incorporating Archimedean copulas to model the dependence structure. Finally, we explore the behavior of extreme order statistics when the random variables are subjected to random shocks.
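For reference, one common parameterization of the $q$-Weibull distribution (the exact convention used in the article may differ) has survival function and density
$$ \bar F(x) = \Big[1-(1-q)\big(x/\lambda\big)^{\kappa}\Big]^{\frac{2-q}{1-q}}, \qquad f(x) = (2-q)\,\frac{\kappa}{\lambda}\Big(\frac{x}{\lambda}\Big)^{\kappa-1}\Big[1-(1-q)\big(x/\lambda\big)^{\kappa}\Big]^{\frac{1}{1-q}}, $$
for $q \lt 2$ and $\kappa,\lambda \gt 0$, on the support where the bracket is positive. Letting $q\to 1$ recovers the Weibull distribution with $\bar F(x)=\exp\{-(x/\lambda)^{\kappa}\}$, and the hazard rate $f/\bar F$ can be increasing, decreasing, constant, unimodal, or bathtub-shaped depending on $(q,\kappa)$.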
We consider $d$-dimensional stochastic differential equations (SDEs) of the form $\textrm{d}U_t = b(U_t)\,\textrm{d}t + \sigma\,\textrm{d}Z_t$. Let $X_t$ denote the solution if the driving noise $Z_t$ is a $d$-dimensional rotationally symmetric $\alpha$-stable process ($1\lt \alpha\lt 2$), and let $Y_t$ be the solution if the driving noise is a $d$-dimensional Brownian motion. Continuing the work started in Deng et al. (2025), we derive an estimate of the total variation distance $\|\textrm{law}(X_{t})-\textrm{law}(Y_{t})\|_\textrm{TV}$ for all $t \gt 0$, and we show that the ergodic measures $\mu_\alpha$ and $\mu_2$ of $X_t$ and $Y_t$, respectively, satisfy $\|\mu_\alpha-\mu_2\|_\textrm{TV} \leq Cd\log(1+d)\,(2-\alpha)/(\alpha-1)$. We show that this bound is optimal with respect to $\alpha$ by means of an Ornstein–Uhlenbeck SDE. Combining this bound with a recent interpolation result from Huang et al. (2023), we derive a bound in the Wasserstein-$p$ distance ($0 \lt p \lt 1$): $\|\mu_\alpha-\mu_2\|_{W_p} \leq Cd^{(p+3)/2}\log(1+d)\,(2-\alpha)/(\alpha-1)$.
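As a concrete instance of the optimality example mentioned above, assuming the simplest linear drift $b(u)=-u$ and $\sigma=I_d$ (a sketch, not necessarily the exact example used in the paper), the two equations become
$$ \textrm{d}X_t = -X_t\,\textrm{d}t + \textrm{d}Z_t, \qquad \textrm{d}Y_t = -Y_t\,\textrm{d}t + \textrm{d}B_t, $$
whose ergodic measures are, respectively, a rotationally symmetric $\alpha$-stable law and a Gaussian law; the total variation distance between these two explicit measures can then be bounded from below to test the sharpness of the rate $(2-\alpha)/(\alpha-1)$.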
A key factor in ensuring the accuracy of computer simulations that model physical systems is the proper calibration of their parameters based on real-world observations or experimental data. Bayesian methods provide a robust framework for quantifying and propagating the uncertainties that inevitably arise. Nevertheless, when paired with inexact models, they produce predictions that fail to represent the observed data points. Additionally, the quantified uncertainties of these overconfident models cannot be reliably propagated to other Quantities of Interest (QoIs). A promising solution involves embedding a model inadequacy term in the inference parameters, allowing the quantified model form uncertainty to influence non-observed QoIs. In this work, we revisit this embedded formulation and analyze how different likelihood constructions affect the inference of model form uncertainty, particularly under the presence of prescribed measurement noise and unavoidable model discrepancies. Two additional likelihood formulations, the global moment-matching and relative global moment-matching likelihoods, are introduced to explore alternative ways of representing the residual distribution. The behavior of these likelihoods is examined alongside existing formulations to show how different treatments of measurement noise and discrepancies shape the inferred parameter posteriors, and thereby affect the uncertainty ultimately propagated to the QoIs. Particular attention is given to how the uncertainty associated with the model inadequacy term propagates to the QoIs for the posteriors obtained from different likelihood formulations, enabling a more comprehensive statistical analysis of the prediction’s reliability. Finally, the proposed approach is applied to estimate the uncertainty in the predicted heat flux from a transient thermal simulation using temperature observations.
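To fix ideas, a schematic version of the embedded model-inadequacy construction (generic notation; $m$, $\theta$, $\delta$, $\xi$, and $\sigma_n$ are illustrative symbols, not necessarily those used in the paper) augments the calibration parameters with a stochastic perturbation and matches the resulting predictive moments to the data:
$$ \tilde\theta = \theta + \delta\,\xi,\quad \xi\sim\mathcal N(0,1), \qquad \mu_i = \mathbb E_\xi\big[m(x_i;\tilde\theta)\big],\quad \sigma_i^2 = \mathrm{Var}_\xi\big[m(x_i;\tilde\theta)\big], $$
$$ p(y\mid\theta,\delta)\ \propto\ \prod_i \mathcal N\big(y_i;\ \mu_i,\ \sigma_i^2 + \sigma_n^2\big), $$
where $\sigma_n^2$ is the prescribed measurement-noise variance. Because the inadequacy term $\delta$ enters the forward model itself, its inferred uncertainty can be pushed through the model to non-observed QoIs; the moment-matching likelihoods studied in the paper are alternative ways of constructing $p(y\mid\theta,\delta)$ from these predictive moments.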
This paper investigates the complexity of residual lifetimes of live components in coherent systems through the lens of cumulative residual extropy and its divergence-based extension, Jensen-cumulative residual extropy. Unlike classical reliability metrics that focus on system inactivity or mean residual life, our framework quantifies the hidden informational structure of components that remain alive at the system failure time. We derive closed-form expressions for the cumulative residual extropy of conditional residual lifetimes using system signatures and establish stochastic bounds and comparisons that highlight the impact of structural configuration. A novel divergence measure, the Jensen-cumulative residual extropy, is introduced to capture discrepancies between coherent systems and benchmark $k$-out-of-$n$ structures. Numerical illustrations with gamma-distributed lifetimes demonstrate the sensitivity of cumulative residual extropy and Jensen-cumulative residual extropy to redundancy patterns and dependence structures. Furthermore, by integrating cost considerations into the divergence framework, we provide a rigorous optimization scheme for selecting system signatures that jointly minimize informational complexity and economic expenditure. The proposed approach enriches the theoretical foundation of reliability analysis and offers practical guidelines for designing resilient, cost-effective, and information-efficient engineering systems.
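For orientation, the cumulative residual extropy of a non-negative lifetime $X$ with survival function $\bar F$ is commonly defined (notation varies across the literature) as
$$ \mathrm{CRJ}(X) = -\frac{1}{2}\int_0^{\infty}\bar F^{\,2}(x)\,\mathrm{d}x, $$
and the residual-lifetime version evaluated at the system failure time $t$ replaces $\bar F(x)$ by $\bar F(t+x)/\bar F(t)$; the Jensen-type divergence introduced in the paper compares such quantities between a coherent system and a benchmark $k$-out-of-$n$ structure.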
We propose a deep reinforcement learning (RL) framework designed to optimize the hedging of specific, user-defined risk factors—referred to as targeted risks—in financial instruments affected by multiple sources of uncertainty. Our methodology uses Shapley value decompositions to quantify the contribution of each grouping of risk sources to the projected contract cash flows, providing a clear attribution of profit and loss to distinct risk categories. Leveraging this decomposition, we apply deep RL to hedge only the targeted risks, while leaving non-targeted risks mostly unaffected. In addition, we introduce a joint neural network architecture in which the agent network utilizes risk estimates from a risk measurement neural network to stabilize the hedging strategy, taking into account local risk dynamics. Numerical experiments show that our approach outperforms traditional methods, such as delta hedging and traditional deep hedging, significantly reducing targeted risks in variable annuities while maintaining flexibility for broader applications.
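To illustrate the Shapley attribution step in isolation (a generic sketch, not the authors' implementation; the group labels and value function below are hypothetical), the exact Shapley value of each risk-source grouping can be computed from a set-valued cash-flow functional:
```python
from itertools import combinations
from math import factorial

def shapley_attribution(groups, value):
    """Exact Shapley attribution of a cash-flow/P&L functional to risk groups.

    groups : list of group labels, e.g. ["equity", "rates", "mortality"]
    value  : callable mapping a frozenset of groups to the projected cash flow
             (or P&L) obtained when only those risk sources are active.
    """
    n = len(groups)
    phi = {}
    for g in groups:
        others = [h for h in groups if h != g]
        contrib = 0.0
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                s = frozenset(subset)
                # Shapley weight for a coalition of size k out of n players.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                contrib += weight * (value(s | {g}) - value(s))
        phi[g] = contrib
    return phi

# Toy usage: an additive value function is recovered exactly.
toy_value = lambda s: 2.0 * ("equity" in s) + 1.0 * ("rates" in s)
print(shapley_attribution(["equity", "rates"], toy_value))  # {'equity': 2.0, 'rates': 1.0}
```
In the framework above, the value function would be evaluated by re-simulating the projected contract cash flows with the chosen risk sources switched on, and the resulting attribution defines which component of the P&L the RL agent is trained to hedge.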
This study analyses 18 years of weekly reported dengue cases (January 2002–December 2020; 988 weeks) from Costa Rica’s Central Valley to examine seasonal and multi-year patterns. To model the spatio-temporal dynamics of dengue, we employ three statistical approaches for case counts: the spatial hurdle integer-valued generalized autoregressive conditional heteroskedasticity (INGARCH) model, the spatial zero-inflated generalized Poisson (ZIGP)-INGARCH model, and the endemic–epidemic (EE) model. Covariates include rainfall and maximum temperature or, alternatively, seasonal Fourier terms to represent annual seasonality. Using a Bayesian framework, we fit the spatial INGARCH-family models to weekly dengue cases. The EE model and the ZIGP-INGARCH model, both with Fourier seasonal terms, show the best predictive accuracy and provide estimates of seasonal intensity and peak timing relevant for dengue surveillance. Incorporating annual seasonality improves modelling of multivariate weekly dengue cases in Costa Rica’s Central Valley, underscoring the importance of cyclical patterns for strengthening early warning systems and guiding targeted vector control.
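As a point of reference, the simplest Poisson INGARCH(1,1) specification underlying these models (a generic form; the spatial, hurdle, and zero-inflated extensions in the study add further terms) is
$$ Y_t \mid \mathcal F_{t-1} \sim \mathrm{Poisson}(\lambda_t), \qquad \lambda_t = \omega + a\,\lambda_{t-1} + b\,Y_{t-1}, $$
with $\omega \gt 0$ and $a,b \ge 0$; covariates such as rainfall and maximum temperature, or Fourier harmonics $\sin(2\pi k t/52)$ and $\cos(2\pi k t/52)$ capturing annual seasonality at weekly resolution, typically enter through the intercept or regression component, while the hurdle and ZIGP variants additionally model the excess of zero-case weeks.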
This paper studies an optimal reinsurance problem for a utility-maximizing insurer, subject to the reinsurer’s endogenous default and background risk. An endogenous default occurs when the insurer’s contractual indemnity exceeds the reinsurer’s available reserve, which is random due to the background risk. We obtain an analytical solution to the optimal contract for two types of reinsurance contracts, differentiated by whether their indemnity functions depend on the reinsurer’s background risk. The results shed light on the joint effect of the reinsurer’s default and background risk on the insurer’s reinsurance demand.
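Concretely, if $I(X)$ denotes the contractual indemnity for the loss $X$ and $R$ denotes the reinsurer's available reserve (random because of the background risk), the description above amounts to the insurer recovering $\min\{I(X),R\}$, so that default is the event $\{I(X) \gt R\}$ and the insurer's terminal wealth is, schematically, $w - \pi - X + \min\{I(X),R\}$, with $w$ the initial wealth and $\pi$ the reinsurance premium (the symbols $w$ and $\pi$ are illustrative, not the paper's notation).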
We investigate the limiting spectral distribution of a noncentral unified matrix model defined by $\boldsymbol{\Omega}(\mathbf{X}) = ({(\mathbf{X}\mathbf{P}_1+\mathbf{A})(\mathbf{X}\mathbf{P}_1+\mathbf{A})'}/{n_1}) ({\mathbf{X}\mathbf{P}_2\mathbf{X}'}/{n_2})^{-1}$, where $\mathbf{X}=(X_{ij})_{p\times n}$ is a random matrix with independent and identically distributed real entries having zero mean and finite second moment. $\mathbf{A}$ is a $p\times n$ nonrandom matrix. The matrices $\mathbf{P}_1$ and $\mathbf{P}_2$ are projection matrices satisfying $\mathrm{rank}(\mathbf{P}_1)=n_1$, $\mathrm{rank}(\mathbf{P}_2)=n_2$, and $\mathbf{P}_1\mathbf{P}_2=0$. When $\mathbf{P}_1$ and $\mathbf{P}_2$ are random, they are assumed to be independent of $\mathbf{X}$. When $p/n_1\to c_1\in(0,\infty)$ and $p/n_2\to c_2\in(0,1)$, we establish the almost sure convergence of the empirical spectral distribution of $\boldsymbol{\Omega}$ to a deterministic limiting distribution. Furthermore, we show that this limiting distribution coincides with that of the noncentral F-matrix, thus revealing a deep connection between the proposed model and classical multivariate analysis.
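For completeness, the empirical spectral distribution whose limit is studied above is
$$ F^{\boldsymbol{\Omega}}_p(x) = \frac{1}{p}\,\#\big\{\,1\le i\le p:\ \lambda_i(\boldsymbol{\Omega})\le x\,\big\}, $$
where $\lambda_1(\boldsymbol{\Omega}),\dots,\lambda_p(\boldsymbol{\Omega})$ are the eigenvalues of $\boldsymbol{\Omega}(\mathbf{X})$; the result asserts that $F^{\boldsymbol{\Omega}}_p$ converges weakly, almost surely, to a deterministic distribution as $p/n_1\to c_1$ and $p/n_2\to c_2$.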
As a direct consequence of liquid kerosene injection, aeroengine combustors may be categorized as non-premixed combustion systems, characterized by a swirl-stabilized and highly complex flow field. In addition to the flow of air through the fuel injector, there are a large number of other features through which the oxidizer can enter the heat release region. These can have an impact on local fuel–air mixing, inducing strong spatial and temporal variations in stoichiometry, thereby affecting emissions and combustion system performance. This article discusses a novel statistical methodology, based on principal component analysis (PCA) and K-means clustering, that aims to improve the understanding of fuel–air mixing in realistic aeroengine combustors. The method is applied in a post-processing step to data sampled from a large-eddy simulation, where every chamber inflow has been tagged with a unique passive scalar, which allows it to be traced across space and time. PCA is used to construct a low-dimensional, visually interpretable representation of a spatially localized fuel–air mixing process, while K-means clustering is employed to produce an unsupervised discretization of the flow field into regions of similar fuel–air mixing characteristics. The proposed methodology is computationally inexpensive, and the easily interpretable outputs can help the combustion engineer make better-informed decisions about combustor design.
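A minimal sketch of the post-processing step described above, assuming the tagged passive-scalar fields have already been exported as an array (the file name, component count, and cluster count below are placeholders, not the article's actual settings):
```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# X: (n_cells, n_scalars) array of passive-scalar values sampled from the LES,
# one column per tagged chamber inflow.
X = np.load("tagged_scalars.npy")  # hypothetical file name

Z = StandardScaler().fit_transform(X)            # put scalars on a common scale
scores = PCA(n_components=2).fit_transform(Z)    # low-dimensional mixing map

labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(Z)
# 'scores' can be plotted to visualise the local mixing states, while 'labels'
# partitions the flow field into regions of similar fuel-air mixing character.
```
The two steps mirror the roles described in the abstract: PCA provides the visually interpretable low-dimensional representation, and K-means provides the unsupervised discretization of the flow field.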
Combining simultaneous equations with latent variables and measurement models results in general latent variable SEMs, the subject of Chapter 6. It covers model specifications, implied moments, identification, estimation, outliers and influential cases, model fit, and respecification in such models. Chapter 6 also explores higher-order factor analysis, longitudinal models, and Bayesian estimation.
While Value-at-Risk (V@R) often fails to capture the benefits of diversification, coherent and convex risk measures are developed to align with the financial intuition that diversification reduces risk.
This chapter presents the matrix deviation inequality, a uniform deviation bound for random matrices over general sets. Applications include two-sided bounds for random matrices, refined estimates for random projections, covariance estimation in low dimensions, and an extension of the Johnson–Lindenstrauss lemma to infinite sets. We prove two geometric results: the M* bound, which shows how random slicing shrinks high-dimensional sets, and the escape theorem, which shows how slicing can completely miss them. These tools are applied to a fundamental data science task – learning structured high-dimensional linear models. We extend the matrix deviation inequality to arbitrary norms and use it to strengthen the Chevet inequality and derive the Dvoretzky–Milman theorem, which states that random low-dimensional projections of high-dimensional sets appear nearly round. Exercises cover matrix and process-level deviation bounds, high-dimensional estimation techniques such as the Lasso for sparse regression, the Garnaev–Gluskin theorem on random slicing of the cross-polytope, and general-norm extensions of the Johnson–Lindenstrauss lemma.
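For orientation, one standard formulation of the matrix deviation inequality that the chapter builds on reads as follows (constants and conventions may differ slightly from the chapter's):
$$ \mathbb E\,\sup_{x\in T}\Big|\;\|Ax\|_2-\sqrt m\,\|x\|_2\;\Big| \;\le\; CK^2\,\gamma(T), \qquad \gamma(T)=\mathbb E\,\sup_{x\in T}\,|\langle g,x\rangle|,\quad g\sim N(0,I_n), $$
where $A$ is an $m\times n$ matrix with independent, isotropic, sub-gaussian rows whose sub-gaussian norms are bounded by $K$, and $T\subset\mathbb R^n$ is an arbitrary bounded set; the uniformity over $T$ is what drives the applications listed above.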
On atomless probability spaces, all law-determined convex risk measures on Lp spaces can be represented as a supremum of integrals of Average-Value-at-Risk (AV@R) measures, demonstrating AV@R’s role as a fundamental building block.
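In symbols, with $\mathrm{V@R}_u$ the quantile of the loss at level $u$, the building block and the representation referred to above take the form (a sketch of the standard Kusuoka-type statement; the chapter's sign and convexity conventions may differ):
$$ \mathrm{AV@R}_\alpha(X)=\frac{1}{\alpha}\int_0^\alpha \mathrm{V@R}_u(X)\,\mathrm du, \qquad \rho(X)=\sup_{\mu\in\mathcal M}\left\{\int_{(0,1]}\mathrm{AV@R}_u(X)\,\mu(\mathrm du)-\beta(\mu)\right\}, $$
where $\mathcal M$ is a collection of probability measures on $(0,1]$ and the penalty $\beta$ vanishes in the coherent case.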
Chapter 7 covers models with categorical endogenous variables. It examines the consequences of treating such variables as continuous and how to modify SEMs to take account of categorical variables. It begins with single equation regression-like models for binary, ordinal, and count variables and builds to multiequation models. It includes a polychoric correlation approach, models with exogenous observed variables, the treatment of missing values, and alternative modeling approaches for categorical variables.
This chapter introduces structural equation models (SEMs). It defines SEMs and outlines their history. It also discusses several widespread misunderstandings about SEMs and presents their strengths and weaknesses. Finally, the chapter provides an outline of the remaining book chapters.