We consider spline-based additive models for estimation of conditional treatment effects. To handle the uncertainty due to variable selection, we propose a method of model averaging with weights obtained by minimizing a J-fold cross-validation criterion, in which a nearest neighbor matching is used to approximate the unobserved potential outcomes. We show that the proposed method is asymptotically optimal in the sense of achieving the lowest possible squared loss in some settings and assigning all weight to the correctly specified models if such models exist in the candidate set. Moreover, consistency properties of the optimal weights and model averaging estimators are established. A simulation study and an empirical example demonstrate the superiority of the proposed estimator over other methods.
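As a rough illustration of the weight-selection step only, the sketch below chooses model-averaging weights on the probability simplex by minimizing a cross-validation squared-error criterion. The names `cv_preds` and `pseudo_effect`, and the plain quadratic criterion, are illustrative assumptions; the nearest-neighbour matching that would produce the pseudo-outcomes is not shown, and this is not the paper's exact estimator.

```python
# Minimal sketch (names and the plain quadratic criterion are illustrative).
# cv_preds (n x M) holds out-of-fold predictions of the conditional treatment
# effect from M candidate spline models; pseudo_effect (n,) is a matched proxy
# for the unobserved individual effect, e.g. produced by nearest-neighbour
# matching (that step is not shown here).
import numpy as np
from scipy.optimize import minimize

def cv_model_averaging_weights(cv_preds, pseudo_effect):
    n, M = cv_preds.shape

    def criterion(w):
        resid = pseudo_effect - cv_preds @ w
        return np.mean(resid ** 2)

    # Weights are restricted to the probability simplex.
    constraints = [{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}]
    bounds = [(0.0, 1.0)] * M
    w0 = np.full(M, 1.0 / M)
    res = minimize(criterion, w0, method="SLSQP",
                   bounds=bounds, constraints=constraints)
    return res.x

# Toy usage with synthetic numbers.
rng = np.random.default_rng(0)
cv_preds = rng.normal(size=(200, 3))
pseudo_effect = cv_preds[:, 0] + 0.1 * rng.normal(size=200)
print(cv_model_averaging_weights(cv_preds, pseudo_effect))
```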
European Union (EU) public opinion research is a rich field of study. However, as citizens often have little knowledge of the EU, the question remains to what extent their attitudes are grounded in coherent, ideologically informed belief systems. As survey research is not well equipped to study this question, this paper explores the value of the method of cognitive mapping (CM) for public opinion research by studying the cognitive maps of 504 Dutch citizens regarding the Eurozone crisis. The paper shows that respondents perceive the Eurozone crisis predominantly as a governmental debt crisis. Moreover, the concept of bureaucracy unexpectedly plays a key role in their belief systems, exerting an ambiguous but overall negative effect on the Eurozone and on trust in the EU. Contrary to expectations, the attitudes of the respondents are more solidly grounded in (ordoliberal) ideology than those of the Dutch elite. Finally, the paper introduces new ways to measure ambivalence, prompting a reevaluation of the significance of different forms of ambivalence and their impact on political behavior. Overall, the results of this study suggest that CM forms a promising addition to the toolbox of public opinion research.
Since 2017, Digital Twins (DTs) have gained prominence in academic research, with researchers actively conceptualising, prototyping, and implementing DT applications across disciplines. The transformative potential of DTs has also attracted significant private sector investment, leading to substantial advancements in their development. However, their adoption in politics and public administration remains limited. While governments fund extensive DT research, their application in governance is often seen as a long-term prospect rather than an immediate priority, hindering their integration into decision-making and policy implementation. This study bridges the gap between theoretical discussions and practical adoption of DTs in governance. Using the Technology Readiness Level (TRL) and Technology Acceptance Model (TAM) frameworks, we analyse key barriers to adoption, including technological immaturity, limited institutional readiness, and scepticism regarding practical utility. Our research combines a systematic literature review of DT use cases with a case study of Germany, a country characterised by its federal governance structure, strict data privacy regulations, and strong digital innovation agenda. Our findings show that while DTs are widely conceptualised and prototyped in research, their use in governance remains scarce, particularly within federal ministries. Institutional inertia, data privacy concerns, and fragmented governance structures further constrain adoption. We conclude by emphasising the need for targeted pilot projects, clearer governance frameworks, and improved knowledge transfer to integrate DTs into policy planning, crisis management, and data-driven decision-making.
The limited stop-loss transform, along with the stop-loss and limited loss transforms – which are special or limiting cases of the limited stop-loss transform – is one of the most important transforms used in insurance, and it also appears extensively in many other fields, including finance, economics, and operations research. When the distribution of the underlying loss is uncertain, the worst-case risk measure for the limited stop-loss transform plays a key role in many quantitative risk management problems in insurance and finance. In this paper, we derive expressions for the worst-case distortion risk measure of the limited stop-loss transform, as well as for the stop-loss and limited loss transforms, when the distribution of the underlying loss is uncertain and lies in a general $k$-order Wasserstein ball that contains a reference distribution. We also identify the worst-case distributions under which the worst-case distortion risk measures are attained. Our results also recover the findings of Guan et al. ((2023) North American Actuarial Journal, 28(3), 611–625) regarding the worst-case stop-loss premium over a $k$-order Wasserstein ball. Furthermore, we use numerical examples to illustrate the worst-case distributions and the worst-case risk measures derived in this paper. We also examine the effects of the reference distribution, the radius of the Wasserstein ball, and the retention levels of limited stop-loss reinsurance on the premium for this type of reinsurance.
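For concreteness, these transforms admit the following standard formulation (notation illustrative): for a loss $X$ with retention $d\ge 0$ and limit $l>0$, the limited stop-loss transform is $\min\{(X-d)_+,\,l\}$, where $(X-d)_+=\max\{X-d,\,0\}$; the stop-loss transform $(X-d)_+$ is the limiting case $l\to\infty$, and the limited loss transform $X\wedge l=\min\{X,\,l\}$ is the special case $d=0$.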
In recent years, a wide range of mortality models has been proposed to address the diverse factors influencing mortality rates, which has highlighted the need to perform model selection. Traditional mortality model selection methods, such as AIC and BIC, often require fitting multiple models independently and ranking them based on these criteria. This process can fail to account for uncertainties in model selection, which can lead to overly optimistic prediction intervals, and it disregards the potential insights from combining models. To address these limitations, we propose a novel Bayesian model selection framework that integrates model selection and parameter estimation into the same process. This requires creating a model-building framework that will give rise to different models by choosing different parametric forms for each term. Inference is performed using the reversible jump Markov chain Monte Carlo algorithm, which is devised to allow for transitions between models of different dimensions, as is the case for the models considered here. We develop modeling frameworks for data stratified by age and period and for data stratified by age, period, and product. Our results are presented in two case studies.
The escalating complexity of global migration patterns makes evident the limitations of traditional reactive governance approaches and the urgent need for anticipatory and forward-thinking strategies. This Special Collection, “Anticipatory Methods in Migration Policy: Forecasting, Foresight, and Other Forward-Looking Methods in Migration Policymaking,” brings together scholarly works and practitioners’ contributions dedicated to the state of the art of anticipatory approaches. It showcases significant methodological evolutions, highlighting innovations ranging from advanced quantitative forecasting using machine learning to predict displacement, irregular border crossings, and asylum trends, to rich, in-depth insights generated through qualitative foresight, participatory scenario building, and hybrid methodologies that integrate diverse knowledge forms. The contributions collectively emphasize the power of methodological pluralism, address a spectrum of migration drivers, including conflict and climate change, and critically examine the opportunities, ethical imperatives, and governance challenges associated with novel data sources, such as mobile phone data. By focusing on translating predictive insights and foresight into actionable policies and humanitarian action, this collection aims both to advance academic discourse and to provide tangible guidance for policymakers and practitioners. It underscores the importance of navigating inherent uncertainties and strengthening ethical frameworks to ensure that innovations in anticipatory migration policy enhance preparedness and resource allocation and uphold human dignity in an era of increasing global migration.
Time series of counts often display complex dynamic and distributional characteristics. For this reason, we develop a flexible framework combining the integer-valued autoregressive (INAR) model with a latent Markov structure, leading to the hidden Markov model-INAR (HMM-INAR). First, we illustrate conditions for the existence of an ergodic and stationary solution and derive closed-form expressions for the autocorrelation function and its components. Second, we show consistency and asymptotic normality of the conditional maximum likelihood estimator. Third, we derive an efficient expectation–maximization algorithm with steps available in closed form, which allows for fast computation of the estimator. Fourth, we provide an empirical illustration and estimate the HMM-INAR on the number of trades of the Standard & Poor’s Depositary Receipts S&P 500 Exchange-Traded Fund Trust. The combination of the latent HMM structure with a simple INAR$(1)$ formulation not only provides a better fit than alternative specifications for count data but also preserves the economic interpretation of the results.
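As a toy illustration of the model class (not the paper's exact specification), the sketch below simulates an INAR(1) recursion based on binomial thinning whose thinning probability and Poisson innovation mean are driven by a two-state hidden Markov chain; all parameter values and the function name `simulate_hmm_inar` are illustrative.

```python
# Hedged sketch: simulating a toy hidden-Markov-switching INAR(1) process.
# The hidden state follows a two-state Markov chain and selects the
# binomial-thinning probability and Poisson innovation mean; the exact
# parameterisation used in the paper may differ.
import numpy as np

def simulate_hmm_inar(T, trans, alphas, lambdas, seed=0):
    rng = np.random.default_rng(seed)
    K = trans.shape[0]
    s = rng.integers(K)            # initial hidden state
    x = rng.poisson(lambdas[s])    # initial count
    states, counts = [s], [x]
    for _ in range(T - 1):
        s = rng.choice(K, p=trans[s])              # Markov transition
        survivors = rng.binomial(x, alphas[s])     # binomial thinning of previous count
        x = survivors + rng.poisson(lambdas[s])    # add new-arrival innovations
        states.append(s)
        counts.append(x)
    return np.array(states), np.array(counts)

# Illustrative parameters: a calm regime and a high-activity regime.
trans = np.array([[0.95, 0.05],
                  [0.10, 0.90]])
states, counts = simulate_hmm_inar(1000, trans,
                                   alphas=np.array([0.3, 0.6]),
                                   lambdas=np.array([1.0, 5.0]))
print(counts[:20])
```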
We prove that determining the weak saturation number of a host graph $F$ with respect to a pattern graph $H$ is computationally hard, even when $H$ is the triangle. Our main tool establishes a connection between weak saturation and the shellability of simplicial complexes.
For a multidimensional Itô semimartingale, we consider the problem of estimating integrated volatility functionals. Jacod and Rosenbaum (2013, The Annals of Statistics 41(3), 1462–1484) studied a plug-in type of estimator based on a Riemann sum approximation of the integrated functional and a spot volatility estimator with a forward uniform kernel. Motivated by recent results that show that spot volatility estimators with general two-sided kernels of unbounded support are more accurate, in this article, an estimator using a general kernel spot volatility estimator as the plug-in is considered. A biased central limit theorem for estimating the integrated functional is established with an optimal convergence rate. Central limit theorems for properly de-biased estimators are also obtained both at the optimal convergence regime for the bandwidth and when applying undersmoothing. Our results show that one can significantly reduce the estimator’s bias by adopting a general kernel instead of the standard uniform kernel. Our proposed bias-corrected estimators are found to maintain remarkable robustness against bandwidth selection in a variety of sampling frequencies and functions.
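The following univariate toy sketch illustrates the plug-in idea under simplifying assumptions: spot variance is estimated with a two-sided exponential kernel (chosen purely for illustration) and the integrated functional is approximated by a Riemann sum. The de-biasing step and the multivariate setting discussed above are omitted, and all names are illustrative.

```python
# Hedged sketch (univariate toy): a plug-in estimator of an integrated
# volatility functional  int_0^T g(c_s) ds  built from a kernel spot
# variance estimator with a two-sided exponential kernel.
import numpy as np

def kernel_spot_variance(increments, dt, bandwidth):
    """Spot variance estimate at each grid point from squared increments."""
    n = len(increments)
    times = np.arange(n) * dt
    sq = increments ** 2
    c_hat = np.empty(n)
    for i in range(n):
        w = np.exp(-np.abs(times - times[i]) / bandwidth)  # two-sided kernel weights
        c_hat[i] = np.sum(w * sq) / (np.sum(w) * dt)
    return c_hat

def integrated_functional(increments, dt, bandwidth, g):
    """Riemann-sum plug-in estimate of int_0^T g(c_s) ds."""
    c_hat = kernel_spot_variance(increments, dt, bandwidth)
    return np.sum(g(c_hat)) * dt

# Toy usage: constant true variance 0.04, target int_0^1 c_s^2 ds = 0.0016.
rng = np.random.default_rng(1)
n = 5000
dt = 1.0 / n
increments = 0.2 * np.sqrt(dt) * rng.standard_normal(n)
print(integrated_functional(increments, dt, bandwidth=0.02, g=lambda c: c ** 2))
```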
In this article, we develop a novel high-dimensional coefficient estimation procedure based on high-frequency data. Unlike usual high-dimensional regression procedures such as LASSO, we additionally handle the heavy-tailedness of high-frequency observations as well as time variations of coefficient processes. Specifically, we employ the Huber loss and a truncation scheme to handle heavy-tailed observations, while $\ell _{1}$-regularization is adopted to overcome the curse of dimensionality. To account for the time-varying coefficient, we estimate local coefficients, which are biased due to the $\ell _{1}$-regularization. Thus, when estimating integrated coefficients, we propose a debiasing scheme to enjoy the law of large numbers property and employ a thresholding scheme to further accommodate the sparsity of the coefficients. We call this the robust thresholding debiased LASSO (RED-LASSO) estimator. We show that the RED-LASSO estimator can achieve a near-optimal convergence rate. In the empirical study, we apply the RED-LASSO procedure to high-dimensional integrated coefficient estimation using high-frequency trading data.
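As a rough sketch of the ingredients named above (Huber loss, truncation, $\ell_1$-regularization), the code below fits an $\ell_1$-penalised Huber regression by proximal gradient descent after clipping the responses. It is not the RED-LASSO estimator itself: the local-coefficient, debiasing, and thresholding steps are omitted, and the function names and parameters are illustrative.

```python
# Hedged sketch: l1-penalised Huber regression via proximal gradient (ISTA),
# with a simple truncation of heavy-tailed responses.  Not the RED-LASSO
# estimator; names and tuning values are illustrative.
import numpy as np

def huber_grad(r, delta):
    """Derivative of the Huber loss (threshold delta) at residuals r."""
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

def soft_threshold(b, t):
    return np.sign(b) * np.maximum(np.abs(b) - t, 0.0)

def huber_lasso(X, y, lam, delta=1.0, truncation=None, n_iter=2000):
    if truncation is not None:                       # truncate heavy-tailed responses
        y = np.clip(y, -truncation, truncation)
    n, p = X.shape
    step = n / (np.linalg.norm(X, 2) ** 2)           # 1 / Lipschitz constant of the gradient
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = -X.T @ huber_grad(y - X @ beta, delta) / n
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta

# Toy usage: sparse signal, heavy-tailed noise.
rng = np.random.default_rng(2)
X = rng.standard_normal((300, 50))
beta_true = np.zeros(50)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + rng.standard_t(df=2, size=300)
print(np.round(huber_lasso(X, y, lam=0.2, delta=1.0, truncation=10.0)[:6], 2))
```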
Disclosing transition plans to meet future net zero climate targets requires organisations to move fundamentally beyond traditional, historically oriented stewardship reporting towards forward-looking accountability in order to meet their obligations to their future shareholders and stakeholders. However, despite a range of varying requirements concerning the disclosure of climate-related targets to meet the Paris Agreement, confusion remains over the appropriate form, content and standard of transition plan disclosure required to implement these targets. The former UK-based Transition Plan Taskforce set out globally leading requirements for transition plan reporting in 2023; however, the extent to which these recommendations have since been implemented has not yet been comprehensively analysed. This paper summarises the key differences between UK, European and International guidelines for transition plans and then discusses the results of an analysis of variations in transition plan reporting practices by a sample of globally large financial and industrial organisations. It is predicted that a combination of firm-level climate risk and country-level institutional factors is associated with the propensity to produce public transition plans. The empirical results are largely supportive of these predictions. Firms with greater levels of engagement with climate risk (as proxied by the CDP score), and UK- and/or EU-based firms, are more likely to produce climate transition plans. The empirical results are corroborated by qualitative analysis, which compares examples of good-practice transition plan reporting by a sub-sample of firms within each industry sector. It is concluded that the resulting lack of clarity from regulatory authorities, and the diversity in transition plan reporting practices by globally large financial and industrial firms, may result in confusion and a lack of informed decision-making by their stakeholders and policymakers concerning climate-related resilience and risk mitigation actions.
Observed competitive market profit margins in property and casualty insurance have typically been higher than the capital asset pricing model adjustment for risky loss cashflows would suggest. Explanations for this difference include frictions from operating an insurance business and capital risks that are not adequately recognised and rewarded by the theory. It is proposed that the difference may instead be related to the consumption of insurance services and claim fulfilment, with an additional fair profit margin evaluated using marginal utility pricing principles.
In many economies, youth unemployment rates over the past two decades have exceeded 10 percentage points, highlighting that not all youth transition successfully from schooling to employment. Equally disturbing are the high rates of young adults not observed in employment, education, or training, commonly referred to as the “NEET” rate. There is not a single pathway for successful transitions. Understanding these pathways and the influences of geographic location, employment opportunities, and family and community characteristics that contribute to positive transitions is crucial. While abundant data exists to support this understanding, it is often siloed and not easily combined to inform schools, communities, and policymakers about effective strategies and necessary changes. Researchers prefer working with datasets, while many stakeholders favor results presented through storytelling and visualizations. This paper introduces YouthView, an innovative online platform designed to provide comprehensive insights into youth transition challenges and opportunities. YouthView integrates information from datasets on youth disadvantage indicators, employment, skills demand, and job vacancies at regional levels. The platform features two modes: a guided storytelling mode with selected visualizations, and an open-ended suite of exploratory dashboards for in-depth data analysis. This dual approach enables policymakers, community organizations, and education providers to gain a nuanced understanding of the challenges faced by different communities. By illuminating spatial patterns, socioeconomic disparities, and relationships between disadvantage factors and labor market dynamics, YouthView facilitates informed decision-making and the development of targeted interventions, ultimately contributing to improved youth economic outcomes and expanded opportunities in areas of greatest need.
A seminal result of Komlós, Sárközy, and Szemerédi states that any $n$-vertex graph $G$ with minimum degree at least $(1/2+\alpha )n$ contains every $n$-vertex tree $T$ of bounded degree. Recently, Pham, Sah, Sawhney, and Simkin extended this result to show that such graphs $G$ in fact support an optimally spread distribution on copies of a given $T$, which implies, using the recent breakthroughs on the Kahn-Kalai conjecture, the robustness result that $T$ is a subgraph of sparse random subgraphs of $G$ as well. Pham, Sah, Sawhney, and Simkin construct their optimally spread distribution by following closely the original proof of the Komlós-Sárközy-Szemerédi theorem which uses the blow-up lemma and the Szemerédi regularity lemma. We give an alternative, regularity-free construction that instead uses the Komlós-Sárközy-Szemerédi theorem (which has a regularity-free proof due to Kathapurkar and Montgomery) as a black box. Our proof is based on the simple and general insight that, if $G$ has linear minimum degree, almost all constant-sized subgraphs of $G$ inherit the same minimum degree condition that $G$ has.
We introduce a new family of coalescent mean-field interacting particle systems by introducing a pinning property that acts over a chosen sequence of multiple time segments. Throughout their evolution, these stochastic particles converge in time (i.e. get pinned) to their random ensemble average at the termination point of any one of the given time segments, only to burst back into life and repeat the underlying principle of convergence in each of the successive time segments, until they are fully exhausted. Although the architecture is represented by a system of piecewise stochastic differential equations, we prove that the conditions generating the pinning property enable every particle to preserve its continuity over its entire lifetime almost surely. As the number of particles in the system increases asymptotically, the system decouples into mutually independent diffusions, which, albeit displaying progressively uncorrelated behaviour, still close in on, and recouple at, a deterministic value at each termination point. Finally, we provide additional analytics including a universality statement for our framework, a study of what we call adjourned coalescent mean-field interacting particles, a set of results on commutativity of double limits, and a proposal of what we call covariance waves.
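A toy numerical illustration of a pinning mechanism of this flavour (our own simplified construction, not the paper's system) is the bridge-type dynamics below, in which each particle is pulled towards the current ensemble average with a drift that strengthens as the end of a single time segment approaches, so that the cross-sectional spread collapses near the termination point.

```python
# Hedged toy illustration (not the paper's system): particles with a
# bridge-type drift towards the current ensemble average get "pinned"
# near that (random) average at the end of one time segment.
import numpy as np

def simulate_pinned_segment(n_particles=50, n_steps=1000, T=1.0, sigma=1.0, seed=4):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = rng.standard_normal(n_particles)        # initial positions
    path = [x.copy()]
    for k in range(n_steps - 1):                # stop one step before T to avoid 1/(T-t) blow-up
        t = k * dt
        drift = (x.mean() - x) / (T - t)        # pull each particle towards the ensemble mean
        x = x + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_particles)
        path.append(x.copy())
    return np.array(path)

path = simulate_pinned_segment()
print("spread at start:", path[0].std(), " spread near segment end:", path[-1].std())
```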
We derive the exact asymptotics of $\mathbb{P}\{\sup_{\boldsymbol{t}\in\mathcal{A}}X(\boldsymbol{t})>u\}$ as $u\to\infty$ for a centered Gaussian field $X(\boldsymbol{t})$, $\boldsymbol{t}\in\mathcal{A}\subset\mathbb{R}^n$, $n>1$, with almost surely continuous sample paths, for which $\arg\max_{\boldsymbol{t}\in\mathcal{A}}\mathrm{Var}(X(\boldsymbol{t}))$ is a Jordan set with finite and positive Lebesgue measure of dimension $k\le n$ and whose dependence structure is not necessarily locally stationary. Our findings are applied to derive the asymptotics of tail probabilities related to performance tables and chi processes, particularly when the covariance structure is not locally stationary.
With the growing amount of historical infrastructure data available to engineers, data-driven techniques have been increasingly employed to forecast infrastructure performance. In addition to algorithm selection, data preprocessing strategies for machine learning implementations play an equally important role in ensuring accuracy and reliability. The present study focuses on pavement infrastructure and identifies four categories of strategies for preprocessing data to train machine-learning-based forecasting models. The Long-Term Pavement Performance (LTPP) dataset is employed to benchmark these categories. Employing random forest as the machine learning algorithm, the comparative study examines the impact of data preprocessing strategies, the volume of historical data, and the forecast horizon on the accuracy and reliability of performance forecasts. The strengths and limitations of each implementation strategy are summarized. Multiple pavement performance indicators are also analysed to assess the generalizability of the findings. Based on the results, several findings and recommendations are provided for short- to medium-term infrastructure management and decision-making: (i) in data-scarce scenarios, strategies that incorporate both explanatory variables and historical performance data provide better accuracy and reliability, (ii) to achieve accurate forecasts, the volume of historical data should at least span a time duration comparable to the intended forecast horizon, and (iii) for the International Roughness Index and transverse crack length, a forecast horizon of up to five years is generally achievable, but forecasts beyond a three-year horizon are not recommended for longitudinal crack length. These quantitative guidelines ultimately support more effective and reliable application of data-driven techniques in infrastructure performance forecasting.
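A minimal sketch of this kind of forecasting setup, assuming a random forest trained on explanatory variables together with a lagged performance value, is given below; the feature names and synthetic data are purely illustrative and do not reproduce the LTPP fields or the preprocessing strategies benchmarked in the study.

```python
# Hedged sketch: random-forest forecasting of a pavement roughness indicator
# from explanatory variables plus a lagged performance value.  All column
# names and the synthetic data are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(3)
n = 500
df = pd.DataFrame({
    "traffic_load": rng.uniform(0.5, 5.0, n),   # hypothetical explanatory variables
    "freeze_index": rng.uniform(0, 1000, n),
    "pavement_age": rng.uniform(1, 30, n),
    "iri_lag1": rng.uniform(0.8, 3.0, n),       # last observed roughness (IRI, m/km)
})
# Synthetic target: IRI five years ahead, driven by age, traffic and current IRI.
df["iri_plus5"] = (df["iri_lag1"] + 0.02 * df["pavement_age"]
                   + 0.05 * df["traffic_load"] + rng.normal(0, 0.1, n))

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="iri_plus5"), df["iri_plus5"], test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```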
How can admissions officers, employers, and scholarship committees maximize the accuracy of prediction of individual performance while minimizing adverse impact due to group differences? Testing offers a straightforward solution to the first half of this problem. Tests are the best way to predict how someone will perform in school, in the military, in medicine, or while controlling airline traffic and flying a plane. Tests are also useful beyond personnel selection, such as for selection of a college major or courses. However, the other side of this problem is more complex. Using tests is always accompanied by group differences that could result in continued systemic discrimination by limiting opportunities for those who are marginalized. This book charts an approach to using tests that incorporates evidence, transparency, and societal values to maximize efficiency and fairness.
Community-acquired methicillin-resistant Staphylococcus aureus (CA-MRSA) is a significant public health concern, disproportionately affecting socioeconomically disadvantaged populations, including individuals experiencing poverty, homelessness, incarceration, and injection drug use. This scoping review synthesizes existing literature on factors influencing CA-MRSA occurrence and community transmission in these populations. A comprehensive search of PubMed, MEDLINE, and Scopus for studies published between January 2000 and February 2024 identified 3,223 articles, of which 40 met the inclusion criteria. Findings indicate that the CA-MRSA burden remains high, with community transmission influenced by factors such as limited access to hygiene resources, structural barriers to care, and social network dynamics. Surveillance and intervention strategies remain largely healthcare-focused, with limited data on community-level transmission and risk. This review highlights the urgent need for targeted public health interventions and the adoption of expanded, innovative surveillance methods, such as genomic epidemiology, to better track and mitigate CA-MRSA transmission in vulnerable populations. As antibiotic resistance continues to rise, future research should prioritize longitudinal studies and community-based surveillance to develop effective, population-specific infection prevention and control strategies.