We study minimax regret treatment rules under matched treatment assignment in a setup where a policymaker, informed by a sample of size $N$, needs to decide between $T$ different treatments, where $T\geq 2$. Randomized rules are allowed for. We show that the generalization of the minimax regret rule derived in Schlag (2006, ELEVEN—Tests needed for a recommendation, EUI working paper) and Stoye (2009, Journal of Econometrics 151, 70–81) for the case $T=2$ is minimax regret for general finite $T>2$, and that the proof structure via the Nash equilibrium and the “coarsening” approaches generalizes as well. We also show by example that, in the case of random assignment, the generalization of the minimax rule in Stoye (2009, Journal of Econometrics 151, 70–81) to the case $T>2$ is not necessarily minimax regret, and we derive minimax regret rules for a few small-sample cases, e.g., for $N=2$ when $T=3$.
In the case where a covariate $x$ is included, it is shown that a minimax regret rule is obtained by using minimax regret rules in the “conditional-on-$x$” problem if the latter are obtained as Nash equilibria.
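For reference, a minimal sketch of the regret criterion these results concern, in its standard convention (the paper’s exact notation may differ): a treatment rule $\delta$ maps the size-$N$ sample $S$ into a (possibly randomized) choice among the $T$ treatments, its regret in state $P$ compares it to the best treatment in that state, and a minimax regret rule minimizes worst-case regret:
$$R(P,\delta)=\max_{t\in\{1,\dots,T\}}\mathbb{E}_P[Y_t]-\mathbb{E}_P\big[Y_{\delta(S)}\big],\qquad \delta^{*}\in\arg\min_{\delta}\,\sup_{P}R(P,\delta).$$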
Cumulative residual extropy has recently been proposed as an alternative measure of extropy, based on the cumulative distribution function of a random variable. In this paper, the concept of cumulative residual extropy is extended to cumulative residual extropy inaccuracy (CREI) and dynamic cumulative residual extropy inaccuracy (DCREI). Some lower and upper bounds for these measures are provided. A characterization problem for the DCREI measure under the proportional hazard rate model is studied. Nonparametric estimators for the CREI and DCREI measures, based on kernel and empirical methods, are suggested. A simulation study is presented to evaluate the performance of the suggested estimators; the results show that the kernel-based estimator performs better than the empirical-based estimator. Finally, applications of the DCREI measure for model selection are provided using two real data sets.
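As a concrete companion, a minimal sketch of an empirical plug-in estimator of the underlying cumulative residual extropy, assuming the common convention $\xi(X)=-\tfrac{1}{2}\int_{0}^{\infty}\bar F^{2}(x)\,dx$ for a nonnegative random variable; the paper’s CREI and DCREI quantities extend this and may use different conventions:

```python
import numpy as np

def empirical_cre(sample):
    """Plug-in estimate of xi(X) = -(1/2) * int_0^inf Fbar(x)^2 dx,
    integrating the squared empirical survival function between order
    statistics (an assumed convention; check the paper's definition)."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = x.size
    gaps = np.diff(x)                      # x_(i+1) - x_(i)
    surv = (n - np.arange(1, n)) / n       # empirical Fbar on each gap
    return -0.5 * np.sum(surv**2 * gaps)

rng = np.random.default_rng(0)
# For Exp(1), Fbar(x) = exp(-x), so the true value is -1/4.
print(empirical_cre(rng.exponential(size=2000)))
```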
We propose the Rényi information generating function (RIGF) and discuss its properties. A connection between the RIGF and the diversity index is proposed for discrete-type random variables. The relation between the RIGF and the Shannon entropy of order $q>0$ is established, and several bounds are obtained. The RIGF of the escort distribution is derived. Furthermore, we introduce the Rényi divergence information generating function (RDIGF) and discuss its behavior under monotone transformations. We present nonparametric and parametric estimators of the RIGF. A simulation study is carried out, and a real data set relating to the failure times of electronic components is analyzed. The nonparametric and parametric estimators are compared in terms of standard deviation, absolute bias, and mean squared error, and we observe superior performance for the newly proposed estimators. Some applications of the proposed RIGF and RDIGF are provided. For three coherent systems, we calculate the values of the RIGF and other well-established uncertainty measures, and similar behavior of the RIGF is observed. Further, a study regarding the usefulness of the RDIGF and RIGF as model selection criteria is conducted. Finally, three chaotic maps are considered and used to validate the proposed information generating function.
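For context, a minimal sketch of the classical objects such a generating function builds on, stated in their standard forms (the paper’s exact definition of the RIGF may differ): Golomb’s information generating function of a density $f$ and the Rényi entropy of order $q$,
$$I_\beta(f)=\int f^{\beta}(x)\,dx\ \ (\beta>0),\qquad H_q(f)=\frac{1}{1-q}\log\!\int f^{q}(x)\,dx\ \ (q>0,\ q\neq 1),$$
so that $-\frac{d}{d\beta}I_\beta(f)\big|_{\beta=1}$ recovers the Shannon entropy and $H_q(f)=\frac{\log I_q(f)}{1-q}$.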
This work studies the reliability function of K-out-of-N systems with a general repair time distribution and a single repair facility. It introduces a new repair mechanism using an effort function, described by a nonlinear ordinary differential equation. Three theoretical results are obtained: regularity properties preventing simultaneous failures and repairs, derivation of a Kolmogorov forward system for micro-state and macro-state probabilities, and comparison of reliability functions of two K-out-of-N systems. An additional hypothesis on the model’s parameters allows us to obtain an ordering relation between the reliability functions. A numerical example demonstrates the model’s practical application and confirms the theoretical results.
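As a concrete toy version of the Kolmogorov forward system, a minimal sketch under a deliberate simplification (exponential repair times with a single repairer) rather than the paper’s general repair distribution and effort-function mechanism; state $j$ counts failed units, and reaching $N-K+1$ failures is absorbing:

```python
import numpy as np
from scipy.integrate import solve_ivp

N, K = 5, 3            # system works while at least K of the N units work
lam, mu = 0.1, 1.0     # per-unit failure rate; repair rate (single repairer)
n_down = N - K + 1     # this many failed units ends the system's life

def forward(t, p):
    """Kolmogorov forward equations on states j = 0..n_down (j failed units);
    j = n_down is absorbing, so only transient states have outflow."""
    dp = np.zeros_like(p)
    for j in range(n_down):
        fail = (N - j) * lam               # one of the N-j working units fails
        rep = mu if j > 0 else 0.0         # the single repairer fixes one unit
        dp[j] -= (fail + rep) * p[j]
        dp[j + 1] += fail * p[j]
        if j > 0:
            dp[j - 1] += rep * p[j]
    return dp

p0 = np.zeros(n_down + 1)
p0[0] = 1.0                                # start with all units working
sol = solve_ivp(forward, (0.0, 10.0), p0, t_eval=np.linspace(0.0, 10.0, 6))
print(1.0 - sol.y[-1])                     # reliability R(t) = P(system still up)
```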
The various global refugee and migration events of the last few years underscore the need for advancing anticipatory strategies in migration policy. The struggle to manage large inflows (or outflows) highlights the demand for proactive measures based on a sense of the future. Anticipatory methods, ranging from predictive models to foresight techniques, emerge as valuable tools for policymakers. These methods, now bolstered by advancements in technology and leveraging nontraditional data sources, can offer a pathway to develop more precise, responsive, and forward-thinking policies.
This paper seeks to map out the rapidly evolving domain of anticipatory methods in the realm of migration policy, capturing the trend toward integrating quantitative and qualitative methodologies and harnessing novel tools and data. It introduces a new taxonomy designed to organize these methods into three core categories: Experience-based, Exploration-based, and Expertise-based. This classification aims to guide policymakers in selecting the most suitable methods for specific contexts or questions, thereby enhancing migration policies.
Our study aimed to describe the transmission dynamics and genotypic diversity of Mycobacterium tuberculosis in people deprived of liberty (PDL) in four Colombian prisons. Our cohort study included 64 PDL with bacteriologically confirmed pulmonary tuberculosis diagnosed in four Colombian prisons. A total of 132 isolates were genotyped using 24-locus mycobacterial interspersed repetitive unit–variable number tandem repeat (MIRU-VNTR) typing. A cluster was defined when ≥2 isolates from different PDL had the same genotype. Tuberculosis was considered to have been acquired in prison when ≥2 persons were in the same cluster and had an epidemiological link. We mapped the place of residence before incarceration and within prisons. We assessed overcrowding and ventilation conditions in the prison that had clusters. We found that the most frequent genotypes were LAM (56.8%) and Haarlem (36.4%), and 45.3% of the PDL diagnosed with tuberculosis were clustered. Most PDL diagnosed in prison came from neighborhoods of Medellín with a high tuberculosis incidence. M. tuberculosis infection acquired in prison was detected in 19% of PDL, 9.4% had mixed infection, 3.1% reinfection, and 1.6% relapse. Clusters appeared in only one prison, in cell blocks with overcrowding >100% and inadequate ventilation. Prisons require the implementation of effective respiratory infection control measures to prevent M. tuberculosis transmission.
In this paper, we investigate the number of customers that overlap or coincide with a virtual customer in an Erlang-A queue. Our analysis starts with the fluid and diffusion limit differential equations to obtain the mean and variance of the queue length. We then develop precise approximations for waiting times using fluid limits and the polygamma function. Building on this, we introduce a novel approximation scheme to calculate the mean and variance of the number of overlapping customers. This method facilitates the assessment of transient overlap risks in complex service systems, offering a useful tool for service providers to mitigate significant overlaps during pandemic seasons.
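For intuition, a minimal sketch of the standard Erlang-A fluid-limit ODE, $\dot q(t)=\lambda-\mu\min(q,s)-\theta\,(q-s)^{+}$, which is the kind of starting point the abstract describes; the parameter values are illustrative, and the paper’s diffusion refinements, polygamma-based waiting-time approximations, and overlap calculations are not reproduced here:

```python
import numpy as np
from scipy.integrate import solve_ivp

lam, mu, theta, s = 110.0, 1.0, 0.5, 100.0   # arrivals, service, abandonment, servers

def fluid(t, q):
    # dq/dt = lambda - mu * min(q, s) - theta * (q - s)^+
    return [lam - mu * min(q[0], s) - theta * max(q[0] - s, 0.0)]

sol = solve_ivp(fluid, (0.0, 30.0), [0.0], t_eval=np.linspace(0.0, 30.0, 7))
# In this overloaded regime the fluid queue settles at s + (lam - mu*s)/theta = 120.
print(sol.y[0])
```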
Regression is a fundamental prediction task common in data-centric engineering applications that involves learning mappings between continuous variables. In many engineering applications (e.g., structural health monitoring), feature-label pairs used to learn such mappings are of limited availability, which hinders the effectiveness of traditional supervised machine learning approaches. This paper proposes a methodology for overcoming the issue of data scarcity by combining active learning (AL) for regression with hierarchical Bayesian modeling. AL is an approach for preferentially acquiring feature-label pairs in a resource-efficient manner. In particular, the current work adopts a risk-informed approach that leverages contextual information associated with regression-based engineering decision-making tasks (e.g., inspection and maintenance). Hierarchical Bayesian modeling allows multiple related regression tasks to be learned over a population, capturing local and global effects. The information sharing facilitated by this modeling approach means that information acquired for one engineering system can improve predictive performance across the population. The proposed methodology is demonstrated using an experimental case study. Specifically, multiple regressions are performed over a population of machining tools, where the quantity of interest is the surface roughness of the workpieces. An inspection and maintenance decision process is defined using these regression tasks, which is in turn used to construct the active-learning algorithm. The proposed methodology is benchmarked against an uninformed approach to label acquisition and against independent modeling of the regression tasks. It is shown that the proposed approach has superior performance in terms of expected cost: it maintains predictive performance while reducing the number of inspections required.
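As a skeletal illustration of the AL loop (only the uncertainty-sampling core; the paper’s risk-informed acquisition and hierarchical Bayesian population model are not reproduced, and the data below are synthetic):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
X_pool = np.linspace(0.0, 10.0, 200).reshape(-1, 1)                # candidate inputs
y_pool = np.sin(X_pool).ravel() + 0.1 * rng.standard_normal(200)   # toy "inspections"

labeled = [0, 199]                          # seed the model with two labels
gp = GaussianProcessRegressor()
for _ in range(10):                         # label budget: 10 queries
    gp.fit(X_pool[labeled], y_pool[labeled])
    _, std = gp.predict(X_pool, return_std=True)
    std[labeled] = -np.inf                  # never re-query a labeled point
    labeled.append(int(np.argmax(std)))     # acquire the most uncertain input
print(sorted(labeled))
```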
This paper proposes a consistent nonparametric test with good sampling properties to detect instantaneous causality between vector autoregressive (VAR) variables with time-varying variances. The new test takes the form of a U-statistic and has a limiting standard normal distribution under the null. We further show that the test is consistent against any fixed alternatives, and has nontrivial asymptotic power against a class of local alternatives with a rate slower than $T^{-1/2}$. We also propose a wild bootstrap procedure to better approximate the finite sample null distribution of the test statistic. Monte Carlo experiments are conducted to highlight the merits of the proposed test relative to other popular tests in finite samples. Finally, we apply the new test to investigate the instantaneous causality relationship between money supply and inflation rates in the USA.
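As a generic illustration of the resampling step (the statistic below is a toy autocovariance, not the paper’s U-statistic for instantaneous causality):

```python
import numpy as np

rng = np.random.default_rng(1)

def wild_bootstrap_pvalue(resid, statistic, n_boot=999):
    """Wild bootstrap with Rademacher weights: multiply residuals by random
    signs to mimic the null while preserving (time-varying) variances."""
    t_obs = statistic(resid)
    t_boot = np.empty(n_boot)
    for b in range(n_boot):
        eta = rng.choice([-1.0, 1.0], size=resid.size)
        t_boot[b] = statistic(resid * eta)
    return (1 + np.sum(t_boot >= t_obs)) / (n_boot + 1)

u = rng.standard_normal(300) * np.linspace(0.5, 2.0, 300)  # heteroskedastic noise
toy_stat = lambda e: abs(np.mean(e[1:] * e[:-1]))          # lag-1 autocovariance
print(wild_bootstrap_pvalue(u, toy_stat))
```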
We consider the count of subgraphs with an arbitrary configuration of endpoints in the random-connection model based on a Poisson point process on $\mathbb{R}^d$. We present combinatorial expressions for the computation of the cumulants and moments of all orders of such subgraph counts, which allow us to estimate the growth of cumulants as the intensity of the underlying Poisson point process goes to infinity. As a consequence, we obtain a central limit theorem with explicit convergence rates under the Kolmogorov distance and connectivity bounds. Numerical examples are presented using a computer code in SageMath for the closed-form computation of cumulants of any order, for any type of connected subgraph, and for any configuration of endpoints in any dimension $d \geq 1$. In particular, graph connectivity estimates, Gram–Charlier expansions for density estimation, and correlation estimates for joint subgraph counting are obtained.
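As a toy companion to the closed-form results, a minimal Monte Carlo sketch of the simplest subgraph count (edges) in a random-connection model on the unit square, with an illustrative connection function $H(r)=e^{-\beta r}$; this is not the paper’s SageMath code:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)

def edge_count(intensity, beta=5.0):
    """One draw of the edge (K_2) count: Poisson(intensity) points uniform on
    [0,1]^2, each pair linked independently with probability H(r) = exp(-beta*r)."""
    n = rng.poisson(intensity)
    pts = rng.random((n, 2))
    total = 0
    for i, j in combinations(range(n), 2):
        r = np.linalg.norm(pts[i] - pts[j])
        if rng.random() < np.exp(-beta * r):
            total += 1
    return total

counts = [edge_count(100.0) for _ in range(200)]
print(np.mean(counts), np.var(counts))   # empirical mean and second cumulant
```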
We apply moral foundations theory (MFT) to explore how the public conceptualizes the first eight months of the conflict between Ukraine and the Russian Federation (Russia). Our analysis includes over 1.1 million English tweets related to the conflict over the first 36 weeks. We used Linguistic Inquiry and Word Count (LIWC) and a moral foundations dictionary to identify the moral components (care, fairness, loyalty, authority, and sanctity) of tweets from the United States, pre- and post-Cold War NATO countries, Ukraine, and Russia. Following an initial spike at the beginning of the conflict, tweet volume declined and stabilized by week 10. The level of moral content varied significantly across the five regions and the five moral components. Tweets from the different regions drew on significantly different moral foundations to conceptualize the conflict. Across all regions, tweets were dominated by loyalty content, while fairness content was infrequent. Moral content over time was relatively stable, and variations were linked to reported conflict events.
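For a sense of the mechanics, a minimal sketch of dictionary-based moral-content scoring of the kind LIWC automates; the word lists are illustrative stand-ins, not the actual moral foundations dictionary:

```python
# Illustrative mini-dictionary (NOT the real moral foundations dictionary).
MFD = {
    "care":      {"harm", "protect", "suffer", "safe"},
    "fairness":  {"fair", "justice", "equal", "rights"},
    "loyalty":   {"ally", "betray", "nation", "solidarity"},
    "authority": {"order", "law", "obey", "defy"},
    "sanctity":  {"pure", "sacred", "holy", "degrade"},
}

def moral_profile(tweet: str) -> dict:
    """Fraction of a tweet's words matching each moral foundation."""
    words = tweet.lower().split()
    return {f: sum(w in vocab for w in words) / max(len(words), 1)
            for f, vocab in MFD.items()}

print(moral_profile("we must protect the nation and defy this unjust order"))
```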
Children, adolescents, and young people living with HIV (CALWHIV), including those in resource-limited settings, may be at increased risk of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection, poorer coronavirus disease 2019 (COVID-19) outcomes, and multisystem inflammatory syndrome (MIS). We conducted a repeat SARS-CoV-2 seroprevalence survey among CALWHIV in Europe (n = 493) and South Africa (SA, n = 307), and HIV-negative adolescents in SA (n = 100), in 2020–2022. Blood samples were tested for SARS-CoV-2 antibody, questionnaires collected data on SARS-CoV-2 risk factors and vaccination status, and clinical data were extracted from health records. SARS-CoV-2 seroprevalence (95% CI) was 55% (50%–59%) in CALWHIV in Europe, 67% (61%–72%) in CALWHIV in SA, and 85% (77%–92%) among HIV-negative participants in SA. Among those unvaccinated at time of sampling (n = 769, 85%), seroprevalence was 40% (35%–45%), 64% (58%–70%), and 81% (71%–89%), respectively. Few participants (11% overall) had a known history of SARS-CoV-2-positive PCR or self-reported COVID-19. Three CALWHIV were hospitalized, two with COVID-19 (nonsevere disease) and one young adult with MIS. Although SARS-CoV-2 seroprevalence was high across all settings, even in unvaccinated participants, it was broadly comparable to general population estimates, and most infections were mild/asymptomatic. Results support policy decisions excluding CALWHIV without severe immunosuppression from high-risk groups for COVID-19.
Although Sub-Saharan Africa (SSA) lags behind the global average in digital technology adoption, the region has made substantial progress in Information and Communication Technology (ICT) access and use, which plays a crucial role in improving quality of life. However, digital gaps persist within the continent, even as technology adoption across African nations continues to increase. This paper explores the factors behind the different adoption rates of three digital technologies in SSA: mobile phones, fixed broadband, and fixed telephones. The methodology uses panel regression analysis to examine World Bank data covering 48 SSA countries from 2006 to 2022. The findings show consistent growth in mobile phone subscriptions, in contrast to fixed telephone and fixed broadband subscriptions, which have stagnated. Infrastructure and human capital emerge as the most significant factors, alongside other influences. The results provide African governments with insight into addressing the digital divide and accelerating digital transformation.
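A minimal sketch of a two-way fixed-effects panel regression of the kind described, on synthetic data; the variable names are illustrative placeholders, and the paper’s actual World Bank indicators and specification may differ:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
rows = [(c, y) for c in [f"country_{i}" for i in range(10)]
        for y in range(2006, 2023)]
df = pd.DataFrame(rows, columns=["country", "year"])
df["infrastructure"] = rng.random(len(df))        # placeholder regressors
df["human_capital"] = rng.random(len(df))
df["mobile_subs"] = (2.0 * df["infrastructure"] + 1.5 * df["human_capital"]
                     + rng.standard_normal(len(df)))

# Country and year fixed effects via dummy variables.
fit = smf.ols("mobile_subs ~ infrastructure + human_capital + C(country) + C(year)",
              data=df).fit()
print(fit.params[["infrastructure", "human_capital"]])
```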
We consider bootstrap inference in predictive (or Granger-causality) regressions when the parameter of interest may lie on the boundary of the parameter space, here defined by means of a smooth inequality constraint. For instance, this situation occurs when the definition of the parameter space allows for the cases of either no predictability or sign-restricted predictability. We show that in this context constrained estimation gives rise to bootstrap statistics whose limit distribution is, in general, random, and thus distinct from the limit null distribution of the original statistics of interest. This is due to both (i) the possible location of the true parameter vector on the boundary of the parameter space and (ii) the possible non-stationarity of the posited predicting (resp. Granger-causing) variable. We discuss a modification of the standard fixed-regressor wild bootstrap scheme where the bootstrap parameter space is shifted by a data-dependent function in order to eliminate the portion of limiting bootstrap randomness attributable to the boundary and prove validity of the associated bootstrap inference under non-stationarity of the predicting variable as the only remaining source of limiting bootstrap randomness. Our approach, which is initially presented in a simple location model, has bearing on inference in parameter-on-the-boundary situations beyond the predictive regression problem.
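For intuition about why the boundary breaks naive resampling, a minimal sketch in the one-parameter location model the paper also starts from (the paper’s fixed-regressor wild bootstrap with a shifted parameter space is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(4)
n, reps = 400, 4000

# Truth at the boundary: theta = 0 in [0, inf). The constrained estimator is
# max(xbar, 0), and sqrt(n)*thetahat converges to max(Z, 0): an atom of 1/2 at 0.
samp = [np.sqrt(n) * max(rng.standard_normal(n).mean(), 0.0) for _ in range(reps)]

# Naive bootstrap from ONE observed sample: its recentred statistic generally
# has the wrong (sample-dependent, i.e. random) limit law at the boundary,
# which is what a shifted bootstrap parameter space is designed to correct.
x = rng.standard_normal(n)
th = max(x.mean(), 0.0)
boot = [np.sqrt(n) * (max(rng.choice(x, n).mean(), 0.0) - th) for _ in range(reps)]

print("mass at zero: truth ~", np.mean(np.array(samp) == 0.0),
      "| naive bootstrap ~", np.mean(np.array(boot) == 0.0))
```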
Vaccination is one of the most cost-effective and successful public health interventions to prevent infectious diseases. Governments worldwide have tried to optimize vaccination coverage, including using vaccine mandates. This review of recent literature and policy aims to provide a comprehensive overview of Malaysia’s childhood vaccination landscape. Document analysis was used to identify and examine information from government policy documents, official government media statements, mainstream news content, and research papers. Content analysis was then employed to analyze the gathered information. Despite the successes of Malaysia’s National Immunization Programme, a resurgence of vaccine-preventable diseases has raised concerns about vaccine hesitancy and refusal. Several contributing factors have been identified, including a preference for alternative medicines, doubts about halal status, fear of vaccine injury, concerns about the vaccines’ contents, conspiracy theories, as well as convenience and access barriers. While various initiatives have been implemented, Malaysia may consider using vaccine mandates, as several countries have recently done, as a potential policy intervention to address these challenges. This review benefits policymakers, epidemiologists, and researchers involved in regional or global policy planning and advocacy efforts. It also offers comprehensive insights into designing effective interventions and making informed policy decisions regarding childhood vaccination programmes.
Novel methods of data collection and analysis can enhance traditional risk management practices that rely on expert engineering judgment and established safety records, specifically when key conditions are met: analysis is linked to the decisions it is intended to support, standards and competencies remain up to date, and assurance and verification activities are performed. This article elaborates on these conditions. The reason engineers are required to perform calculations is to support decision-making. Since humans are famously weak natural statisticians, rather than ask stakeholders to implicitly assimilate data and arrive at a decision, we can instead rely on subject matter experts to explicitly define risk management decision problems. The results of engineering calculation can then also communicate which interventions (if any) are considered risk-optimal. It is also proposed that the next generation of engineering standards should learn from the success of open-source software development in community building. Interacting with open datasets and code can promote engagement, identification (and resolution) of errors, training, and ultimately competence. Finally, the profession’s tradition of independent verification should also be applied to the complex models that will increasingly contribute to the safety of the built environment. Model assurance will need to keep pace with model development so that suitable use cases can be identified as adequately safe. These are considered increasingly important components in ensuring that methods of data-centric engineering can be safely and appropriately adopted in industry.
In this paper, we explore a non-cooperative optimal reinsurance problem incorporating likelihood ratio uncertainty, aiming to minimize the worst-case risk of the total retained loss for the insurer. We establish a general relation between the optimal reinsurance strategy under the reference probability measure and the strategy in the worst-case scenario. This relation can further be generalized to insurance design problems quantified by tail risk measures. We also characterize distortion risk measures for which the insurer’s optimal strategy remains the same in the worst-case scenario. As an application, we determine the optimal policies for the worst-case scenario using an expectile risk measure. Additionally, we propose and explore a cooperative problem, which can be viewed as a general risk sharing problem between two agents in a comonotonic market. We determine the risk measure value and the optimal reinsurance strategy in the worst-case scenario for the insurer and compare the results from the non-cooperative and cooperative models.
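For reference, the expectile risk measure used in the application, in its standard form (a sketch; the paper’s sign and level conventions may differ): the $\tau$-expectile of a loss $X$ is the unique minimizer of an asymmetric quadratic criterion,
$$e_\tau(X)=\operatorname*{arg\,min}_{m\in\mathbb{R}}\ \mathbb{E}\!\left[\tau\big((X-m)^{+}\big)^{2}+(1-\tau)\big((m-X)^{+}\big)^{2}\right],\qquad \tau\in(0,1),$$
and it is a coherent risk measure for $\tau\geq 1/2$.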
Data for Policy (dataforpolicy.org), a global community, focuses on policy–data interactions by exploring how data can be used for policy in an ethical, responsible, and efficient manner. Within its journal, six focus areas, including Data for Policy Area 1: Digital & Data-driven Transformations in Governance, were established to delineate the evolving research landscape from the Data for Policy Conference series. This review addresses the absence of a formal conceptualization of digital and data-driven transformations in governance within this focus area. The paper achieves this by providing a working definition, mapping current research trends, and proposing a future research agenda centered on three core transformations: (1) public participation and collective intelligence; (2) relationships and organizations; and (3) open data and government. The paper outlines research questions and connects these transformations to related areas such as artificial intelligence (AI), sustainable smart cities, digital divide, data governance, co-production, and service quality. This contribution forms the foundational development of a research agenda for academics and practitioners engaged in or impacted by digital and data-driven transformations in policy and governance.