The feasibility of non-pharmacological public health interventions (NPIs) such as physical distancing or isolation at home to prevent severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) transmission in low-resource countries is unknown. Household survey data from 54 African countries were used to investigate the feasibility of SARS-CoV-2 NPIs in low-resource settings. Across the 54 countries, approximately 718 million people lived in households with ⩾6 individuals at home (median percentage of at-risk households 56% (95% confidence interval (CI), 51% to 60%)). Approximately 283 million people lived in households where ⩾3 people slept in a single room (median percentage of at-risk households 15% (95% CI, 13% to 19%)). An estimated 890 million Africans lacked on-site water (71% (95% CI, 62% to 80%)), while 700 million people lacked in-home soap/washing facilities (56% (95% CI, 42% to 73%)). The median percentage of people without a refrigerator in the home was 79% (95% CI, 67% to 88%), while 45% (95% CI, 39% to 52%) shared toilet facilities with other households. Individuals in low-resource settings face substantial obstacles to implementing NPIs for mitigating SARS-CoV-2 transmission. These populations urgently need to be prioritised for coronavirus disease 2019 vaccination to prevent disease and to contain the global pandemic.
We revisit in-sample asymptotic analysis extensively used in the realized volatility literature. We show that there are gains to be made in estimating current realized volatility from considering realizations in prior periods. The weighting schemes also relate to Kalman-Bucy filters, although our approach is non-Gaussian and model-free. We derive theoretical results for a broad class of processes pertaining to volatility, higher moments, and leverage. The paper also contains a Monte Carlo simulation study showing the benefits of across-sample combinations.
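As a rough illustration of the idea (the notation and the form of the weights are ours, not taken from the paper), realized variance and an across-period combination can be written as
\[
  \widehat{RV}_t=\sum_{i=1}^{n} r_{t,i}^{2},
  \qquad
  \widehat{RV}^{\mathrm{comb}}_t=\sum_{j=0}^{J} w_j\,\widehat{RV}_{t-j},
  \qquad \sum_{j=0}^{J} w_j=1,
\]
where $r_{t,i}$ is the $i$-th intraday return in period $t$ and the weights $w_j$ trade off the reduction in sampling error gained by pooling against the bias introduced by using information from periods $t-1,\dots,t-J$.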
Self-instigated isolation is heavily relied on to curb severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) transmission. Accounting for uncertainty in the latent and prepatent periods, as well as the proportion of infections that remain asymptomatic, we estimate the limits of this intervention at different phases of infection resurgence. We show that by October 2020, SARS-CoV-2 transmission rates in England had already begun exceeding levels that could be interrupted using this intervention alone, lending support to the second national lockdown on 5th November 2020.
We present a mathematical model for simulating the development of a coronavirus disease 2019 (COVID-19) outbreak in a slum area under different interventions. Instead of representing interventions as modulations of the parameters of a free-running epidemic, we introduce a model structure that accounts for the actions taken but does not assume their results. The disease is modelled in terms of the progression of viraemia reported in scientific studies. The emergence of symptoms in the model reflects the statistics of a nationwide, highly detailed database of more than 62 000 cases with recorded symptoms in Argentina (about half of them confirmed by reverse transcription-polymerase chain reaction tests). The stochastic model displays several characteristics of COVID-19, such as high variability in the evolution of outbreaks, including long periods in which they run undetected, spontaneous extinction followed by a late outbreak, and unimodal as well as bimodal progressions of daily case counts (second waves without ad hoc hypotheses). We show how the relation between undetected cases (including 'asymptomatic' cases) and detected cases changes as a function of public policies, the efficiency of their implementation and the timing with respect to the development of the outbreak. We also show that the relation between detected and total cases depends strongly on the implemented policies, and that detected cases cannot be regarded as a measure of the outbreak, since the dependence between total and detected cases is, in general, not monotonic in the efficiency of the intervention. According to the model, an outbreak can be controlled with interventions based only on symptom detection when the presence of a single symptom prompts isolation and detection reaches about 80% of cases; requiring two symptoms to trigger the intervention can be enough to make it fail.
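As a rough illustration of the last point, the following sketch uses a toy branching process with assumed parameters (reproduction number, effectiveness of isolation), not the paper's viraemia-based stochastic model, to show how the detection efficiency of symptom-triggered isolation changes whether simulated outbreaks take off:

```python
import numpy as np

def simulate_outbreak(r0=2.5, detect_eff=0.8, isolation_block=0.9,
                      initial_cases=5, generations=30, max_cases=100_000, seed=1):
    """Toy discrete-generation branching process (illustrative assumptions only).

    Each case produces Poisson(r0) secondary cases. With probability detect_eff a
    case is detected (e.g. a single symptom prompts isolation), which removes a
    fraction isolation_block of its onward transmission. Returns total cases.
    """
    rng = np.random.default_rng(seed)
    current, total = initial_cases, initial_cases
    for _ in range(generations):
        if current == 0 or total > max_cases:
            break
        detected = rng.random(current) < detect_eff
        means = np.where(detected, r0 * (1.0 - isolation_block), r0)
        offspring = int(rng.poisson(means).sum())
        current, total = offspring, total + offspring
    return total

if __name__ == "__main__":
    for eff in (0.2, 0.5, 0.8):
        sizes = [simulate_outbreak(detect_eff=eff, seed=s) for s in range(200)]
        print(f"detection efficiency {eff:.0%}: median outbreak size "
              f"{int(np.median(sizes))}")
```

Under these assumed parameters the effective reproduction number drops below one only when detection efficiency approaches 80%, echoing the qualitative message of the abstract.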
About 800 foodborne disease outbreaks are reported in the United States annually. Few are associated with food recalls. We compared 226 outbreaks associated with food recalls with those not associated with recalls during 2006–2016. Recall-associated outbreaks had, on average, more illnesses per outbreak and higher proportions of hospitalisations and deaths than non-recall-associated outbreaks. The top confirmed aetiology for recall-associated outbreaks was Salmonella. Pasteurised and unpasteurised dairy products, beef and molluscs were the most frequently implicated foods. The most common pathogen–food pairs for outbreaks with recalls were Escherichia coli–beef and norovirus–molluscs; the top pairs for non-recall-associated outbreaks were scombrotoxin–fish and ciguatoxin–fish. For outbreaks with recalls, 48% of the recalls occurred after the outbreak, 27% during the outbreak, 3% before the outbreak, and 22% were inconclusive or had unknown recall timing. Fifty per cent of recall-associated outbreaks were multistate, compared with 2% of non-recall-associated outbreaks. The differences between recall-associated outbreaks and non-recall-associated outbreaks help define the types of outbreaks and food vehicles that are likely to have a recall. Improved outbreak vehicle identification and traceability of rarely recalled foods could lead to more recalls of these products, resulting in fewer illnesses and deaths.
We develop a theory of graph algebras over general fields. This is modelled after the theory developed by Freedman et al. (2007, J. Amer. Math. Soc. 20, 37–51) for connection matrices, in the study of graph homomorphism functions over real edge weights and positive vertex weights. We introduce connection tensors for graph properties; this notion naturally generalizes the concept of connection matrices. It is shown that counting perfect matchings, and a host of other graph properties naturally defined as Holant problems (edge models), cannot be expressed by graph homomorphism functions with both complex vertex and edge weights (or even with weights from more general fields). Our necessary and sufficient condition in terms of connection tensors is a simple exponential rank bound. It shows that positive semidefiniteness is not needed in the more general setting.
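For orientation, the connection matrices being generalized can be sketched as follows (this is the standard Freedman–Lovász–Schrijver definition, not text from the paper). For a graph parameter $f$ and an integer $k \ge 0$, the connection matrix $M(f,k)$ is the infinite symmetric matrix whose rows and columns are indexed by $k$-labelled graphs and whose entries are
\[
  M(f,k)_{G_1,G_2}=f(G_1G_2),
\]
where $G_1G_2$ is the graph obtained by gluing $G_1$ and $G_2$ along their labelled vertices. A connection tensor replaces this pairwise gluing by a $d$-fold gluing of $k$-labelled graphs, and the expressibility criterion mentioned above is an exponential bound on the rank of these tensors.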
Estimating the case fatality ratio (CFR) for COVID-19 is an important aspect of public health. However, calculating CFR accurately is problematic early in a novel disease outbreak, due to uncertainties regarding the time course of disease and difficulties in diagnosis and reporting of cases. In this work, we present a simple method for calculating the CFR using only public case and death data over time by exploiting the correspondence between the time distributions of cases and deaths. The time-shifted distribution (TSD) analysis generates two parameters of interest: the delay time between reporting of cases and deaths and the CFR. These parameters converge reliably over time once the exponential growth phase has finished. Analysis is performed for early COVID-19 outbreaks in many countries, and we discuss corrections to CFR values using excess-death and seroprevalence data to estimate the infection fatality ratio (IFR). While CFR values range from 0.2% to 20% in different countries, estimates for IFR are mostly around 0.5–0.8% for countries that experienced moderate outbreaks and 1–3% for severe outbreaks. The simplicity and transparency of TSD analysis enhance its usefulness in characterizing a new disease as well as the state of the health and reporting systems.
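A minimal sketch of the time-shifted distribution idea, assuming daily case and death counts are available: shift the case curve by a candidate delay d, scale it by a candidate factor f, and pick the pair that best matches the death curve; f is then read off as the CFR estimate. The least-squares criterion and names below are illustrative assumptions, not necessarily the exact procedure of the paper.

```python
import numpy as np

def tsd_fit(daily_cases, daily_deaths, max_delay=30):
    """Time-shifted distribution (TSD) sketch.

    Finds the delay d (days) and scale f such that f * cases[t - d] best matches
    deaths[t] in a least-squares sense; f serves as the CFR estimate.
    """
    cases = np.asarray(daily_cases, dtype=float)
    deaths = np.asarray(daily_deaths, dtype=float)
    best_d, best_f, best_err = 0, 0.0, np.inf
    for d in range(max_delay + 1):
        x = cases[: len(cases) - d] if d > 0 else cases
        y = deaths[d:]
        n = min(len(x), len(y))
        x, y = x[:n], y[:n]
        if not np.any(x):
            continue
        f = float(x @ y) / float(x @ x)          # least-squares scale factor
        err = float(np.sum((y - f * x) ** 2))
        if err < best_err:
            best_d, best_f, best_err = d, f, err
    return best_d, best_f

# Synthetic check: deaths are 1.5% of cases, delayed by 14 days.
rng = np.random.default_rng(0)
cases = np.exp(np.linspace(0, 6, 120)) + rng.normal(0, 5, 120)
deaths = np.zeros_like(cases)
deaths[14:] = 0.015 * cases[:-14]
print(tsd_fit(cases, deaths))   # approximately (14, 0.015)
```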
This article discusses the technology of city digital twins (CDTs) and its potential applications in the policymaking context. The article analyzes the history of the development of the concept of digital twins and how it is now being adopted at the city scale. One of the most advanced projects in the field, Virtual Singapore, is discussed in detail to determine the scope of its potential domains of application and to highlight challenges associated with it. Concerns related to data privacy, data availability and the applicability of such data for predictive simulations are analyzed, and the potential usage of synthetic data is proposed as a way to address these challenges. The authors argue that despite the abundance of urban data, historical data are not always suitable for predicting events for which no data exist, and discuss the potential privacy challenges of using micro-level individual mobility data in CDTs. A task-based approach to urban mobility data generation is proposed in the last section of the article. This approach suggests that city authorities can establish services responsible for asking people to conduct certain activities in an urban environment in order to create data for possible policy interventions for which no useful historical data exist. This approach can help address the challenges associated with the availability of data without raising privacy concerns, as the data generated through it will not represent any real individual in society.
We review a combinatoric approach to the Hodge conjecture for Fermat varieties and announce new cases where the conjecture is true. We show the Hodge conjecture for Fermat fourfolds $X_m^4$ of degree m ≤ 100 coprime to 6, and also prove the conjecture for $X_{21}^n$ and $X_{27}^n$, for all n.
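For reference, the Fermat variety $X_m^n$ referred to above is the smooth degree-$m$ hypersurface (standard definition)
\[
  X_m^n=\bigl\{[x_0:x_1:\cdots:x_{n+1}]\in\mathbb{P}^{n+1}\;:\;x_0^m+x_1^m+\cdots+x_{n+1}^m=0\bigr\}
\]
of dimension $n$; the fourfolds above are the case $n=4$.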
Most real-life populations are heterogeneous, and homogeneity is often just a simplifying assumption for the relevant statistical analysis. Mixtures of lifetime distributions that correspond to homogeneous subpopulations have been intensively studied in the literature, and various distributional and stochastic properties of finite and continuous mixtures have been discussed. In this paper, following recent publications, we further develop the mixture concept in the form of generalized α-mixtures, which include all mixture models that are widely explored in the literature. We study some of the main stochastic properties of the suggested mixture model, that is, aging and appropriate stochastic comparisons. Relevant examples and counterexamples are given to illustrate our findings.
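As a rough orientation (this follows one common convention for α-mixtures in the reliability literature; the paper's generalized definition may differ in detail), the α-mixture of a family of survival functions $\bar F(t\mid\theta)$ with mixing distribution $\Pi$ is
\[
  \bar F_{\alpha}(t)=\Bigl(\int \bar F^{\alpha}(t\mid\theta)\,\mathrm{d}\Pi(\theta)\Bigr)^{1/\alpha},\qquad \alpha\neq 0,
\]
so that $\alpha=1$ recovers the ordinary (arithmetic) mixture and the limit $\alpha\to 0$ gives the geometric mixture $\exp\bigl(\int\log\bar F(t\mid\theta)\,\mathrm{d}\Pi(\theta)\bigr)$.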
In this article, we derive a closed-form pricing formula for catastrophe equity put options under a stochastic interest rate framework. A distinguishing feature of the proposed solution is its simplified form in contrast to several recently published formulae that require evaluating several layers of infinite sums of $n$-fold convoluted distribution functions. As an application of the proposed formula, we consider two different frameworks and obtain the closed-form formula for the joint characteristic function of the asset price and the losses, which is the only required ingredient in our pricing formula. The prices obtained by the newly derived formula are compared with those obtained using Monte-Carlo simulations to show the accuracy of our formula.
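For context, catastrophe equity put options are commonly specified with a payoff of the form below; once the joint characteristic function mentioned above is available, the price follows by Fourier-type inversion. This is an illustrative formulation only, and the paper's exact setup (including the stochastic interest rate and the choice of pricing measure) may differ:
\[
  \text{payoff at } T=(K-S_T)^{+}\,\mathbf{1}\{L_T>L^{*}\},\qquad
  \phi(u,v)=\mathbb{E}\bigl[\exp\bigl(\mathrm{i}u\ln S_T+\mathrm{i}v\,L_T\bigr)\bigr],
\]
where $S_T$ is the issuer's share price at maturity, $L_T$ the accumulated catastrophe losses and $L^{*}$ the trigger level.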
In this study, we compared the radiation dose received by organs at risk (OARs) after breast conservation surgery (BCS) and mastectomy in patients with left-sided breast cancer.
Materials and methods
A total of 30 patients, 15 each with BCS and mastectomy, were included in this study. A planning computerised tomography (CT) scan was done for each patient. The chest wall, whole breast, heart, lungs, left anterior descending coronary artery (LAD), proximal and distal LAD, and contralateral breast were contoured for each patient. Radiotherapy plans were made using standard tangential fields. The prescribed dose was 40 Gy in 16 fractions over 3 weeks. Mean doses to the heart, LAD, proximal LAD and distal LAD; mean dose and V5 of the right lung; mean dose, V5, V10 and V20 of the left lung; and mean dose and V2 of the contralateral breast were calculated for each patient and compared between BCS and mastectomy patients using Student's t-test.
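A minimal sketch of the statistical comparison described above, using made-up dose values purely for illustration (these are not the study's data):

```python
from scipy import stats

# Hypothetical mean-heart-dose values (Gy), for illustration only.
mastectomy_heart = [3.1, 3.5, 2.9, 3.8, 3.2, 3.6, 3.4, 3.0, 3.7, 3.3,
                    3.5, 3.2, 3.6, 3.1, 3.4]
bcs_heart        = [4.0, 4.4, 3.9, 4.6, 4.1, 4.3, 4.2, 4.0, 4.5, 4.2,
                    4.3, 4.1, 4.4, 4.0, 4.2]

# Two-sample (unpaired) Student's t-test, as applied to each dose metric.
t_stat, p_value = stats.ttest_ind(mastectomy_heart, bcs_heart)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```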
Results
Mean doses to the heart, LAD, proximal LAD and distal LAD were 3.364 Gy, 16.06 Gy, 2.7 Gy and 27.5 Gy for mastectomy patients, and 4.219 Gy, 14.653 Gy, 4.306 Gy and 24.6 Gy for BCS patients, respectively. Left lung mean dose, V5, V10 and V20 were 5.96 Gy, 16%, 14% and 12.4% in mastectomy patients, and 7.69 Gy, 21%, 18% and 16% in BCS patients, respectively. There was no statistically significant difference in the doses to the heart and left lung between mastectomy and BCS. Mean dose to the right lung was significantly less in mastectomy as compared to BCS (0.29 Gy vs. 0.51 Gy, p = 0.007). Mean dose to the opposite breast was significantly lower in patients with mastectomy than BCS (0.54 Gy vs. 0.37 Gy, p = 0.007). The dose to the distal LAD was significantly higher than that to the proximal LAD in both BCS (24.6 Gy vs. 4.3 Gy, p < 0.0001) and mastectomy (27.5 Gy vs. 2.7 Gy, p < 0.0001) patients.
Conclusion
There was no difference in the doses received by the heart and left lung between BCS and mastectomy patients. Mean doses to the right lung and contralateral breast were significantly lower in mastectomy patients.
The joint signatures of binary-state and multi-state (semi-coherent or mixed) systems with i.i.d. (independent and identically distributed) binary-state components are considered in this work. For the comparison of pairs of binary-state systems of different sizes, transformation formulas of their joint signatures are derived by using the concept of equivalent systems and a generalized triangle rule for order statistics. Similarly, for facilitating the comparison of pairs of multi-state systems of different sizes, transformation formulas of their multi-state joint signatures are also derived. Some examples are finally presented to illustrate and to verify the theoretical results established here.
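For background, the single-system signature underlying these results can be recalled as follows (this is the usual definition, not the paper's more general joint version): for a coherent system with lifetime $T$ built on $n$ i.i.d. component lifetimes with order statistics $X_{1:n}\le\cdots\le X_{n:n}$, the signature is the vector $\mathbf{s}=(s_1,\dots,s_n)$ with
\[
  s_i=\Pr(T=X_{i:n})=\frac{\#\{\text{failure orderings in which the $i$-th component failure causes system failure}\}}{n!}.
\]
The joint signature extends this to probabilities involving two systems built on the same set of components, and it is this object that the transformation formulas above act on.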
The COVID-19 pandemic has exposed the need for more contactless interactions, leading to an acceleration in the design, development, and deployment of digital identity tools and contact-free solutions. A potentially positive outcome of the current crisis could be the development of a more data privacy and human rights compliant framework for digital identity. However, for such a framework to thrive, two essential conditions must be met: (1) respect for and protection of data privacy irrespective of the type of architecture or technology chosen and (2) consideration of the broader impacts that digital identity can have on individuals’ human rights. The article draws on legal, technology-facing, and policy-oriented academic literature to evaluate each of these conditions. It then proposes two ways to leverage the process of digitalization strengthened by the pandemic: a data privacy-centric and a human rights-based approach to digital identity solutions fit for post-COVID-19 societies.
Patient-specific surgical simulations require the patient-specific identification of the constitutive parameters. The sparsity of the experimental data and the substantial noise in the data (e.g., recovered during surgery) cause considerable uncertainty in the identification. In this exploratory work, parameter uncertainty for incompressible hyperelasticity, often used for soft tissues, is addressed by a probabilistic identification approach based on Bayesian inference. Our study particularly focuses on the uncertainty of the model: we investigate how the identified uncertainties of the constitutive parameters behave when different forms of model uncertainty are considered. The model uncertainty formulations range from uninformative ones to more accurate ones that incorporate more detailed extensions of incompressible hyperelasticity. The study shows that incorporating model uncertainty may improve the results, but this is not guaranteed.
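A minimal sketch of this kind of Bayesian identification, using a random-walk Metropolis sampler to infer a single stiffness-like parameter from noisy synthetic uniaxial data; the constitutive model, noise level and prior below are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def forward_model(mu, stretches):
    """Toy incompressible neo-Hookean response in uniaxial tension:
    nominal stress P = mu * (lam - lam**-2). Illustrative only."""
    lam = np.asarray(stretches)
    return mu * (lam - lam ** -2)

def log_posterior(mu, stretches, observed, sigma, prior_lo=0.0, prior_hi=100.0):
    """Gaussian likelihood with noise std sigma and a flat prior on (lo, hi)."""
    if not (prior_lo < mu < prior_hi):
        return -np.inf
    residual = observed - forward_model(mu, stretches)
    return -0.5 * np.sum((residual / sigma) ** 2)

def metropolis(stretches, observed, sigma, n_samples=5000, step=0.5, seed=0):
    """Random-walk Metropolis sampler over the single parameter mu."""
    rng = np.random.default_rng(seed)
    mu = 1.0
    lp = log_posterior(mu, stretches, observed, sigma)
    samples = []
    for _ in range(n_samples):
        prop = mu + step * rng.normal()
        lp_prop = log_posterior(prop, stretches, observed, sigma)
        if np.log(rng.random()) < lp_prop - lp:
            mu, lp = prop, lp_prop
        samples.append(mu)
    return np.array(samples)

# Synthetic "experiment": true mu = 10, sparse and noisy observations.
rng = np.random.default_rng(1)
stretches = np.linspace(1.05, 1.3, 6)
observed = forward_model(10.0, stretches) + rng.normal(0, 0.5, stretches.size)
post = metropolis(stretches, observed, sigma=0.5)
print(f"posterior mean {post[1000:].mean():.2f}, std {post[1000:].std():.2f}")
```

The posterior spread (here roughly set by the assumed noise level and the sparse stretch range) is what plays the role of the identified parameter uncertainty; richer model-uncertainty formulations would enter through the likelihood.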
This article concerns the tail probabilities of a light-tailed Markov-modulated Lévy process stopped at a state-dependent Poisson rate. The tails are shown to decay exponentially at rates given by the unique positive and negative roots of the spectral abscissa of a certain matrix-valued function. We illustrate the use of our results with an application to the stationary distribution of wealth in a simple economic model in which agents with constant absolute risk aversion are subject to random mortality and income fluctuation.
We propose and analyze a temporal concatenation heuristic for solving large-scale finite-horizon Markov decision processes (MDPs), which divides the MDP into smaller sub-problems along the time horizon and generates an overall solution by simply concatenating the optimal solutions from these sub-problems. As a "black box" architecture, temporal concatenation works with a wide range of existing MDP algorithms. Our main results characterize the regret of temporal concatenation compared to the optimal solution. We provide upper bounds for general MDP instances, as well as a family of MDP instances in which the upper bounds are shown to be tight. Together, our results demonstrate temporal concatenation's potential for substantial speed-up at the expense of some performance degradation.
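A minimal sketch of the heuristic on a small tabular finite-horizon MDP: the horizon is split into segments, each segment is solved independently by backward induction (with a zero terminal value, one possible choice), and the per-segment policies are concatenated. The problem data and segment boundaries below are random and illustrative, not taken from the paper.

```python
import numpy as np

def backward_induction(P, R, horizon, terminal_value=None):
    """Finite-horizon backward induction.

    P: transitions, shape (A, S, S); R: rewards, shape (A, S).
    Returns the time-indexed greedy policy (horizon, S) and the value at t=0.
    """
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states) if terminal_value is None else terminal_value
    policy = np.zeros((horizon, n_states), dtype=int)
    for t in reversed(range(horizon)):
        Q = R + P @ V                 # shape (A, S): expected reward-to-go
        policy[t] = Q.argmax(axis=0)
        V = Q.max(axis=0)
    return policy, V

def temporal_concatenation(P, R, horizon, n_segments):
    """Solve each time segment separately (zero terminal value) and concatenate."""
    bounds = np.linspace(0, horizon, n_segments + 1).astype(int)
    pieces = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        seg_policy, _ = backward_induction(P, R, hi - lo)
        pieces.append(seg_policy)
    return np.concatenate(pieces, axis=0)

# Small random MDP, illustrative only.
rng = np.random.default_rng(0)
S, A, H = 6, 3, 40
P = rng.random((A, S, S)); P /= P.sum(axis=2, keepdims=True)
R = rng.random((A, S))
policy = temporal_concatenation(P, R, H, n_segments=4)
print(policy.shape)   # (40, 6)
```

Because each sub-problem ignores what happens after its own segment, the concatenated policy can lose value near segment boundaries; the paper's regret bounds quantify this loss.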
With the increased availability of data and the capacity to make sense of these data, computational approaches to analyzing, modeling and simulating public policy have evolved into viable instruments for deliberating, planning and evaluating policies in different areas of application. Examples include infrastructure, mobility, monetary and austerity policies, as well as policies on different aspects of society (health, pandemics, skills, inclusion, etc.). Technological advances, along with the evolution of theoretical models and frameworks, open valuable opportunities while at the same time posing new challenges. The paper investigates the current state of research in the domain and aims at identifying the most pressing areas for future research. This is done through both a literature review of policy modeling and an analysis of research and innovation projects that either focus on policy modeling or involve it as a significant component of their research design. In the paper, 16 recent projects involving the keyword 'policy modeling' were analyzed. The majority of projects concern the application of policy modeling to a specific domain or area of interest, while several projects tackled cross-cutting topics (risk and crisis management). The detailed analysis of the projects led to topics of future research in the domain of policy modeling. The most prominent future research topics in policy modeling include stakeholder involvement approaches, applicability of research results, handling the complexity of models, integration of models from different modeling and simulation paradigms and approaches, visualization of simulation results, real-time data processing, and scalability. These aspects require further research to contribute appropriately to advancing the field.
Data sharing efforts aim to allow underserved groups and organizations to overcome the concentration of power in our data landscape. A few special organizations, due to their data monopolies and resources, are able to decide which problems to solve and how to solve them. But even though data sharing creates a counterbalancing democratizing force, it must nevertheless be approached cautiously. Underserved organizations and groups must navigate difficult barriers related to technological complexity and legal risk. To examine what those common barriers are, one type of data sharing effort, data trusts, is examined, drawing specifically on reports commenting on that effort. To address these practical issues, data governance technologies have a large role to play in democratizing data trusts safely and in a trustworthy manner. Yet technology is far from a silver bullet, and it is dangerous to rely on it alone. But technology that is no-code, flexible and secure can help operate data trusts more responsibly. This type of technology helps innovators put relationships at the center of their efforts.