Better understanding the intricate evolution of our urban territories requires combining urban data from different or concurrent instances of time to provide stakeholders with more complete views of possible evolutions of a city. Geospatial rules have been proposed in the past to validate 3D semantic city models; however, there is a lack of research on the validation of multiple, concurrent and successive, scenarios of urban evolution. Using Semantic Web ontologies and logical rules, we present a novel standards-based methodology for validating integrated city models. Using this methodology, we propose interoperable rules for validating integrated open 3D city snapshots used for representing multiple scenarios of evolution. We also implement a reproducible proof-of-concept test suite for applying the proposed rules. To illustrate how these contributions can be used in a real-world data validation use case, we also provide example queries on the validated data. These queries are used to construct a 3D web application for visualizing and analysing urban changes across multiple scenarios of evolution of a selected zone of interest.
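As an illustration of the kind of rule-based validation the abstract describes, here is a minimal sketch in Python using rdflib. The vocabulary (the ex: namespace, ex:Building, ex:hasGeometry) and the input file name are hypothetical placeholders, not the ontology or rules actually used in the paper.

```python
# Minimal sketch of a SPARQL-based validation rule over a city snapshot.
# All vocabulary terms below are hypothetical placeholders.
from rdflib import Graph

g = Graph()
g.parse("city_snapshot.ttl", format="turtle")  # assumed Turtle input

# Rule: every building in the snapshot must carry a geometry.
violations = g.query("""
    PREFIX ex: <http://example.org/city#>
    SELECT ?b WHERE {
        ?b a ex:Building .
        FILTER NOT EXISTS { ?b ex:hasGeometry ?geom }
    }
""")
for row in violations:
    print(f"Validation failed: {row.b} has no geometry")
```

In practice such rules would be expressed against the actual city ontology and run across every snapshot in each scenario of evolution.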
In this paper, we investigate asymmetric Nash bargaining in the context of proportional insurance contracts between a risk-averse insured and a risk-averse insurer, both seeking to enhance their expected utilities. We obtain a necessary and sufficient condition for the Pareto optimality of the status quo and derive the optimal Nash bargaining solution when the status quo is Pareto dominated. If the insured’s and the insurer’s risk preferences exhibit decreasing absolute risk aversion and the insurer’s initial wealth decreases in the insurable risk in the sense of the reversed hazard rate order, we show that both the optimal insurance coverage and the optimal insurance premium increase with the insured’s degree of risk aversion and the insurer’s bargaining power. If the insured’s risk preference further exhibits constant absolute risk aversion, we find that greater insurance coverage is induced as the insurer’s constant initial wealth increases.
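For context, an asymmetric Nash bargaining problem of this kind is typically posed as follows; the notation here is illustrative rather than the paper’s own. With insurer bargaining power $\beta \in (0,1)$, proportional coverage $q$, and premium $\pi$, the bargaining solution solves
$$\max_{(q,\,\pi)} \big(U_{1}(q,\pi) - d_{1}\big)^{1-\beta}\,\big(U_{2}(q,\pi) - d_{2}\big)^{\beta},$$
where $U_{1}$ and $U_{2}$ are the expected utilities of the insured and the insurer and $(d_{1}, d_{2})$ is the status-quo (disagreement) utility pair; the problem is non-trivial exactly when the status quo is Pareto dominated.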
Attention towards personalised digital health interventions such as health apps is growing. These often depend on the collection of sensitive personal data, over which users generally have limited control. This work explores perspectives on data sharing and health apps in two different policy contexts, London and Hong Kong. Through this study, our goal is to generate insight into what digital health futures should look like and what needs to be done to achieve them. Using a survey based on a hypothetical health app, we applied the Capability, Opportunity, Motivation model of Behaviour (COM-B) to a range of behavioural influences on personal health data sharing, exploring some of the key factors affecting the acceptability of data sharing. Results indicate that willingness to use health apps is influenced by users’ data literacy and control, comfort with sharing health and location data, existing health concerns, access to personalised health advice from a trusted source, and willingness to provide data access to specific parties. Gender is a statistically significant factor, with men more willing to use health apps. Survey respondents in London are significantly more willing to use health apps than respondents in Hong Kong. Finally, we propose several policy approaches to address these factors, including the co-creation of standards for using artificial intelligence (AI) to generate health advice, innovating app design and governance models that allow users to carefully control their data, and addressing concerns about gender-specific privacy risks and public trust in institutions dealing with data.
Despite their widespread use, purely data-driven methods often suffer from overfitting, lack of physical consistency, and high data dependency, particularly when physical constraints are not incorporated. This study introduces a novel data assimilation approach that integrates Graph Neural Networks (GNNs) with optimization techniques to enhance the accuracy of mean flow reconstruction, using Reynolds-averaged Navier–Stokes (RANS) equations as a baseline. The method leverages the adjoint approach, incorporating RANS-derived gradients as optimization terms during GNN training, ensuring that the learned model adheres to physical laws and maintains consistency. Additionally, the GNN framework is well-suited for handling unstructured data, which is common in the complex geometries encountered in computational fluid dynamics. The GNN is interfaced with the finite element method for numerical simulations, enabling accurate modeling in unstructured domains. We consider the reconstruction of mean flow past bluff bodies at low Reynolds numbers as a test case, addressing tasks such as sparse data recovery, denoising, and inpainting of missing flow data. The key strengths of the approach lie in its integration of physical constraints into the GNN training process, leading to accurate predictions with limited data, making it particularly valuable when data are scarce or corrupted. Results demonstrate significant improvements in the accuracy of mean flow reconstructions, even with limited training data, compared to analogous purely data-driven models.
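The following is an illustrative sketch of such a physics-constrained training loop, not the paper’s implementation: the paper couples the GNN to a finite element RANS solver through the adjoint, which is replaced here by a placeholder residual, and the simple dense graph convolution stands in for whatever architecture the authors use.

```python
# Sketch: GNN trained on sparse observations plus a physics penalty.
import torch
import torch.nn as nn

class SimpleGNN(nn.Module):
    """Two-layer graph convolution using a dense normalized adjacency."""
    def __init__(self, in_dim, hidden, out_dim):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hidden)
        self.w2 = nn.Linear(hidden, out_dim)

    def forward(self, a_hat, x):
        h = torch.relu(a_hat @ self.w1(x))
        return a_hat @ self.w2(h)

def physics_residual(pred):
    # Placeholder: in the paper's setting this would be the RANS residual
    # and its gradient, evaluated via the adjoint of the FEM solver.
    return (pred ** 2).mean()

n, fin = 50, 3
a_hat = torch.eye(n)                 # stand-in normalized adjacency
x = torch.randn(n, fin)              # node features (e.g., mesh coordinates)
y = torch.randn(n, 2)                # noisy/sparse mean-flow observations
mask = torch.zeros(n, dtype=torch.bool)
mask[::5] = True                     # only 20% of nodes observed

model = SimpleGNN(fin, 32, 2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):
    opt.zero_grad()
    pred = model(a_hat, x)
    data_loss = ((pred[mask] - y[mask]) ** 2).mean()
    loss = data_loss + 0.1 * physics_residual(pred)  # physics-weighted
    loss.backward()
    opt.step()
```

The essential design point is the composite loss: the data term anchors the reconstruction to observations, while the physics term penalizes departures from the governing equations, which is what curbs overfitting when observations are scarce or corrupted.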
Accurate mortality forecasting is crucial for actuarial pricing, reserving, and capital planning, yet the traditional Lee-Carter model struggles with non-linear age and cohort patterns, coherent multi-population forecasting, and quantifying prediction uncertainty. Recent advances in deep learning provide a range of tools that can address these limitations, but actuarial surveys have not kept pace. This paper provides the first concise review of deep learning in mortality forecasting. We cover six deep network architectures, namely Recurrent Neural Networks, Convolutional Neural Networks, Transformers, Autoencoders, Locally Connected Networks, and Multi-Task Feed-Forward Networks. We discuss how these architectures tackle cohort effects, population coherence, interpretability, and uncertainty in mortality forecasting. Evidence from the literature shows that carefully calibrated deep learning models can consistently outperform Lee-Carter baselines; however, no single architecture resolves every challenge, and open issues remain with data scarcity, interpretability, uncertainty quantification, and keeping pace with advances in deep learning. This review is also intended to provide actuaries with a practical roadmap for adopting deep learning models in mortality forecasting.
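For reference, the Lee-Carter baseline mentioned above models the central death rate $m_{x,t}$ at age $x$ in year $t$ as
$$\log m_{x,t} = a_x + b_x\,\kappa_t + \varepsilon_{x,t},$$
with identifiability constraints $\sum_x b_x = 1$ and $\sum_t \kappa_t = 0$, and with the period index $\kappa_t$ typically forecast as a random walk with drift. The architectures surveyed in the paper aim to relax the rigid bilinear age-period structure this imposes.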
Bridging theory and practice in network data analysis, this guide offers an intuitive approach to understanding and analyzing complex networks. It covers foundational concepts, practical tools, and real-world applications using Python frameworks including NumPy, SciPy, scikit-learn, graspologic, and NetworkX. Readers will learn to apply network machine learning techniques to real-world problems, transform complex network structures into meaningful representations, leverage Python libraries for efficient network analysis, and interpret network data and results. The book explores methods for extracting valuable insights across various domains such as social networks, ecological systems, and brain connectivity. Hands-on tutorials and concrete examples develop intuition through visualization and mathematical reasoning. The book will equip data scientists, students, and researchers working with network data with the skills to confidently tackle network machine learning projects, providing a robust toolkit for data science applications involving network-structured data.
European asylum policy still has a long way to go to better address protection challenges. This paper presents data and visualizations intended to help improve responsibility-sharing and solidarity between states. We developed an interactive cartographic tool to map the distribution of refugees in Europe. Besides the observed geographic distribution of asylum seekers and beneficiaries of temporary protection status, our tool allows for the calculation of a theoretical distribution between countries based on different criteria. The tool is an interactive visualization created with the software “Tableau Desktop.” The original data were collected from Eurostat and the World Bank before being processed by the research team with the Extract Transform Load (ETL) utility “Tableau Prep” and made available through the Tableau Desktop application. The actual number of asylum applications lodged in country A can thus be compared with the number that would be proportional to that country’s population within Europe, in combination with three other criteria. Maps of observed and theoretical reallocations can then be produced based on population size, area, unemployment rate, economic prosperity, or a mix of these factors. The number of refugees received is represented by a red semicircle, while the “equitable” number in proportion to the given criteria is represented by a grey semicircle. Our database not only allows geographical analysis of the drivers of refugee distribution in Europe, but also provides the population and policymakers with a solid basis for discussing responsibility-sharing schemes, such as those envisaged in the new EU Asylum Pact of 2024.
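The proportional-allocation logic is straightforward to sketch; the following Python fragment is illustrative only (the team’s actual pipeline uses Tableau Prep, and the country figures below are invented).

```python
# Sketch: allocate a total number of applications across countries in
# proportion to a weighted mix of criteria (all figures are invented).
def equitable_allocation(total, criteria, weights):
    countries = list(next(iter(criteria.values())))
    shares = {c: 0.0 for c in countries}
    for crit, values in criteria.items():
        crit_total = sum(values.values())
        for c in countries:
            shares[c] += weights[crit] * values[c] / crit_total
    return {c: round(total * s) for c, s in shares.items()}

criteria = {
    "population": {"A": 80_000_000, "B": 10_000_000},
    "gdp":        {"A": 3_500e9,    "B": 250e9},
}
weights = {"population": 0.5, "gdp": 0.5}  # weights must sum to 1
print(equitable_allocation(100_000, criteria, weights))
# -> {'A': 91111, 'B': 8889}
```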
The emergence of large language models has significantly expanded the use of natural language processing (NLP), even as it has heightened exposure to adversarial threats. We present an overview of adversarial NLP with an emphasis on challenges, policy implications, emerging areas, and future directions. First, we review attack methods and evaluate the vulnerabilities of popular NLP models. Then, we survey defense strategies, including adversarial training. We describe major policy implications, identify key trends, and suggest future directions, such as the use of Bayesian methods to improve the security and robustness of NLP systems.
The integration of artificial intelligence (AI)-driven technologies into peace dialogues offers both innovative possibilities and critical challenges for contemporary peacebuilding practice. This article proposes a context-sensitive taxonomy of digital deliberation tools designed to guide the selection and adaptation of AI-assisted platforms in conflict-affected environments. Moving beyond static typologies, the framework accounts for variables such as scale, digital literacy, inclusivity, security, and the depth of AI integration. By situating digital peace dialogues within broader peacebuilding and digital democracy frameworks, the article examines how AI can enhance participation, scale deliberation, and support knowledge synthesis, while also highlighting emerging concerns around algorithmic bias, digital exclusion, and cybersecurity threats. Drawing on case studies involving the United Nations (UN) and civil society actors, the article underscores the limitations of one-size-fits-all approaches and makes the case for hybrid models that balance AI capabilities with human facilitation to foster trust, legitimacy, and context-responsive dialogue. The analysis contributes to peacebuilding scholarship by engaging with the ethics of AI, the politics of digital diplomacy, and the sustainability of technological interventions in peace processes. Ultimately, the study argues for a dynamic, adaptive approach to AI integration, continuously attuned to the ethical, political, and socio-cultural dimensions of peacebuilding practice.
β-Lactam/β-lactamase inhibitor combinations and carbapenems are the first-line treatments for multidrug-resistant Pseudomonas aeruginosa (P. aeruginosa) infections. However, carbapenem resistance is increasing globally at an alarming rate, which is especially concerning given the pivotal role of these agents. This study comprehensively evaluated the global distribution of carbapenem resistance in clinical P. aeruginosa isolates. Electronic databases were searched using the keywords ‘Pseudomonas’, ‘P. aeruginosa’, ‘resistance’, ‘susceptibility’, ‘carbapenem antibiotics’, ‘carbapenems’, ‘imipenem’, ‘meropenem’, ‘ertapenem’, ‘doripenem’, ‘prevalence’, and ‘incidence’. After screening, 160 studies were excluded, and 87 eligible studies from diverse geographic regions were retained for final analysis. A comprehensive meta-analysis was then conducted on the collected data. The mean resistance rates (95% CI) were 33.3% (imipenem), 23.3% (meropenem), 60.9% (ertapenem), and 36.7% (doripenem). Time trend analysis showed that resistance to meropenem increased from 1997 to 2023. The meta-analysis showed substantial heterogeneity (I² = 92%, p < 0.05) but no significant publication bias by Egger’s or Begg’s test. Global carbapenem resistance is alarmingly high in clinical P. aeruginosa isolates. The increasing prevalence of carbapenem-resistant P. aeruginosa is a major global health threat requiring urgent action through new antimicrobials and improved antibiotic stewardship to protect these last-line drugs.
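For reference, the heterogeneity statistic reported above is conventionally computed as
$$I^2 = \max\!\left\{0,\ \frac{Q - (k-1)}{Q}\right\} \times 100\%,$$
where $Q$ is Cochran’s heterogeneity statistic and $k$ is the number of pooled studies; values above roughly 75% are usually read as substantial heterogeneity, as here.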
Given $n$ convex bodies in Euclidean space $\mathbb{R}^d$, their volume polynomial is a homogeneous polynomial of degree $d$ in $n$ variables. We consider the set of homogeneous polynomials of degree $d$ in $n$ variables that can be represented as the volume polynomial of some collection of convex bodies. This set is a subset of the set of Lorentzian polynomials. Using known operations that preserve the Lorentzian property, we give a complete classification of the pairs $(n,d)$ for which the two sets are equal.
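For context, the volume polynomial of convex bodies $K_1, \dots, K_n \subset \mathbb{R}^d$ is
$$p(x_1,\dots,x_n) = \mathrm{vol}_d(x_1 K_1 + \cdots + x_n K_n) = \sum_{\alpha_1+\cdots+\alpha_n = d} \binom{d}{\alpha_1,\dots,\alpha_n}\, V(K_1[\alpha_1],\dots,K_n[\alpha_n])\, x_1^{\alpha_1}\cdots x_n^{\alpha_n}$$
for $x_1,\dots,x_n \ge 0$, where the sum runs over Minkowski combinations and the coefficients are the mixed volumes; it is this polynomial whose membership in the Lorentzian class is at issue.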
We study equations with nonlinearity in the form of a double-well potential, randomised by a velocity-switching (telegraph) stochastic process. If the speed parameters of the randomisation are small, the dynamics has one metastable uncertainty interval and two invariant attractors. We determine the probabilities of leaving the metastable interval through its upper boundary, as well as characteristics of the first crossing times. Invariant measures are also found. When the direction of the telegraph process velocity coincides with the direction of the periodic change in the potential, the system can pass into a metastable state, gaining a time window for the interwell transition. The results obtained can be used as an alternative to stochastic resonance models.
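A representative form of such dynamics, in our notation rather than necessarily the paper’s exact model, is
$$\dot{x}(t) = -U'(x(t)) + a\,\sigma(t), \qquad U(x) = \tfrac{1}{4}x^4 - \tfrac{1}{2}x^2,$$
where $U$ is a double-well potential, $\sigma(t) \in \{-1,+1\}$ is a telegraph process switching at random exponential times, and $a$ is a speed parameter; for small $a$ the two wells yield the two attractors, separated by the metastable uncertainty interval.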
In 2022, an Mpox clade II outbreak affected many countries. To optimize control, knowledge of the number of new introductions (human cases infected from outside the study population) versus local transmission is important. We extracted sequences of all 48 Mpox cases in Slovenia in 2022 from the NCBI database, of which 42 passed quality control. We estimated the number of introductions using the phylodynamic model phybreak, integrating genomic and epidemiological data, and inferred transmission events. By repeating this analysis with weekly cumulative case data, we assessed whether introductions could have been reliably inferred in real time. The number of introductions, estimated after the outbreak ended, was 19 (95% CI: 13–29), and two larger transmission clusters existed. As these introductions occurred throughout the outbreak, we conclude that the Slovenian Mpox outbreak was mainly driven by new introductions. Analysing the data ‘in real time’ would have only slightly overestimated the number of introductions per week, capturing the trend of introductions as the main driver of the outbreak. This makes the approach useful for guiding control policy during outbreaks, prioritizing the rapid identification of cases among travellers and thereby preventing the emergence of new transmission chains.
Although ‘in-the-wild’ technology testing provides an important opportunity to collect evidence about the performance of new technologies in real-world deployment environments, such tests may themselves cause harm and wrongfully interfere with the rights of others. This paper critically examines real-world AI testing, focusing on live facial recognition technology (FRT) trials by European law enforcement agencies (in London, Wales, Berlin, and Nice) undertaken between 2016 and 2020, which serve as a set of comparative case studies. We argue that there is an urgent need for a clear framework of principles to govern real-world AI testing, which is currently a largely ungoverned ‘wild west’ without adequate safeguards or oversight. We propose a principled framework to ensure that these tests are undertaken in an epistemically, ethically, and legally responsible manner, helping such tests generate sound, reliable evidence while safeguarding the human rights and other vital interests of others. Although the case studies of FRT testing were undertaken prior to the passage of the EU’s AI Act, we suggest that these three kinds of responsibility should provide the foundational anchor points to inform the design and conduct of real-world testing of high-risk AI systems pursuant to Article 60 of the AI Act.
The hard-core model has as its configurations the independent sets of some graph instance $G$. The probability distribution on independent sets is controlled by a ‘fugacity’ $\lambda > 0$, with higher $\lambda$ leading to denser configurations. We investigate the mixing time of Glauber (single-site) dynamics for the hard-core model on restricted classes of bounded-degree graphs in which a particular graph $H$ is excluded as an induced subgraph. If $H$ is a subdivided claw then, for all $\lambda$, the mixing time is $O(n\log n)$, where $n$ is the order of $G$. This extends a result of Chen and Gu for claw-free graphs. When $H$ is a path, the set of possible instances is finite. For all other $H$, the mixing time is exponential in $n$ for sufficiently large $\lambda$, depending on $H$ and the maximum degree of $G$.
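For reference, the hard-core measure on the independent sets $\mathcal{I}(G)$ of an instance $G$ is
$$\pi(I) = \frac{\lambda^{|I|}}{Z_G(\lambda)}, \qquad Z_G(\lambda) = \sum_{J \in \mathcal{I}(G)} \lambda^{|J|},$$
and a step of (heat-bath) Glauber dynamics picks a vertex $v$ uniformly at random and resamples its state conditionally on the rest: if some neighbour of $v$ is occupied, $v$ becomes unoccupied; otherwise $v$ becomes occupied with probability $\lambda/(1+\lambda)$.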
There are limited data on the illness trajectory for individuals with long COVID. We prospectively followed 1,234 individuals with long COVID at 3 months post-SARS-CoV-2 infection to identify factors associated with persistence or recovery. At 6 months post-infection, 724 (58.7%) had persistent long COVID and 510 (41.3%) had fully recovered. In multivariable analyses, pre-existing health conditions at the time of initial SARS-CoV-2 infection and reporting fatigue, shortness of breath, and cough 3 months post-infection were independent predictors of persistent long COVID. Age, sex, and number of COVID vaccinations were not significantly associated with persistent long COVID. For persons with persistent long COVID, the median number of symptoms remained stable over follow-up, indicating that there had been little symptomatic improvement. A third of those with persistent long COVID reported seeking medical care for their symptoms and a third had ceased or reduced their hours of work/study. Our findings suggest that there may be distinct clinical trajectories for long COVID observed between 3- and 6-month follow-up, that is, persons who experience full recovery and those with minimal clinical improvement, and this may have implications for management of affected individuals.
This appendix delves into the mathematical foundations of network representation techniques, focusing on two key areas: maximum likelihood estimation (MLE) and spectral embedding theory. It begins by exploring MLE for Erdős-Rényi (ER) and stochastic block model (SBM) networks, demonstrating the unbiasedness and consistency of the estimators. The limitations of MLE for more complex models are discussed, leading to the introduction of spectral methods. The appendix then presents theoretical considerations for spectral embeddings, including the adjacency spectral embedding (ASE) and its statistical properties. It explores the concepts of consistency and asymptotic normality in the context of random dot product graphs (RDPGs). Finally, we extend these insights to multiple network models, covering graph matching for correlated networks and joint spectral embeddings such as the omnibus embedding and multiple adjacency spectral embedding (MASE).
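As a concrete instance of the estimators discussed, for an Erdős-Rényi network on $n$ nodes with adjacency matrix $A = (a_{ij})$, the maximum likelihood estimator of the edge probability $p$ is the empirical edge density
$$\hat{p} = \binom{n}{2}^{-1} \sum_{i < j} a_{ij},$$
which is unbiased and consistent because it averages $\binom{n}{2}$ independent Bernoulli($p$) indicators.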
This chapter presents a unified framework for analyzing complex networks through statistical models. Starting with the Inhomogeneous Erdős-Rényi model’s concept of independent edge probabilities, we progress through increasingly sophisticated representations, including the Erdős-Rényi, Stochastic Block Model, and Random Dot Product Graph (RDPG) models. We explore how each model generalizes its predecessors, with the RDPG encompassing many earlier models under certain conditions. The crucial role of positive semidefiniteness in connecting block models to RDPGs is examined, providing insight into model interrelationships. We also introduce models addressing specific network characteristics, such as heterogeneous node degrees and edge-based clustering. The chapter extends to multiple and correlated network models, demonstrating how concepts from simpler models inform more complex scenarios. A hierarchical framework is presented, unifying these models and illustrating their relative generality, thus laying the groundwork for advanced network analysis techniques.
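To make the role of positive semidefiniteness concrete: a $K$-block stochastic block model with block probability matrix $B$ can be written as a Random Dot Product Graph precisely when $B$ is positive semidefinite, since then $B = XX^{\top}$ for some $X \in \mathbb{R}^{K \times k}$, and assigning every node in community $c$ the latent position given by row $c$ of $X$ reproduces the block edge probabilities via $\Pr[a_{ij} = 1] = \langle x_i, x_j \rangle$.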