Mediterranean spotted fever (MSF) is a rickettsial disease caused by Rickettsia conorii, transmitted by brown dog ticks, and endemic in the Mediterranean region. Its incidence is increasing, its presentation varies, and delayed diagnosis can lead to complications. This study retrospectively included 173 adult patients hospitalized for MSF at La Rabta University Hospital, Tunis, from 2000 to 2020. Patients, predominantly male (67.6%) and averaging 40 years of age, mostly resided in urban areas (82.7%). Animal exposure was reported in 74.6%, and cases peaked during the hot season (68.8%), with no cases in the winter months. The classic triad of fever, rash, and eschar was observed in 69.9%, with maculopapular lesions affecting the palms and soles in 83.8%. Headache (64.5%), myalgia (60.7%), and arthralgia (57.2%) were also common. Laboratory findings included an elevated white blood cell count (36.4%), thrombocytopenia (48%), and increased aspartate aminotransferase (50.9%). Treatment with doxycycline (n = 161) resolved fever within a mean of 2.8 ± 1.3 days (range, 1–5). Complications, including encephalitis, chorioretinitis, anterior uveitis, and vasculitis, occurred in 2.3% of cases, but all patients recovered without relapse. No significant risk factors for severe forms were identified. Improved awareness of MSF’s clinical features may be the key to an early diagnosis and successful treatment.
Experiments in engineering are typically conducted in controlled environments where parameters can be set to any desired value. This implicitly assumes that the same holds in a real-world setting, which is often incorrect, as many experiments are influenced by uncontrollable environmental conditions such as temperature, humidity, and wind speed. When optimizing such experiments, the focus should be on finding optimal values conditionally on these uncontrollable variables. This article extends Bayesian optimization to the optimization of systems in changing environments that include controllable and uncontrollable parameters. The extension fits a global surrogate model over all controllable and environmental variables but optimizes only the controllable parameters conditional on measurements of the uncontrollable variables. The method is validated on two synthetic test functions, and the effects of the noise level, the number of environmental parameters, the parameter fluctuation, the variability of the uncontrollable parameters, and the effective domain size are investigated. ENVBO, the proposed algorithm from this investigation, is applied to a wind farm simulator with eight controllable and one environmental parameter. ENVBO finds solutions for the entire domain of the environmental variable that outperform results from optimization algorithms that only focus on a fixed environmental value in all but one case, while using a fraction of their evaluation budget. This makes the proposed approach very sample-efficient and cost-effective. An off-the-shelf open-source version of ENVBO is available via the NUBO Python package.
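To make the conditional-optimization idea concrete, the following is a minimal sketch assuming a Gaussian-process surrogate fitted over both the controllable and the environmental inputs, with an expected-improvement acquisition optimized over the controllable parameter only at the currently measured environmental value. The objective, bounds, and acquisition choice are illustrative; this is not the ENVBO implementation shipped in NUBO.

```python
# Minimal sketch of conditional Bayesian optimization with an uncontrollable
# environmental variable (illustrative only; not the ENVBO/NUBO implementation).
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def run_experiment(x, e):
    # Hypothetical black-box system: x is controllable, e is environmental.
    return -(x - 0.3 * e) ** 2 + 0.1 * rng.normal()

# Initial design over the controllable input x and the environmental input e.
X = rng.uniform(low=[0.0, -1.0], high=[1.0, 1.0], size=(10, 2))  # columns: [x, e]
y = np.array([run_experiment(x, e) for x, e in X])
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(20):
    gp.fit(X, y)                        # global surrogate over (x, e)
    e_now = rng.uniform(-1.0, 1.0)      # uncontrollable: measured, not chosen
    y_best = y.max()

    def neg_expected_improvement(x):
        mu, sd = gp.predict(np.array([[x[0], e_now]]), return_std=True)
        mu, sd = float(mu[0]), max(float(sd[0]), 1e-9)
        z = (mu - y_best) / sd
        return -((mu - y_best) * norm.cdf(z) + sd * norm.pdf(z))

    # Optimize the acquisition over the controllable parameter only.
    res = minimize(neg_expected_improvement, x0=[0.5], bounds=[(0.0, 1.0)])
    X = np.vstack([X, [res.x[0], e_now]])
    y = np.append(y, run_experiment(res.x[0], e_now))
```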
In an era of globalized research endeavors, the interplay between government funding programs, funding decisions, and their influence on successful research collaborations and grant application success rates has emerged as a critical focus of inquiry. This study embarks on an in-depth analysis of cross-country funding dynamics over the past three decades, with a specific emphasis on support for academic-industry collaboration versus sole academic or industry funding. Drawing insights from comprehensive datasets and policy trends, our research illuminates the evolving landscape of research funding and collaboration policies. We examine funding by Innosuisse (Swiss Innovation Project Funding) and SBIR (US Small Business Innovation Research), exploring the rates of future grant success for both academic and industry partners. We find strong evidence of a rich-get-richer phenomenon in the Innosuisse program for both academic and industry partners in terms of winning future grants. For SBIR, we find weaker levels of continued funding to the same partners, with most attaining at most a few grants. With the increasing prevalence of academic-industry collaborations among both funders, it is worth considering additional efforts to ensure that novel ideas and new individuals and teams are supported.
We study two continuous-time Stackelberg games between a life insurance buyer and seller over a random time horizon. The buyer invests in a risky asset and purchases life insurance, and she maximizes a mean-variance criterion applied to her wealth at death. The seller chooses the insurance premium rate to maximize its expected wealth at the buyer’s random time of death. We consider two life insurance games: one with term life insurance and the other with whole life insurance—the latter with pre-commitment of the constant investment strategy. In the term life insurance game, the buyer chooses her life insurance death benefit and investment strategy continuously from a time-consistent perspective. We find the buyer’s equilibrium control strategy explicitly, along with her value function, for the term life insurance game by solving the extended Hamilton–Jacobi–Bellman equations. By contrast, in the whole life insurance game, the buyer pre-commits to a constant life insurance death benefit and a constant amount to invest in the risky asset. To solve the whole life insurance problem, we first obtain the buyer’s objective function and then we maximize that objective function over constant controls. Under both models, the seller maximizes its expected wealth at the buyer’s time of death, and we use the resulting optimal life insurance premia to find the Stackelberg equilibria of the two life insurance games. We also analyze the effects of the parameters on the Stackelberg equilibria, and we present some numerical examples to illustrate our results.
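To fix ideas, the two objectives in such a Stackelberg setup can be written schematically as below; the notation is ours for illustration and does not reproduce the paper's exact model (in particular, the whole life insurance game restricts the buyer to constant controls).

```latex
% Schematic objectives (illustrative notation, not the paper's exact model):
% the buyer's mean-variance criterion at her random time of death \tau, and
% the seller's objective over the premium rate p.
\[
  \text{Buyer:}\quad
  \sup_{(\pi, D)}\;
  \mathbb{E}\bigl[ X^{\pi, D}_{\tau} \bigr]
  - \frac{\gamma}{2}\,\mathrm{Var}\bigl[ X^{\pi, D}_{\tau} \bigr],
  \qquad
  \text{Seller:}\quad
  \sup_{p \ge 0}\;
  \mathbb{E}\bigl[ Y^{p}_{\tau} \bigr],
\]
% Here \pi is the amount invested in the risky asset, D the death benefit,
% X and Y the buyer's and seller's wealth processes, and \gamma > 0 the
% buyer's risk-aversion parameter. The Stackelberg structure means the seller
% chooses p anticipating the buyer's best response (\pi^{*}(p), D^{*}(p)).
```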
This article interrogates three claims made in relation to the use of data for peace: that more data, faster data, and impartial data will lead to better policy and practice outcomes. Taken together, this data myth relies on a lack of curiosity about the provenance of data and the infrastructure that produces it and asserts its legitimacy. Our discussion is concerned with issues of power, inclusion, and exclusion, and particularly how knowledge hierarchies attend the collection and use of data in conflict-affected contexts. We therefore question the axiomatic nature of these data myth claims and argue that the structure and dynamics of peacebuilding actors perpetuate the myth. We advocate a fuller reflection on the data wave that has overtaken us and echo calls for an ethics of numbers. In other words, this article is concerned with the evidence base for evidence-based peacebuilding. Mindful of the policy implications of our concerns, the article puts forward five tenets of good practice in relation to data and the peacebuilding sector. The concluding discussion further considers the policy implications of the data myth in relation to peace and, particularly, the consequences of casting peace and conflict as technical issues that can be “solved” without recourse to human and political factors.
The popularity of green, social and sustainability-linked bonds (GSS bonds) continues to rise, with circa US$939 billion of such bonds issued globally in 2023. Given the rising popularity of ESG-related investment solutions, their relatively recent emergence, and limited research in this field, continued investigation is essential. Extending non-traditional techniques such as neural networks to these fields creates a promising blend of innovation and potential. This paper follows on from our initial publication, in which we aimed to replicate the S&P Green Bond Index (a time series problem) over a given period using non-traditional techniques (neural networks), predicting 1 day ahead. We take the novel approach of applying an N-BEATS model architecture. N-BEATS is a feedforward neural network architecture, consisting of basic building blocks and stacks, which introduces the novel doubly residual stacking of backcasts and forecasts. In this paper, we also revisit the neural network architectures from our initial publication, which include DNNs, CNNs, GRUs and LSTMs. We continue with the univariate time series problem, increasing the data input window from 1 day to 2 and 5 days respectively, whilst still aiming to predict 1 day ahead.
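As a rough illustration of the doubly residual backcast/forecast mechanism, the following PyTorch sketch stacks generic fully connected blocks; the layer sizes, block count, and block types are assumptions and do not reflect the exact configuration used in the paper.

```python
# Minimal sketch of N-BEATS-style doubly residual stacking (illustrative only).
import torch
import torch.nn as nn

class Block(nn.Module):
    def __init__(self, input_size: int, horizon: int, width: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(input_size, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
        )
        self.backcast_head = nn.Linear(width, input_size)
        self.forecast_head = nn.Linear(width, horizon)

    def forward(self, x):
        h = self.body(x)
        return self.backcast_head(h), self.forecast_head(h)

class NBeatsLike(nn.Module):
    def __init__(self, input_size: int, horizon: int, n_blocks: int = 4):
        super().__init__()
        self.blocks = nn.ModuleList(
            [Block(input_size, horizon) for _ in range(n_blocks)]
        )

    def forward(self, x):
        residual, forecast = x, 0.0
        for block in self.blocks:
            backcast, block_forecast = block(residual)
            residual = residual - backcast        # doubly residual: subtract backcast
            forecast = forecast + block_forecast  # accumulate partial forecasts
        return forecast

# Example: 5-day input window, 1-day-ahead forecast, as in the paper's setting.
model = NBeatsLike(input_size=5, horizon=1)
window = torch.randn(8, 5)      # batch of 8 lagged index windows
print(model(window).shape)      # torch.Size([8, 1])
```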
COVID-19 vaccine effectiveness (VE) studies are limited in low- and middle-income countries. A case-control study was conducted among COVID-19 and other pneumonia patients admitted to a hospital in the Philippines during the pre-Omicron and Omicron periods. To elucidate factors associated with in-hospital death, 1782 COVID-19 patients were assessed. To estimate absolute VE for various severe outcomes, 1059 patients were assessed (869 [82.1%] COVID-19 cases; 190 [17.9%] controls). Factors associated with in-hospital death included older age, tuberculosis (adjusted odds ratio [aOR] 2.45 [95% confidence interval {95% CI} 1.69–3.57]), HIV (aOR 3.30 [95% CI 2.03–5.37]), and current smoking (aOR 2.65 [95% CI 1.72–4.10]). Pre-Omicron, the primary series provided high protection within a median of 2 months (hospitalization: 85.4% [95% CI 35.9–96.7%]; oxygen requirement: 91.0% [95% CI 49.4–98.4%]; invasive mechanical ventilation (IMV): 97.0% [95% CI 65.7–99.7%]; death: 96.5% [95% CI 67.1–99.6%]). During Omicron, the primary series provided moderate-to-high protection within a median of 6–9 months (hospitalization: 70.2% [95% CI 27.0–87.8%]; oxygen requirement: 71.4% [95% CI 29.3–88.4%]; IMV: 72.7% [95% CI −11.6 to 93.3%]; death: 58.9% [95% CI −82.8 to 90.8%]). Primary series VE against severe COVID-19 outcomes was consistently high for both the pre-Omicron and Omicron periods in a setting where approximately half of the vaccinees received inactivated vaccines.
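For readers unfamiliar with case-control VE estimation, the point estimates above follow the conventional relationship between VE and the adjusted odds ratio of vaccination among cases versus controls:

```latex
% Conventional case-control estimator of vaccine effectiveness:
\[
  \widehat{\mathrm{VE}} \;=\; \bigl(1 - \mathrm{aOR}\bigr) \times 100\%,
  \qquad
  \mathrm{aOR} \;=\;
  \frac{\text{odds of vaccination among cases}}
       {\text{odds of vaccination among controls}},
\]
% so an adjusted odds ratio of roughly 0.146, for example, corresponds to the
% pre-Omicron estimate of about 85.4% against hospitalization quoted above.
```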
We propose a novel and unified sampling scheme, called the accelerated group sequential sampling scheme, which incorporates four different types of sampling scheme: (i) the classic Anscombe–Chow–Robbins purely sequential sampling scheme; (ii) the accelerated sequential sampling scheme; (iii) the relatively new k-at-a-time group sequential sampling scheme; and (iv) the new k-at-a-time accelerated group sequential sampling scheme. The first-order and second-order properties of this unified sequential sampling scheme are fully investigated with two illustrations on minimum risk point estimation for the mean of a normal distribution and on bounded variance point estimation for the location parameter of a negative exponential distribution. We also provide extensive Monte Carlo simulation studies and real data analyses for each illustration.
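For orientation, the classical minimum-risk point-estimation problem behind the first illustration can be sketched in generic notation; this is the standard Anscombe–Chow–Robbins setup, not necessarily the exact formulation or constants used in the paper.

```latex
% Classical minimum-risk point estimation of a normal mean (generic notation):
% squared-error loss weighted by A > 0 plus sampling cost c > 0 per observation.
\[
  R_n \;=\; A\,\mathbb{E}\bigl(\bar{X}_n - \mu\bigr)^2 + c\,n
        \;=\; \frac{A\sigma^2}{n} + c\,n,
  \qquad
  n^{*} \;=\; \sigma\sqrt{A/c},
\]
% and, with \sigma unknown, a purely sequential rule stops at
\[
  N \;=\; \inf\Bigl\{ n \ge m : \; n \ge S_n \sqrt{A/c} \Bigr\},
\]
% where S_n is the sample standard deviation and m a pilot sample size.
% Accelerated and k-at-a-time group versions modify how the remaining
% observations are drawn once such a boundary is (nearly) crossed.
```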
This paper obtains logarithmic asymptotics of moderate deviations of the stochastic process of the number of customers in a many-server queue with generally distributed inter-arrival and service times under a heavy-traffic scaling akin to the Halfin–Whitt regime. The deviation function is expressed in terms of the solution to a Fredholm equation of the second kind. A key element of the proof is the large-deviation principle in the scaling of moderate deviations for the sequential empirical process. The techniques of large-deviation convergence and idempotent processes are used extensively.
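For reference, a Fredholm integral equation of the second kind has the generic form below; the paper's specific kernel and inhomogeneous term are not reproduced here.

```latex
% Generic Fredholm integral equation of the second kind:
\[
  f(x) \;=\; g(x) \;+\; \lambda \int_{a}^{b} K(x, y)\, f(y)\, \mathrm{d}y,
  \qquad x \in [a, b],
\]
% where K is the kernel and g the inhomogeneous term; the deviation function
% above is expressed through the solution f of an equation of this type.
```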
This analysis provides a critical account of AI governance in the modern “smart city” through a feminist lens. Evaluating the case of Sidewalk Labs’ Quayside project—a smart city development that was to be implemented in Toronto, Canada—it is argued that public–private partnerships can create harmful impacts when corporate actors seek to establish new “rules of the game” regarding data regulation. While the Quayside project was eventually abandoned in 2020, it offers key observations about the state of urban algorithmic governance both within Canada and internationally. The article articulates the need for a revitalized and participatory smart city governance programme that prioritizes meaningful engagement in the form of transparency and accountability measures. Taking a feminist lens, it argues for a two-pronged approach to governance: integrating collective engagement from the outset of the design process and ensuring civilian data protection through a robust yet localized rights-based privacy regulation strategy. Engaging with feminist theories of intersectionality in relation to technology and data collection, this framework articulates the need to understand the broader histories of social marginalization when implementing governance strategies regarding artificial intelligence in cities.
Various measures have been introduced in the existing literature to evaluate extreme risk exposure under the effect of an observable factor. Due to the nice properties of the higher-moment (HM) coherent risk measure, we propose a conditional version of the HM (CoHM) risk measure by incorporating the information of an observable factor. We conduct an asymptotic analysis of this measure in the presence of extreme risks under weak contagion at a high confidence level, which is further applied to the special case of the conditional Haezendonck–Goovaerts risk measure (CoHG). Numerical illustrations are also provided to examine the accuracy of the asymptotic formulas and to analyze the sensitivity of the risk contribution of the CoHG. Based on the asymptotic result in the Fréchet case, we propose an estimator for the CoHM via extrapolation, supported by a simulation study.
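For context, one definition of the higher-moment coherent risk measure that appears in the literature is given below; the exact form and the conditioning event used for the CoHM in this paper may differ, so this should be read only as an orientation.

```latex
% A higher-moment (HM) coherent risk measure as commonly defined (illustrative):
\[
  \mathrm{HM}_{p,\alpha}(X)
  \;=\; \inf_{z \in \mathbb{R}}
  \Bigl\{ z + (1-\alpha)^{-1}\, \bigl\| (X - z)_{+} \bigr\|_{p} \Bigr\},
  \qquad p \ge 1,\ \alpha \in (0,1),
\]
% with a conditional version obtained by evaluating the norm given information
% on the observable factor (for instance, an extreme event of that factor).
```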
The international community, and the UN in particular, is in urgent need of wise policies and a regulatory institution to put data-based systems, notably AI, to positive use and to guard against their abuse. Digital transformation and “artificial intelligence (AI)”—which can more adequately be called “data-based systems (DS)”—present ethical opportunities and risks. Helping humans and the planet to flourish sustainably in peace, and guaranteeing globally that human dignity is respected not only offline but also online, in the digital sphere and the domain of DS, requires two policy measures: (1) human rights-based data-based systems (HRBDS) and (2) an International Data-Based Systems Agency (IDA). The IDA should be established at the UN as a platform for cooperation in the field of digital transformation and DS, fostering human rights, security, and peaceful uses of DS.
Anticipating future migration trends is instrumental to the development of effective policies to manage the challenges and opportunities that arise from population movements. However, anticipation is challenging. Migration is a complex system, with multifaceted drivers, such as demographic structure, economic disparities, political instability, and climate change. Measurements encompass inherent uncertainties, and the majority of migration theories are either under-specified or hardly actionable. Moreover, approaches for forecasting generally target specific migration flows, and this poses challenges for generalisation.
In this paper, we present the results of a case study to predict Irregular Border Crossings (IBCs) through the Central Mediterranean Route and Asylum requests in Italy. We applied a set of Machine Learning techniques in combination with a suite of traditional data to forecast migration flows. We then applied an ensemble modelling approach to aggregate the results of the different Machine Learning models and improve predictive capacity.
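A minimal sketch of such an aggregation step is given below, assuming a few fitted regressors and a hold-out validation set; the model choices, features, and validation-weighted averaging are illustrative and not the study's exact pipeline.

```python
# Minimal sketch of ensemble aggregation over several forecasting models
# (illustrative; not the study's exact models, features, or weighting).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
X = rng.normal(size=(240, 6))                  # hypothetical monthly covariates
y = X[:, 0] * 3.0 + X[:, 1] ** 2 + rng.normal(scale=0.5, size=240)
X_train, X_val, y_train, y_val = X[:200], X[200:], y[:200], y[200:]

models = [Ridge(alpha=1.0),
          RandomForestRegressor(n_estimators=200, random_state=0),
          GradientBoostingRegressor(random_state=0)]

preds = []
for m in models:
    m.fit(X_train, y_train)
    preds.append(m.predict(X_val))

# Weight each model by its validation R^2 (clipped at zero), then average.
weights = np.clip([r2_score(y_val, p) for p in preds], 0.0, None)
weights = weights / weights.sum()
ensemble_pred = np.average(np.vstack(preds), axis=0, weights=weights)
print("ensemble R^2:", r2_score(y_val, ensemble_pred))
```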
Our results show the potential of this modelling architecture in producing forecasts of IBCs and Asylum requests over 6 months. The explained variance of our models on a validation set is as high as 80%. This study offers a robust basis for the construction of timely forecasts. In the discussion, we comment on how this approach could benefit migration management in the European Union at various levels of policy making.
Public procurement is a fundamental aspect of public administration. Its vast size makes its oversight and control very challenging, especially in countries where resources for these activities are limited. To support decisions and operations at public procurement oversight agencies, we developed and delivered VigIA, a data-based tool with two main components: (i) machine learning models to detect inefficiencies measured as cost overruns and delivery delays, and (ii) risk indices to detect irregularities in the procurement process. These two components cover complementary aspects of the procurement process, considering both active and passive waste, and help oversight agencies to prioritize investigations and allocate resources. We show how the models developed shed light on specific features of the contracts to be considered and how their values signal red flags. We also highlight how these values change when the analysis focuses on specific contract types or on information available for early detection. Moreover, the models and indices developed make use only of open data and target variables generated by the procurement processes themselves, making them ideal for supporting continuous decisions at oversight agencies.
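The two components can be pictured with a small sketch like the following, where the feature names, synthetic labels, and the single-bidder/fast-award red-flag rule are hypothetical stand-ins rather than VigIA's actual models or indices.

```python
# Illustrative sketch of the two components: a model flagging likely cost
# overruns and a simple rule-based risk index (features and rules hypothetical).
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

contracts = pd.DataFrame({
    "contract_value": [1.2e5, 8.0e5, 3.5e4, 2.1e6, 9.0e4, 4.7e5] * 50,
    "n_bidders":      [5, 1, 3, 1, 4, 2] * 50,
    "days_to_award":  [40, 7, 35, 5, 50, 12] * 50,
    "cost_overrun":   [0, 1, 0, 1, 0, 1] * 50,   # synthetic labels
})

X = contracts[["contract_value", "n_bidders", "days_to_award"]]
y = contracts["cost_overrun"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Component (i): supervised model predicting inefficiency (cost overrun).
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))

# Component (ii): a simple red-flag index, e.g. single bidder plus fast award.
contracts["risk_index"] = (
    (contracts["n_bidders"] == 1).astype(int)
    + (contracts["days_to_award"] < 10).astype(int)
) / 2.0
```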
We propose a physics-constrained convolutional neural network (PC-CNN) to solve two types of inverse problems in partial differential equations (PDEs), which are nonlinear and vary both in space and time. In the first inverse problem, we are given data that is offset by spatially varying systematic error (i.e., the bias, also known as the epistemic uncertainty). The task is to uncover the true state, which is the solution of the PDE, from the biased data. In the second inverse problem, we are given sparse information on the solution of a PDE. The task is to reconstruct the solution in space with high resolution. First, we present the PC-CNN, which constrains the PDE with a time-windowing scheme to handle sequential data. Second, we analyze the performance of the PC-CNN to uncover solutions from biased data. We analyze both linear and nonlinear convection-diffusion equations, and the Navier–Stokes equations, which govern the spatiotemporally chaotic dynamics of turbulent flows. We find that the PC-CNN correctly recovers the true solution for a variety of biases, which are parameterized as non-convex functions. Third, we analyze the performance of the PC-CNN for reconstructing solutions from sparse information for the turbulent flow. We reconstruct the spatiotemporal chaotic solution on a high-resolution grid from only 1% of the information contained in it. For both tasks, we further analyze the Navier–Stokes solutions. We find that the inferred solutions have a physical spectral energy content, whereas traditional methods, such as interpolation, do not. This work opens opportunities for solving inverse problems with partial differential equations.
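A minimal sketch of the physics-constrained idea is shown below for a one-dimensional convection-diffusion equation, with the PDE residual evaluated by finite differences on the predicted field; the grid, coefficients, and loss weighting are assumptions, and this is not the authors' PC-CNN or its time-windowing scheme.

```python
# Minimal sketch of a physics-constrained loss for 1-D convection-diffusion
# (illustrative; not the authors' PC-CNN architecture or training scheme).
import torch

def pde_residual(u, dt, dx, c=1.0, nu=0.01):
    """Finite-difference residual of u_t + c*u_x - nu*u_xx on a (T, X) grid."""
    u_t  = (u[1:, 1:-1] - u[:-1, 1:-1]) / dt
    u_x  = (u[:-1, 2:] - u[:-1, :-2]) / (2 * dx)
    u_xx = (u[:-1, 2:] - 2 * u[:-1, 1:-1] + u[:-1, :-2]) / dx**2
    return u_t + c * u_x - nu * u_xx

def loss(u_pred, data, dt, dx, weight=1.0):
    data_misfit = torch.mean((u_pred - data) ** 2)           # fit the (biased/sparse) data
    physics = torch.mean(pde_residual(u_pred, dt, dx) ** 2)  # penalize PDE violation
    return data_misfit + weight * physics

# Example with a network-predicted field of shape (time steps, grid points).
u_pred = torch.randn(32, 64, requires_grad=True)
data = torch.randn(32, 64)
print(loss(u_pred, data, dt=0.01, dx=0.1))
```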
Urban communities rely on built utility infrastructures as critical lifelines that provide essential services, such as water, gas, and power, to sustain modern socioeconomic systems. These infrastructures consist of underground and surface-level assets that are operated and geo-distributed over large regions, where continuous monitoring for anomalies is required but challenging to implement. This article addresses the problem of deploying heterogeneous Internet of Things sensors in these networks to support future decision-support tasks, for example, anomaly detection, source identification, and mitigation. We use stormwater as a driving use case; these systems are responsible for drainage and flood control, but they also act as conduits that can carry contaminants to receiving waters. Challenges to effective monitoring include the transient and random nature of pollution incidents, the scarcity of historical data, the complexity of the system, and technological limitations for real-time monitoring. We design a SemanTics-aware sEnsor Placement framework (STEP) to capture pollution incidents using structural, behavioral, and semantic aspects of the infrastructure. We leverage historical data to inform our system with new, credible instances of potential anomalies. Several key topological and empirical network properties are used to propose candidate deployments that optimize the balance between multiple objectives. We also explore the quality of anomaly representation in the network from new perspectives and provide techniques to enhance the realism of the anomalies considered in a network. We evaluate STEP on six real-world stormwater networks in Southern California, USA, demonstrating its efficacy in monitoring areas of interest compared with other baseline methods.
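As a toy illustration of the underlying placement problem (not the STEP framework itself), the snippet below greedily places sensors on a tree-shaped network to maximize the number of candidate anomaly sites within a fixed hop distance of some sensor.

```python
# Toy greedy sensor placement on a tree-shaped network (illustrative only;
# not the STEP framework, just the flavor of the coverage problem it tackles).
import networkx as nx

G = nx.balanced_tree(2, 5)               # stand-in for a stormwater network
anomaly_sites = set(G.nodes)             # candidate pollution-entry locations
budget, max_hops = 5, 3                  # number of sensors, detection reach

def covered(sensors):
    reach = set()
    for s in sensors:
        reach |= set(nx.single_source_shortest_path_length(G, s, cutoff=max_hops))
    return reach & anomaly_sites

placed = []
for _ in range(budget):
    # Pick the node that adds the most newly covered anomaly sites.
    best = max(set(G.nodes) - set(placed),
               key=lambda v: len(covered(placed + [v])))
    placed.append(best)

print("sensors:", placed, "coverage:", len(covered(placed)), "/", len(anomaly_sites))
```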
Many documents are produced over the years of managing assets, particularly those with long lifespans. However, during this time, the assets may deviate from their original as-designed or as-built state. This presents a significant challenge for tasks that occur in later life phases but require precise knowledge of the asset, such as retrofit, where the asset is equipped with new components. For a third party who is neither the original manufacturer nor the operator, obtaining a comprehensive understanding of the asset can be a tedious process, as it requires going through all available but often fragmented information and documents. While common knowledge regarding the domain or general type of asset can be helpful, it is often based on the experience of engineers and is, therefore, only implicitly available. This article presents a graph-based information management system that complements traditional PLM systems and helps connect these fragments by utilizing generic information about assets. To achieve this, techniques from systems engineering and data science are used. The overarching management platform also includes geometric analyses and operations that can be performed with geometric and product information extracted from STEP files. While the management approach is first described generically, it is later applied to cabin retrofit in aviation. A mock-up of an Airbus A320 serves as a case study to demonstrate how the platform can benefit the retrofit of such long-living assets.
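The graph-based linking of fragmented information can be pictured with a toy sketch such as the one below; the node names, relations, and sources are hypothetical and only illustrate the idea of connecting documents, geometry, and components in one queryable structure.

```python
# Toy sketch of a graph linking fragmented asset information (node and edge
# labels are hypothetical, for illustration of the idea only).
import networkx as nx

g = nx.MultiDiGraph()
g.add_node("seat_row_12", kind="component", source="as-designed BOM")
g.add_node("bracket_B7", kind="component", source="STEP geometry")
g.add_node("mod_order_2019-044", kind="document", source="operator archive")

g.add_edge("bracket_B7", "seat_row_12", relation="mounted_on")
g.add_edge("mod_order_2019-044", "seat_row_12", relation="modifies")

# A retrofit engineer can then query everything known to affect a component:
affected = [(u, d["relation"]) for u, v, d in g.in_edges("seat_row_12", data=True)]
print(affected)
```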
In 2022, the largest ever virgin soil outbreak of Japanese encephalitis (JE) occurred in Australia, resulting in 45 reported human cases of JE, with seven fatalities. Japanese encephalitis virus (JEV) was detected in 84 piggeries across Australia. In response, states implemented targeted vaccination programs for those individuals at the highest risk of JEV exposure. A mixed-methods approach, including geospatial mapping of JEV vaccine distribution in Queensland, a case series of Queensland human cases, and interviews with Queensland Health staff, was used to assess the JEV vaccination response program. Five notified human cases were reviewed, with three having occupational outdoor risk and local travel-related exposure. Vaccine coverage ranged from 0 to 7.4 doses per 100 people after 12 months of the program. The highest uptake was in southern Queensland, where 95% of the state’s commercial pig population is located. The vaccination program was limited by a heavy reliance on general practitioners, the vast geographical distribution of eligible populations, difficulties mobilising and engaging eligible cohorts, and suboptimal One Health collaboration. Population and climate factors make it possible for the virus to become endemic. Targeted vaccination programs remain an important strategy to protect people at the highest risk of exposure; however, program improvements are required to optimize vaccine accessibility.
Focusing on methods for data that are ordered in time, this textbook provides a comprehensive guide to analyzing time series data using modern techniques from data science. It is specifically tailored to economics and finance applications, aiming to provide students with rigorous training. Chapters cover Bayesian approaches, nonparametric smoothing methods, machine learning, and continuous time econometrics. Theoretical and empirical exercises, concise summaries, bolded key terms, and illustrative examples are included throughout to reinforce key concepts and bolster understanding. Ancillary materials include an instructor's manual with solutions and additional exercises, PowerPoint lecture slides, and datasets. With its clear and accessible style, this textbook is an essential tool for advanced undergraduate and graduate students in economics, finance, and statistics.
Brownian motion is an important topic in various applied fields where the analysis of random events is necessary. Introducing Brownian motion from a statistical viewpoint, this detailed text examines the distribution of quadratic plus linear or bilinear functionals of Brownian motion and demonstrates the utility of this approach for time series analysis. It also offers the first comprehensive guide on deriving the Fredholm determinant and the resolvent associated with such statistics. Presuming only a familiarity with standard statistical theory and the basics of stochastic processes, this book brings together a set of important statistical tools in one accessible resource for researchers and graduate students. Readers also benefit from online appendices, which provide probability density graphs and solutions to the chapter problems.