We introduce a novel human-centric deep reinforcement learning recommender system designed to co-optimize energy consumption, thermal comfort, and air quality in commercial buildings. Existing approaches typically optimize these objectives separately or focus solely on controlling energy-consuming building resources without directly engaging occupants. We develop a deep reinforcement learning architecture based on multitask learning with humans-in-the-loop and demonstrate how it can jointly learn energy savings, comfort, and air quality improvements for different building and occupant actions. In addition to controlling typical building resources (e.g., thermostat setpoint), our system provides real-time actionable recommendations that occupants can take (e.g., move to a new location) to co-optimize energy, comfort, and air quality. Through real deployments across multiple commercial buildings, we show that our multitask deep reinforcement learning recommender system has the potential to reduce energy consumption by up to 8% in energy-focused optimization, improve all objectives by 5–10% in joint optimization, and improve thermal comfort by up to 21% in comfort and air quality-focused optimization compared to existing solutions.
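As a rough illustration of the architecture described above, the following is a minimal sketch of a multitask Q-network with a shared state encoder and one head per objective; all layer sizes, action sets, state features, and reward weights are illustrative assumptions, not the authors' implementation.

```python
# A minimal multitask Q-network sketch: shared encoder, one head per
# objective (energy, comfort, air quality). Dimensions are assumptions.
import torch
import torch.nn as nn

class MultiTaskQNetwork(nn.Module):
    def __init__(self, state_dim=16, n_actions=8):
        super().__init__()
        # Shared encoder over the building/occupant state.
        self.encoder = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU())
        # One Q-head per objective.
        self.heads = nn.ModuleDict({
            name: nn.Linear(64, n_actions)
            for name in ("energy", "comfort", "air_quality")
        })

    def forward(self, state):
        z = self.encoder(state)
        return {name: head(z) for name, head in self.heads.items()}

net = MultiTaskQNetwork()
state = torch.randn(1, 16)   # e.g., temperatures, CO2, occupancy (assumed)
q = net(state)
# Scalarize per-objective Q-values to pick one joint action, e.g. a
# thermostat change or a "move to a new location" recommendation.
weights = {"energy": 0.4, "comfort": 0.3, "air_quality": 0.3}
combined = sum(w * q[k] for k, w in weights.items())
best_action = combined.argmax(dim=1)
print(best_action.item())
```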
A vast literature exists on theories of public opinion: how to measure, analyze, predict, and influence it. However, there is no synthesis of best practices for interpreting public opinion; existing knowledge is disparate and spread across many disciplines. Polls, Pollsters, and Public Opinion presents a systematic analytical approach for understanding, predicting, and engaging public opinion. It tells the story through the eyes of the pollster and draws an analytical road map for examining public opinion, both conceptually and practically. Providing a theoretical and conceptual foundation, as well as debunking popular myths, this book delves into the science of polling, offering tools analysts can use to assess the quality of polls. It also introduces methods that can be used to predict elections and other socio-political outcomes while understanding the nuances of messaging, engaging, and moving public opinion.
Crowd monitoring at sports games is important for improving public safety, game experience, and venue management. Recent crowd-crushing incidents (e.g., the Kanjuruhan Stadium disaster) have caused 100+ deaths, calling for advancements in crowd-monitoring methods. Existing monitoring approaches include manual observation, wearables, and video-, audio-, and WiFi-based sensing. However, few meet practical needs due to their limitations in cost, privacy protection, and accuracy.
In this paper, we introduce a novel crowd monitoring method that leverages floor vibrations to infer crowd reactions (e.g., clapping) and traffic (i.e., the number of people entering) in sports stadiums. Our method allows continuous crowd monitoring in a privacy-friendly and cost-effective way. Unlike monitoring one person, crowd monitoring involves a large population, leading to high uncertainty in the vibration data. To overcome this challenge, we bring in the context of crowd behaviors, including (1) temporal context to inform crowd reactions to the highlights of the game and (2) spatial context to inform crowd traffic in relation to the facility layouts. We deployed our system at the Stanford Maples Pavilion and Michigan Stadium for real-world evaluation, which showed error reductions of 14.7% and 12.5%, respectively, compared to baseline methods without the context information.
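The sketch below illustrates the general idea of context-aided event detection on a floor-vibration trace: short-time energy is thresholded, and the threshold is relaxed in windows near a game highlight (the temporal context). The signal, sampling rate, highlight times, and thresholds are synthetic assumptions, not the deployed system.

```python
# Context-aided detection of crowd reactions in a synthetic vibration trace.
import numpy as np

fs = 500                                   # assumed geophone sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)
signal = 0.05 * np.random.randn(t.size)   # ambient floor vibration
mask = (t >= 30) & (t < 33)               # injected crowd reaction at 30-33 s
signal[mask] += 0.5 * np.random.randn(mask.sum())

# Short-time energy in 1-second windows.
win = fs
energy = np.array([np.sum(signal[i:i + win] ** 2)
                   for i in range(0, signal.size - win, win)])

# Temporal context: windows near an assumed game highlight get a lower
# detection threshold (the paper's "temporal context" idea).
highlight_windows = {30, 31, 32}           # assumed highlight timestamps (s)
base_thresh = 5.0 * np.median(energy)
for w, e in enumerate(energy):
    thresh = base_thresh * (0.5 if w in highlight_windows else 1.0)
    if e > thresh:
        print(f"crowd reaction detected in window starting at {w}s")
```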
The aspirations-ability framework proposed by Carling has begun to place the question of who aspires to migrate at the center of migration research. In this article, building on key determinants assumed to shape individual migration decisions, we investigate how accurately they predict migration aspirations when observed in the same dataset and across different mixed-migration contexts. In particular, we use a rigorous model selection approach and develop a machine learning algorithm to analyze two original cross-sectional face-to-face surveys conducted in Turkey and Lebanon among Syrian migrants and their respective host populations in early 2021. Studying similar nationalities in two hosting contexts with a distinct history of both immigration and emigration and large shares of assumed-to-be mobile populations, we illustrate that a) (im)mobility aspirations are hard to predict even under ‘ideal’ methodological circumstances, b) commonly referenced “migration drivers” perform poorly in predicting migration aspirations in our study contexts, while c) aspects relating to social cohesion, political representation, and hope play an important role that warrants more emphasis in future research and policymaking. Methodologically, we identify key challenges in quantitative research on predicting migration aspirations and propose a novel modeling approach to address these challenges.
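The sketch below shows the kind of cross-validated model comparison such a study involves; the synthetic features stand in for survey items (assumed "migration drivers", social cohesion, hope), and the two models are generic stand-ins rather than the article's selected algorithm.

```python
# Cross-validated comparison of two generic classifiers on synthetic
# survey-like data; y stands in for "aspires to migrate" (yes/no).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1500, n_features=20, n_informative=5,
                           random_state=0)

models = {
    "logit": LogisticRegression(max_iter=1000),
    "boosting": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC = {scores.mean():.3f}")
```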
One of the goals of open science is to promote the transparency and accessibility of research. Sharing data and materials used in network research is critical to these goals. In this paper, we present recommendations for whether, what, when, and where network data and materials should be shared. We recommend that network data and materials should be shared, but access to or use of shared data and materials may be restricted if necessary to avoid harm or comply with regulations. Researchers should share the network data and materials necessary to reproduce reported results via a publicly accessible repository when an associated manuscript is published. To ensure the adoption of these recommendations, network journals should require sharing, and network associations and academic institutions should reward sharing.
Yield curve extrapolation to unobservable tenors is a key technique for the market-consistent valuation of actuarial liabilities required by Solvency II and forthcoming similar regulations. Since the regulatory method, the Smith–Wilson method, is inconsistent with observable yield curve dynamics, parsimonious parametric models, such as the Nelson–Siegel model and its extensions, are often used for yield curve extrapolation in risk management. However, these parsimonious parametric models struggle to extrapolate yield curves without excessive volatility because their limited number of parameters restricts their ability to represent observed yield curves. To extend the representational capabilities, we propose a novel yield curve extrapolation method using machine learning. Using the long short-term memory architecture, we achieve purely data-driven yield curve extrapolation with better generalization performance, stability, and consistency with observed yield curve dynamics than the previous parsimonious parametric models on US and Japanese yield curve data. In addition, our method offers model interpretability via the backpropagation algorithm. The findings of this study demonstrate that neural networks, which have recently received considerable attention in mortality forecasting, are also useful for yield curve extrapolation, where they have not been applied before.
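A minimal sketch of the underlying idea: treat the yield curve as a sequence over tenors and train an LSTM to predict the yield at the next, "unobservable" tenor. The synthetic Nelson–Siegel-like curve and all dimensions are illustrative assumptions, not the paper's calibrated model.

```python
# LSTM over the tenor axis, extrapolating the next tenor's yield.
import torch
import torch.nn as nn

class CurveLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, yields):            # yields: (batch, n_tenors, 1)
        h, _ = self.lstm(yields)
        return self.out(h[:, -1])         # yield at the next tenor

# Synthetic Nelson-Siegel-like curve over tenors 1..10 years.
tenors = torch.arange(1.0, 11.0)
curves = 0.02 + 0.015 * (1 - torch.exp(-tenors / 3)) / (tenors / 3)
x = curves[:-1].reshape(1, -1, 1)         # observed part of the curve
target = curves[-1].reshape(1, 1)         # held-out tenor to extrapolate

model = CurveLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), target)
    loss.backward()
    opt.step()
print(f"extrapolated 10y yield: {model(x).item():.4f}")
```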
We investigate the number of maximal cliques, that is, cliques that are not contained in any larger clique, in three network models: Erdős–Rényi random graphs, inhomogeneous random graphs (IRGs) (also called Chung–Lu graphs), and geometric inhomogeneous random graphs (GIRGs). For sparse and not-too-dense Erdős–Rényi graphs, we give linear and polynomial upper bounds on the number of maximal cliques. For the dense regime, we give super-polynomial and even exponential lower bounds. Although (G)IRGs are sparse, we give super-polynomial lower bounds for these models. This stems from the fact that these graphs have a power-law degree distribution, which leads to a dense subgraph in which we find many maximal cliques. These lower bounds seem to contradict previous empirical evidence that (G)IRGs have only a few maximal cliques. We resolve this contradiction by providing experiments indicating that, even for large networks, the linear lower-order terms dominate, with the super-polynomial asymptotic behavior kicking in only for networks of extreme size.
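A small experiment in this spirit can be run with networkx, counting maximal cliques in Erdős–Rényi graphs of increasing density; the graph size and edge probabilities below are kept modest purely so the illustration runs quickly.

```python
# Count maximal cliques in Erdos-Renyi graphs of growing density.
import networkx as nx

n = 60
for p in (0.1, 0.3, 0.5, 0.7):
    G = nx.gnp_random_graph(n, p, seed=42)
    n_maximal = sum(1 for _ in nx.find_cliques(G))  # yields maximal cliques
    print(f"p = {p}: {n_maximal} maximal cliques")
```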
The importance of automating pavement maintenance tasks for highway systems has garnered interest from both industry and academia. Despite significant research efforts and promising demonstrations devoted to reaching a level of semi-automation featuring digital sensing and inspection, site maintenance work still requires manual processes using special vehicles and equipment, reflecting a clear gap on the path to fully autonomous maintenance. This paper reviews current progress in pavement maintenance automation in terms of inspection and repair operations, followed by a discussion of three key technical challenges related to robotic sensing, control, and actuation. To address these challenges, we propose a conceptual solution we term the Autonomous Maintenance Plant (AMP), consisting mainly of five modules for sensing, actuation, control, power supply, and mobility. The AMP concept is part of the “Digital Roads” project’s cyber-physical platform, in which a road digital twin (DT) is created from its physical counterpart to enable real-time condition monitoring, sensory data processing, maintenance decision making, and repair operation execution. In this platform, the AMP conducts high-resolution surveys and autonomous repair operations enabled (instructed) by the road DT. The process is unmanned and fully autonomous, with the aim of creating a fully robotized highway pavement maintenance system.
Data mining and techniques for analyzing big data play a crucial role in various practical fields, including financial markets. However, only a few quantitative studies have focused on predicting daily stock market returns, and the data mining methods used in previous studies are either incomplete or inefficient. This study applied the FPC clustering algorithm alongside prominent clustering algorithms such as K-means, IPC, FDPC, and GOPC to cluster stock market data. The data comprise records from cement companies listed on the Tehran Stock Exchange, covering capital returns and price fluctuations, and are examined and analyzed to guide investment decisions. The analysis process involves extracting the stock market data of these companies over the past two years. The companies are then categorized based on two criteria, profitability percentage and short-term and long-term price fluctuations, using the FPC clustering algorithm and the aforementioned algorithms. Finally, the clustering results are compared against each other using standard and recognized evaluation criteria to assess clustering quality. The findings of this investigation indicate that the FPC algorithm provides more favorable results than the other algorithms. Based on the results, companies demonstrating profitability, stability, and loss within short-term (weekly and monthly) and long-term (three-month, six-month, and one-year) time frames are placed within their respective clusters and introduced accordingly.
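For illustration, the sketch below clusters synthetic companies on the two criteria the study names (profitability and short- and long-term fluctuations) and scores cluster quality with a standard criterion; K-means stands in for FPC here, since FPC has no widely available reference implementation, and all data are made up.

```python
# Cluster synthetic companies on profitability and volatility features,
# then score the partition with the silhouette criterion.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Columns: profitability (%), short-term volatility, long-term volatility.
features = np.vstack([
    rng.normal([12, 2, 5], 1.5, (30, 3)),   # profitable, stable companies
    rng.normal([1, 6, 12], 1.5, (30, 3)),   # volatile companies
    rng.normal([-8, 3, 7], 1.5, (30, 3)),   # loss-making companies
])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
print(f"silhouette score: {silhouette_score(features, labels):.3f}")
```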
While governments have long discussed the promise of delegating important decisions to machines, actual use often lags. Consequently, we know little about the variation in the deployment of such delegations in large numbers of similar governmental organizations. Using data from crime laboratories in the United States, we examine the uneven distribution over time of a specific, well-known expert system for ballistics imaging for a large sample of local and regional public agencies; an expert system is an inference engine joined with a knowledge base. Our statistical model is informed by the push-pull-capability theory of innovation in the public sector. We test hypotheses about the probability of deployment and provide evidence that the use of this expert system varies with the pull of agency task environments and the enabling support of organizational resources—and that the impacts of those factors have changed over time. Within this context, we also present evidence that general knowledge of the use of expert systems has supported the use of this specific expert system in many agencies. This empirical case and this theory of innovation provide broad evidence about the historical utilization of expert systems as algorithms in public sector applications.
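The following is a minimal sketch of the kind of model implied by the push-pull-capability framing: the probability of deployment regressed on a task-environment "pull" measure and an organizational-resource measure. The data and covariate names are synthetic stand-ins for the article's measures, not its actual specification.

```python
# Logit of deployment on synthetic pull and capability covariates.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
caseload = rng.normal(size=n)   # task-environment pull (assumed measure)
budget = rng.normal(size=n)     # organizational resources (assumed measure)
logit_p = -0.5 + 0.8 * caseload + 0.6 * budget
deployed = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(np.column_stack([caseload, budget]))
result = sm.Logit(deployed, X).fit(disp=0)
print(result.summary(xname=["const", "caseload", "budget"]))
```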
As one of the most neglected zoonotic diseases, brucellosis poses a serious threat to public health worldwide. This study aims to apply different machine learning models to improve the prediction accuracy of human brucellosis (HB) in Shaanxi, China from 2008 to 2020, under the intensification of livestock husbandry, from a spatiotemporal perspective. We quantitatively evaluated the performance and suitability of ConvLSTM, RF, and LSTM models in epidemic forecasting, and investigated the spatial heterogeneity of how different factors drive the occurrence and transmission of HB in distinct sub-regions using Kernel Density Analysis and Shapley Additive Explanations. Our findings demonstrate that the ConvLSTM network yielded the best predictive performance, with the lowest average MAE (13.875) and RMSE (18.393) values. The RF model tended to underestimate case numbers, while the LSTM model tended to overestimate them. In addition, climatic conditions, intensification of livestock keeping, and socioeconomic status were identified as the dominant factors driving the occurrence of HB in the Shaanbei Plateau, Guanzhong Plain, and Shaannan Region, respectively. This work provides a comprehensive understanding of the potential risk of HB epidemics in Northwest China driven by both anthropogenic activities and the natural environment, which can support further practice in disease control and prevention.
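As a hedged illustration of the SHAP-based driver analysis, the sketch below fits a random forest to synthetic data and ranks features by mean absolute SHAP value; the feature names are invented stand-ins for the study's climatic, livestock-intensification, and socioeconomic drivers.

```python
# Rank synthetic drivers of a response by mean |SHAP| under a random forest.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 3))   # stand-ins: temperature, herd density, income
y = 2.0 * X[:, 1] + 0.5 * X[:, 0] + rng.normal(scale=0.3, size=n)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)
mean_abs = np.abs(shap_values).mean(axis=0)
for name, value in zip(["temperature", "herd_density", "income"], mean_abs):
    print(f"{name}: mean |SHAP| = {value:.3f}")
```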
Tuberculosis infection (TBI) has been associated with increased cardiovascular risk. We aimed to characterize abnormal blood pressure (BP) readings in individuals with TBI. We conducted a retrospective study of adults with TBI presenting for their initial medical visit at a large midwestern U.S. public health clinic between 2019 and 2020. Abnormal BP was defined as a systolic BP ≥ 130 mmHg and/or a diastolic BP ≥ 80 mmHg. Of 310 individuals with TBI, the median age was 36 years (interquartile range 27–48), 34% were male, and 64% were non-US-born; 58 (18.7%) had previously been diagnosed with hypertension. The prevalence of any hypertension (i.e., a history of hypertension and/or an abnormal BP reading) was 64.2% (95% confidence interval 58.7–69.4). Any hypertension was independently associated with older age, male sex, higher body mass index, and Black race. In conclusion, any hypertension was present in over half of the adults evaluated for TBI in our clinic. Established hypertension risk factors were also common in this group, suggesting that individuals with TBI could benefit from clinical and public health interventions aimed at reducing the risk of future cardiovascular events.
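The reported interval can be checked directly: 64.2% of 310 individuals is about 199, and a Wilson score interval on 199/310 closely reproduces the stated 95% CI. The authors' exact interval method is not specified, so this is a consistency check, not a replication.

```python
# Reproduce the reported prevalence and an approximate 95% CI.
from statsmodels.stats.proportion import proportion_confint

cases, n = 199, 310   # 199/310 = 64.2% with any hypertension
low, high = proportion_confint(cases, n, alpha=0.05, method="wilson")
print(f"prevalence: {cases / n:.1%}, 95% CI: {low:.1%} to {high:.1%}")
```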
Recent studies have suggested an increased incidence of myocarditis and pericarditis following mRNA vaccination or COVID-19. However, the potential interaction effect between vaccine type and COVID-19 on heart disease risk remains uncertain. Our study aimed to examine the impact of COVID-19 status and vaccine type following the first dose on acute heart disease in the Korean population, using data from the National Health Insurance Service COVID-19 database (October 2018–March 2022). We sought to provide insights for public health policies and clinical decisions pertaining to COVID-19 vaccination strategies. We analysed heart disease risk, including acute cardiac injury, acute myocarditis, acute pericarditis, cardiac arrest, and cardiac arrhythmia, in relation to vaccine type and COVID-19 within 21 days after the first vaccination date, employing Cox proportional hazards models with time-varying covariates. This study included 3,350,855 participants. The results revealed higher heart disease risk in individuals receiving mRNA vaccines than other types (adjusted HR, 1.48; 95% CI, 1.35–1.62). Individuals infected by SARS-CoV-2 also exhibited significantly higher heart disease risk than those uninfected (adjusted HR, 3.56; 95% CI, 1.15–11.04). We found no significant interaction effect between vaccine type and COVID-19 status on the risk of acute heart disease. Notably, however, younger individuals who received mRNA vaccines had a higher heart disease risk compared to older individuals. These results may suggest the need to consider alternative vaccine options for the younger population. Further research is needed to understand underlying mechanisms and guide vaccination strategies effectively.
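A minimal sketch of a Cox model with a time-varying infection covariate, in the spirit of the study's design, using lifelines on toy long-format data; the columns, values, and penalizer are illustrative assumptions, not the Korean NHIS data or the authors' specification.

```python
# Cox proportional hazards with time-varying covariates (lifelines).
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Long format: one row per interval of constant covariates; `infected`
# switches on after SARS-CoV-2 infection within the 21-day window.
df = pd.DataFrame({
    "id":       [1, 1, 2, 3, 3, 4],
    "start":    [0, 7, 0, 0, 10, 0],
    "stop":     [7, 21, 21, 10, 21, 15],
    "mrna":     [1, 1, 0, 1, 1, 0],     # received an mRNA vaccine
    "infected": [0, 1, 0, 0, 1, 0],
    "event":    [0, 1, 0, 0, 0, 1],     # acute heart disease
})

ctv = CoxTimeVaryingFitter(penalizer=0.1)  # penalizer stabilizes toy data
ctv.fit(df, id_col="id", event_col="event",
        start_col="start", stop_col="stop")
ctv.print_summary()
```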
The “la Caixa” Foundation has been experimenting with artificial intelligence (AI)-assisted decision-making aimed at alleviating the administrative burden of the evaluation pipeline for its flagship funding program. It is piloting an algorithm that detects immature project proposals before they reach the peer review stage and suggests their removal from the selection process to a human overseer. In this article, we explore existing uses of AI by publishers and research funding organizations to automate their selection pipelines. We then analyze the conditions under which the focal case corresponds to a responsible use of AI and the extent to which the current implementation meets these conditions, highlighting challenges and areas for improvement.
Inappropriate antibiotic use is a key driver of antibiotic resistance and one that can be mitigated through stewardship. A better understanding of current prescribing practices is needed to develop successful stewardship efforts. This study aims to identify factors that are associated with human cases of enteric illness receiving an antibiotic prescription. Cases of laboratory-confirmed enteric illness reported to the FoodNet Canada surveillance system between 2015 and 2019 were the subjects of this study. Laboratory data were combined with self-reported data collected from an enhanced case questionnaire that included demographic data, illness duration and symptoms, and antibiotic prescribing. The data were used to build univariable logistic regression models and a multivariable logistic regression model to explore what factors were associated with a case receiving an antibiotic prescription. The final multivariable model identified several factors as being significantly associated with cases being prescribed an antibiotic. Some of the identified associations indicate that current antibiotic prescribing practices include a substantial level of inappropriate use. This study provides evidence that antibiotic stewardship initiatives targeting infectious diarrhoea are needed to optimize antibiotic use and combat the rise of antibiotic resistance.
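The sketch below illustrates the univariable-then-multivariable logistic workflow the study describes, on synthetic data; the variable names are invented stand-ins for the FoodNet Canada questionnaire fields.

```python
# Univariable screens followed by one multivariable logistic model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 800
df = pd.DataFrame({
    "bloody_diarrhoea": rng.binomial(1, 0.2, n),  # assumed symptom field
    "duration_days": rng.poisson(5, n),
    "age": rng.integers(1, 90, n),
})
logit_p = -2 + 1.2 * df.bloody_diarrhoea + 0.1 * df.duration_days
df["prescribed"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Univariable odds ratios, then the multivariable model.
for var in ["bloody_diarrhoea", "duration_days", "age"]:
    m = smf.logit(f"prescribed ~ {var}", data=df).fit(disp=0)
    print(f"{var}: OR = {np.exp(m.params[var]):.2f}")
multi = smf.logit("prescribed ~ bloody_diarrhoea + duration_days + age",
                  data=df).fit(disp=0)
print(multi.summary())
```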
In the context of the ongoing Russian invasion and the uncertainties surrounding the potential return migration of millions of displaced Ukrainians, this study explores the future of (return) migration through an innovative and inclusive participatory foresight approach, engaging 20 displaced Ukrainians residing in Valencia, Spain, from May to December 2023. The foresight process included workshops, discussions via online messaging groups, interviews, participatory observations, and culminated in an open art exhibition. Through this process, we conducted a collective horizon scanning, identifying weak signals and emerging trends, followed by an examination of critical uncertainties, which led to the development of four distinct scenarios: Exhaustion Return, Energetic Return, Virtual Return, and Disconnection. The insights derived from this foresight exercise hold practical relevance for both Ukrainian and EU migration policymakers, emphasizing the importance of lived experiences in shaping anticipatory migration policies. This study also offers theoretical contributions by applying participatory foresight to the field of return migration, challenging established knowledge paradigms, and fostering a more inclusive and nuanced understanding of migration dynamics and their broader implications.
The COVID-19 pandemic impacted the transmission of many pathogens. The aim of this study was to determine the effect of non-pharmaceutical interventions on the incidence of diseases transmitted via food. Weekly incidence rates for nine foodborne pathogens were collected from national surveillance registries. Weekly pathogen incidence during the lockdown weeks of 2020 and 2021 was compared with the corresponding weeks in 2015–2019. The same analyses were performed to determine the effect of self-defined expected impact levels of the measures (low, intermediate, and high). Eight of the nine diseases showed a significant decrease in case numbers in 2020; the exception was listeriosis, which remained unchanged. The largest decreases were observed for rotavirus gastroenteritis (−81%), norovirus gastroenteritis (−78%), hepatitis A (−75%), and shigellosis (−72%). In 2021, lower case numbers were observed for six of the nine diseases compared with 2015–2019, with the largest decreases for shigellosis (−57%) and hepatitis E (−47%). No significant change was observed for listeriosis, STEC infection, or rotavirus gastroenteritis. Overall, measures with a higher expected impact level did not result in a larger decrease in case numbers, except for Campylobacter and norovirus and rotavirus gastroenteritis. Diseases transmitted via food decreased significantly during the COVID-19 pandemic, with a more pronounced effect in 2020 than in 2021.
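The core comparison is simple in form: lockdown-week counts are set against the mean of the corresponding weeks in 2015–2019. A minimal sketch with made-up counts:

```python
# Percent change of lockdown-week counts vs the 2015-2019 weekly baseline.
import numpy as np

baseline = np.array([[40, 38, 45], [42, 35, 50], [39, 41, 44],
                     [44, 37, 48], [41, 39, 46]])  # 2015-2019, three weeks
weeks_2020 = np.array([9, 8, 12])                  # same weeks in 2020

expected = baseline.mean(axis=0)                   # per-week historical mean
change = (weeks_2020.sum() - expected.sum()) / expected.sum()
print(f"change vs 2015-2019 baseline: {change:+.0%}")
```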
The epidemiology of respiratory infections may vary depending on factors such as climate change, geographical features, and urbanization. Pandemics also change the epidemiological characteristics not only of the relevant infectious agent itself but also of other infectious agents. This study aims to assess the impact of the COVID-19 pandemic on the epidemiology of viral respiratory infections in children. We retrospectively reviewed the medical records of children aged ≤18 years with laboratory-confirmed viral respiratory infections other than COVID-19 from January 2018 to March 2023. Data on demographic characteristics, month and year of admission, and microbiological results were collected. During the study period, 1,829 respiratory samples were sent for polymerase chain reaction testing. Rhinovirus was identified in 24% of the patients, mixed infections in 21%, influenza virus in 20%, and respiratory syncytial virus in 12.5%. A 38.6% decrease in viral respiratory infections was observed in 2020, followed by a 188% increase in 2021. Respiratory syncytial virus was significantly more common in the post-pandemic period (13.8%) than in the pre-pandemic period (8.1%), but no seasonal shift in respiratory syncytial virus infection was observed. There was also a yearly increase in influenza infections in the post-pandemic period compared to the pre-pandemic period. After the COVID-19 pandemic, the frequency of parainfluenza virus infections increased during the summer months, a finding that provides a new contribution to the existing literature.
The “digital twin” is now recognized as a core component of the Industry 4.0 journey, helping organizations understand their complex processes, resources, and data to provide insight and optimize their operations. Despite this, there are still multiple definitions and understandings of what a digital twin is, which has led to a “mysticism” around the concept. Following the “hype curve” model, digital twins have moved past their initial hype phase with only minimal implementation in industry, often due to the perceived high cost of initial development and sensor outfit. However, a second hype peak is predicted through the development of “lean digital twins.” Lean digital twins represent conceptual or physical systems in much lower detail (and hence at much lower cost to build and manage the models), focusing on the key parameters and operators that most affect the desired optimal outcomes of the physical system. These lean digital twins are requirements-managed with the system to ensure added value, tapping into existing architectures such as onboard platform management systems to minimize costs. This article was developed in partnership between BMT and Siemens to demystify the different definitions and components of a lean digital twin, discuss the process of implementing a lean digital twin solution tied to the core benefits in question, and outline the tools available to make implementation a reality.