This article proposes Bayesian adaptive trials (BATs) as both an efficient method to conduct trials and a unifying framework for the evaluation of social policy interventions, addressing the limitations inherent in traditional methods, such as randomized controlled trials. Recognizing the crucial need for evidence-based approaches in public policy, the proposed approach aims to lower barriers to the adoption of evidence-based methods and to align evaluation processes more closely with the dynamic nature of policy cycles. BATs, grounded in decision theory, offer a dynamic, “learning as we go” approach, enabling the integration of diverse information types and facilitating a continuous, iterative process of policy evaluation. BATs’ adaptive nature is particularly advantageous in policy settings, allowing for more timely and context-sensitive decisions. Moreover, BATs’ ability to value potential future information sources positions them as an optimal strategy for sequential data acquisition during policy implementation. While acknowledging the assumptions and models intrinsic to BATs, such as prior distributions and likelihood functions, this article argues that these are advantageous for decision-makers in social policy, effectively merging the best features of various methodologies.
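To make the adaptive, "learning as we go" idea concrete, the sketch below shows one common ingredient of Bayesian adaptive designs: response-adaptive allocation via Thompson sampling with conjugate Beta-Bernoulli arms. It is a minimal illustration under assumed settings (a two-arm trial, flat Beta(1, 1) priors, and invented success rates), not the design proposed in the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-arm trial; the true success rates are assumed for illustration only.
true_success = {"control": 0.30, "intervention": 0.45}
# Beta(1, 1) priors on each arm's success probability.
posterior = {arm: {"a": 1.0, "b": 1.0} for arm in true_success}

for participant in range(500):
    # Thompson sampling: draw a success rate from each arm's current posterior
    # and allocate the next participant to the arm with the highest draw.
    draws = {arm: rng.beta(p["a"], p["b"]) for arm, p in posterior.items()}
    arm = max(draws, key=draws.get)

    # Observe the outcome and apply the conjugate Beta-Bernoulli update to that arm.
    outcome = rng.random() < true_success[arm]
    posterior[arm]["a"] += outcome
    posterior[arm]["b"] += 1 - outcome

for arm, p in posterior.items():
    mean = p["a"] / (p["a"] + p["b"])
    n_alloc = p["a"] + p["b"] - 2
    print(f"{arm}: posterior mean {mean:.3f} after {n_alloc:.0f} allocations")
```

As evidence accumulates, allocation drifts toward the better-performing arm, which is the sense in which such a trial "learns as it goes."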
This paper demonstrates how learning the structure of a Bayesian network, often used to predict and represent causal pathways, can be used to inform policy decision-making.
We show that Bayesian networks are a rigorous and interpretable representation of interconnected factors that affect the complex environment in which policy decisions are made. Furthermore, Bayesian structure learning differentiates between proximal or immediate factors and upstream or root causes, offering a comprehensive set of potential causal pathways leading to specific outcomes.
We show how these causal pathways can provide critical insights into the impact of a policy intervention on an outcome. Central to our approach is the integration of causal discovery within a Bayesian framework, which considers the relative likelihood of possible causal pathways rather than only the most probable pathway.
We argue this is an essential part of causal discovery in policy making because the complexity of the decision landscape inevitably means that there are many nearly equally probable causal pathways. While this methodology is broadly applicable across various policy domains, we demonstrate its value within the context of educational policy in Australia. Here, we identify pathways influencing educational outcomes, such as student attendance, and examine the effects of social disadvantage on these pathways. We demonstrate the methodology’s performance using synthetic data and its usefulness by applying it to real-world data. Our findings in the real-world example highlight the usefulness of Bayesian networks as a policy decision tool and show how data science techniques can be used for practical policy development.
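As a rough illustration of weighing several near-equally probable structures rather than committing to a single best one, the sketch below scores three candidate DAGs over hypothetical variables (disadvantage, attendance, outcome) with a linear-Gaussian BIC and converts the scores into relative posterior weights. The variables, candidate graphs, and data-generating process are assumptions made for the example, not the paper's actual model or data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Assumed data-generating process for the sketch:
# disadvantage -> attendance -> outcome, plus a direct disadvantage -> outcome effect.
disadvantage = rng.normal(size=n)
attendance = -0.6 * disadvantage + rng.normal(scale=0.8, size=n)
outcome = 0.7 * attendance - 0.3 * disadvantage + rng.normal(scale=0.8, size=n)
data = {"disadvantage": disadvantage, "attendance": attendance, "outcome": outcome}

def node_fit(child, parents):
    """Gaussian log-likelihood and parameter count for a linear model child ~ parents."""
    y = data[child]
    X = np.column_stack([np.ones(n)] + [data[p] for p in parents])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = np.var(y - X @ beta)
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return loglik, X.shape[1] + 1  # coefficients plus residual variance

def bic_score(dag):
    """dag maps each node to its parent set; higher score is better."""
    loglik, k = map(sum, zip(*(node_fit(child, parents) for child, parents in dag.items())))
    return loglik - 0.5 * k * np.log(n)

candidates = {
    "chain only":     {"disadvantage": [], "attendance": ["disadvantage"], "outcome": ["attendance"]},
    "chain + direct": {"disadvantage": [], "attendance": ["disadvantage"], "outcome": ["attendance", "disadvantage"]},
    "fork only":      {"disadvantage": [], "attendance": ["disadvantage"], "outcome": ["disadvantage"]},
}

scores = {name: bic_score(dag) for name, dag in candidates.items()}
best = max(scores.values())
weights = {name: np.exp(s - best) for name, s in scores.items()}
total = sum(weights.values())
for name, w in weights.items():
    print(f"{name:15s} relative posterior weight ~ {w / total:.3f}")
```

The normalized weights (a BIC approximation to graph posteriors under a uniform prior over the candidates) are what allow alternative causal pathways to be compared, rather than reporting only the single highest-scoring structure.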
Bayesian model updating (BMU) is frequently used in Structural Health Monitoring to investigate the structure’s dynamic behavior under various operational and environmental loadings for decision-making, e.g., to determine whether maintenance is required. Data collected by sensors are used to update the prior of some physics-based model’s latent parameters to yield the posterior. The choice of prior may significantly affect posterior predictions and subsequent decision-making, especially in the case, typical of engineering applications, where data are scarce or weakly informative. Therefore, understanding how the choice of prior affects the posterior prediction is of great interest. In this article, a robust Bayesian inference technique is used to evaluate the optimal and worst-case priors in the vicinity of a chosen nominal prior, together with their corresponding posteriors. The technique derives an interacting Wasserstein gradient flow that minimizes the KL divergence between the posterior and its approximation with respect to the approximation, while minimizing or maximizing it with respect to the prior for the optimal and worst cases, respectively. Two numerical case studies are used to showcase the proposed algorithm: a double-banana posterior and a double-beam structure. Optimal and worst-case priors are modeled by specifying an ambiguity set containing every distribution whose statistical distance to the nominal prior is less than or equal to a given radius. The resulting posteriors may be used to yield lower and upper bounds on subsequent calculations of an engineering metric (e.g., failure probability) used for decision-making. If the metric used for decision-making is not sensitive to the resulting posteriors, it may be assumed that decisions taken are robust to prior uncertainty.
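One way to write this kind of nested optimization, under the assumption that the ambiguity set is a ball of radius $\epsilon$ in some statistical distance $W$ (e.g., a Wasserstein distance) around the nominal prior $\pi_0$ (the notation here is introduced for illustration rather than taken from the article), is

$$
\pi^{\mathrm{opt}} = \operatorname*{arg\,min}_{\pi:\; W(\pi, \pi_0) \le \epsilon} \; \min_{q \in \mathcal{Q}} \mathrm{KL}\!\left(q \,\|\, p(\theta \mid \mathcal{D}; \pi)\right),
\qquad
\pi^{\mathrm{worst}} = \operatorname*{arg\,max}_{\pi:\; W(\pi, \pi_0) \le \epsilon} \; \min_{q \in \mathcal{Q}} \mathrm{KL}\!\left(q \,\|\, p(\theta \mid \mathcal{D}; \pi)\right),
$$

where $\theta$ denotes the latent parameters of the physics-based model, $\mathcal{D}$ the sensor data, and $\mathcal{Q}$ the approximating family (e.g., a particle approximation) over which the inner KL divergence is minimized.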
Robotic manipulation inherently involves contact with objects to accomplish a task. Traditional motion planning techniques, while successful in collision-free scenarios, may not handle manipulation tasks effectively because they typically avoid contact. Although geometric constraints have been introduced into classical motion planners for tasks that involve interaction, they still lack the capability to fully incorporate contact. In addition, these planning methods generally do not operate on objects that cannot be directly controlled. In this work, building on a recently proposed framework for energy-based quasi-static manipulation, we propose an approach to manipulation planning that adapts a numerical continuation algorithm to compute the equilibrium manifold (EM), which is implicitly derived from physical laws. By defining a manipulation potential energy function that captures interaction and natural potentials, the numerical continuation approach is integrated with adaptive ordinary differential equations that converge to the EM. This allows the implicit manifold to be discretized as a graph whose nodes are a finite set of equilibria, interconnected by weighted edges defined via a haptic metric. The proposed framework is evaluated on an inverted pendulum task, where the explored branch of the manifold demonstrates the effectiveness of the approach.
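As a toy illustration of tracing an equilibrium manifold by numerical continuation (this is not the authors' algorithm; the pendulum parameters, step size, and placeholder "haptic" edge weight are assumptions made for the sketch), consider a torque-driven pendulum whose equilibria satisfy $u = m g l \sin\theta$:

```python
import numpy as np

# Toy system: torque-driven pendulum; equilibria satisfy F(theta, u) = u - m*g*l*sin(theta) = 0.
m, g, l = 1.0, 9.81, 0.5

def F(theta, u):
    return u - m * g * l * np.sin(theta)

def dF_dtheta(theta, u):
    return -m * g * l * np.cos(theta)

def correct(theta, u, tol=1e-10, max_iter=50):
    """Newton correction of theta back onto the equilibrium manifold at fixed control u."""
    for _ in range(max_iter):
        r = F(theta, u)
        if abs(r) < tol:
            break
        theta -= r / dF_dtheta(theta, u)
    return theta

# Natural-parameter continuation: step the control u, reuse the previous equilibrium as the
# predictor, then correct back to the manifold; collect nodes and weighted edges of a graph.
nodes, edges = [], []
theta = 0.0
for u in np.linspace(0.0, 0.9 * m * g * l, 40):
    theta = correct(theta, u)
    nodes.append((theta, u))
    if len(nodes) > 1:
        (th0, u0), (th1, u1) = nodes[-2], nodes[-1]
        # Placeholder "haptic" weight: effort to move between neighbouring equilibria.
        edges.append(((len(nodes) - 2, len(nodes) - 1), abs(u1 - u0) + abs(th1 - th0)))

print(f"Discretized one branch of the EM into {len(nodes)} equilibria and {len(edges)} weighted edges.")
```

A planner could then search such a graph (e.g., with a shortest-path algorithm over the edge weights) for a low-effort sequence of equilibria connecting a start and a goal configuration.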
The ideation phase of product design is critical, as decisions made here influence the rest of the product’s lifecycle. Usually, early preliminary designs in engineering are created with pen and paper, which are incompatible with the subsequent digital design process. In an effort to find a modeling tool for early designs that provides the creative flexibility of freehand sketching as well as the further processability of digital models, this research investigates natural modeling in virtual reality (VR). To do so, a VR modeling method allowing the intuitive creation of preliminary designs as simplified computer-aided design (CAD) models is presented. The main contribution is the evaluation of this natural VR modeling method against freehand sketching in an extensive user study.
Ambient air pollution remains a global challenge, with adverse impacts on health and the environment. Addressing air pollution requires reliable data on pollutant concentrations, which form the foundation for interventions aimed at improving air quality. However, in many regions, including the United Kingdom, air pollution monitoring networks are characterized by spatial sparsity, heterogeneous placement, and frequent temporal data gaps, often due to issues such as power outages. We introduce a scalable data-driven supervised machine learning model framework designed to address temporal and spatial data gaps by filling missing measurements within the United Kingdom. The machine learning framework used is LightGBM, a gradient boosting algorithm based on decision trees, for efficient and scalable modeling. This approach provides a comprehensive dataset for England throughout 2018 at a 1 km² hourly resolution. Leveraging machine learning techniques and real-world data from the sparsely distributed monitoring stations, we generate 355,827 synthetic monitoring stations across the study area. Validation was conducted to assess the model’s performance in forecasting, estimating missing locations, and capturing peak concentrations. The resulting dataset is of particular interest to a diverse range of stakeholders engaged in downstream assessments supported by outdoor air pollution concentration data for nitrogen dioxide (NO2), ozone (O3), particulate matter with a diameter of 10 μm or less (PM10), particulate matter with a diameter of 2.5 μm or less (PM2.5), and sulphur dioxide (SO2), at a higher resolution than was previously possible.
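A minimal sketch of the gap-filling idea using LightGBM's scikit-learn interface is shown below. The feature set, the synthetic target, and the hyperparameters are placeholders chosen for illustration; they are not the study's actual configuration.

```python
import numpy as np
import pandas as pd
import lightgbm as lgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 50_000

# Hypothetical station-hour records; a real feature set would include meteorology,
# traffic, land use, and neighbouring-station measurements.
df = pd.DataFrame({
    "hour": rng.integers(0, 24, n),
    "day_of_year": rng.integers(1, 366, n),
    "easting": rng.uniform(0, 650_000, n),
    "northing": rng.uniform(0, 650_000, n),
    "wind_speed": rng.gamma(2.0, 2.0, n),
    "temperature": rng.normal(10, 6, n),
})
# Synthetic NO2 target with diurnal structure plus noise (for the sketch only).
df["no2"] = 30 + 8 * np.sin(2 * np.pi * df["hour"] / 24) - 1.5 * df["wind_speed"] + rng.normal(0, 5, n)

features = [c for c in df.columns if c != "no2"]
X_train, X_test, y_train, y_test = train_test_split(df[features], df["no2"], test_size=0.2, random_state=0)

model = lgb.LGBMRegressor(n_estimators=500, learning_rate=0.05, num_leaves=63)
model.fit(X_train, y_train)
print("MAE on held-out station-hours:", mean_absolute_error(y_test, model.predict(X_test)))

# For gap filling, the fitted model would be applied to a grid of unmonitored
# locations and hours to produce the "synthetic monitoring stations".
```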
Resource management skills are critical to success during new product development processes. Design processes are ambiguous and complex, and designers often face a scarcity of resources, such as limited financial capital, time, or human resources, that constrains their ability to move the new product development process forward. A team’s ability to use resources effectively may determine its likelihood of success during new product development processes. Technology-based startup teams represent an authentic, unique subset of new product development teams that are trying to bring innovative technologies to market. While prior work has identified salient traits of team members that affect a team’s trajectory, little work has investigated how these traits may interact with each other and how they affect an individual’s ability to manage resources. Using a mixed-methods approach, we leveraged data from 241 startup team members to study the relationship between individual traits, team characteristics, and resource management skills. A k-means cluster analysis unveiled two distinct archetypes of startup team members, differentiated by (1) self-efficacy, (2) bricolage, (3) risk propensity, and (4) perceptions of psychological safety. Team members with higher levels of these traits exhibited greater resource management skills.
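The clustering step might look roughly like the sketch below, where standardized trait scores are grouped into two archetypes with k-means; the trait columns and synthetic scores stand in for the study's survey measures.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
traits = ["self_efficacy", "bricolage", "risk_propensity", "psych_safety"]

# Placeholder scores for 241 team members on 1-7 Likert-style scales (illustrative only).
scores = pd.DataFrame(rng.uniform(1, 7, size=(241, len(traits))), columns=traits)

X = StandardScaler().fit_transform(scores)                     # standardize before clustering
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

scores["archetype"] = kmeans.labels_
print(scores.groupby("archetype")[traits].mean().round(2))     # trait profile of each archetype
```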
In the topic-sensitive theory of the logic of imagination due to Berto [3], the topic of the imaginative output must be contained within the imaginative input. That is, imaginative episodes can never expand what they are about. We argue, with Badura [2], that this constraint is implausible from a psychological point of view, and it wrongly predicts the falsehood of true reports of imagination. Thus the constraint should be relaxed; but how? A number of direct approaches to relaxing the controversial content-inclusion constraint are explored in this paper. The core idea is to consider adding an expansion operator to the mereology of topics. The logic that results depends on the formal constraints placed on topic expansion, the choice of which is subject to philosophical dispute. The first semantics we explore is a topological approach using a closure operator, and we show that the resulting logic is the same as Berto’s own system. The second approach uses an inclusive and monotone increasing operator, and we give a sound and complete axiomatization for its logic. The third approach uses an inclusive and additive operator, and we show that the associated logic is strictly weaker than the previous two systems, and that additivity is not definable in the language. The latter result suggests that more involved techniques or a more expressive language are required for a complete axiomatization of the system, which is left as an open question. All three systems are simple tweaks on Berto’s system in that the language remains propositional, and the underlying theory of topics is unchanged.
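For orientation, writing $\le$ for topic parthood, $\oplus$ for topic fusion, and $e$ for the expansion operator, the conditions in play can be glossed roughly as follows (this rendering relies on the standard definitions of these properties and is an editorial gloss, not a quotation of the paper):

$$
\begin{aligned}
&\text{inclusivity:} && x \le e(x)\\
&\text{monotonicity:} && x \le y \;\Rightarrow\; e(x) \le e(y)\\
&\text{idempotence:} && e(e(x)) = e(x)\\
&\text{additivity:} && e(x \oplus y) = e(x) \oplus e(y)
\end{aligned}
$$

On one standard reading, a closure operator satisfies at least inclusivity, monotonicity, and idempotence; the second semantics keeps inclusivity and monotonicity without idempotence; and the third pairs inclusivity with additivity, which itself entails monotonicity when $\oplus$ is a join.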
Understanding the complex dynamics of climate patterns under different anthropogenic emissions scenarios is crucial for predicting future environmental conditions and formulating sustainable policies. Using Dynamic Mode Decomposition with control (DMDc), we analyze surface air temperature patterns from climate simulations to elucidate the effects of various climate-forcing agents. This improves upon previous DMD-based methods by including forcing information as a control variable. Our study identifies both common climate patterns, like the North Atlantic Oscillation and El Niño Southern Oscillation, and distinct impacts of aerosol and carbon emissions. We show that these emissions’ effects vary with climate scenarios, particularly under conditions of higher radiative forcing. Our findings confirm DMDc’s utility in climate analysis, highlighting its role in extracting modes of variability from surface air temperature while controlling for emissions contributions and exposing trends in these spatial patterns as forcing scenarios change.
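For reference, DMDc posits an approximately linear evolution of the state snapshots $x_k$ (here, surface air temperature fields) driven by control inputs $u_k$ (here, the emissions and forcing agents); this is the standard DMDc form rather than the paper's specific discretization:

$$
x_{k+1} \approx A\,x_k + B\,u_k ,
$$

where $A$ and $B$ are estimated jointly by least squares from snapshot matrices, the eigenvectors of $A$ give the spatial modes of variability with their eigenvalues encoding frequencies and growth rates, and $B$ isolates the contribution of the forcing inputs, which is what allows the modes to be extracted while controlling for emissions.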
This article considers the intersecting of remembering and imagining vis-à-vis individual and cultural amnesia. It focuses on two artists’ films, Shona Illingworth’s video installation Time Present (2016) and Trinh T. Minh-ha’s film Forgetting Vietnam (2015). Time Present portrays the experience of an individual living with amnesia and further relates it to the immobility that denotes the cultural representation of the island of St Kilda (Outer Hebrides). Forgetting Vietnam questions the problematic legacy of the Vietnam War and its recollection by bridging personal and shared experiences through a portrait of Vietnam itself. Both Illingworth and Trinh use the filmic features of frames and movement to convey the emotional and affective resonances of the experiences and places presented, generating the possibility of presence. This article closely examines Time Present and Forgetting Vietnam with a focus on the films’ respective structures and thematic developments, and reads them as suggesting how remembering and imagining intersect culturally, and how that intersection holds potential for engaging with absence and silenced histories through decentralized approaches.
Forests play a crucial role in the Earth’s system processes and provide a suite of social and economic ecosystem services, but are significantly impacted by human activities, leading to a pronounced disruption of the equilibrium within ecosystems. Advancing forest monitoring worldwide offers advantages in mitigating human impacts and enhancing our comprehension of forest composition, alongside the effects of climate change. While statistical modeling has traditionally found applications in forest biology, recent strides in machine learning and computer vision have reached important milestones using remote sensing data, such as tree species identification, tree crown segmentation, and forest biomass assessments. For this, open-access data remain essential for improving such data-driven algorithms and methodologies. Here, we provide a comprehensive and extensive overview of 86 open-access forest datasets across spatial scales, encompassing inventories, ground-based, aerial-based, and satellite-based recordings, and country or world maps. These datasets are grouped in OpenForest, a dynamic catalog open to contributions that strives to reference all available open-access forest datasets. Moreover, in the context of these datasets, we aim to inspire research in machine learning applied to forest biology by establishing connections between contemporary topics, perspectives, and challenges inherent in both domains. We hope to encourage collaborations among scientists, fostering the sharing and exploration of diverse datasets through the application of machine learning methods for large-scale forest monitoring. OpenForest is available at the following URL: https://github.com/RolnickLab/OpenForest.
Data for Policy (dataforpolicy.org), a trans-disciplinary community of research and practice, has emerged around the application and evaluation of data technologies and analytics for policy and governance. Research in this area has involved cross-sector collaborations, but the areas of emphasis have previously been unclear. Within the Data for Policy framework of six focus areas, this report offers a landscape review of Focus Area 2: Technologies and Analytics. Taking stock of recent advancements and challenges can help shape research priorities for this community. We highlight four commonly used technologies for prediction and inference that leverage datasets from the digital environment: machine learning (ML) and artificial intelligence systems, the internet-of-things, digital twins, and distributed ledger systems. We review innovations in research evaluation and discuss future directions for policy decision-making.
Stochastic generators are essential to produce synthetic realizations that preserve target statistical properties. We propose GenFormer, a stochastic generator for spatio-temporal multivariate stochastic processes. It is constructed using a Transformer-based deep learning model that learns a mapping between a Markov state sequence and time series values. The synthetic data generated by the GenFormer model preserve the target marginal distributions and approximately capture other desired statistical properties even in challenging applications involving a large number of spatial locations and a long simulation horizon. The GenFormer model is applied to simulate synthetic wind speed data at various stations in Florida to calculate exceedance probabilities for risk management.
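A highly simplified sketch of the kind of mapping described, a Transformer encoder that converts a discrete Markov state sequence into multivariate time-series values, is given below. The layer sizes, the numbers of states and stations, and the causal-masking choice are invented for the sketch; this is not the GenFormer architecture itself, and training against observed series is omitted.

```python
import torch
import torch.nn as nn

class StateToSeries(nn.Module):
    """Map a sequence of discrete Markov states to multivariate time-series values."""

    def __init__(self, n_states=16, n_stations=8, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(n_states, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, n_stations)      # one value per spatial location

    def forward(self, states):                           # states: (batch, seq_len) integer tensor
        h = self.embed(states)
        mask = nn.Transformer.generate_square_subsequent_mask(states.size(1))
        h = self.encoder(h, mask=mask)                   # causal mask: each step attends only to the past
        return self.head(h)                              # (batch, seq_len, n_stations)

# Usage with placeholder data: 32 synthetic Markov state paths of length 48.
model = StateToSeries()
states = torch.randint(0, 16, (32, 48))
values = model(states)
print(values.shape)                                      # torch.Size([32, 48, 8])
```

In a full pipeline, many state paths would first be sampled from a fitted Markov chain and then mapped to synthetic series by a trained model of this kind.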
This study compares the design practices and performance of ChatGPT 4.0, a large language model (LLM), against graduate engineering students in a 48-h prototyping hackathon, based on a dataset comprising more than 100 prototypes. The LLM participated by instructing two human participants, who executed its instructions and provided objective feedback; it generated ideas autonomously and made all design decisions without human intervention. The LLM exhibited similar prototyping practices to human participants and finished second among six teams, successfully designing and providing building instructions for functional prototypes. The LLM’s concept generation capabilities were particularly strong. However, the LLM prematurely abandoned promising concepts when facing minor difficulties, added unnecessary complexity to designs, and experienced design fixation. Communication between the LLM and participants was challenging due to vague or unclear descriptions, and the LLM had difficulty maintaining continuity and relevance in answers. Based on these findings, six recommendations for implementing an LLM like ChatGPT in the design process are proposed, including leveraging it for ideation, ensuring human oversight for key decisions, implementing iterative feedback loops, prompting it to consider alternatives, and assigning specific and manageable tasks at a subsystem level.
Experience in teaching functional programming (FP) on a relational basis has led the author to focus on a graphical style of expression and reasoning in which a geometric construct shines: the (semi-)commutative square. In the classroom this is termed the “magic square” (MS), since virtually everything that we do in logic, FP, database modeling, formal semantics and so on fits in some MS geometry. The sides of each magic square are binary relations and the square itself is a comparison of two paths, each involving two sides. MSs compose and have a number of useful properties. Among several examples given in the paper ranging over different application domains, free-theorem MSs are shown to be particularly elegant and productive. Helped by a little bit of Galois connections, a generic, induction-free theory for $\mathsf{foldr}$ and $\mathsf{foldl}$ is given, showing in particular that $\mathsf{foldl}\;s \mathrel{=} \mathsf{foldr}\;(\mathsf{flip}\;s)$ holds under conditions milder than usually advocated.
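To see why side conditions matter here at all, the small Python check below (a stand-in for the paper's Haskell-style notation, with helper definitions invented for the sketch) compares $\mathsf{foldl}\;s$ with $\mathsf{foldr}\;(\mathsf{flip}\;s)$: the two agree for an associative, commutative step function such as addition but diverge for an order-sensitive one, so the identity can only hold under additional assumptions on $s$.

```python
from functools import reduce

def foldl(s, e, xs):
    return reduce(s, xs, e)          # ((e s x1) s x2) ... s xn

def foldr(s, e, xs):
    acc = e
    for x in reversed(xs):           # x1 s (x2 s (... (xn s e)))
        acc = s(x, acc)
    return acc

def flip(s):
    return lambda x, y: s(y, x)

xs = [1, 2, 3, 4]
add = lambda acc, x: acc + x         # associative and commutative
snoc = lambda acc, x: acc + [x]      # order-sensitive

print(foldl(add, 0, xs), foldr(flip(add), 0, xs))      # 10 10 -> identity holds
print(foldl(snoc, [], xs), foldr(flip(snoc), [], xs))  # [1, 2, 3, 4] vs [4, 3, 2, 1] -> identity fails
```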
We say that a Kripke model is a GL-model (Gödel and Löb model) if the accessibility relation $\prec$ is transitive and converse well-founded. We say that a Kripke model is a D-model if it is obtained by attaching infinitely many worlds $t_1, t_2, \ldots$, and $t_\omega$ to a world $t_0$ of a GL-model so that $t_0 \succ t_1 \succ t_2 \succ \cdots \succ t_\omega$. A non-normal modal logic $\mathbf{D}$, which was studied by Beklemishev [3], is characterized as follows. A formula $\varphi$ is a theorem of $\mathbf{D}$ if and only if $\varphi$ is true at $t_\omega$ in any D-model. $\mathbf{D}$ is an intermediate logic between the provability logics $\mathbf{GL}$ and $\mathbf{S}$. A Hilbert-style proof system for $\mathbf{D}$ is known, but no sequent calculus has been available. In this paper, we establish two sequent calculi for $\mathbf{D}$ and show the cut-elimination theorem. We also introduce new Hilbert-style systems for $\mathbf{D}$ by interpreting the sequent calculi. Moreover, we show that D-models can be defined using an arbitrary limit ordinal as well as $\omega$. Finally, we show a general result as follows. Let $X$ and $X^+$ be arbitrary modal logics. If the relationship between the semantics of $X$ and the semantics of $X^+$ is equal to that of $\mathbf{GL}$ and $\mathbf{D}$, then $X^+$ can be axiomatized based on $X$ in the same way as the new axiomatization of $\mathbf{D}$ based on $\mathbf{GL}$.