Global path planning using roadmap (RM) path-planning methods, including the Voronoi diagram (VD), rapidly exploring random trees (RRT), and the probabilistic roadmap (PRM), has gained popularity over the years in robotics. These global path-planning methods are usually combined with other path-planning techniques to achieve collision-free robot control to a specified destination. However, it is unclear which of these methods best computes an efficient path in terms of path length, computation time, path safety, and consistency of path computation. This article reviews these methods and adopts a comparative research methodology to determine their efficiency in terms of path optimality, safety, consistency, and computation time. A hundred maps of different complexities, with obstacle occupancy rates ranging from 50.95% to 78.42%, were used to evaluate the performance of the RM path-planning methods. Each method demonstrated unique strengths and limitations. The study provides critical insights into their relative performance, highlighting application-specific recommendations for selecting the most suitable RM method. These findings contribute to advancing robot path-planning techniques by offering a detailed evaluation of widely adopted methods.
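For illustration, the sketch below implements one of the compared methods, a basic RRT, on a synthetic 2D map; the step size, iteration budget, and the single circular obstacle are assumptions for the example, not settings from the study.

```python
# Minimal RRT sketch on a synthetic 2D map (illustrative only; step size,
# iteration budget, and the obstacle are assumptions, not values from the study).
import math, random

random.seed(0)
WIDTH, HEIGHT, STEP, MAX_ITERS = 100.0, 100.0, 2.0, 5000
START, GOAL, GOAL_TOL = (5.0, 5.0), (95.0, 95.0), 2.0

def collision_free(p):
    # A single circular obstacle standing in for the map's occupied cells.
    return math.hypot(p[0] - 50.0, p[1] - 50.0) > 20.0

def steer(p_from, p_to):
    d = math.hypot(p_to[0] - p_from[0], p_to[1] - p_from[1])
    if d <= STEP:
        return p_to
    return (p_from[0] + STEP * (p_to[0] - p_from[0]) / d,
            p_from[1] + STEP * (p_to[1] - p_from[1]) / d)

nodes, parent = [START], {START: None}
for _ in range(MAX_ITERS):
    sample = (random.uniform(0, WIDTH), random.uniform(0, HEIGHT))
    nearest = min(nodes, key=lambda q: math.hypot(q[0] - sample[0], q[1] - sample[1]))
    new = steer(nearest, sample)
    if not collision_free(new):
        continue
    nodes.append(new)
    parent[new] = nearest
    if math.hypot(new[0] - GOAL[0], new[1] - GOAL[1]) < GOAL_TOL:
        path, node = [], new
        while node is not None:          # walk back to the start to recover the path
            path.append(node)
            node = parent[node]
        print(f"path found with {len(path)} waypoints after exploring {len(nodes)} nodes")
        break
else:
    print("no path found within the iteration budget")
```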
Some informal arguments are valid, others are invalid. A core application of logic is to tell us which is which by capturing these validity facts. Philosophers and logicians have explored how well a host of logics carry out this role, familiar examples being propositional, first-order and second-order logic. Since natural language and standard logics are countable, a natural question arises: is there a countable logic guaranteed to capture the validity patterns of any language fragment? That is, is there a countable omega-universal logic? Our article philosophically motivates this question, makes it precise, and then answers it. It is a self-contained, concise sequel to ‘Capturing Consequence’ by A.C. Paseau (RSL vol. 12, 2019).
The alignment of artificial intelligence (AI) systems with societal values and the public interest is a critical challenge in the field of AI ethics and governance. Traditional approaches, such as Reinforcement Learning with Human Feedback (RLHF) and Constitutional AI, often rely on pre-defined high-level ethical principles. This article critiques these conventional alignment frameworks through the philosophical perspectives of pragmatism and public interest theory, arguing that they are rigid and disconnected from practical impacts. It proposes an alternative alignment strategy that reverses the traditional logic, focusing on empirical evidence and the real-world effects of AI systems. By emphasizing practical outcomes and continuous adaptation, this pragmatic approach aims to ensure that AI technologies are developed according to principles derived from the observable impacts of their applications.
This study shows the impact of black carbon (BC) aerosol atmospheric rivers (AARs) on Antarctic sea ice retreat. We detect that a higher number of BC AARs arrived in the Antarctic region in 2019 than in 2018, due to increased anthropogenic wildfire activity in the Amazon. Our analyses suggest that the BC AARs led to a reduction in sea ice albedo, an increase in the amount of sunlight absorbed at the surface, and a significant reduction of sea ice over the Weddell, Ross Sea (Ross), and Indian Ocean (IO) regions in 2019. The Weddell region experienced the largest sea ice retreat (~33,000 km²) during the presence of BC AARs, compared to ~13,000 km² during non-BC days. We used a suite of data science techniques, including random forest, elastic net regression, matrix profile, canonical correlation, and causal discovery analyses, to discover the effects and validate them. Random forest, elastic net regression, and causal discovery analyses show that the shortwave upward radiative flux (the reflected sunlight), temperature, and longwave upward energy from the Earth are the most important features affecting sea ice extent. Canonical correlation analysis confirms that aerosol optical depth is negatively correlated with albedo, positively correlated with shortwave energy absorbed at the surface, and negatively correlated with sea ice extent. The relationship is stronger in 2019 than in 2018. This study also employs the matrix profile and the convolution operation of a convolutional neural network (CNN) to detect anomalous events in sea ice loss. These methods show that more anomalous melting events were detected over the Weddell and Ross regions.
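As an illustration of one of the listed techniques, the sketch below ranks candidate drivers of sea ice extent with a random forest; the feature names and synthetic data are assumptions standing in for the study's satellite and reanalysis inputs.

```python
# Illustrative random-forest feature ranking for a sea-ice-extent response.
# The data and coefficients below are synthetic, not the study's inputs.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
features = ["sw_up_flux", "temperature", "lw_up_flux", "aerosol_optical_depth"]
X = rng.normal(size=(n, len(features)))
# Toy response: extent driven mostly by reflected shortwave and temperature.
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2] + 0.1 * rng.normal(size=n)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
for name, imp in sorted(zip(features, rf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:>24s}: {imp:.3f}")
```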
Machine learning’s integration into reliability analysis holds substantial potential to ensure infrastructure safety. Despite the merits of their flexible tree structures and formulable expressions, random forest (RF) and evolutionary polynomial regression (EPR) cannot contribute to reliability-based design because they lack uncertainty quantification (UQ), which hampers their broader application. This study introduces quantile regression and variational inference (VI), tailored to RF and EPR respectively for UQ, and explores their capability in identifying material indices. Specifically, the quantile-based RF (QRF) quantifies uncertainty by weighting the distribution of observations in leaf nodes, while the VI-based EPR (VIEPR) works by approximating the parametric posterior distribution of the coefficients in polynomials. The compression index of clays is taken as an exemplar to develop the models, which are compared in terms of accuracy and reliability, and also against their deterministic counterparts. The results indicate that QRF outperforms VIEPR, exhibiting higher accuracy and confidence in UQ. In regions of sparse data, the predicted uncertainty grows as errors increase, demonstrating the validity of the UQ. The generalization ability of QRF is further verified on a new creep index database. The proposed uncertainty-incorporated modeling approaches accommodate diverse modeling preferences and hold significant promise across broad scientific computing domains.
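A minimal sketch of the QRF idea follows: prediction uncertainty is read from the distribution of training targets that share a leaf with the query point. The unweighted pooling used here is a simplification of leaf-node weighting, and the synthetic features merely stand in for a compression-index database.

```python
# Minimal quantile-based random forest (QRF) sketch with synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(300, 3))       # e.g. water content, void ratio, liquid limit (assumed)
y = X @ np.array([1.0, 0.5, 0.2]) + 0.1 * rng.normal(size=300)

rf = RandomForestRegressor(n_estimators=100, min_samples_leaf=5, random_state=0).fit(X, y)
train_leaves = rf.apply(X)                 # (n_samples, n_trees) leaf indices of the training set

def predict_quantiles(x, quantiles=(0.05, 0.5, 0.95)):
    query_leaves = rf.apply(x.reshape(1, -1))[0]
    # Pool training targets that fall in the same leaf as the query, tree by tree.
    pooled = np.concatenate([y[train_leaves[:, t] == leaf]
                             for t, leaf in enumerate(query_leaves)])
    return np.quantile(pooled, quantiles)

lo, med, hi = predict_quantiles(np.array([0.4, 0.6, 0.3]))
print(f"5%={lo:.3f}  median={med:.3f}  95%={hi:.3f}")
```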
The increasing popularity of large language models has not only led to widespread use but has also brought various risks, including the potential for systematically spreading fake news. Consequently, the development of classification systems such as DetectGPT has become vital. These detectors are vulnerable to evasion techniques, as demonstrated in an experimental series: systematic changes to the generative model's temperature proved shallow-learning detectors to be the least reliable (Experiment 1). Fine-tuning the generative model via reinforcement learning circumvented BERT-based detectors (Experiment 2). Finally, rephrasing led to a >90% evasion of zero-shot detectors like DetectGPT, although the texts stayed highly similar to the original (Experiment 3). A comparison with existing work highlights the better performance of the presented methods. Possible implications for society and further research are discussed.
We consider the moments and the distribution of hitting times on the lollipop graph, which is the graph exhibiting the maximum expected hitting time among all graphs having the same number of nodes. We obtain recurrence relations for the moments of all orders and use these relations to analyze the asymptotic behavior of the hitting time distribution as the number of nodes tends to infinity.
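A quick Monte Carlo check of the expected hitting time on a lollipop graph can be sketched as follows; the graph sizes and the start/target nodes are illustrative choices.

```python
# Monte Carlo sketch of the hitting time on a lollipop graph: a complete graph
# K_m joined to a path of n nodes. Sizes and endpoints are illustrative.
import random
import networkx as nx

random.seed(0)
m, n = 10, 10
G = nx.lollipop_graph(m, n)          # nodes 0..m-1 form K_m; m..m+n-1 form the path
start, target = 0, m + n - 1         # from inside the clique to the far end of the path

def hitting_time(G, start, target):
    node, steps = start, 0
    while node != target:
        node = random.choice(list(G.neighbors(node)))
        steps += 1
    return steps

samples = [hitting_time(G, start, target) for _ in range(1000)]
print("estimated expected hitting time:", sum(samples) / len(samples))
```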
This article proposes Bayesian adaptive trials (BATs) as both an efficient method to conduct trials and a unifying framework for the evaluation of social policy interventions, addressing the limitations inherent in traditional methods, such as randomized controlled trials. Recognizing the crucial need for evidence-based approaches in public policy, the proposed approach aims to lower barriers to the adoption of evidence-based methods and to align evaluation processes more closely with the dynamic nature of policy cycles. BATs, grounded in decision theory, offer a dynamic, “learning as we go” approach, enabling the integration of diverse information types and facilitating a continuous, iterative process of policy evaluation. BATs’ adaptive nature is particularly advantageous in policy settings, allowing for more timely and context-sensitive decisions. Moreover, BATs’ ability to value potential future information sources positions them as an optimal strategy for sequential data acquisition during policy implementation. While acknowledging the assumptions and models intrinsic to BATs, such as prior distributions and likelihood functions, this article argues that these are advantageous for decision-makers in social policy, effectively merging the best features of various methodologies.
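As a toy illustration of the “learning as we go” mechanism, the sketch below runs a two-arm Bayesian adaptive trial with Beta priors and Thompson-sampling allocation; the outcome rates and trial size are assumptions, not figures from the article.

```python
# Toy Bayesian adaptive trial: two policy arms with binary outcomes,
# Beta(1, 1) priors, and Thompson-sampling allocation.
import numpy as np

rng = np.random.default_rng(0)
true_rates = [0.45, 0.55]                 # unknown to the trial (assumed for the demo)
alpha, beta = np.ones(2), np.ones(2)      # Beta prior parameters for each arm

for _ in range(500):                      # one participant allocated per step
    arm = int(np.argmax(rng.beta(alpha, beta)))   # Thompson sampling
    outcome = rng.random() < true_rates[arm]
    alpha[arm] += outcome                 # update the chosen arm's posterior
    beta[arm] += 1 - outcome

print("posterior mean success rates:", np.round(alpha / (alpha + beta), 3))
print("allocations per arm:", (alpha + beta - 2).astype(int))
```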
This paper demonstrates how learning the structure of a Bayesian network, often used to predict and represent causal pathways, can be used to inform policy decision-making.
We show that Bayesian networks are a rigorous and interpretable representation of interconnected factors that affect the complex environment in which policy decisions are made. Furthermore, Bayesian structure learning differentiates between proximal or immediate factors and upstream or root causes, offering a comprehensive set of potential causal pathways leading to specific outcomes.
We show how these causal pathways can provide critical insights into the impact of a policy intervention on an outcome. Central to our approach is the integration of causal discovery within a Bayesian framework, which considers the relative likelihood of possible causal pathways rather than only the most probable pathway.
We argue this is an essential part of causal discovery in policy making because the complexity of the decision landscape inevitably means that there are many nearly equally probable causal pathways. While this methodology is broadly applicable across various policy domains, we demonstrate its value within the context of educational policy in Australia. Here, we identify pathways influencing educational outcomes, such as student attendance, and examine the effects of social disadvantage on these pathways. We demonstrate the methodology’s performance using synthetic data and its usefulness by applying it to real-world data. Our findings in the real-world example highlight the usefulness of Bayesian networks as a policy decision tool and show how data science techniques can be used for practical policy development.
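A minimal sketch of the score-based idea is shown below: a handful of candidate DAGs over a toy disadvantage/attendance/outcome system are scored with a Gaussian BIC and converted into relative weights, echoing the comparison of near-equally probable causal pathways. The data, candidate structures, and variable names are synthetic assumptions, not the study's models.

```python
# Score-based comparison of candidate causal structures on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
disadvantage = rng.normal(size=n)
attendance = -0.8 * disadvantage + rng.normal(scale=0.5, size=n)
outcome = 0.9 * attendance - 0.3 * disadvantage + rng.normal(scale=0.5, size=n)
data = {"disadvantage": disadvantage, "attendance": attendance, "outcome": outcome}

def node_bic(child, parents):
    # Gaussian BIC contribution of one node given its parents (linear regression).
    y = data[child]
    X = np.column_stack([data[p] for p in parents] + [np.ones(n)]) if parents else np.ones((n, 1))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = (y - X @ beta).var()
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return loglik - 0.5 * X.shape[1] * np.log(n)

candidates = {
    "dis -> att -> out, dis -> out": {"disadvantage": [], "attendance": ["disadvantage"],
                                      "outcome": ["attendance", "disadvantage"]},
    "dis -> att -> out":             {"disadvantage": [], "attendance": ["disadvantage"],
                                      "outcome": ["attendance"]},
    "dis -> out only":               {"disadvantage": [], "attendance": [],
                                      "outcome": ["disadvantage"]},
}
scores = {name: sum(node_bic(c, ps) for c, ps in dag.items()) for name, dag in candidates.items()}
weights = np.exp(np.array(list(scores.values())) - max(scores.values()))
weights /= weights.sum()
for name, w in zip(scores, weights):
    print(f"{name:<32s} relative weight {w:.3f}")
```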
Bayesian model updating (BMU) is frequently used in Structural Health Monitoring to investigate a structure’s dynamic behavior under various operational and environmental loadings for decision-making, e.g., to determine whether maintenance is required. Data collected by sensors are used to update the prior of some physics-based model’s latent parameters to yield the posterior. The choice of prior may significantly affect posterior predictions and subsequent decision-making, especially in the case, typical of engineering applications, where little informative data is available. Therefore, understanding how the choice of prior affects the posterior prediction is of great interest. In this article, a robust Bayesian inference technique evaluates the optimal and worst-case priors in the vicinity of a chosen nominal prior, together with their corresponding posteriors. The technique derives an interacting Wasserstein gradient flow that minimizes the KL divergence between the posterior and its approximation with respect to that approximation, while maximizing or minimizing it with respect to the prior. Two numerical case studies are used to showcase the proposed algorithm: a double-banana posterior and a double-beam structure. Optimal and worst-case priors are modeled by specifying an ambiguity set containing any distribution at a statistical distance from the nominal prior less than or equal to a given radius. The resulting posteriors may be used to yield lower and upper bounds on subsequent calculations of an engineering metric (e.g., failure probability) used for decision-making. If the metric used for decision-making is not sensitive to the resulting posteriors, it may be assumed that the decisions taken are robust to prior uncertainty.
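The brute-force sketch below illustrates the ambiguity-set idea on a one-dimensional parameter, sweeping mean-shifted priors inside a KL ball and bounding a failure probability. This is not the article's Wasserstein-gradient-flow algorithm, and the likelihood, failure threshold, and radius are illustrative assumptions.

```python
# Brute-force robust-Bayes sketch: bound a failure probability over a KL ambiguity
# set of priors around a nominal Gaussian prior (grid-based, 1D parameter).
import numpy as np

grid = np.linspace(-5, 5, 2001)
obs, obs_sigma = 1.2, 1.0                       # one noisy measurement of the latent parameter
lik = np.exp(-0.5 * ((obs - grid) / obs_sigma) ** 2)

mu0, sigma0, radius = 0.0, 1.0, 0.1             # nominal prior N(0, 1), KL radius 0.1
failure = grid > 2.0                            # event whose probability drives the decision

bounds = []
for delta in np.linspace(-2.0, 2.0, 81):        # mean-shifted candidate priors
    if delta ** 2 / (2 * sigma0 ** 2) > radius:  # Gaussian KL(N(mu0+delta, s0) || N(mu0, s0))
        continue
    prior = np.exp(-0.5 * ((grid - mu0 - delta) / sigma0) ** 2)
    post = prior * lik
    post /= np.trapz(post, grid)                # normalize the posterior on the grid
    bounds.append(np.trapz(post * failure, grid))

print(f"failure probability in [{min(bounds):.4f}, {max(bounds):.4f}] over the ambiguity set")
```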
Robotic manipulation inherently involves contact with objects for task accomplishment. Traditional motion planning techniques, while successful in collision-free scenarios, may not handle manipulation tasks effectively because they typically avoid contact. Although geometric constraints have been introduced into classical motion planners for tasks that involve interactions, they still lack the capability to fully incorporate contact. In addition, these planning methods generally do not operate on objects that cannot be directly controlled. In this work, building on a recently proposed framework for energy-based quasi-static manipulation, we propose an approach to manipulation planning that adapts a numerical continuation algorithm to compute the equilibrium manifold (EM), which is implicitly derived from physical laws. By defining a manipulation potential energy function that captures interaction and natural potentials, the numerical continuation approach is integrated with adaptive ordinary differential equations that converge to the EM. This allows discretizing the implicit manifold as a graph with a finite set of equilibria as nodes, interconnected by weighted edges defined via a haptic metric. The proposed framework is evaluated on an inverted pendulum task, where the explored branch of the manifold demonstrates the approach’s effectiveness.
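As a simplified stand-in for the continuation step, the sketch below traces the equilibrium manifold of a torsion-actuated inverted pendulum by natural-parameter continuation with Newton corrections; the potential function, constants, and step sizes are assumptions for the example rather than the article's adaptive scheme.

```python
# Natural-parameter continuation sketch: trace equilibria of
# V(theta, u) = m*g*l*cos(theta) + 0.5*k*(theta - u)^2 as the commanded rest
# angle u varies (theta measured from the upright position).
import numpy as np

m, g, l, k = 1.0, 9.81, 1.0, 15.0

def residual(theta, u):      # dV/dtheta = 0 defines the equilibrium manifold
    return -m * g * l * np.sin(theta) + k * (theta - u)

def stiffness(theta):        # d2V/dtheta2, used for Newton corrections
    return -m * g * l * np.cos(theta) + k

theta = 0.0                  # start on the manifold at the upright equilibrium (u = 0)
branch = []
for u in np.linspace(0.0, 1.0, 101):      # continuation parameter steps
    for _ in range(20):                   # Newton corrector from the previous equilibrium
        step = residual(theta, u) / stiffness(theta)
        theta -= step
        if abs(step) < 1e-10:
            break
    branch.append((u, theta))

print("explored branch: u=0 -> theta=%.3f rad, u=1 -> theta=%.3f rad"
      % (branch[0][1], branch[-1][1]))
```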
The ideation phase of product design is critical, as decisions made here influence the rest of the product’s lifecycle. Usually, early preliminary designs in engineering are created with pen and paper, which is incompatible with the subsequent digital design process. In an effort to find a modeling tool for early designs that provides the creative flexibility of freehand sketching as well as the further processability of digital models, this research investigates natural modeling in virtual reality (VR). To this end, a VR modeling method allowing the intuitive creation of preliminary designs as simplified computer-aided design (CAD) models is presented. The main contribution is the evaluation of this natural VR modeling method against freehand sketching in an extensive user study.
Ambient air pollution remains a global challenge, with adverse impacts on health and the environment. Addressing air pollution requires reliable data on pollutant concentrations, which form the foundation for interventions aimed at improving air quality. However, in many regions, including the United Kingdom, air pollution monitoring networks are characterized by spatial sparsity, heterogeneous placement, and frequent temporal data gaps, often due to issues such as power outages. We introduce a scalable data-driven supervised machine learning model framework designed to address temporal and spatial data gaps by filling missing measurements within the United Kingdom. The machine learning framework used is LightGBM, a gradient-boosting algorithm based on decision trees, chosen for efficient and scalable modeling. This approach provides a comprehensive dataset for England throughout 2018 at a 1 km² hourly resolution. Leveraging machine learning techniques and real-world data from the sparsely distributed monitoring stations, we generate 355,827 synthetic monitoring stations across the study area. Validation was conducted to assess the model’s performance in forecasting, estimating missing locations, and capturing peak concentrations. The resulting dataset is of particular interest to a diverse range of stakeholders engaged in downstream assessments supported by outdoor air pollution concentration data for nitrogen dioxide (NO2), ozone (O3), particulate matter with a diameter of 10 μm or less (PM10), particulate matter with a diameter of 2.5 μm or less (PM2.5), and sulphur dioxide (SO2), at a higher resolution than was previously possible.
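A minimal sketch of the gap-filling step with LightGBM is given below; the covariates, synthetic training data, and query location are placeholders for the framework's real inputs.

```python
# Minimal LightGBM gap-filling sketch: train on observed station records and
# predict a concentration at an unmonitored grid cell / hour. All data are
# synthetic placeholders for the framework's real covariates.
import numpy as np
from lightgbm import LGBMRegressor

rng = np.random.default_rng(0)
n_obs = 5000
# Illustrative covariates: easting, northing, hour of day, day of year, wind speed.
X = np.column_stack([
    rng.uniform(0, 600, n_obs),      # easting (km)
    rng.uniform(0, 1000, n_obs),     # northing (km)
    rng.integers(0, 24, n_obs),      # hour of day
    rng.integers(1, 366, n_obs),     # day of year
    rng.uniform(0, 15, n_obs),       # wind speed (m/s)
])
# Toy NO2 signal: a diurnal cycle diluted by wind, plus noise.
no2 = 30 + 10 * np.sin(2 * np.pi * X[:, 2] / 24) - 1.2 * X[:, 4] + rng.normal(0, 3, n_obs)

model = LGBMRegressor(n_estimators=300, learning_rate=0.05).fit(X, no2)

# Predict a missing hour at a hypothetical unmonitored location.
query = np.array([[250.0, 400.0, 8, 120, 3.5]])
print("estimated NO2 (ug/m3):", float(model.predict(query)[0]))
```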
Resource management skills are critical to success during new product development processes. Design processes are ambiguous and complex, and designers often face a scarcity of resources, such as limited financial capital, time, or human resources, that limits their ability to move the new product development process forward. A team’s ability to use resources effectively may determine its likelihood of success during new product development processes. Technology-based startup teams represent an authentic, unique subset of new product development teams that are trying to bring innovative technologies to market. While prior work has identified salient traits of team members that affect a team’s trajectory, little work has investigated how these traits may interact with each other and how they affect an individual’s ability to manage resources. Using a mixed-methods approach, we leveraged data from 241 startup team members to study the relationship between individual traits, team characteristics, and resource management skills. A k-means cluster analysis revealed two distinct archetypes of startup team members, differentiated by (1) self-efficacy, (2) bricolage, (3) risk propensity, and (4) perceptions of psychological safety. Team members with higher levels of these traits exhibited greater resource management skills.
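The clustering step can be sketched as follows with k-means (k = 2) on standardized trait scores; the data are synthetic stand-ins, not the study's 241 survey responses.

```python
# k-means sketch: recover two trait archetypes from synthetic survey-style data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
traits = ["self_efficacy", "bricolage", "risk_propensity", "psych_safety"]
# Two synthetic archetypes: higher-trait and lower-trait members (assumed means).
high = rng.normal(loc=4.0, scale=0.5, size=(120, 4))
low = rng.normal(loc=3.0, scale=0.5, size=(121, 4))
X = StandardScaler().fit_transform(np.vstack([high, low]))

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for c in range(2):
    centroid = X[km.labels_ == c].mean(axis=0)
    print(f"cluster {c}:", {t: round(float(v), 2) for t, v in zip(traits, centroid)})
```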
In the topic-sensitive theory of the logic of imagination due to Berto [3], the topic of the imaginative output must be contained within the imaginative input. That is, imaginative episodes can never expand what they are about. We argue, with Badura [2], that this constraint is implausible from a psychological point of view and that it wrongly predicts the falsehood of true reports of imagination. Thus the constraint should be relaxed; but how? A number of direct approaches to relaxing the controversial content-inclusion constraint are explored in this paper. The core idea is to add an expansion operator to the mereology of topics. The logic that results depends on the formal constraints placed on topic expansion, the choice of which is subject to philosophical dispute. The first semantics we explore is a topological approach using a closure operator, and we show that the resulting logic is the same as Berto’s own system. The second approach uses an inclusive and monotone increasing operator, and we give a sound and complete axiomatization for its logic. The third approach uses an inclusive and additive operator, and we show that the associated logic is strictly weaker than the previous two systems and that additivity is not definable in the language. The latter result suggests that more involved techniques or a more expressive language are required for a complete axiomatization of the system, which is left as an open question. All three systems are simple tweaks on Berto’s system in that the language remains propositional and the underlying theory of topics is unchanged.
Understanding the complex dynamics of climate patterns under different anthropogenic emissions scenarios is crucial for predicting future environmental conditions and formulating sustainable policies. Using Dynamic Mode Decomposition with control (DMDc), we analyze surface air temperature patterns from climate simulations to elucidate the effects of various climate-forcing agents. This improves upon previous DMD-based methods by including forcing information as a control variable. Our study identifies both common climate patterns, like the North Atlantic Oscillation and El Niño Southern Oscillation, and distinct impacts of aerosol and carbon emissions. We show that these emissions’ effects vary with climate scenarios, particularly under conditions of higher radiative forcing. Our findings confirm DMDc’s utility in climate analysis, highlighting its role in extracting modes of variability from surface air temperature while controlling for emissions contributions and exposing trends in these spatial patterns as forcing scenarios change.
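A compact DMDc sketch on a toy linear system is given below: the state matrix A and forcing matrix B are recovered from snapshot and control data, and the eigenvalues of A give the dynamic modes. The system dimensions, dynamics, and forcing series are illustrative, not the climate-model fields analyzed in the article.

```python
# Dynamic Mode Decomposition with control (DMDc) on a synthetic linear system:
# x_{k+1} = A x_k + B u_k. Recover [A B] = X' pinv([X; U]) from snapshots.
import numpy as np

rng = np.random.default_rng(0)
n_state, n_ctrl, n_steps = 6, 1, 400

# Toy system with a slow oscillation in the first two state components.
theta = 0.05
A_true = np.eye(n_state) * 0.98
A_true[:2, :2] = 0.99 * np.array([[np.cos(theta), -np.sin(theta)],
                                  [np.sin(theta),  np.cos(theta)]])
B_true = rng.normal(scale=0.1, size=(n_state, n_ctrl))

x = rng.normal(size=n_state)
states, controls = [x], []
for k in range(n_steps):
    u = np.array([np.sin(0.01 * k)])        # stand-in for an emissions forcing series
    controls.append(u)
    x = A_true @ x + B_true @ u + 1e-3 * rng.normal(size=n_state)
    states.append(x)

X = np.array(states[:-1]).T                  # states at times 0..m-1
Xp = np.array(states[1:]).T                  # states at times 1..m
U = np.array(controls).T                     # controls at times 0..m-1

AB = Xp @ np.linalg.pinv(np.vstack([X, U]))  # DMDc least-squares fit
A_est, B_est = AB[:, :n_state], AB[:, n_state:]
print("eigenvalues of the recovered A:", np.round(np.linalg.eigvals(A_est), 3))
```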