Achieving net-zero carbon emissions by 2050 necessitates the integration of substantial wind power capacity into national power grids. However, the inherent variability and uncertainty of wind energy present significant challenges for grid operators, particularly in maintaining system stability and balance. Accurate short-term forecasting of wind power is therefore essential. This article introduces WindDragon, a novel Automated Deep Learning regression framework for regional wind power forecasting over short-term horizons (1–6 h). Specifically designed to process wind speed maps, WindDragon automatically creates Deep Learning models leveraging Numerical Weather Prediction (NWP) data to deliver state-of-the-art wind power forecasts. We conduct extensive evaluations on data from France for the year 2020, benchmarking WindDragon against a diverse set of baselines, including both deep learning and traditional methods. The results demonstrate that WindDragon achieves substantial improvements in forecast accuracy over the considered baselines, highlighting its potential for enhancing grid reliability in the face of increased wind power integration.
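WindDragon automates the search over such architectures; purely as a hedged illustration of the underlying regression task (an NWP wind speed map in, a scalar regional power value out), a minimal fixed-architecture sketch in PyTorch might look as follows. All shapes, layer choices, and names here are illustrative assumptions, not the paper's model.

```python
# Minimal sketch of the regression task WindDragon automates: map an NWP
# wind speed map (2D grid) to a scalar regional power forecast.
# Architecture, shapes, and names are illustrative assumptions only.
import torch
import torch.nn as nn

class WindMapRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # collapse the spatial grid
        )
        self.head = nn.Linear(32, 1)          # scalar power forecast

    def forward(self, wind_speed_map: torch.Tensor) -> torch.Tensor:
        # wind_speed_map: (batch, 1, H, W), e.g. the NWP forecast for t+1h
        z = self.features(wind_speed_map).flatten(1)
        return self.head(z).squeeze(-1)

model = WindMapRegressor()
dummy_maps = torch.randn(8, 1, 64, 64)        # a batch of synthetic maps
print(model(dummy_maps).shape)                # torch.Size([8])
```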
This is the first book to revisit the theory of rewriting in the context of strict higher categories, through the unified approach provided by polygraphs, and to put it in the context of homotopical algebra. The first half explores the theory of polygraphs in low dimensions and its applications to the computation of the coherence of algebraic structures. This part is illustrated with algorithmic computations on algebraic structures, and its only prerequisite is basic category theory. The theory is introduced step-by-step, with detailed proofs. The second half introduces and studies the general notion of n-polygraph, before addressing the homotopy theory of these polygraphs. It constructs the folk model structure on the category of strict higher categories and exhibits polygraphs as cofibrant objects. This allows the formulation of higher-dimensional generalizations of the coherence results developed in the first half. Graduate students and researchers in mathematics and computer science will find this work invaluable.
English as a foreign language (EFL) students are increasingly learning English in extramural digital settings (informal digital learning of English; IDLE). Previous research has investigated the antecedents of IDLE engagement, focusing on basic psychological needs (BPNs) in classroom settings. However, little attention has been given to the role of BPNs in digital settings, where digital-native EFL students often fulfil their psychological needs. This study explores the relationship between two core BPNs – competence and relatedness – in both classroom and digital settings and IDLE engagement among 226 Kazakhstani university EFL students. Hierarchical multiple regression analyses indicate that, in the classroom, students who perceive themselves as more competent are more likely to engage in receptive and productive IDLE. Also, a higher sense of in-class relatedness strengthens the positive relationship between in-class competence and productive IDLE. In digital settings, students who perceive themselves as more competent are more likely to engage in receptive IDLE, while competence alone does not directly lead to productive IDLE. A higher sense of relatedness positively moderates these links, amplifying the connection between competence and engagement in both receptive and productive IDLE. These findings suggest that educators can enhance EFL students’ IDLE engagement by designing and recommending activities that foster competence and a sense of community in both classroom and digital settings.
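As a hedged sketch of the analysis style described (hierarchical multiple regression with a competence × relatedness interaction testing moderation), the fragment below uses statsmodels on synthetic data; all variable names, coefficients, and data are hypothetical, not the study's dataset.

```python
# Hierarchical regression sketch: step 1 enters main effects, step 2 adds
# the competence x relatedness interaction (the moderation effect).
# Columns and effect sizes are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 226  # sample size reported in the abstract
df = pd.DataFrame({
    "competence": rng.normal(size=n),
    "relatedness": rng.normal(size=n),
})
df["productive_idle"] = (0.4 * df.competence
                         + 0.2 * df.competence * df.relatedness
                         + rng.normal(scale=0.5, size=n))

step1 = smf.ols("productive_idle ~ competence + relatedness", df).fit()
step2 = smf.ols("productive_idle ~ competence * relatedness", df).fit()
print(f"R² step 1: {step1.rsquared:.3f}, step 2: {step2.rsquared:.3f}")
print(step2.params)  # competence:relatedness is the moderation term
```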
Robotic lower limb exoskeletons are wearable devices designed to augment human motor functions and enhance physical capabilities, adopted mostly in healthcare and rehabilitation. The field is strongly dominated by rigid exoskeletons driven by electromagnetic actuators comprising electrical motors, gearboxes, and cylinders. This review focuses on the design and specifications of the actuation systems of lower limb exoskeletons, with the ultimate goal of providing reporting guidelines to allow for full reproducibility. For each paper, we assessed the quality and completeness of technical characteristics with two ad hoc rating scales for motors and reducers; we extracted the main parameters of the actuation unit, and a quantitative analysis of the mechanical characteristics of the individual components was carried out considering the exoskeleton application. Overall, we observed a lack of detail in reporting on the actuation systems fitted to exoskeletons. To overcome this limitation, herein we conclude by proposing a data form and a checklist to provide researchers with a common approach in reporting the mechanical characteristics of the actuation unit of their lower limb exoskeletons. We believe that the convergence of the exoskeleton literature toward a clearer standardization of design and reporting will boost the development of this technology and its diffusion outside the laboratory.
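To make the proposed data form concrete, a machine-readable version might look like the sketch below; the fields are assumptions inferred from the abstract (motor and reducer parameters plus a simple completeness score), not the authors' actual checklist.

```python
# Hypothetical machine-readable data form for an exoskeleton actuation unit.
# Field names are plausible assumptions, not the paper's proposed form.
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class MotorSpec:
    model: str
    nominal_torque_nm: Optional[float] = None
    peak_torque_nm: Optional[float] = None
    nominal_speed_rpm: Optional[float] = None
    mass_kg: Optional[float] = None

@dataclass
class ReducerSpec:
    model: str
    gear_ratio: Optional[float] = None
    efficiency: Optional[float] = None
    mass_kg: Optional[float] = None

@dataclass
class ActuationUnit:
    joint: str                      # e.g. "knee", "hip", "ankle"
    motor: MotorSpec
    reducer: ReducerSpec

    def completeness(self) -> float:
        """Fraction of fields actually reported (a simple quality score)."""
        values = list(asdict(self.motor).values()) + list(asdict(self.reducer).values())
        return sum(v is not None for v in values) / len(values)

unit = ActuationUnit("knee", MotorSpec("EC-90", nominal_torque_nm=0.56),
                     ReducerSpec("CSD-25", gear_ratio=100))
print(f"reported fields: {unit.completeness():.0%}")
```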
Processing and extracting actionable information, such as fault or anomaly indicators originating from vibration telemetry, is both challenging and critical for an accurate assessment of mechanical system health and subsequent predictive maintenance. In the setting of predictive maintenance for populations of similar assets, the knowledge gained from any single asset should be leveraged to provide improved predictions across the entire population. In this paper, a novel approach to population-level health monitoring is presented, based on transfer learning. The new methodology is applied to monitor multiple rotating plant assets in a power generation scenario. The focus is on the detection of statistical anomalies as a means of identifying deviations from the typical operating regime from a time series of telemetry data. This is a challenging task because the machine is observed under different operating regimes. The proposed methodology can effectively transfer information across different assets, automatically identifying segments with common statistical characteristics and using them to enrich the training of the local supervised learning models. The proposed solution leads to a substantial reduction in mean square error relative to a baseline model.
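A hedged sketch of the population-level idea follows: segments from a peer asset whose summary statistics resemble the target asset's typical regime are pooled into the target's training data before fitting a simple detector. The matching rule and detector below are deliberately simple stand-ins, not the paper's method.

```python
# Toy transfer: keep peer-asset telemetry segments whose (mean, std) lie
# close to the target asset's typical regime, then train on the pooled data.
import numpy as np

def segment_stats(x, win=200):
    segs = [x[i:i + win] for i in range(0, len(x) - win + 1, win)]
    return segs, np.array([[s.mean(), s.std()] for s in segs])

rng = np.random.default_rng(1)
target = rng.normal(0.0, 1.0, 2_000)                 # telemetry, target asset
peer = np.concatenate([rng.normal(0.1, 1.0, 3_000),  # similar regime
                       rng.normal(5.0, 3.0, 3_000)]) # different regime

t_segs, t_stats = segment_stats(target)
p_segs, p_stats = segment_stats(peer)

# Keep only peer segments statistically close to the target's typical regime.
center = t_stats.mean(axis=0)
dist = np.linalg.norm(p_stats - center, axis=1)
shared = [s for s, d in zip(p_segs, dist) if d < 1.0]

train = np.concatenate(t_segs + shared)              # enriched training data
mu, sigma = train.mean(), train.std()
def is_anomaly(sample, k=4.0):
    return abs(sample - mu) > k * sigma              # simple z-score detector
print(len(shared), "peer segments transferred;", is_anomaly(12.0))
```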
The EU’s Common European Data Space (CEDS) aims to create a single market for data-sharing in Europe, build trust among stakeholders, uphold European values, and benefit society. However, if norms and social values are not considered, the values of the EU and the benefits for the common good of European society may be overlooked in favour of the economic benefits to organisations. We propose that the concept of “data commons” is relevant for defining openness versus enclosure of data in data spaces and is important when considering the balance and trade-off between individual (market) and collective (societal) benefits from data-sharing within the CEDS. Commons are open-access resources governed by a group, either formally by regulation or informally by local customs. Applying the data commons to the CEDS would promote data-sharing for the “common good.” However, we propose that the data commons approach should be balanced with the market-based approach to the CEDS in an inclusive hybrid data governance approach that meets material, price-driven interests while stimulating collective learning in online networks to form social communities that offer participants a shared identity and social recognition.
In this paper we study logical bilateralism understood as a theory of two primitive derivability relations, namely provability and refutability, in a language devoid of a primitive strong negation and without a falsum constant, $\bot $, and a verum constant, $\top $. There is thus no negation that toggles between provability and refutability, and there are no primitive constants that are used to define an “implies falsity” negation and a “co-implies truth” co-negation. This reduction of expressive power notwithstanding, there remains some interaction between provability and refutability due to the presence of (i) a conditional and the refutability condition of conditionals and (ii) a co-implication and the provability condition of co-implications. Moreover, assuming a hyperconnexive understanding of refuting conditionals and a dual understanding of proving co-implications, neither non-trivial negation inconsistency nor hyperconnexivity is lost for unary negation connectives definable by means of certain surrogates of falsum and verum. Whilst a critical attitude towards $\bot $ and $\top $ can be justified by problematic aspects of the Brouwer-Heyting-Kolmogorov interpretation of the logical operations for these constants, the aim to reduce the availability of a toggling negation and observations on undefinability may also give further reasons to abandon $\bot $ and $\top $.
The notion of global supervenience captures the idea that the overall distribution of certain properties in the world is fixed by the overall distribution of certain other properties. A formal implementation of this idea in constant-domain Kripke models is as follows: predicates $Q_1,\dots ,Q_m$ globally supervene on predicates $P_1,\dots ,P_n$ in world w if two successors of w cannot differ with respect to the extensions of the $Q_i$ without also differing with respect to the extensions of the $P_i$. Equivalently: relative to the successors of w, the extensions of the $Q_i$ are functionally determined by the extensions of the $P_i$. In this paper, we study this notion of global supervenience, achieving three things. First, we prove that claims of global supervenience cannot be expressed in standard modal predicate logic. Second, we prove that they can be expressed naturally in an inquisitive extension of modal predicate logic, where they are captured as strict conditionals involving questions; as we show, this also sheds light on the logical features of global supervenience, which are tightly related to the logical properties of strict conditionals and questions. Third, by making crucial use of the notion of coherence, we prove that the relevant system of inquisitive modal logic is compact and has a recursively enumerable set of validities; these properties are non-trivial, since in this logic a strict conditional expresses a second-order quantification over sets of successors.
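In display form, the definition just stated reads as follows, where $R$ is the accessibility relation of the constant-domain Kripke model and $P^v$ denotes the extension of predicate $P$ at world $v$:
\[
Q_1,\dots,Q_m \text{ globally supervene on } P_1,\dots,P_n \text{ at } w
\iff
\forall v, v' \,\Big( wRv \wedge wRv' \wedge \textstyle\bigwedge_{j=1}^{n} P_j^{v} = P_j^{v'}
\;\Rightarrow\; \textstyle\bigwedge_{i=1}^{m} Q_i^{v} = Q_i^{v'} \Big).
\]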
This paper presents a reverse mathematical analysis of several forms of the sorites paradox. We first illustrate how traditional discrete formulations are reliant on Hölder’s representation theorem for ordered Archimedean groups. While this is provable in $\mathsf {RCA}_0$, we also consider two forms of the sorites which rest on non-constructive principles: the continuous sorites of Weber & Colyvan [35] and a variant we refer to as the covering sorites. We show in the setting of second-order arithmetic that the former depends on the existence of suprema and thus on arithmetical comprehension ($\mathsf {ACA}_0$) while the latter depends on the Heine–Borel theorem and thus on Weak König’s lemma ($\mathsf {WKL}_0$). We finally illustrate how recursive counterexamples to these principles provide resolutions to the corresponding paradoxes which can be contrasted with supervaluationist, epistemicist, and constructivist approaches.
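In summary, the calibrations claimed above, over the base theory $\mathsf{RCA}_0$ of second-order arithmetic, are:
\begin{align*}
\text{discrete sorites (via H\"older's representation theorem)} &: \text{provable in } \mathsf{RCA}_0,\\
\text{continuous sorites (existence of suprema)} &: \text{depends on } \mathsf{ACA}_0,\\
\text{covering sorites (Heine--Borel theorem)} &: \text{depends on } \mathsf{WKL}_0.
\end{align*}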
The tensor notation used in several areas of mathematics is a useful one, but it is not widely available to the functional programming community. In practice, the (embedded) domain-specific languages (DSLs) currently in use for tensor algebra either 1. are array-oriented languages that do not enforce or take advantage of tensor properties and algebraic structure or 2. follow the categorical structure of tensors but require the programmer to manipulate tensors in an unwieldy point-free notation. A deeper issue is that, for tensor calculus, the dominant pedagogical paradigm either assumes an audience comfortable with notational liberties which programmers cannot afford, or focuses on the applied mathematics of tensors, largely leaving their linguistic aspects (behaviour of variable binding, syntax and semantics, etc.) for the reader to figure out by themselves. This state of affairs is hardly surprising because, as we highlight, several properties of standard tensor notation are somewhat exotic from the perspective of lambda calculi. We bridge the gap by defining a DSL, embedded in Haskell, whose syntax closely captures the index notation for tensors in wide use in the literature. The semantics of this eDSL is defined in terms of the algebraic structures which define tensors in their full generality. This way, we believe that our eDSL can be used both as a tool for scientific computing and as a vehicle to express and present the theory and applications of tensors.
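The eDSL itself is embedded in Haskell; as a hedged, language-neutral illustration of the index notation it captures, NumPy's einsum expresses the same contraction patterns (a repeated index signals summation):

```python
# Index notation via einsum; the tensors and metric here are toy values.
import numpy as np

g = np.eye(3)                     # a metric tensor g_{ij} (assumed Euclidean)
u = np.array([1.0, 2.0, 3.0])     # a vector u^i
T = np.arange(9.0).reshape(3, 3)  # a rank-2 tensor T^{ij}

# Index notation:  v_i = g_{ij} u^j        (lowering an index)
v = np.einsum("ij,j->i", g, u)
# Index notation:  s = T^{ij} g_{ij}       (full contraction, a scalar)
s = np.einsum("ij,ij->", T, g)
# Index notation:  w^i = T^{ij} g_{jk} u^k (chained contraction)
w = np.einsum("ij,jk,k->i", T, g, u)
print(v, s, w)
```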
Metal–organic polyhedra (MOPs) are discrete, porous metal–organic assemblies known for their wide-ranging applications in separation, drug delivery, and catalysis. As part of The World Avatar (TWA) project—a universal and interoperable knowledge model—we have previously systematized known MOPs and expanded the explorable MOP space with novel targets. Although these data are available via a complex query language, a more user-friendly interface is desirable to enhance accessibility. To address a similar challenge in other chemistry domains, the natural language question-answering system “Marie” has been developed; however, its scalability is limited due to its reliance on supervised fine-tuning, which hinders its adaptability to new knowledge domains. In this article, we introduce an enhanced database of MOPs and a first-of-its-kind question-answering system tailored for MOP chemistry. By augmenting TWA’s MOP database with geometry data, we enable the visualization of not just empirically verified MOP structures but also machine-predicted ones. In addition, we renovated Marie’s semantic parser to adopt in-context few-shot learning, allowing seamless interaction with TWA’s extensive MOP repository. These advancements significantly improve the accessibility and versatility of TWA, marking an important step toward accelerating and automating the development of reticular materials with the aid of digital assistants.
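As a hedged sketch of what in-context few-shot semantic parsing looks like in practice, the fragment below builds a prompt from (question, query) exemplars; the exemplars and the query syntax are invented placeholders, not TWA's actual schema or Marie's real prompts.

```python
# Few-shot prompting for semantic parsing: no fine-tuning, just exemplars
# prepended to the user question. Exemplars and query syntax are invented.
FEW_SHOT_EXAMPLES = [
    ("What is the pore volume of MOP-1?",
     'SELECT ?v WHERE { :MOP-1 :hasPoreVolume ?v }'),
    ("List MOPs assembled from copper paddlewheel units.",
     'SELECT ?m WHERE { ?m :hasMetalNode :CuPaddlewheel }'),
]

def build_prompt(question: str) -> str:
    parts = ["Translate each chemistry question into a graph query.\n"]
    for q, query in FEW_SHOT_EXAMPLES:
        parts.append(f"Q: {q}\nQuery: {query}\n")
    parts.append(f"Q: {question}\nQuery:")
    return "\n".join(parts)

prompt = build_prompt("Which MOPs have a predicted geometry?")
print(prompt)  # send to an LLM of choice; parse its completion as the query
```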
In today’s world, smart algorithms—artificial intelligence (AI) and other intelligent systems—are pivotal for promoting the development agenda. They offer novel support for decision-making across policy planning domains, such as analysing poverty alleviation funds and predicting mortality rates. To comprehensively assess their efficacy and implications in policy formulation, this paper conducts a systematic review of 207 publications. The analysis underscores their integration within and across stages of the policy planning cycle: problem diagnosis and goal articulation; resource and constraint identification; design of alternative solutions; outcome projection; and evaluation. However, disparities exist in smart algorithm applications across stages, economic development levels, and Sustainable Development Goals (SDGs). While these algorithms predominantly focus on resource identification (29%) and contribute significantly to designing alternatives—such as long-term national energy policies—and projecting outcomes, including predicting multi-scenario land-use ecological security strategies, their application in evaluation remains limited (10%). Additionally, low-income nations have yet to fully harness AI’s potential, while upper-middle-income countries effectively leverage it. Notably, smart algorithm applications for SDGs also exhibit unevenness, with more emphasis on SDG 11 than on SDG 5 and SDG 17. Our study identifies literature gaps. Firstly, despite theoretical shifts, a disparity persists between physical and socioeconomic/environmental planning applications. Secondly, there is limited attention to policy-making in development initiatives, which is critical for improving lives. Future research should prioritise developing adaptive planning systems using emerging powerful algorithms to address uncertainty and complex environments. Ensuring algorithmic transparency, human-centered approaches, and responsible AI are crucial for AI accountability, trust, and credibility.
In recent years, there has been a global trend among governments to provide free and open access to data collected by Earth-observing satellites with the purpose of maximizing the use of this data for a broad array of research and applications. Yet, there are still significant challenges facing non-remote sensing specialists who wish to make use of satellite data. This commentary explores an illustrative case study to provide concrete examples of these challenges and barriers. We then discuss how the specific challenges faced within the case study illuminate some of the broader issues in data accessibility and utility that could be addressed by policymakers that aim to improve the reach of their data, increase the range of research and applications that it enables, and improve equity in data access and use.
This paper introduces an equivalent series mechanism model to improve ankle rehabilitation robots’ ability to reproduce the complex movements of the human ankle and enhance human-machine locomotion compatibility. The model emulates the true anatomical architecture of the ankle joint and is integrated with a parallel rehabilitation mechanism. The rehabilitation robot includes dual virtual motion centers to mimic the ankle joint’s intricate motion, accommodate individual patient variations, and address the rehabilitation requirements of both right and left feet. Firstly, a serial equivalence model of the human ankle is developed based on the kinematic and anatomical characteristics of the joint. The type design for the 4-degree-of-freedom (4-DOF) parallel ankle rehabilitation robot is then conducted on the basis of the kinematic and constraint properties of this equivalent model. Secondly, the mechanism’s motion properties allow it to be treated as equivalent to a series branch chain, enabling the establishment of an inverse kinematics model. The kinematic performance of the mechanism is analyzed using transmissibility and constrainability indices, followed by workspace analysis and dimensional optimization of the rehabilitation mechanism. Finally, a human-machine coupled rehabilitation simulation model is developed using the OpenSim biomechanics software to evaluate the rehabilitation effect.
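The paper derives its inverse kinematics from the series-equivalent branch chain; as a generic, hedged stand-in for that step, the following sketch runs a standard damped-least-squares IK iteration on a planar two-link chain. Link lengths, the target, and the damping factor are arbitrary assumptions, not the paper's model.

```python
# Generic numerical inverse kinematics (damped least squares) for a planar
# 2-link serial chain; a stand-in illustration, not the paper's closed form.
import numpy as np

L1, L2 = 0.4, 0.3                              # link lengths (m), assumed

def fk(q):
    return np.array([L1*np.cos(q[0]) + L2*np.cos(q[0]+q[1]),
                     L1*np.sin(q[0]) + L2*np.sin(q[0]+q[1])])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0]+q[1]), np.cos(q[0]+q[1])
    return np.array([[-L1*s1 - L2*s12, -L2*s12],
                     [ L1*c1 + L2*c12,  L2*c12]])

def ik(target, q=np.zeros(2), lam=0.1, iters=100):
    for _ in range(iters):
        e = target - fk(q)
        J = jacobian(q)
        # Damped least-squares step: dq = J^T (J J^T + lam^2 I)^{-1} e
        q = q + J.T @ np.linalg.solve(J @ J.T + lam**2 * np.eye(2), e)
    return q

q = ik(np.array([0.5, 0.2]))
print(q, fk(q))   # joint angles reaching (approximately) the target
```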
Engineering design requires humans to make complex, multi-objective decisions involving trade-offs where it is challenging to identify the best solution. AI-embedded computational support tools are increasingly used to aid in such scenarios, enhancing the design decision-making process. However, over- or under-reliance on imperfect “blackbox” models may prevent optimal outcomes. To investigate AI-assisted decision-making in engineering design, two complementary experiments (N = 90) were conducted. Participants chose between pairs of aircraft jet engine brackets and were tasked with selecting the better design based on two (Experiment 1) or three (Experiment 2) competing objectives. Participants received simulated AI suggestions, which correctly suggested a better design, incorrectly suggested a worse design, or arbitrarily suggested an approximately equivalent design. At times, these suggestions were accompanied by an example-based explanation. Results demonstrate that participants follow suggestions less than expected when the model can objectively determine the better-performing alternative, often underutilizing the model’s advice to their detriment. When the “better” choice is uncertain, the tendency to follow an arbitrary suggestion differs, with overutilization occurring only in the bi-objective case. There is no evidence that providing an explanation of the model’s suggestion impacts decision-making. The results provide valuable insights into how engineering designers’ multi-objective decisions may be affected – positively, negatively, or not at all – by computational tools meant to assist them.
The literature suggests that product and product-service system (PSS) design problems are characteristically different. However, there is limited empirical evidence to suggest that the design cognition specific to the respective design activities is different. This article reports the findings of a comparative study of protocols of conceptual product and PSS designing carried out in a laboratory environment by 28 pairs of experienced product designers from the manufacturing industry. First, differences between product and PSS design problems were theoretically characterized in terms of their respective sources of complexity. Based on these differences, hypotheses concerning differences in the cognitive processes of conceptual product and PSS designing were developed and empirically tested. Results indicate that PSS designing by experienced product designers is more problem-focused while product designing is more solution-focused. PSS designing was found to focus more on the design issue function and the design process formulation. Further, PSS designing was more likely to apply a depth-first search strategy, while product designing was more apt to apply a breadth-first search strategy. Results point towards the need to support the analysis of derived behavior of structure and the application of a breadth-first strategy during PSS designing by product designers.
Global path planning using roadmap (RM) path-planning methods including Voronoi diagram (VD), rapidly exploring random trees (RRT), and probabilistic roadmap (PRM) has gained popularity over the years in robotics. These global path-planning methods are usually combined with other path-planning techniques to achieve collision-free robot control to a specified destination. However, it is unclear which of these methods best computes an efficient path in terms of path length, computation time, path safety, and consistency of path computation. This article adopts a comparative research methodology to determine the efficiency of these methods in terms of path optimality, safety, consistency, and computation time. A hundred maps of different complexities with obstacle occupancy rates ranging from 50.95% to 78.42% were used to evaluate the performance of the RM path-planning methods. Each method demonstrated unique strengths and limitations. The study provides critical insights into their relative performance, highlighting application-specific recommendations for selecting the most suitable RM method. These findings contribute to advancing robot path-planning techniques by offering a detailed evaluation of widely adopted methods.
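To make the compared pipeline concrete, here is a hedged minimal PRM sketch: sample free configurations, connect k-nearest collision-free neighbours, and run weighted graph search. The obstacle set, parameters, and collision test are toy choices, not the benchmark setup used in the article.

```python
# Minimal probabilistic roadmap (PRM) in the unit square with disc obstacles.
import numpy as np, networkx as nx

rng = np.random.default_rng(2)
OBSTACLES = [((0.5, 0.5), 0.2)]                    # (centre, radius) discs

def collision_free(p, q, steps=20):
    for t in np.linspace(0, 1, steps):
        x = (1 - t) * np.asarray(p) + t * np.asarray(q)
        if any(np.linalg.norm(x - c) < r for c, r in OBSTACLES):
            return False
    return True

def prm(start, goal, n=200, k=8):
    nodes = [start, goal] + [tuple(rng.random(2)) for _ in range(n)]
    G = nx.Graph()
    G.add_nodes_from(range(len(nodes)))
    pts = np.array(nodes)
    for i, p in enumerate(pts):
        nearest = np.argsort(np.linalg.norm(pts - p, axis=1))[1:k + 1]
        for j in nearest:
            if collision_free(p, pts[j]):
                G.add_edge(i, int(j), weight=float(np.linalg.norm(p - pts[j])))
    path = nx.shortest_path(G, 0, 1, weight="weight")  # indices 0 = start, 1 = goal
    return [nodes[i] for i in path]

print(prm((0.05, 0.05), (0.95, 0.95))[:3], "...")
```

Path length, computation time, and consistency comparisons then reduce to running such planners repeatedly over the same map set, which is the evaluation design the article describes.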
Some informal arguments are valid, others are invalid. A core application of logic is to tell us which is which by capturing these validity facts. Philosophers and logicians have explored how well a host of logics carry out this role, familiar examples being propositional, first-order and second-order logic. Since natural language and standard logics are countable, a natural question arises: is there a countable logic guaranteed to capture the validity patterns of any language fragment? That is, is there a countable $\omega$-universal logic? Our article philosophically motivates this question, makes it precise, and then answers it. It is a self-contained, concise sequel to ‘Capturing Consequence’ by A.C. Paseau (RSL vol. 12, 2019).
The alignment of artificial intelligence (AI) systems with societal values and the public interest is a critical challenge in the field of AI ethics and governance. Traditional approaches, such as Reinforcement Learning with Human Feedback (RLHF) and Constitutional AI, often rely on pre-defined high-level ethical principles. This article critiques these conventional alignment frameworks through the philosophical perspectives of pragmatism and public interest theory, arguing that they are too rigid and disconnected from practical impacts. It proposes an alternative alignment strategy that reverses the traditional logic, focusing on empirical evidence and the real-world effects of AI systems. By emphasizing practical outcomes and continuous adaptation, this pragmatic approach aims to ensure that AI technologies are developed according to principles derived from the observable impacts of technology applications.