This paper reports on the experience of using an early assessment intervention, specifically a Use-Modify-Create scaffold, to teach first-year undergraduate functional programming. In the trialled intervention, students had to use code given to them, or slightly modify it, to achieve certain goals. The intended outcome was that students would thereby engage earlier with the functional language and so be better prepared for the second piece of assessment, in which they create code to solve given problems. The intervention showed promise: students’ scores on the Create assignment improved by an average of 9% in the year after the intervention was implemented, a small effect.
The aim of this paper is to give a full exposition of Leibniz’s mereological system. My starting point will be his papers on Real Addition, and the distinction between the containment and the part-whole relation. In the first part (§2), I expound the Real Addition calculus; in the second part (§3), I introduce the mereological calculus by restricting the containment relation via the notion of homogeneity which results in the parthood relation (this corresponds to an extension of the Real Addition calculus via what I call the Homogeneity axiom). I analyze in detail such a notion, and argue that it implies a gunk conception of (proper) part. Finally, in the third part (§4), I scrutinize some of the applications of the containment-parthood distinction showing that a number of famous Leibnizian doctrines depend on it.
The two main sources of difficulty for a group of mobile robots employing sensors to find a source are robot collisions and ambient noise, such as stray light and sound. This paper introduces a novel approach to multi-robot cooperation and collision avoidance: a modified source-seeking controller with noise cancelation. The robot team works together on the incline of a light-source field; the team moves by following the direction of the upward gradient while forming a particular movement pattern. The proposed algorithm also takes into account each robot’s size, speed limit, obstacles, and noise. The noise cancelation technique avoids the delays and false decisions that noise would otherwise cause in locating the source: once the noise is canceled, the control inputs to the algorithm are accurate and the feedback decisions are reliable. In this study, we use MATLAB simulation tools to test the velocity, position, time delay, and performance of each robot in the group. The simulation and practical results of the robots searching for a light source show very satisfactory performance compared with results in the literature.
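The gradient-climbing idea behind source seeking can be sketched in a few lines. The least-squares gradient estimate and the moving-average noise filter below are illustrative stand-ins, under assumed interfaces, for the paper’s controller and noise cancelation technique, not its actual implementation:

```python
import numpy as np

def moving_average(signal, window=5):
    """Simple noise-suppression filter for a stream of sensor readings."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

def gradient_step(positions, readings, step=0.1):
    """Move each robot one step up the estimated intensity gradient.

    positions: (n, 2) array of robot coordinates; readings: light
    intensity measured at each robot. The team's gradient is estimated
    by fitting a plane to the readings via least squares.
    """
    A = np.column_stack([positions, np.ones(len(positions))])
    coeffs, *_ = np.linalg.lstsq(A, readings, rcond=None)  # [gx, gy, c]
    grad = coeffs[:2]
    norm = np.linalg.norm(grad)
    if norm > 0:
        positions = positions + step * grad / norm  # unit step up-gradient
    return positions
```

In practice each robot would filter its own sensor stream with `moving_average` before the team-level gradient estimate, which is the role the noise cancelation plays in the abstract.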
A method is proposed for identifying robot gravity and friction torques based on joint currents. The minimum gravity term parameters are obtained using the Modified Denavit–Hartenberg (MDH) parameters, and the dynamic equations are linearized. The robot’s friction torque is identified using the Stribeck friction model. Additionally, a zero-force drag algorithm is designed to address the issue of excessive start-up torque during dragging. A sinusoidal compensation algorithm is proposed to perform periodic friction compensation for each stationary joint, utilizing the identified maximum static friction torque. Experimental results show that when the robot operates at a uniform low speed, the theoretical current calculated based on the identified gravity and friction fits the actual current well, with a maximum root mean square error within 50 mA, confirming the accuracy of the identification results. The start-up torque compensation algorithm reduces the robot’s start-up torque by an average of 60.58%, improving the compliance of the dragging process and demonstrating the effectiveness of the compensation algorithm.
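The Stribeck friction model used for the identification has a standard closed form; a minimal sketch follows, with all parameter values illustrative placeholders rather than the values identified in the paper:

```python
import numpy as np

def stribeck_torque(v, tau_c=0.8, tau_s=1.2, v_s=0.05, sigma=0.3):
    """Stribeck friction torque as a function of joint velocity v (rad/s).

    tau_c: Coulomb friction torque, tau_s: maximum static friction torque,
    v_s: Stribeck velocity, sigma: viscous friction coefficient.
    All default values here are made-up placeholders for illustration.
    """
    return (tau_c + (tau_s - tau_c) * np.exp(-(v / v_s) ** 2)) * np.sign(v) + sigma * v
```

Near zero velocity the torque approaches the static friction level tau_s, and at higher speeds it decays to the Coulomb level plus a viscous term, which is the dip that makes low-speed friction compensation hard.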
Global platforms present novel challenges. They are powerful conduits of commerce and global community, and their potential to influence behavior is enormous. Defeating Disinformation explores how to balance free speech and dangerous online content to reduce societal risks of digital platforms. The volume offers an interdisciplinary approach, drawing upon insights from different geographies and parallel challenges of managing global phenomena with national policies and regulations. Chapters also examine the responsibility of platforms for their content, which is limited by national laws such as Section 230 of the Communications Decency Act in the US. This balance between national rules and the need for appropriate content moderation threatens to splinter platforms and reduce their utility across the globe. Timely and expansive, Defeating Disinformation develops a global approach to address these tensions while maintaining, and even enhancing, the social contribution of platforms. This title is also available as open access on Cambridge Core.
Providing an in-depth treatment of an exciting research area, this text's central topics are initial algebras and terminal coalgebras, primary objects of study in all areas of theoretical computer science connected to semantics. It contains a thorough presentation of iterative constructions, giving both classical and new results on terminal coalgebras obtained by limits of canonical chains, and initial algebras obtained by colimits. These constructions are also developed in enriched settings, especially those enriched over complete partial orders and complete metric spaces, connecting the book to topics like domain theory. Also included are an extensive treatment of set functors, and the first book-length presentation of the rational fixed point of a functor, and of lifting results which connect fixed points of set functors with fixed points of endofunctors on other categories. Representing more than fifteen years of work, this will be the leading text on the subject for years to come.
To address the problems of numerous path inflection points, unsmooth paths, and poor local obstacle avoidance in path planning for inspection robots in mixed static and dynamic scenes under the complex geological conditions of coal mine roadways, a hybrid path planning method based on an improved A* algorithm and the dynamic window approach (DWA) is proposed. First, the inspection robot platform and system model are constructed. An improved heuristic function that incorporates target weight information is proposed for the A* global path planning algorithm. Additionally, redundant nodes are eliminated, and the path is smoothed using the Floyd algorithm and B-spline curves. Second, the global path planning A* algorithm and the local path planning DWA algorithm are fused: dynamic path planning is carried out by setting key nodes of the global path extracted by the improved A* algorithm as local target points for the DWA algorithm. On this basis, a grid map is established to simulate and analyze the proposed path planning algorithm. Finally, autonomous path planning and walking experiments with the inspection robot are carried out in a simulated roadway environment. The results show that the proposed hybrid path planning method based on the improved A* algorithm and the DWA algorithm is more efficient and safer, and can meet the motion requirements of inspection robots in coal mine roadways.
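The weighted-heuristic idea behind such A* improvements can be sketched compactly. The abstract does not specify the paper’s heuristic, so a plain weight w on the Manhattan distance stands in for the "target weight information" here (w = 1 recovers classic A*); this is an illustrative sketch, not the paper’s algorithm:

```python
import heapq
import itertools

def astar(grid, start, goal, w=1.5):
    """Grid-based A* with a weighted heuristic f = g + w * h.

    grid: 2D list, 0 = free cell, 1 = obstacle; start/goal: (row, col).
    Returns the path as a list of cells, or None if no path exists.
    """
    h = lambda p: w * (abs(p[0] - goal[0]) + abs(p[1] - goal[1]))
    tie = itertools.count()                    # tie-breaker for heap ordering
    open_heap = [(h(start), 0, next(tie), start, None)]
    parents, g_cost = {}, {start: 0}
    while open_heap:
        _, g, _, node, parent = heapq.heappop(open_heap)
        if node in parents:                    # already expanded
            continue
        parents[node] = parent
        if node == goal:                       # reconstruct path back to start
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= nx < len(grid) and 0 <= ny < len(grid[0]) and grid[nx][ny] == 0:
                ng = g + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    heapq.heappush(open_heap, (ng + h(nxt), ng, next(tie), nxt, node))
    return None                                # goal unreachable
```

A weight w > 1 biases the search toward the target, expanding fewer nodes at the cost of optimality guarantees, which is the usual trade-off such heuristic modifications exploit.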
In this paper, we provide a systematic review of existing artificial intelligence (AI) regulations in Europe, the United States, and Canada. We build on the qualitative analysis of 129 AI regulations (enacted and not enacted) to identify patterns in regulatory strategies and in AI transparency requirements. Based on the analysis of this sample, we suggest that there are three main regulatory strategies for AI: AI-focused overhauls of existing regulation, the introduction of novel AI regulation, and the omnibus approach. We argue that although these types emerge as distinct strategies, their boundaries are porous as the AI regulation landscape is rapidly evolving. We find that across our sample, AI transparency is effectively treated as a central mechanism for meaningful mitigation of potential AI harms. We therefore focus on AI transparency mandates in our analysis and identify six AI transparency patterns: human in the loop, assessments, audits, disclosures, inventories, and red teaming. We contend that this qualitative analysis of AI regulations and AI transparency patterns provides a much-needed bridge between the policy discourse on AI, which is all too often bound up in very detailed legal discussions, and applied sociotechnical research on AI fairness, accountability, and transparency.
Pouch-type actuators have recently garnered significant interest and are increasingly utilized in diverse fields, including soft wearable robotics and prosthetics, largely due to their light weight, high output force, and low cost. However, their inherent hysteresis behavior markedly affects the stability and force control of pouch-type driven systems. This study proposes a modified generalized Prandtl–Ishlinskii (MGPI) model, which includes generalized play operators, the tangent envelope function, and one-sided dead-zone operators, to describe the asymmetric and non-convex hysteresis characteristics of pouch-type actuators. Compared to a classical Prandtl–Ishlinskii (PI) model incorporating one-sided dead-zone functions, the MGPI model exhibits smaller relative errors at six different air pressures, demonstrating its capability to accurately describe asymmetric and non-convex hysteresis curves. Subsequently, the MGPI hysteresis model is integrated with displacement sensing technology to establish a load compensation control system for maintaining human posture. Four healthy subjects are recruited to conduct a 1 kg load compensation test, achieving efficiencies of 85.84%, 84.92%, 83.63%, and 68.86%, respectively.
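The play operator at the core of Prandtl–Ishlinskii models is simple to state. The sketch below shows the classical (non-generalized) play operator for reference; the MGPI model of the paper replaces its linear envelopes with a tangent envelope function and adds one-sided dead-zone operators, which are not reproduced here:

```python
def play_operator(inputs, r, y0=0.0):
    """Classical play (backlash) operator with threshold r.

    A weighted superposition of such operators over different r values
    forms a classical Prandtl-Ishlinskii hysteresis model. The output
    follows the input only once it escapes a band of half-width r,
    which is what produces the hysteresis loop.
    """
    y, out = y0, []
    for x in inputs:
        y = max(x - r, min(x + r, y))  # clamp state into [x - r, x + r]
        out.append(y)
    return out
```

Running the operator over a rising-then-falling input shows the characteristic lag: the output holds its value until the input reverses by more than r.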
Real-time systems need to be built out of tasks for which the worst-case execution time is known. To enable accurate estimates of worst-case execution time, some researchers propose to build processors that simplify that analysis. These architectures are called precision-timed machines or time-predictable architectures. But what does this term mean? This paper explores the meaning of time predictability and how it can be quantified. We show that time predictability is hard to quantify; rather, the important property in the context of real-time systems is the worst-case performance of a processor, a compiler, and a worst-case execution time analysis tool taken together. The actual software also affects worst-case performance. We propose to define a standard set of benchmark programs that can be used to evaluate a time-predictable processor, a compiler, and a worst-case execution time analysis tool, and we define worst-case performance as the geometric mean of the worst-case execution time bounds on that benchmark set.
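The proposed measure is a plain geometric mean over per-benchmark WCET bounds. A minimal sketch, assuming the bounds are given as positive numbers (the benchmark names and values in the usage comment are made up for illustration):

```python
import math

def worst_case_performance(wcet_bounds):
    """Geometric mean of worst-case execution time bounds over a benchmark set.

    Computed via the log-sum-exp form, which avoids overflow when the
    bounds (e.g. in processor cycles) are large.
    """
    bounds = list(wcet_bounds)
    assert bounds and all(b > 0 for b in bounds)
    return math.exp(sum(math.log(b) for b in bounds) / len(bounds))

# Hypothetical usage with made-up benchmark bounds in cycles:
#   worst_case_performance({"crc": 12_000, "fir": 45_000}.values())
```

The geometric mean is the natural choice here because it is invariant to per-benchmark scaling: doubling one benchmark’s bound changes the score by the same factor regardless of that benchmark’s absolute magnitude.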
Glivenko’s theorem says that classical provability of a propositional formula entails intuitionistic provability of the double negation of that formula. This stood right at the beginning of the success story of negative translations, indeed mainly designed for converting classically derivable formulae into intuitionistically derivable ones. We now generalise this approach: simultaneously from double negation to an arbitrary nucleus; from provability in a calculus to an inductively generated abstract consequence relation; and from propositional logic to any set of objects whatsoever. In particular, we give sharp criteria for the generalisation of classical logic to be a conservative extension of that of intuitionistic logic with double negation.
Research in decentralized computing, specifically in consensus algorithms, has focused on providing resistance to an adversary with a minority stake. This has resulted in systems that are majoritarian in the extreme, ignoring valuable lessons learned in law and politics over centuries. In this article, we first detail this phenomenon of majoritarianism and point out how minority protections in the nondigital world have been implemented. We motivate adding minority protections to collaborative systems with examples. We also show how current software deployment models exacerbate majoritarianism, highlighting the problem of monoculture in client software in particular. We conclude by giving some suggestions on how to make decentralized computing less hostile to those in the minority.
For a given graph $H$, we say that a graph $G$ has a perfect $H$-subdivision tiling if $G$ contains a collection of vertex-disjoint subdivisions of $H$ covering all vertices of $G$. Let $\delta _{\mathrm {sub}}(n, H)$ be the smallest integer $k$ such that any $n$-vertex graph $G$ with minimum degree at least $k$ has a perfect $H$-subdivision tiling. For every graph $H$, we asymptotically determine the value of $\delta _{\mathrm {sub}}(n, H)$. More precisely, for every graph $H$ with at least one edge, there is an integer $\mathrm {hcf}_{\xi }(H)$ and a constant $1 \lt \xi ^*(H)\leq 2$, both explicitly determined by structural properties of $H$, such that $\delta _{\mathrm {sub}}(n, H) = \left (1 - \frac {1}{\xi ^*(H)} + o(1) \right )n$ holds for all $n$ and $H$ unless $\mathrm {hcf}_{\xi }(H) = 2$ and $n$ is odd. When $\mathrm {hcf}_{\xi }(H) = 2$ and $n$ is odd, we show that $\delta _{\mathrm {sub}}(n, H) = \left (\frac {1}{2} + o(1) \right )n$.
Personas are hypothetical representations of real-world people used as storytelling tools to help designers identify the goals, constraints, and scenarios of particular user groups. A well-constructed persona can provide enough detail to trigger recognition and empathy while leaving room for varying interpretations of users. While a traditional persona is a static representation of a potential user group, a chatbot representation of a persona is dynamic, in that it allows designers to “converse with” the representation. Such representations are further augmented by the use of large language models (LLMs), displaying more human-like characteristics such as emotions, priorities, and values. In this paper, we introduce the term “Synthetic User” to describe such representations of personas that are informed by traditional data and augmented by synthetic data. We study the effect of one example of such a Synthetic User – embodied as a chatbot – on the designers’ process, outcome, and their perception of the persona using a between-subjects study comparing it to a traditional persona summary. While designers showed comparable diversity in the ideas that emerged from both conditions, we find in the Synthetic User condition a greater variation in how designers perceive the persona’s attributes. We also find that the Synthetic User allows novel interactions such as seeking feedback and testing assumptions. We make suggestions for balancing consistency and variation in Synthetic User performance and propose guidelines for future development.
This paper introduces a novel bipedal robot model designed for adaptive transition between walking and running gaits solely through changes in locomotion speed. The bipedal robot model comprises two sub-components: a mechanical model for the legs that accommodates both walking and running and a continuous state model that does not explicitly switch states. The mechanical model employs a structure combining a linear cylinder with springs, dampers, and stoppers, designed to have mechanistic properties of both the inverted pendulum model used for walking and the spring-loaded inverted pendulum model used for running. The state model utilizes a virtual leg representation to abstractly describe the actual support leg, capable of commonly representing both a double support leg in walking and a single support leg in running. These models enable a simple gait controller to determine the kick force and the foot touchdown point based solely on the parameter of the target speed, thus allowing a robot to walk and run stably. Simulation results demonstrate that the robot adaptively transitions to an energy-efficient gait depending on locomotion speed, without explicit gait-type instructions, while maintaining stable locomotion across a wide range of speeds.
Human activity recognition (HAR) is a vital component of human–robot collaboration: realizing such collaboration requires recognizing the operational elements involved in an operator’s task, and HAR plays a key role in achieving this. However, recognizing human activity in an industrial setting differs from recognizing daily living activities, because an operator’s activity must be divided into fine elements to ensure efficient task completion. Despite this, there is relatively little related research in the literature. This study aims to develop machine learning models to classify the sequential movement elements of a task. To illustrate this, three logistic operations in an integrated circuit (IC) design house were studied, with participants wearing 13 inertial measurement units manufactured by XSENS to mimic the tasks. The kinematics data were collected to develop the machine learning models. The time series data preprocessing involved applying two normalization methods and three different window lengths. Eleven features were extracted from the processed data to train the classification models. Model validation was carried out using the subject-independent method, with data from three participants excluded from the training dataset. The results indicate that the developed model can efficiently classify operational elements when the operator performs the activity accurately. However, incorrect classifications occurred when the operator missed an operation or performed the task awkwardly. RGB video clips helped identify these misclassifications, which can be used by supervisors for training purposes or by industrial engineers for work improvement.
Prediction of dynamic environmental variables in unmonitored sites remains a long-standing challenge for water resources science. The majority of the world’s freshwater resources have inadequate monitoring of critical environmental variables needed for management. Yet, the need to have widespread predictions of hydrological variables such as river flow and water quality has become increasingly urgent due to climate and land use change over the past decades, and their associated impacts on water resources. Modern machine learning methods increasingly outperform their process-based and empirical model counterparts for hydrologic time series prediction with their ability to extract information from large, diverse data sets. We review relevant state-of-the-art applications of machine learning for streamflow, water quality, and other water resources prediction and discuss opportunities to improve the use of machine learning with emerging methods for incorporating watershed characteristics and process knowledge into classical, deep learning, and transfer learning methodologies. The analysis here suggests most prior efforts have been focused on deep learning frameworks built on many sites for predictions at daily time scales in the United States, but that comparisons between different classes of machine learning methods are few and inadequate. We identify several open questions for time series predictions in unmonitored sites, including how to incorporate dynamic inputs and site characteristics, mechanistic understanding and spatial context, and explainable AI techniques in modern machine learning frameworks.
For each uniformity $k \geq 3$, we construct $k$-uniform linear hypergraphs $G$ with arbitrarily large maximum degree $\Delta$ whose independence polynomial $Z_G$ has a zero $\lambda$ with $\left \vert \lambda \right \vert = O\left (\frac {\log \Delta }{\Delta }\right )$. This disproves a recent conjecture of Galvin, McKinley, Perkins, Sarantis, and Tetali.