In unstructured environments, Delta robots face challenges in achieving high vision-guided grasping precision due to dynamic lighting conditions and workpiece diversity. This paper presents an integrated solution that combines RGB-D multimodal learning with an enhanced Mask R-CNN framework. First, a dual-stream ResNet50-FPN backbone network is designed to achieve cross-modal adaptive alignment via hierarchical feature fusion. Second, a depth-guided attention module is incorporated to bolster robustness against material ambiguity and reflective interference. Third, a dynamic depth estimation algorithm is employed to significantly improve target localization accuracy and stability. Finally, real-time trajectory tracking is realized by integrating PD control with Jacobian mapping. Experimental results validate the efficacy of the proposed method, offering an efficient and reliable approach for industrial robotic applications.
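The trajectory-tracking idea in the abstract above (PD control combined with Jacobian mapping) is commonly realized as resolved-rate control. The sketch below assumes that formulation; the gain values, the 3×3 Jacobian, and the pseudo-inverse mapping are illustrative assumptions, not the controller published in the paper.

```python
import numpy as np

def pd_jacobian_step(J, x_des, x_cur, xdot_des, xdot_cur, Kp, Kd):
    """One resolved-rate control step: form a task-space PD correction,
    then map the commanded end-effector velocity to joint velocities
    through the (pseudo-)inverse of the Jacobian J."""
    e = x_des - x_cur            # task-space position error
    edot = xdot_des - xdot_cur   # task-space velocity error
    v_cmd = xdot_des + Kp @ e + Kd @ edot  # commanded end-effector velocity
    return np.linalg.pinv(J) @ v_cmd       # joint-velocity command

# Toy example: identity Jacobian, unit position error along x.
J = np.eye(3)
Kp, Kd = 2.0 * np.eye(3), 0.5 * np.eye(3)
qdot = pd_jacobian_step(J, np.array([1.0, 0.0, 0.0]), np.zeros(3),
                        np.zeros(3), np.zeros(3), Kp, Kd)
print(qdot)
```

With an identity Jacobian the joint command is simply the PD output; on a real Delta robot, `J` would come from the parallel-kinematics model and the pseudo-inverse guards against near-singular poses.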
Efficient memory management is essential for the stability and long-term performance of mobile robots in Simultaneous Localization and Mapping (SLAM). However, existing methods often struggle to control redundancy in keyframes and map points, leading to reduced efficiency, increased latency, and potential system failure due to resource constraints. Achieving high accuracy in both mapping and trajectory estimation while maintaining a compact state representation remains a key challenge for scalable and efficient SLAM systems. To address this issue, this paper proposes an efficient long-term visual SLAM method based on sparse prior embedding and nonlinear score-guided sparsification for memory-constrained environments. The approach embeds keyframe information into sparse prior factors, avoiding global coupling while preserving system sparsity and consistency. Additionally, a nonlinear scoring function combining parallax and descriptor uniqueness is introduced to guide map point sparsification within the sliding window. This strategy enables efficient state graph management, achieving compact global map representations and effective observation constraints. The proposed method has been implemented in a complete visual SLAM system and evaluated through long-term real-world mapping experiments on an embedded robotic platform. Experimental results demonstrate that the approach significantly reduces memory consumption while maintaining trajectory and mapping accuracy. Furthermore, the method ensures real-time execution and deployment potential, indicating its suitability for large-scale SLAM tasks in resource-constrained and long-duration operational scenarios.
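The abstract above describes a nonlinear scoring function combining parallax and descriptor uniqueness to guide map-point sparsification. The paper's exact function is not given here, so the sketch below is only one plausible form: a saturating parallax reward (better triangulation) multiplied by a Lowe-style descriptor-distance ratio (distinctive points match reliably); the function names and constants are assumptions.

```python
import math

def map_point_score(parallax_deg, desc_dist_best, desc_dist_second,
                    parallax_sat=5.0):
    """Hypothetical nonlinear sparsification score: grows with parallax
    angle (saturating around `parallax_sat` degrees) and with descriptor
    uniqueness, i.e. how much closer the best match is than the runner-up."""
    parallax_term = 1.0 - math.exp(-parallax_deg / parallax_sat)
    uniqueness = 1.0 - desc_dist_best / max(desc_dist_second, 1e-9)
    return parallax_term * max(uniqueness, 0.0)

def sparsify(points, keep_ratio=0.5):
    """Keep only the top-scoring fraction of map points in the window."""
    ranked = sorted(points, key=lambda p: p["score"], reverse=True)
    return ranked[:max(1, int(len(ranked) * keep_ratio))]
```

Culling by such a score keeps points that both constrain the pose well geometrically and are unlikely to be mismatched, which is the stated goal of the paper's sliding-window strategy.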
Designing complex products increasingly requires integrative methodologies that address the rising challenges of multi-disciplinary complexity and functional inter-dependencies. This article proposes a conceptual design framework that combines the abstractional design method (ADM) with a novel inter-coupling index (ICX) to model and manage inter-component dependencies within cyber-physical vehicle (CPV) systems. The ADM provides a unified object-based representation of system components through functional and attribute abstraction, facilitating shared understanding across disciplines. The ICX quantitatively captures the degree of inter-dependency among system elements, offering a new metric for evaluating design complexity. A case study of a CPV acceleration module demonstrates how indirect coupling and cascading failure risks can be identified and mitigated in the early design process. The methodology supports the decomposition and synthesis of design architectures while preserving functional intent and reducing system vulnerability. This research contributes a transferable and scalable approach to conceptual system design in multi-disciplinary domains.
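The article's ICX formula is not reproduced in the abstract above, but the idea of quantifying direct and indirect inter-component dependencies can be illustrated with a generic design-structure-matrix metric. Everything below (the matrix convention, the two-step reachability term, the discount weight) is a hypothetical stand-in, not the authors' ICX.

```python
import numpy as np

def inter_coupling_index(D, indirect_weight=0.5):
    """Hypothetical per-component coupling metric from a design structure
    matrix D, where D[i, j] = 1 if component i depends on component j.
    Counts direct links plus discounted two-step (indirect) links, then
    normalizes so the most coupled component scores 1."""
    D = np.asarray(D, dtype=float)
    direct = D + D.T                      # symmetric direct dependencies
    indirect = (D @ D > 0).astype(float)  # two-step reachability
    np.fill_diagonal(indirect, 0.0)
    raw = direct.sum(axis=1) + indirect_weight * indirect.sum(axis=1)
    return raw / raw.max() if raw.max() > 0 else raw

# Chain A -> B -> C: B is directly coupled both ways, A indirectly reaches C.
icx = inter_coupling_index([[0, 1, 0], [0, 0, 1], [0, 0, 0]])
```

The two-step term is what surfaces the indirect coupling and cascading-failure exposure that the case study identifies in early design.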
Concerns around misinformation and disinformation have intensified with the rise of AI tools, with many claiming this is a watershed moment for truth, accuracy and democracy. In response, numerous laws have been enacted in different jurisdictions. Addressing Misinformation and Disinformation introduces this new legal landscape and charts a path forward. The Element identifies avoidance or alleviation of harm as a central legal preoccupation, outlines technical developments associated with AI and other technologies, and highlights social approaches that can support long-term civic resilience. Offering an expansive interdisciplinary analysis that moves beyond narrow debates about definitions, Addressing Misinformation and Disinformation shows how law can work alongside other technical and social mechanisms, as part of a coherent policy response.
For efficient wind farm management and optimized power generation under adverse weather conditions, understanding the causal meteorological drivers is essential. In this paper, we investigate the temporal causal influences of wind speed-related meteorological processes within a wind farm using the Heterogeneous Graphical Granger model (HMML). HMML is applied to synthetically generated wind power production data from Eastern Austria. To assess the plausibility of the identified causal processes, we compare the results with those obtained using the state-of-the-art LiNGAM method. Both methods are applied and evaluated across six scenarios, each defined by a distinct hydrological period: a set of time intervals characterized by extreme (low or high) or moderate wind speeds. Causal reasoning over the results is then used to identify potential causes of extreme wind speeds within the wind farm. The sets of causal parameters obtained using HMML were found to be more realistic than those derived from LiNGAM. Combining the knowledge of causal variables affecting wind speed at the turbine hub, identified by HMML in each scenario, with weather forecasts can offer practical guidance for wind farm operators. Specifically, this knowledge can support more informed planning regarding when wind turbines should or should not be generating energy. For instance, the strong Granger-causal linkage identified between wind speed and temperature can inform curtailment strategies. In scenarios where rising temperatures are predictive of declining wind speeds, operators may preemptively adjust turbine output or schedule maintenance to optimize efficiency and reduce wear. Moreover, such predictive insights can feed into energy market models, where anticipated curtailment due to meteorological dependencies affects both generation forecasts and pricing strategies.
By integrating these causal relationships into operational planning, the proposed tool offers a pathway toward more adaptive and economically efficient wind energy management.
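The Granger-causal reasoning in the abstract above can be illustrated with a minimal bivariate check. This is neither HMML nor LiNGAM, just the classic lagged-regression idea: if adding lags of x (say, temperature) shrinks the residual variance of an autoregression on y (wind speed), x is a Granger-causal candidate. The lag order, seed, and simulated coupling are illustrative assumptions.

```python
import numpy as np

def granger_gain(y, x, lags=2):
    """Fit y_t on its own lags, then on its own lags plus lags of x,
    and return the relative drop in residual sum of squares.
    A larger gain suggests x Granger-causes y."""
    T = len(y)
    Y = y[lags:]
    own = np.column_stack([y[lags - k:T - k] for k in range(1, lags + 1)])
    both = np.column_stack([own] + [x[lags - k:T - k][:, None]
                                    for k in range(1, lags + 1)])
    def rss(X):
        X = np.column_stack([np.ones(len(Y)), X])  # add intercept
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        r = Y - X @ beta
        return r @ r
    r0, r1 = rss(own), rss(both)
    return (r0 - r1) / r0

# Simulated data where x drives y with a one-step lag.
rng = np.random.default_rng(0)
x = rng.standard_normal(400)
y = np.zeros(400)
for t in range(1, 400):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.standard_normal()
gain = granger_gain(y, x)
```

On the simulated series the gain is large because y is mostly driven by lagged x; on real hub-height wind and temperature series, such gains would feed the scenario-wise causal reasoning described above.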
Students will develop a practical understanding of data science with this hands-on textbook for introductory courses. This new edition is fully revised and updated, with numerous exercises and examples in the popular data science tool R, a new chapter on using R for statistical analysis, and a new chapter that demonstrates how to use R within a range of cloud platforms. The many practice examples, drawn from real-life applications, range from small to big data and come to life in a new end-to-end project in Chapter 11. New 'Data Science in Practice' boxes highlight how concepts introduced work within an industry context and many chapters include new sections on AI and Generative AI. A suite of online material for instructors provides a strong supplement to the book, including lecture slides, solutions, additional assessment material and curriculum suggestions. Datasets and code are available for students online. This entry-level textbook is ideal for readers from a range of disciplines wishing to build a practical, working knowledge of data science.
Students will develop a practical understanding of data science with this hands-on textbook for introductory courses. This new edition is fully revised and updated, with numerous exercises and examples in the popular data science tool Python, a new chapter on using Python for statistical analysis, and a new chapter that demonstrates how to use Python within a range of cloud platforms. The many practice examples, drawn from real-life applications, range from small to big data and come to life in a new end-to-end project in Chapter 11. New 'Data Science in Practice' boxes highlight how concepts introduced work within an industry context and many chapters include new sections on AI and Generative AI. A suite of online material for instructors provides a strong supplement to the book, including lecture slides, solutions, additional assessment material and curriculum suggestions. Datasets and code are available for students online. This entry-level textbook is ideal for readers from a range of disciplines wishing to build a practical, working knowledge of data science.
To solve the problems of precise operation and real-time interaction during the spraying process of industrial robots, a new spraying method based on digital twin technology is proposed. In view of the limitations of traditional spraying processes in complex geometric shape processing, spraying uniformity control, and operational flexibility, this study built a highly simulated virtual environment based on digital twin and human–machine collaboration technology, allowing operators to guide the robot in real time for precise spraying operations. The use of multisensor fusion technology achieves a high degree of consistency between the physical and virtual environments, ensuring that the system can maintain high-precision spraying on complex workpiece surfaces. Experiments were designed with spraying tasks for different geometric shapes to evaluate the system’s interactive spraying method in terms of real-time feedback guidance and path planning. The results show that the proposed method significantly improves the accuracy and efficiency of the spraying process, especially showing obvious advantages when processing complex geometric workpieces, and provides a new technical approach for future high-precision manufacturing.
Following Entman’s observation that policy frames define social problems, diagnose causes and suggest remedies, we examined the strategies that 12 U.S. governors (from states matched according to population size and density, demographic composition, per capita incomes, geographic proximity, and COVID-19 incidence) used to frame COVID-19 policy agendas. After scraping the governors’ statements about COVID-19 from press releases issued from January 2020 to May 2023 (N = 14,629), we leveraged ChatGPT (GPT) to identify and assess the intensity of public health, economic stability, and civic vitality frames. Subsequent analysis explored differences in the framing strategies according to the governors’ political party and gender. In the process, this study underscores the importance of AI prompt engineering to realize GPT’s transformative potential to facilitate communication research by efficiently identifying and assessing the content of policy frames.
Neuromorphic vision-based robotic tactile sensors fuse touch and vision, enabling manipulators to efficiently grip and identify objects. Precise robotic manipulation requires early detection of slips on the grasped object, which is crucial for maintaining grip stability and safety. Modern closed-loop feedback technologies use measurements from neuromorphic vision-based tactile sensors to control and prevent object slippage. Unfortunately, most of these sensors measure and report data-based rather than model-based information, resulting in less efficient control capabilities. This work proposes physical and mathematical modeling of an in-house-developed neuromorphic vision-based robotic tactile sensor that utilizes a protruded marker design to demonstrate the model-based approach. This sensor is mounted on the UR10 robotic manipulator, enabling manipulation tasks such as approaching, pressing, and slipping. The sensor-derived mathematical model revealed first-order system (FOS) behavior for the three manipulation-related actions under study. Experimental robotic grasping trials are conducted to verify and validate the sensor’s derived mathematical FOS model. Two model-based data analysis approaches, temporal and spatial–temporal, are adopted to classify the manipulator–sensor actions. A long short-term memory (LSTM) temporal classifier is engineered to exploit the sensor’s derived model. Also, the LSTM spatial–temporal classifier is designed using an event-weighted centroid of the region-of-interest features. Both LSTM methods successfully identified the robotic actions performed with an accuracy of more than 99%. Additionally, quantitative slip rate estimation is carried out based on centroid estimation, and qualitative assessment of pressing force is performed using a fuzzy logic classifier.
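The event-weighted centroid feature mentioned in the abstract above can be sketched as follows. The event layout (x, y, weight) and the rectangular region of interest are illustrative assumptions about the sensor output, not the authors' exact pipeline; tracking this centroid over time is one way the slip rate could then be estimated from its displacement.

```python
import numpy as np

def event_weighted_centroid(events, roi):
    """Event-weighted centroid of a region of interest.
    `events` is an (N, 3) array of (x, y, weight) neuromorphic events;
    `roi` is (x_min, x_max, y_min, y_max). Returns None when the ROI
    captures no events (e.g. the marker has left the window)."""
    x, y, w = events[:, 0], events[:, 1], events[:, 2]
    m = (x >= roi[0]) & (x <= roi[1]) & (y >= roi[2]) & (y <= roi[3])
    if not m.any() or w[m].sum() == 0:
        return None
    return (float((x[m] * w[m]).sum() / w[m].sum()),
            float((y[m] * w[m]).sum() / w[m].sum()))

# Two events inside a 5x5 ROI; a heavy outlier outside is ignored.
events = np.array([[0.0, 0.0, 1.0], [2.0, 0.0, 1.0], [10.0, 10.0, 5.0]])
c = event_weighted_centroid(events, (0, 5, 0, 5))
```

Feeding such centroids per time window into an LSTM is consistent with the spatial–temporal classifier described above, though the actual feature vector there may be richer.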
Lorenz dominance is a classical criterion for comparing income distributions with respect to inequality and social welfare. However, its binary nature, in which one distribution either dominates another or does not, often leads to inconclusive results when empirical Lorenz curves intersect. To overcome this limitation, we introduce the Lorenz dominance index (LDI), a continuous measure that quantifies the extent to which one Lorenz curve lies above another. The LDI provides an interpretable assessment based on the population, allowing for the evaluation of partial or near dominance and improving its usefulness in empirical settings. We derive the asymptotic distribution of the LDI and propose a nonparametric bootstrap procedure to construct confidence intervals and perform inference. Monte Carlo simulations confirm the estimator’s strong performance in finite samples and its nominal coverage. An application to household income data from China highlights the practical value of the LDI in distributional analysis.
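The abstract above defines the LDI only informally (the extent to which one Lorenz curve lies above another), so the sketch below is one plausible population-based reading, not the paper's exact estimator: the fraction of population shares p at which one empirical Lorenz curve weakly dominates the other. The grid size and interpolation scheme are assumptions.

```python
import numpy as np

def lorenz(incomes, grid):
    """Empirical Lorenz curve L(p): cumulative income share held by the
    poorest fraction p of the population, linearly interpolated."""
    x = np.sort(np.asarray(incomes, dtype=float))
    cum = np.concatenate([[0.0], np.cumsum(x)]) / x.sum()
    p = np.linspace(0.0, 1.0, len(x) + 1)
    return np.interp(grid, p, cum)

def lorenz_dominance_index(a, b, n_grid=1000):
    """Illustrative continuous dominance measure: the share of interior
    population fractions p at which the Lorenz curve of `a` lies weakly
    above that of `b`. 1 means full dominance of `a`; values near 0.5
    flag crossing curves, where classical Lorenz dominance is silent."""
    grid = np.linspace(0.0, 1.0, n_grid + 2)[1:-1]  # open interval (0, 1)
    return float(np.mean(lorenz(a, grid) >= lorenz(b, grid)))

equal = [1, 1, 1, 1]        # perfectly equal distribution
unequal = [1, 2, 3, 10]     # concentrated at the top
```

Under this reading, a perfectly equal distribution yields an index of 1 against any other, and intersecting curves yield an interior value that quantifies partial or near dominance, matching the motivation in the abstract.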
Emphasizing how and why machine learning algorithms work, this introductory textbook bridges the gap between the theoretical foundations of machine learning and its practical algorithmic and code-level implementation. Over 85 thorough worked examples, in both Matlab and Python, demonstrate how algorithms are implemented and applied whilst illustrating the end result. Over 75 end-of-chapter problems empower students to develop their own code to implement these algorithms, equipping them with hands-on experience. Matlab coding examples demonstrate how a mathematical idea is converted from equations to code, and provide a jumping off point for students, supported by in-depth coverage of essential mathematics including multivariable calculus, linear algebra, probability and statistics, numerical methods, and optimization. Accompanied online by instructor lecture slides, downloadable Python code and additional appendices, this is an excellent introduction to machine learning for senior undergraduate and graduate students in Engineering and Computer Science.
Onlife criminology is the study of crime and social harm produced by the blurring lines between digital engagement and our everyday lives. This thought-provoking book analyses the threats of surveillance, indoctrination and abuse of personal data that can potentially affect us all.
For far too long, tech titans peddled promises of disruptive innovation - fabricating benefits and minimizing harms. The promise of quick and easy fixes overpowered a growing chorus of critical voices, driving a sea of private and public investments into increasingly dangerous, misguided, and doomed forms of disruption, with the public paying the price. But what's the alternative? Upgrades - evidence-based, incremental change. Instead of continuing to invest in untested, high-risk innovations, constantly chasing outsized returns, upgraders seek a more proven path to proportional progress. This book dives deep into some of the most disastrous innovations of recent years - the metaverse, cryptocurrency, home surveillance, and AI, to name a few - while highlighting some of the unsung upgraders pushing real progress each day. Timely and corrective, Move Slow and Upgrade pushes us past the baseless promises of innovation, towards realistic hope.
Chapter 6 looks at the failures of educational innovation during the Covid-19 crisis. As schools scrambled to adapt to remote learning, remote proctoring technologies rapidly expanded. Schools deployed surveillance systems that violated student privacy and disproportionately harmed vulnerable students. Despite claims of maintaining academic integrity, remote proctoring created a stressful, punitive environment that prioritized monitoring over genuine educational support while failing to do nearly enough to address the inequalities at the heart of accessing and using digital resources. Sadly, the rush to innovate missed crucial opportunities to upgrade core educational infrastructure and truly support students during a time of unprecedented challenge. As if this wasn’t bad enough, some schools continue to use remote proctoring software. A pandemic problem has thus become the new normal.
Chapter 2 shows how when the emperor of innovation isn’t wearing any clothes, upgraders can still see the naked truth of the situation. Zuckerberg promised a metaverse, a new digital reality, that would transform human connection, interaction, and commerce. But this handwavy conception of the future lacked any clear vision, let alone consumer demand. Upgraders were able to spot the folly long before it became one of the largest corporate boondoggles in modern commerce, a shorthand for corporate dysfunction. In contrast to the unbridled enthusiasm of innovators, upgraders would have started with the question of why the public would ever want this product in the first place. Instead, Meta tried to sway public opinion with overly rosy futuristic promises, trying to move the market to meet their innovation, rather than solving problems that actually mattered to the public. Like other innovations, the metaverse shows how tech companies ignore the fundamentals of human behavior and social change, dooming their grand visions.
Polychrony is a virtual or artificial tempor[e]ality that is constructed by the fine augmentation or tempering of a natural set of latencies that articulate a complex networked acoustic. The art is to optimise the alignment of these disjunct temporalities as they merge in a new chronotopic fusion. This fooling with Mother Nature, however, does not come without consequences: due to the significant latency effects intrinsic to a planetary-scale network, a phenomenon called topo-rhythmia emerges. Toporhythms are derived simply as a feature of communication over distance; they are the multiple versions of a rhythm that occur at each node of a networked piece due to the temporal offsets caused by delay. To work with this feature more intentionally, rather than as an accident of relativity, we must tune or temper the network latency. Tempering is a general tactic for ontological negotiation, bringing observers and complex systems into some kind of coherency. The purpose of this article is to explore the tempering of musical time-space on networks and how that underlies the notational practices (and the alien compositional assumptions) built upon this novel orientation.
Chapter 5, “The Failed Promise of Covid Innovation,” presents the pandemic as a crucial case study of how innovative thinking let us down at a time of great vulnerability. Simply put, the early days of massive fatalities made COVID-19 a health crisis. But those days also can be seen as a powerful lens for understanding high-tech failure. From contact tracing apps to thermal imaging cameras and digital vaccine passports, there was a fever pitch of government and corporate enthusiasm for innovative solutionism that was predestined to be unreliable and, thus, in context, dangerous. While we acknowledge remarkable breakthroughs like the rapid development of mRNA vaccines, we also make the case that additional effective responses could have come from upgrading existing systems rather than trying to do things entirely new.