Traditional path planning algorithms often struggle in complex dynamic environments: they converge to local optima, produce excessively long paths, and handle dynamic obstacle avoidance poorly. This article addresses these challenges of mobile robot path planning by proposing an integrated algorithm, the enhanced sparrow search algorithm combined with the dynamic window approach (ESSA-DWA). The algorithm first utilizes ESSA for global path planning, followed by local path planning facilitated by the DWA. Specifically, ESSA incorporates Tent chaotic initialization to enhance population diversity, effectively mitigating the risk of premature convergence to local optima. Moreover, dynamic adjustment of the inertia weight during the search enables an adaptive balance between exploration and exploitation, and a local search strategy further refines individual updates, improving local search performance. To enhance path smoothness, the Floyd algorithm is employed for path optimization, ensuring a more continuous trajectory. Finally, the combination of ESSA and DWA uses key nodes from the global path generated by ESSA as reference points for the local planning process of DWA, so that the local path closely follows the global path while enabling real-time detection and avoidance of dynamic obstacles. The effectiveness of the algorithm has been validated through both simulations and practical experiments, offering an efficient and viable solution to the path planning problem.
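As a rough illustration of two of the ingredients mentioned above, the following Python sketch shows a Tent-map chaotic initialization of the sparrow population and a decaying inertia weight; the map parameter, bounds, and weight schedule are illustrative assumptions rather than the authors' exact formulation.

```python
import numpy as np

def tent_map_init(pop_size, dim, lower, upper, mu=0.7, seed=None):
    """Initialize a population with a Tent chaotic map instead of uniform sampling.

    The Tent map x_{k+1} = x_k/mu if x_k < mu else (1 - x_k)/(1 - mu) spreads
    points more evenly over [0, 1], which is the usual motivation for chaotic
    initialization.
    """
    rng = np.random.default_rng(seed)
    x = rng.random(dim)                       # one chaotic seed per dimension
    pop = np.empty((pop_size, dim))
    for i in range(pop_size):
        x = np.where(x < mu, x / mu, (1.0 - x) / (1.0 - mu))
        pop[i] = lower + x * (upper - lower)  # map chaotic values to the search bounds
    return pop

def inertia_weight(iteration, max_iter, w_max=0.9, w_min=0.4):
    """Illustrative inertia weight decaying from w_max to w_min, shifting the
    search from exploration to exploitation over the iterations."""
    return w_max - (w_max - w_min) * iteration / max_iter

# Example: 30 sparrows in a 2-D workspace of size 20 x 20 (values are placeholders).
population = tent_map_init(pop_size=30, dim=2, lower=0.0, upper=20.0, seed=42)
print(population.shape, inertia_weight(50, 200))
```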
Spherical robots face significant challenges in motion control on non-horizontal terrains, such as slopes, due to their unique spherical structure. This paper systematically investigates the motion stability of spherical robots on inclined surfaces through modeling, control algorithm design, and experimental validation. The contributions are threefold. (1) Precise equilibrium modeling: using the virtual displacement method, the precise equilibrium equation for spherical robots on slopes is derived, addressing the insufficient accuracy of existing descriptions of the actual center of gravity. (2) Control algorithm design: for known slope conditions, a Backstepping Control (BSC) algorithm is designed and demonstrates excellent tracking performance; for unknown slope conditions, an Adaptive Backstepping Control (ABSC) algorithm is proposed, which significantly reduces tracking errors and enhances system robustness through parameter adaptation. (3) Simulation and physical validation: simulations confirm the effectiveness of the algorithms, with BSC achieving high-precision control under known slopes and ABSC exhibiting strong adaptability under unknown slopes; physical experiments validate the stability of the algorithms in a $5^\circ$ slope environment, demonstrating reliable performance across different control angles.
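To give a flavour of the backstepping design pattern underlying BSC, the following Python sketch applies it to a toy double-integrator tracking problem; it is not the paper's slope-specific spherical-robot model, and the gains and reference trajectory are arbitrary choices.

```python
import numpy as np

# Toy model theta_ddot = u, tracking a reference theta_d(t).
# Backstepping: e1 = theta - theta_d, virtual control alpha = theta_d_dot - k1*e1,
# e2 = theta_dot - alpha, and u = alpha_dot - e1 - k2*e2 yields
# V_dot = -k1*e1^2 - k2*e2^2 for the Lyapunov function V = (e1^2 + e2^2)/2.

k1, k2 = 2.0, 3.0
dt, T = 0.001, 10.0
theta, theta_dot = 0.3, 0.0                         # initial state

for step in range(int(T / dt)):
    t = step * dt
    theta_d = 0.2 * np.sin(0.5 * t)                 # reference trajectory
    theta_d_dot = 0.1 * np.cos(0.5 * t)
    theta_d_ddot = -0.05 * np.sin(0.5 * t)

    e1 = theta - theta_d
    alpha = theta_d_dot - k1 * e1                   # virtual control for the first subsystem
    e2 = theta_dot - alpha
    e1_dot = e2 - k1 * e1
    alpha_dot = theta_d_ddot - k1 * e1_dot
    u = alpha_dot - e1 - k2 * e2                    # stabilizing control law

    theta_dot += u * dt                             # forward-Euler integration
    theta += theta_dot * dt

print(f"final tracking error: {theta - 0.2 * np.sin(0.5 * T):.2e}")
```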
Underwater target detection is hampered by image blurring caused by suspended particles in water bodies and by light scattering effects. To tackle this issue, this paper proposes a reparameterized feature enhancement and fusion network for underwater blurred object recognition (REFNet). First, the paper proposes the reparameterized feature enhancement and gathering (REG) module, designed to strengthen the backbone network. This module integrates the concepts of reparameterization and global response normalization to enhance the network’s feature extraction capabilities, addressing the difficulty of extracting features from blurry images. Next, the paper proposes the cross-channel information fusion (CIF) module to enhance the neck network. This module combines detailed information from shallow features with semantic information from deeper layers, mitigating the loss of image detail caused by blurring. Additionally, replacing the CIoU loss function with the Shape-IoU loss function improves target localization accuracy, addressing the difficulty of accurately locating bounding boxes in blurry images. Experimental results indicate that REFNet outperforms state-of-the-art methods, as evidenced by higher mAP scores on the underwater robot professional competition (URPC) and detection of underwater objects (DUO) datasets. REFNet surpasses YOLOv8 by approximately 1.5% in $mAP_{50:95}$ on the URPC dataset and by about 1.3% on the DUO dataset, without significantly increasing the model’s parameters or computational load. This approach enhances the precision of target detection in challenging underwater environments.
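The global response normalization mentioned for the REG module is, in the form popularized by ConvNeXt V2, a simple per-channel operation; the PyTorch sketch below shows that operation in isolation, with the surrounding reparameterized structure of REG left out since the abstract does not specify it.

```python
import torch
import torch.nn as nn

class GRN(nn.Module):
    """Global Response Normalization, sketched for channels-first maps (N, C, H, W).

    Illustrative only: the exact layout of the REG module around this
    normalization is not reproduced here.
    """
    def __init__(self, channels, eps=1e-6):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1, channels, 1, 1))
        self.beta = nn.Parameter(torch.zeros(1, channels, 1, 1))
        self.eps = eps

    def forward(self, x):
        # Per-channel global aggregation via an L2 norm over the spatial dimensions.
        gx = torch.sqrt((x ** 2).sum(dim=(2, 3), keepdim=True))
        # Divisive normalization across channels.
        nx = gx / (gx.mean(dim=1, keepdim=True) + self.eps)
        # Calibrate the response and keep a residual path.
        return self.gamma * (x * nx) + self.beta + x

feat = torch.randn(2, 64, 40, 40)
print(GRN(64)(feat).shape)  # torch.Size([2, 64, 40, 40])
```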
The documentation of sound art installations has received scant research attention. This article investigates the sensory experience of spatial audio recordings of two sound art installations: Écosystème(s) by Estelle Schorpp and Générateur Stochastique by Jean-Pierre Gauthier. Interactive listening sessions were conducted with participants from different fields of expertise: sound artists, sound engineers, new media and sound art curators, and new media and sound art conservators. Listening sessions were followed by semi-structured interviews questioning the selection of significant positions in time and space in the recordings. The analysis revealed a broad range of listening strategies that expand the literature on documentation frameworks. This research shows the potential for methodologically including the sensory experience in the documentation of sound art installations and discusses the use of spatial recording as a tool for the specification of documentation in a multi-expertise context.
The selection of random sampling points is crucial to the quality of paths generated by the probabilistic roadmap (PRM) algorithm. Increasing the number of sampling points can enhance path quality, but it may also extend convergence time and reduce computational efficiency. Therefore, an improved probabilistic roadmap algorithm (TL-PRM) based on topological discrimination and lazy collision checking is proposed. The TL-PRM algorithm first generates a circular grid area between the start and goal points and constructs topological nodes within it. Elliptical sampling areas are then created between each pair of adjacent topological nodes, random sampling points are generated within these areas, and the points are interconnected using a layer connection strategy. An initial path is generated using a delayed collision strategy and is adjusted by modifying the nodes on its convex outer edges to avoid obstacles. Finally, a reconnection strategy is employed to optimize the path, reducing the number of waypoints. In dynamic environments, the TL-PRM algorithm employs pose adjustment strategies for semi-static and dynamic obstacles, using either the same or opposite pose adjustments to avoid them. Experimental results indicate that the TL-PRM algorithm reduces the average number of generated sampling points by 70.9% and the average computation time by 62.1% compared with the PRM* and PRM-Astar algorithms. In winding and narrow-passage maps, it significantly decreases the number of sampling points and shortens convergence time, and in dynamic environments it can adjust its pose orientation in real time to reach the goal point safely. The TL-PRM algorithm thus provides an effective solution for reducing the number of sampling points generated by the PRM algorithm.
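As a hedged illustration of sampling inside an elliptical region between two adjacent topological nodes, the following Python sketch draws uniform points in such an ellipse; the `stretch` factor controlling the corridor width is an assumption, not a value from the paper.

```python
import numpy as np

def sample_in_ellipse(p1, p2, n_samples, stretch=1.25, rng=None):
    """Draw random samples inside an ellipse whose foci are two adjacent nodes.

    The major axis is `stretch` times the node distance, so samples stay in a
    corridor around the segment p1-p2 (illustrative parameterization).
    """
    rng = np.random.default_rng() if rng is None else rng
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    c = np.linalg.norm(p2 - p1) / 2.0            # half focal distance
    a = stretch * c                              # semi-major axis
    b = np.sqrt(a**2 - c**2)                     # semi-minor axis
    center = (p1 + p2) / 2.0
    theta = np.arctan2(p2[1] - p1[1], p2[0] - p1[0])
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])

    # Uniform samples in the unit disk, scaled to the ellipse, then rotated/translated.
    r = np.sqrt(rng.random(n_samples))
    ang = 2.0 * np.pi * rng.random(n_samples)
    unit = np.column_stack((r * np.cos(ang), r * np.sin(ang)))
    pts = unit * np.array([a, b])
    return pts @ rot.T + center

samples = sample_in_ellipse((1.0, 1.0), (8.0, 5.0), n_samples=20)
print(samples.shape)  # (20, 2)
```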
Robot manipulators are commonly employed for disinfection tasks in space station experiment cabinets. The challenge lies in devising a motion trajectory for the robot manipulator that satisfies both performance criteria and constraints within the confined space of an experimental cabinet. To address this issue, this paper proposes a trajectory planning method in joint space. The method constructs the optimal trajectory by transforming the original problem into a constrained multi-objective optimization problem, which is then solved and combined with seventh-degree B-spline curves. The optimization uses an indicator-based adaptive differential evolution algorithm, enhanced with improved Tent chaotic mapping and opposition-based learning for population initialization. The method employs the Fréchet distance to design a trajectory selection strategy over the Pareto solutions, ensuring that the planned trajectory complies with Cartesian-space requirements and that the robot manipulator end-effector closely approximates the desired path in Cartesian space. The findings indicate that the proposed method can effectively design the robot manipulator trajectory, considering both joint motion performance and end-effector motion constraints, ensuring that the robot manipulator operates efficiently and safely within the experimental cabinet.
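The discrete Fréchet distance used to compare a candidate trajectory with the desired Cartesian path can be computed with the standard dynamic-programming recursion sketched below in Python; the example curves are illustrative, and the full Pareto-based selection strategy is not reproduced.

```python
import numpy as np

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between two polygonal curves P and Q.

    Shown only to illustrate how a candidate trajectory could be scored
    against a desired Cartesian path.
    """
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    n, m = len(P), len(Q)
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)  # pairwise distances
    ca = np.full((n, m), -1.0)
    ca[0, 0] = d[0, 0]
    for i in range(1, n):
        ca[i, 0] = max(ca[i - 1, 0], d[i, 0])
    for j in range(1, m):
        ca[0, j] = max(ca[0, j - 1], d[0, j])
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]), d[i, j])
    return ca[-1, -1]

desired = [(0, 0), (1, 0.2), (2, 0.1), (3, 0.0)]
candidate = [(0, 0.1), (1.1, 0.3), (2.0, 0.2), (3.0, 0.1)]
print(round(discrete_frechet(desired, candidate), 3))
```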
We present a short and simple proof of the celebrated hypergraph container theorem of Balogh–Morris–Samotij and Saxton–Thomason. On a high level, our argument utilises the idea of iteratively taking vertices of largest degree from an independent set and constructing a hypergraph of lower uniformity which preserves independent sets and inherits the edge distribution. The original algorithms for constructing containers also remove, in each step, high-degree vertices that are not in the independent set. Our modified algorithm postpones this until the end, which, surprisingly, results in a significantly simplified analysis.
In this work, we address the problem of reliably checking collisions between robot manipulators and the surrounding environment in a short time, for tasks such as replanning and object grasping in clutter. Geometric approaches are usually applied in this context; however, they can prove unsuitable in highly time-constrained applications. The purpose of this paper is to present a learning-based method able to outperform geometric approaches in clutter. The proposed approach uses a neural network (NN) to detect collisions online by performing a classification task on an input consisting of the depth image or point cloud containing the robot gripper projected into the application scene. Specifically, several state-of-the-art NN architectures are considered, along with some customizations to tackle the problem at hand. These approaches are compared to identify the model that achieves the highest accuracy while containing the computational burden. The analysis shows the feasibility of a robot collision checker based on a deep learning approach: such an approach achieves a low collision detection time, on the order of milliseconds on the selected hardware, with acceptable accuracy. Furthermore, the computational burden is compared with that of state-of-the-art geometric techniques. The entire work is based on an industrial case study involving a KUKA Agilus industrial robot manipulator at the Technology & Innovation Center of KUKA Deutschland GmbH, Germany. Further validation is performed with the Amazon Robotic Manipulation Benchmark (ARMBench) dataset to corroborate the reported findings.
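A minimal sketch of the kind of classifier the paper benchmarks is given below: a small PyTorch CNN that maps a single-channel depth crop (assumed to already contain the projected gripper) to collision/no-collision logits. The architecture and input size are placeholders, not the models actually evaluated.

```python
import torch
import torch.nn as nn

class DepthCollisionNet(nn.Module):
    """Minimal CNN classifying a depth crop as 'collision' vs. 'collision-free'.

    A sketch of the general idea only: the architectures benchmarked in the
    paper and the gripper-projection preprocessing are not reproduced here.
    """
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 2)   # logits: [collision-free, collision]

    def forward(self, depth):
        x = self.features(depth).flatten(1)
        return self.classifier(x)

# One 128x128 depth crop with the gripper rendered into the scene (assumed input format).
depth_crop = torch.randn(1, 1, 128, 128)
logits = DepthCollisionNet()(depth_crop)
print(logits.softmax(dim=1))   # class probabilities
```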
The increasing number of applications for spatial audio technologies has led to growing interest in the subject from academic institutions and to a wider diffusion of techniques and practices into non-institutional contexts, especially among independent sound artists. However, the lack of a methodology for learning these technologies motivated our team to develop the Open Ambisonics Toolkit (OAT). Our goal is to promote the diffusion of spatial audio technologies by combining three pedagogical components: a DIY approach to hardware, a selection of open-source software, and a step-by-step introduction to Ambisonics theory through practical applications. The present article focuses on the development of a flexible toolkit and is based on our own practical experience as sound artists and teachers. We describe the process of designing hardware and selecting software components, and report results from objective measurements and listening tests conducted to evaluate different loudspeakers and spatial configurations. To conclude, we discuss future perspectives on the development of tutorials for learning spatial audio with OAT, which we are continually testing in workshop settings with students and independent sound artists.
Screw theory is an influential mathematical tool, contributing significantly to mechanical engineering and, in particular, to mechanism science and robotics. The instantaneous screw and the finite displacement screw have been used to analyse degrees of freedom and to perform kinematic analysis of linkage mechanisms containing only lower pairs. However, they are not suitable for higher pair mechanisms, which can achieve complex motions with more concise structures through the careful design of contact contours and which offer advantages in particular applications. Therefore, to improve the adaptability of screw theory, this paper analyses higher kinematic pair (HKP) mechanisms and proposes a method that extends instantaneous screw and finite displacement screw theory. The method can not only analyse the instantaneous degrees of freedom of HKP mechanisms but also determine the relationships between their motion variables. Furthermore, the method is applied to calculate the degrees of freedom and the relationships between the motion angles in both planar and spatial cam mechanisms, demonstrating its efficiency and advantages.
This paper focuses on the comparison of networks on the basis of statistical inference. For that purpose, we rely on smooth graphon models as a nonparametric modeling strategy that is able to capture complex structural patterns. The graphon itself can be viewed more broadly as a local density or intensity function on networks, making the model a natural choice for comparison purposes. More precisely, to gain information about the (dis-)similarity between networks, we extend graphon estimation towards modeling multiple networks simultaneously. In particular, fitting a single model implies aligning the different networks with respect to the same graphon estimate, which we achieve with an EM-type algorithm. This network alignment then allows a comparison of edge densities at a local level, based on which we construct a chi-squared-type test of the equivalence of network structures. Simulation studies and real-world examples support the applicability of our network comparison strategy.
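As a toy illustration of a chi-squared-type comparison of local edge densities, the Python sketch below sums squared two-proportion z-statistics over a common block partition of two aligned networks; the EM-based graphon alignment that the paper performs beforehand is assumed to be given and is not reproduced.

```python
import numpy as np
from scipy.stats import chi2

def blockwise_chi2(A1, A2, labels):
    """Toy chi-squared-type comparison of two networks' local edge densities.

    A1, A2: adjacency matrices on node sets already aligned into common blocks
    (`labels`); this stands in for the paper's graphon-based alignment.
    """
    blocks = np.unique(labels)
    stat, cells = 0.0, 0
    for a in blocks:
        for b in blocks:
            if b < a:
                continue
            ia, ib = np.where(labels == a)[0], np.where(labels == b)[0]
            if a == b:
                n_pairs = len(ia) * (len(ia) - 1) / 2
                e1 = A1[np.ix_(ia, ia)].sum() / 2
                e2 = A2[np.ix_(ia, ia)].sum() / 2
            else:
                n_pairs = len(ia) * len(ib)
                e1 = A1[np.ix_(ia, ib)].sum()
                e2 = A2[np.ix_(ia, ib)].sum()
            p1, p2 = e1 / n_pairs, e2 / n_pairs
            pooled = (e1 + e2) / (2 * n_pairs)
            if pooled in (0.0, 1.0):
                continue
            var = pooled * (1 - pooled) * (2 / n_pairs)
            stat += (p1 - p2) ** 2 / var       # squared two-proportion z-statistic
            cells += 1
    return stat, 1 - chi2.cdf(stat, df=cells)

rng = np.random.default_rng(0)
A = (rng.random((60, 60)) < 0.2).astype(int); A = np.triu(A, 1); A += A.T
B = (rng.random((60, 60)) < 0.2).astype(int); B = np.triu(B, 1); B += B.T
labels = np.repeat([0, 1, 2], 20)
print(blockwise_chi2(A, B, labels))
```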
Structural health monitoring (SHM) is increasingly applied in civil engineering. One of its primary purposes is detecting and assessing changes in structural condition to increase safety and reduce potential maintenance downtime. Recent advancements, especially in sensor technology, facilitate data measurement, collection, and process automation, leading to large data streams. We propose a function-on-function regression framework for (nonlinear) modeling of the sensor data and for adjusting for covariate-induced variation. Our approach is particularly suited to long-term monitoring, when several months or years of training data are available. It combines highly flexible yet interpretable semi-parametric modeling with functional principal component analysis and uses the corresponding out-of-sample Phase-II scores for monitoring. The proposed method can also be described as a combination of an “input–output” and an “output-only” method.
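A minimal sketch of the monitoring step, under the simplifying assumption that covariate adjustment has already been done, is shown below: a discretized functional PCA is fitted on Phase-I curves and a Hotelling-type T² score is computed for an out-of-sample Phase-II curve. All names and the toy data are illustrative.

```python
import numpy as np

def fit_fpca(train_curves, n_components=3):
    """Fit a discretized functional PCA to Phase-I (training) curves.

    Curves are rows sampled on a common grid; covariate adjustment via the
    function-on-function regression is omitted in this sketch.
    """
    mean = train_curves.mean(axis=0)
    centered = train_curves - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]                 # leading principal directions
    scores = centered @ components.T
    return mean, components, scores

def phase2_t2(new_curve, mean, components, train_scores):
    """Hotelling-type T^2 statistic for one out-of-sample (Phase-II) curve."""
    score = (new_curve - mean) @ components.T
    var = train_scores.var(axis=0, ddof=1)         # score variances from Phase I
    return float(np.sum(score**2 / var))

rng = np.random.default_rng(1)
grid = np.linspace(0, 1, 100)
train = np.sin(2 * np.pi * grid) + 0.1 * rng.standard_normal((200, 100))
mean, comps, scores = fit_fpca(train)
new = np.sin(2 * np.pi * grid) + 0.3 * grid + 0.1 * rng.standard_normal(100)  # drifted curve
print(round(phase2_t2(new, mean, comps, scores), 2))
```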
We interrogate efforts to legislate artificial intelligence (AI) through Canada’s Artificial Intelligence and Data Act (AIDA) and argue that it represents a series of missed opportunities that so delayed the Act that it died. We note how much of this bill was explicitly tied to economic development and implicitly tied to a narrow, jurisdictional form of shared prosperity. Instead, we contend that the benefits of AI are not shared but disproportionately favour specific groups, in this case the AI industry. This trend appears typical of many countries’ AI and data regulations, which tend to privilege the few despite promises to favour the many. We discuss the origins of AIDA, drafted by Canada’s federal Department of Innovation, Science and Economic Development (ISED). We then consider four problems: (1) AIDA relied on public trust in a digital and data economy; (2) ISED tried to both regulate and promote AI and data; (3) public consultation on AIDA was insufficient; and (4) workers’ rights in Canada and worldwide were excluded from AIDA. Without strong checks and balances built into regulation like AIDA, innovation will fail to deliver on its claims. We recommend that the Canadian government and, by extension, other governments invest in an AI act that prioritises: (1) accountability mechanisms and tools for the public and private sectors; (2) robust workers’ rights in terms of data handling; and (3) meaningful public participation in all stages of legislation. These policies are essential to countering wealth concentration in the industry, which would otherwise stifle progress and widespread economic growth.
This paper presents an efficient trajectory planning method for a 4-DOF robotic arm designed for pick-and-place manipulation tasks. The method addresses several challenges: traditional optimization approaches struggle with high dimensionality, while data-driven methods require costly data collection. The proposed approach leverages Bézier curves for computationally efficient, smooth trajectory generation, minimizing abrupt changes in motion. When continuous solutions for the end-effector angle are unavailable, joint angles are interpolated using Bézier or Hermite interpolation, and custom metrics evaluate both the deviation of the interpolated trajectory from the original and the overall smoothness of the path. When a continuous solution exists, the trajectory is treated as a Gaussian process, with a prior factor generated from the centerline; this prior is combined with a smoothness factor, and the trajectory is optimized by stochastic gradient descent to remain as smooth as possible within the feasible solution space. The method is evaluated through simulations in Nvidia Isaac Sim; the results highlight its suitability, and future work will explore enhancements in prior trajectory integration and smoothing techniques.
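The Bézier-based trajectory generation can be illustrated with De Casteljau's algorithm, as in the Python sketch below; the control points describing a pick-and-place lift are illustrative, and the paper's deviation metrics and Gaussian-process smoothing are not reproduced.

```python
import numpy as np

def bezier_de_casteljau(control_points, t):
    """Evaluate a Bézier curve at parameter t in [0, 1] via De Casteljau's algorithm."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]   # repeated linear interpolation
    return pts[0]

# A cubic Bézier segment between a pick pose and a place pose, with two
# intermediate control points shaping the lift-and-carry motion (placeholder values).
ctrl = [(0.30, 0.00, 0.05), (0.30, 0.05, 0.25), (0.10, 0.35, 0.25), (0.10, 0.40, 0.05)]
trajectory = np.array([bezier_de_casteljau(ctrl, t) for t in np.linspace(0, 1, 50)])
print(trajectory.shape)  # (50, 3)
```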
Veltman semantics is the basic Kripke-like semantics for interpretability logic. Verbrugge semantics is a generalization of Veltman semantics. An appropriate notion of bisimulation between a Verbrugge model and a Veltman model is developed in this paper. We show that each Verbrugge model can be transformed to a bisimilar Veltman model.
The problem of reconstructing a distribution with bounded support from its moments is practically relevant in many fields, such as chemical engineering, electrical engineering, and image analysis. It is closely related to a classical moment problem, the truncated Hausdorff moment problem (THMP). We call a method that finds or approximates a solution to the THMP a Hausdorff moment transform (HMT). In practice, selecting the right HMT for specific objectives remains a challenge. This study introduces a systematic and comprehensive method for comparing HMTs based on accuracy, computational complexity, and precision requirements. To enable fair comparisons, we present approaches for generating representative moment sequences. The study also enhances existing HMTs by reducing their computational complexity. Our findings show that the approximations differ significantly in convergence, accuracy, and numerical complexity, and that the decay order of the moment sequence strongly affects the accuracy requirement.
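To make the notion of a Hausdorff moment transform concrete, the Python sketch below implements one classical Bernstein-type reconstruction that places mass at the grid points k/n using alternating sums of the moments; it is one simple HMT chosen for brevity, and no claim is made that it coincides with any of the transforms compared in the paper.

```python
from math import comb

def hausdorff_approx(moments):
    """Simple Hausdorff moment transform: from moments mu_0..mu_n of a
    distribution on [0, 1], place the mass

        w_k = C(n, k) * sum_j C(n-k, j) * (-1)^j * mu_{k+j}

    at the point k/n.  Since w_k equals the expectation of the Binomial(n, X)
    probability mass at k, the weights are nonnegative and sum to one, and the
    discrete measure converges weakly to the target as n grows.
    """
    n = len(moments) - 1
    weights = []
    for k in range(n + 1):
        w = comb(n, k) * sum(comb(n - k, j) * (-1) ** j * moments[k + j]
                             for j in range(n - k + 1))
        weights.append(w)
    support = [k / n for k in range(n + 1)]
    return support, weights

# Moments of the uniform distribution on [0, 1]: mu_j = 1 / (j + 1).
mu = [1.0 / (j + 1) for j in range(13)]
xs, ws = hausdorff_approx(mu)
print(round(sum(ws), 6), round(sum(x * w for x, w in zip(xs, ws)), 6))  # ~1.0 and ~0.5
```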
This commentary examines the dual role of artificial intelligence (AI) in shaping electoral integrity and combating misinformation, with a focus on the 2025 Philippine elections. It investigates how AI has been weaponised to manipulate narratives and suggests strategies to counteract disinformation. Drawing on case studies from the Philippines, Taiwan, and India—regions in the Indo-Pacific with vibrant democracies, high digital engagement, and recent experiences with election-related misinformation—it highlights the risks of AI-driven content and the innovative measures used to address its spread. The commentary advocates for a balanced approach that incorporates technological solutions, regulatory frameworks, and digital literacy to safeguard democratic processes and promote informed public participation. The rise of generative AI tools has significantly amplified the risks of disinformation, such as deepfakes, and algorithmic biases. These technologies have been exploited to influence voter perceptions and undermine democratic systems, creating a pressing need for protective measures. In the Philippines, social media platforms have been used to spread revisionist narratives, while Taiwan employs AI for real-time fact-checking. India’s proactive approach, including a public misinformation tipline, showcases effective countermeasures. These examples highlight the complex challenges and opportunities presented by AI in different electoral contexts. The commentary stresses the need for regulatory frameworks designed to address AI’s dual-use nature, advocating for transparency, real-time monitoring, and collaboration between governments, civil society, and the private sector. It also explores the criteria for effective AI solutions, including scalability, adaptability, and ethical considerations, to guide future interventions. Ultimately, it underscores the importance of digital literacy and resilient information ecosystems in supporting informed democratic participation.
Increasing sustainability expectations require support for the design of systems that are reactive in minimizing potential negative impacts and proactive in guiding engineering decision-making toward more value-robust long-term decisions. This article identifies a gap in the methodological support for the design of circular systems, building on the hypothesis that computer-based simulation models will drive the development of more value-robust systems designed to behave positively in a changeable operational environment throughout the whole lifecycle. The article presents a framework for value-robust circular systems design, complementing current approaches to circular design and aiming to increase decision-makers’ awareness of the complexity of the circular systems to be designed. The framework is described theoretically and demonstrated through its application in four case studies in the field of construction machinery, investigating new circular solutions for the future of mining, quarrying, and road construction. The framework supports the development of more resilient and sustainable systems, strengthening the feedback loop between exploring new technologies, proposing innovative concepts, and evaluating system performance.
This article surveys spatial music and sonic art influenced by the traditional Japanese concept of ma – translated as space, interval, or pause – against the cultural backdrop of Shintoism and Zen Buddhism. Works by Jōji Yuasa, Midori Takada, Michael Fowler, Akiko Hatakeyama, Kaija Saariaho and Jim Franklin created in conscious engagement with ma are analysed with respect to diverse manifestations of ma in Japanese arts and social sciences, including theatre, poetry, painting, rock garden, shakuhachi and psychotherapy. Jean-Baptiste Barrière provided the Max patch for Saariaho’s Only the Sound Remains (2015) for this survey. I propose a framework of six interlinking dimensions of ma – temporal, physical, musical, semantic, therapeutic and spiritual – for discussing creative approaches to ma, alongside their resonance with Hisamatsu Shin’ichi’s seven interconnected characteristics of Zen art: Asymmetry, Simplicity, Austere Sublimity/Lofty Dryness, Naturalness, Subtle Profundity/Deep Reserve, Freedom from Attachment and Tranquility. The aim is first to examine how each composer uses different techniques, technologies and systems to engage with specific dimensions of ma. Second, to illuminate possible futures of exploring these dimensions in spatial music and sonic art through three methods: Inspiration, Transmediation and Expansion.
The rise of generative artificial intelligences (AIs) has quickly made them auxiliary tools in numerous fields, especially creative ones. Many scientific works already compare the creative capacity of AIs with that of human beings. In the field of Engineering Design, however, numerous design methodologies have been developed to enhance designers’ creativity during the idea generation phase. This work therefore aims to expand previous work by leading a Generative Pre-trained Transformer 4 (GPT-4) based generative AI to use a design methodology to generate creative concepts. The results suggest that these types of tools can be useful to designers in that they can inspire novel ideas, but they still lack the capacity needed to discern technically feasible ideas.