This text accompanies the performance A Foot, A Mouth, A Hundred Billion Stars, which premiered at the Lapworth Museum of Geology in the United Kingdom on 18 March 2023 as part of the Flatpack film festival. The work comprises both this text and a film version, developed during a residency at the museum. Over 18 months, I had full access to the collection and archives, selecting objects that served as prompts for stories about time and memory. A central theme of the work is slippage – misremembering and misunderstanding – as a generative methodology for exploring the connection between the collection, our past, and possible futures.
A Foot, A Mouth, A Hundred Billion Stars combines analogue media and digital technologies to examine our understanding of remembering and forgetting. I used a live digital feed and two analogue slide projectors to explore the relationships between image and memory. This article does not serve as a guide to the performance but instead reflects on the process and the ideas behind the work. My goal is to share my practice of rethinking memory through direct engagement with materials. In line with the performance’s tangential narrative, this text weaves together diverse references, locations, thoughts, and ideas, offering a deeper look into the conceptual framework of the work.
Earth’s forests play an important role in the fight against climate change and are in turn negatively affected by it. Effective monitoring of different tree species is essential to understanding and improving the health and biodiversity of forests. In this work, we address the challenge of tree species identification by performing tree crown semantic segmentation using an aerial image dataset spanning over a year. We compare models trained on single images versus those trained on time series to assess the impact of tree phenology on segmentation performance. We also introduce a simple convolutional block for extracting spatio-temporal features from image time series, enabling the use of popular pretrained backbones and methods. We leverage the hierarchical structure of tree species taxonomy by incorporating a custom loss function that refines predictions at three levels: species, genus, and higher-level taxa. Our best model achieves a mean Intersection over Union (mIoU) of 55.97%, outperforming single-image approaches, particularly for deciduous trees, where phenological changes are most noticeable. Our findings highlight the benefit of exploiting the time-series modality via our Processor module. Furthermore, leveraging taxonomic information through our hierarchical loss function often, and in key cases significantly, improves semantic segmentation performance.
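The abstract does not spell the hierarchical loss out; as a rough illustration only, a loss of the kind described can be built by coarsening per-pixel species probabilities along the taxonomy and penalizing errors at each level. A minimal PyTorch sketch, with hypothetical species-to-genus and genus-to-taxon index maps and illustrative level weights (none of these come from the paper):

```python
import torch
import torch.nn.functional as F

# Hypothetical index maps: each species id -> its genus id, each genus -> higher taxon.
SPECIES_TO_GENUS = torch.tensor([0, 0, 1, 1, 2])   # 5 species, 3 genera (illustrative)
GENUS_TO_TAXON = torch.tensor([0, 0, 1])           # 3 genera, 2 higher-level taxa

def hierarchical_loss(logits, target, w=(1.0, 0.5, 0.25)):
    """Cross-entropy refined at species, genus, and higher-taxon levels.

    logits: (B, n_species, H, W) per-pixel species scores.
    target: (B, H, W) ground-truth species ids.
    Coarser-level probabilities are obtained by summing species probabilities
    that share an ancestor, then scored against the coarsened targets.
    """
    probs = logits.softmax(dim=1)
    loss = w[0] * F.cross_entropy(logits, target)

    # Aggregate species probabilities into genus probabilities.
    n_genus = int(SPECIES_TO_GENUS.max()) + 1
    genus_probs = torch.zeros(probs.size(0), n_genus, *probs.shape[2:])
    genus_probs.index_add_(1, SPECIES_TO_GENUS, probs)
    loss += w[1] * F.nll_loss(genus_probs.clamp_min(1e-8).log(), SPECIES_TO_GENUS[target])

    # Same aggregation one level up the taxonomy.
    n_taxon = int(GENUS_TO_TAXON.max()) + 1
    taxon_probs = torch.zeros(probs.size(0), n_taxon, *probs.shape[2:])
    taxon_probs.index_add_(1, GENUS_TO_TAXON, genus_probs)
    loss += w[2] * F.nll_loss(taxon_probs.clamp_min(1e-8).log(),
                              GENUS_TO_TAXON[SPECIES_TO_GENUS[target]])
    return loss
```

Summing sibling probabilities before scoring the coarse levels means that confusing two species of the same genus is penalized less than a cross-genus error, which is the usual intuition behind taxonomy-aware losses.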
A wrist-hand exoskeleton designed to assist individuals with wrist and hand limitations is presented in this paper. The novel design is developed based on specific selection criteria, addressing all the degrees of freedom (DOF). In the conceptual design phase, design concepts are created and assessed, then screened and scored to determine the most promising concept. Performance and possible restrictions are assessed using kinematic and dynamic analysis. The exoskeleton is prototyped in polylactic acid to ensure structural integrity and fit. The control strategies investigated include manual control, master-slave control, and electroencephalography (EEG) dataset-based control. Manual control allows direct manipulation, whereas master-slave control uses sensors to map user motions; EEG dataset-based control interprets brain signals for hand opening and closing and drives the open-close motion of the exoskeleton hand accordingly. This study introduces a novel wrist-hand exoskeleton that improves usefulness, modularity, and mobility. The multiple control strategies provide versatility based on user requirements, while the 3D printing process ensures personalization and flexibility in design.
A finite-time adaptive composite integral sliding mode control strategy based on a fast finite-time observer is proposed for trajectory tracking of the Stewart parallel robot, considering unmodeled uncertainties and external disturbances. First, a globally finite-time convergent sliding mode surface composed of intermediate variables and integral terms is established to eliminate steady-state tracking errors. Next, a fast finite-time extended state observer is designed to compensate for uncertainties and external disturbances, improving the robustness of the control system. Finally, a finite-time sliding mode control law is designed on this basis. Its gain is adjusted through an adaptive reaching law to reduce sliding mode chattering, and global finite-time convergence of the system is theoretically proven using Lyapunov theory. Experimental verification shows that the proposed control strategy offers stronger robustness to uncertainties and external disturbances, faster error convergence, less chattering, and higher steady-state accuracy.
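The abstract names the ingredients (an integral sliding surface, observer-based disturbance compensation, an adaptive reaching law) without giving equations. The single-axis Python sketch below only illustrates how such pieces are commonly combined; every gain, the surface coefficients, and the tanh boundary layer are assumptions of mine, not the authors' design:

```python
import numpy as np

def asmc_step(e, e_dot, e_int, k, d_hat, dt,
              lam=5.0, c=2.0, gamma=10.0, eps=0.05):
    """One step of a schematic adaptive integral sliding mode controller.

    e, e_dot, e_int : tracking error, its derivative, and its integral
    k               : adaptive switching gain (updated here)
    d_hat           : disturbance estimate from an extended state observer

    Sliding surface with an integral term to remove steady-state error:
        s = e_dot + lam * e + c * e_int
    For unit-mass error dynamics e_ddot = u + d, this control gives
        s_dot ~ -k * tanh(s / eps) + (d - d_hat),
    so the surface is reached as long as k dominates the estimation error.
    """
    s = e_dot + lam * e + c * e_int
    # Adaptive reaching law: grow the gain while |s| is large, let it decay
    # inside the boundary layer so chattering stays small.
    k = k + dt * gamma * (abs(s) - eps * k)
    # tanh stands in for sign() to smooth the switching term.
    u = -lam * e_dot - c * e - k * np.tanh(s / eps) - d_hat
    return u, k
```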
Federated learning (FL) is a machine learning technique that distributes model training to multiple clients while allowing clients to keep their data local. Although the technique allows one to break free from data silos by keeping data local, coordinating such distributed training requires an orchestrator, usually a central server. Consequently, organisational issues of governance may arise and hinder adoption in both competitive and collaborative markets for data. In particular, the question of how to govern FL applications recurs among practitioners. This research commentary addresses this important issue by inductively proposing a layered decision framework to derive organisational archetypes for FL governance. The inductive approach is based on an expert workshop and post-workshop interviews with specialists and practitioners, as well as the consideration of real-world applications. Our proposed framework assumes decision-making occurs within a black box that contains three formal layers: data market, infrastructure, and ownership. The framework allows us to map organisational archetypes ex ante. We identify two key archetypes: consortia for collaborative markets and in-house deployment for competitive settings. We conclude by providing managerial implications and proposing research directions that are especially relevant across disciplines, including organisational and administrative science, information systems research, and engineering.
The remote center of motion (RCM) mechanism is one of the key components of minimally invasive surgical robots. Nevertheless, the most widely used parallelogram-based RCM mechanism tends to have a large footprint, thereby increasing the risk of collisions between the robotic arms during surgical procedures. To solve this problem, this study proposes a compact RCM mechanism based on the coupling of three rotational motions realized by nonlinear transmission. Compared to the parallelogram-based RCM mechanism, the proposed design offers a smaller footprint, thereby reducing the risk of collisions between the robotic arms. To address the possible errors caused by the elasticity of the transmission belts, an error model is established for the transmission structure that includes both circular and non-circular pulleys. A prototype is developed to verify the feasibility of the proposed mechanism, whose footprint is further compared with that of the parallelogram-based RCM mechanism. The results indicate that our mechanism satisfies the constraints of minimally invasive surgery, provides sufficient stiffness, and exhibits a more compact design. The current study provides a new direction for the miniaturization design of robotic arms in minimally invasive surgical robots.
This study explores the potential of applying machine learning (ML) methods to identify and predict areas at risk of food insufficiency using a parsimonious set of publicly available data sources. We combine household survey data that captures monthly reported food insufficiency with remotely sensed measures of factors influencing crop production and maize price observations at the census enumeration area (EA) level in Malawi. We consider three machine-learning models of different levels of complexity suitable for tabular data (TabNet, random forests, and LASSO) and classical logistic regression, and examine their performance against the historical occurrence of food insufficiency. We find that the models achieve similar accuracy levels with differential performance in terms of precision and recall. The Shapley additive explanation decomposition applied to the models reveals that price information is the leading contributor to model fits. A possible explanation for the accuracy of simple predictors is the high spatiotemporal path dependency in our dataset, as the same areas of the country are repeatedly affected by food crises. Recurrent events suggest that immediate and longer-term responses to food crises, rather than predicting them, may be the bigger challenge, particularly in low-resource settings. Nonetheless, ML methods could be useful in filling important data gaps in food crisis prediction, if followed by measures to strengthen food systems affected by climate change. Hence, we discuss the tradeoffs in training these models and their use by policymakers and practitioners.
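As an illustration of the modelling setup, the sketch below trains one of the three model families (a random forest) on synthetic stand-ins for the predictors and ranks features with SHAP, mirroring the decomposition step; the feature names and data are invented for the example:

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Illustrative features only: maize price plus remotely sensed predictors.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))          # [maize_price, rainfall, ndvi, temperature]
y = (X[:, 0] + 0.3 * rng.normal(size=2000) > 0.5).astype(int)  # synthetic labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# SHAP decomposition: mean |value| per feature ranks each predictor's
# contribution to the fitted model (price dominates here by construction).
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X_te)
sv = sv[1] if isinstance(sv, list) else sv[..., 1]  # class-1 attributions; shape conventions differ across SHAP versions
importance = np.abs(sv).mean(axis=0)
print(dict(zip(["maize_price", "rainfall", "ndvi", "temperature"],
               importance.round(3))))
```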
We present a deep learning architecture that reconstructs a source of data at given spatio-temporal coordinates using other sources. The model can be applied to multiple sources in a broad sense: the number of sources may vary between samples, and the sources can differ in dimensionality and size and cover distinct geographical areas at irregular time intervals. The network takes as input a set of sources that each include values (e.g., the pixels for two-dimensional sources), spatio-temporal coordinates, and source characteristics. The model is based on the Vision Transformer, but separately embeds the values and coordinates and uses the embedded coordinates as relative positional embedding in the computation of the attention. To limit the cost of computing the attention between many sources, we employ a multi-source factorized attention mechanism, introducing an anchor-points-based cross-source attention block. We name the architecture MoTiF (multi-source transformer via factorized attention). We present a self-supervised setting to train the network, in which one randomly chosen source is masked and the model is tasked to reconstruct it from the other sources. We test this self-supervised task on tropical cyclone (TC) remote-sensing images, ERA5 states, and best-track data. We show that the model can forecast TC ERA5 fields and wind intensity from multiple sources, and that using more sources improves forecasting accuracy.
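The anchor-points idea can be pictured as two cheap attention passes instead of one expensive all-pairs pass. The PyTorch sketch below is a generic rendering of such a factorized block, not MoTiF's actual implementation; the dimensions, number of anchors, and use of nn.MultiheadAttention are my assumptions:

```python
import torch
import torch.nn as nn

class AnchorCrossAttention(nn.Module):
    """Sketch of anchor-point factorized cross-source attention.

    Tokens from every source first write into a small set of learned anchor
    queries; target tokens then read from the anchors, so no target token
    ever attends directly to all source tokens.
    """
    def __init__(self, dim=64, n_anchors=16, n_heads=4):
        super().__init__()
        self.anchors = nn.Parameter(torch.randn(n_anchors, dim))
        self.gather = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.scatter = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, target_tokens, source_tokens):
        # target_tokens: (B, M, dim); source_tokens: (B, N, dim),
        # where N concatenates the tokens of all available sources.
        b = source_tokens.size(0)
        anchors = self.anchors.unsqueeze(0).expand(b, -1, -1)
        pooled, _ = self.gather(anchors, source_tokens, source_tokens)  # anchors attend to sources
        out, _ = self.scatter(target_tokens, pooled, pooled)            # targets read from anchors
        return out
```

With M target tokens, N source tokens, and A anchors, the two passes cost O((M + N)A) attention pairs instead of O(MN), which is the saving such a factorization aims at.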
In this article, we focus on the systemic expected shortfall and marginal expected shortfall in a multivariate continuous-time risk model with a general càdlàg process. Additionally, we conduct our study under a mild moment condition that is easily satisfied when the general càdlàg process is determined by some important investment return processes. In the presence of heavy tails, we derive asymptotic formulas for the systemic expected shortfall and marginal expected shortfall under the framework that includes wide dependence structures among losses, covering pairwise strong quasi-asymptotic independence and multivariate regular variation. Our results quantify how the general càdlàg process, heavy-tailed property of losses, and dependence structures influence the systemic expected shortfall and marginal expected shortfall. To discuss the interplay of dependence structures and heavy-tailedness, we apply an explicit order 3.0 weak scheme to estimate the expectations related to the general càdlàg process. This enables us to validate the moment condition from a numerical perspective and perform numerical studies. Our numerical studies reveal that the asymptotic dependence structure has a significant impact on the systemic expected shortfall and marginal expected shortfall, but heavy-tailedness has a more pronounced effect than the asymptotic dependence structure.
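The abstract keeps the risk measures symbolic. For orientation only, one common conditional-tail formalization of the two quantities (notation mine, not necessarily the paper's) is:

```latex
% With losses X_1,\dots,X_n and aggregate loss S_n = X_1 + \cdots + X_n,
% one common formalization (illustrative notation only):
\mathrm{MES}_i(t) = \mathbb{E}\left[ X_i \mid S_n > t \right],
\qquad
\mathrm{SES}(t) = \mathbb{E}\left[ (S_n - t)_+ \mid S_n > t \right],
% and the asymptotic analysis studies these as t \to \infty for heavy-tailed X_i.
```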
To address the problems of accuracy degradation, localization drift, and even failure of Simultaneous Localization and Mapping (SLAM) algorithms in unstructured environments with sparse geometric features, such as outdoor parks, highways, and urban roads, a multi-metric light detection and ranging (LiDAR) SLAM system based on the fusion of geometric and intensity features is proposed. Firstly, an adaptive method for extracting multiple types of geometric features and salient intensity features is proposed to address the issue of insufficient sparse feature extraction. In addition to extracting traditional edge and planar features, vertex features are also extracted to fully utilize the geometric information, and intensity edge features are extracted in areas with significant intensity changes to increase multi-level perception of the environment. Secondly, in the state estimation, a multi-metric error estimation method based on point-to-point, point-to-line, and point-to-plane errors is used, and a two-step decoupling strategy is employed to enhance pose estimation accuracy. Finally, qualitative and quantitative experiments on public datasets demonstrate that, compared to state-of-the-art pure geometric and intensity-assisted LiDAR SLAM algorithms, our proposed algorithm achieves superior localization accuracy and mapping clarity, with an ATE accuracy improvement of 28.93% and real-time performance of up to 62.9 ms. Additionally, tests conducted in real campus environments further validate the effectiveness of our approach in complex, unstructured scenarios.
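The three error metrics named above are the standard ICP-style residuals; a minimal NumPy sketch of how each is computed for a query point against its matched feature (illustrative, not the paper's implementation):

```python
import numpy as np

def point_to_point(p, q):
    """Residual between a scan point p and its matched map point q."""
    return np.linalg.norm(p - q)

def point_to_line(p, a, b):
    """Distance from p to the line through feature points a and b (edge/vertex features)."""
    d = b - a
    return np.linalg.norm(np.cross(p - a, d)) / np.linalg.norm(d)

def point_to_plane(p, q, n):
    """Signed distance from p to the plane through q with normal n (planar features)."""
    return float(np.dot(p - q, n / np.linalg.norm(n)))
```

In a multi-metric formulation, each extracted feature type contributes its own residual to the joint pose optimization, which is what lets sparse environments still constrain all six degrees of freedom.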
As artificial intelligence grows, human–robot collaboration becomes more common for efficient task completion. Effective communication between humans and AI-assisted robots is crucial for maximizing collaboration potential. This study explores human–robot interactions, focusing on the differing mental models used by humans and collaborative robots. Humans communicate using knowledge, skills, and emotions, while robotic systems rely on algorithms and technology. This communication disparity can hinder productivity. Integrating emotional intelligence with cognitive intelligence is key for successful collaboration. To address this, a communication model tailored for human–robot teams is proposed, incorporating robots’ observation of human emotions to optimize workload allocation. The model’s efficacy is demonstrated through a case study in an SAP system. By enhancing understanding and proposing practical solutions, this study contributes to optimizing teamwork between humans and AI-assisted robots.
Static analysis is an essential component of many modern software development tools. Unfortunately, the ever-increasing complexity of static analyzers makes their coding error-prone. Even analysis tools based on rigorous mathematical techniques, such as abstract interpretation, are not immune to bugs. Ensuring the correctness and reliability of software analyzers is critical if they are to be inserted in production compilers and development environments. While compiler validation has seen notable success, formal validation of static analysis tools remains relatively unexplored. In this paper we present checkification, a simple, automatic method for testing static analyzers. Broadly, it consists in checking, over a suite of benchmarks, that the properties inferred statically are satisfied dynamically. The main advantage of our approach lies in its simplicity, which stems directly from framing it within the Ciao assertion-based validation framework, and its blended static/dynamic assertion checking approach. We demonstrate that in this setting, the analysis can be tested with little effort by combining the following components already present in the framework: 1) the static analyzer, which outputs its results as the original program source with assertions interspersed; 2) the assertion run-time checking mechanism, which instruments a program to ensure that no assertion is violated at run time; 3) the random test case generator, which generates random test cases satisfying the properties present in assertion preconditions; and 4) the unit-test framework, which executes those test cases. We have applied our approach to the CiaoPP static analyzer, resulting in the identification of many bugs with reasonable overhead. Most of these bugs have been either fixed or confirmed, helping us detect a range of errors not only related to analysis soundness but also within other aspects of the framework.
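CiaoPP operates on Ciao programs with interspersed assertions, so the real tool is Prolog-based; the Python sketch below only illustrates the general checkification loop, with every name hypothetical:

```python
import random

def checkify(program, static_props, gen_inputs, n_tests=1000):
    """Schematic checkification loop (illustration only).

    static_props maps program points to predicates the analyzer claims always
    hold there. We run random inputs drawn from precondition generators and
    flag any dynamic violation as a bug in the analyzer (or elsewhere in the
    framework).
    """
    for _ in range(n_tests):
        args = gen_inputs()            # random test case satisfying preconditions
        trace = program(args)          # instrumented run: (program point, state) pairs
        for point, state in trace:
            prop = static_props.get(point)
            if prop is not None and not prop(state):
                return ("violation", point, args)  # inferred property failed at run time
    return ("ok", None, None)

# Toy demo: the "analyzer" claims x is non-negative after taking abs().
props = {"after_abs": lambda st: st["x"] >= 0}
prog = lambda a: [("after_abs", {"x": abs(a["n"])})]
gen = lambda: {"n": random.randint(-100, 100)}
print(checkify(prog, props, gen))      # ('ok', None, None)
```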
Every directed graph $G$ induces a locally ordered metric space $\mathcal{X}_{(G)}$ together with a local order $\tilde{\mathcal{X}}_{(G)}$ that is locally dihomeomorphic to the standard pospace $\mathbb{R}$; both are related by a morphism $\beta_{(G)}:\tilde{\mathcal{X}}_{(G)}\to \mathcal{X}_{(G)}$ satisfying a universal property. The underlying set of $\tilde{\mathcal{X}}_{(G)}$ admits a non-Hausdorff atlas $\mathcal{A}_{G}$ equipped with a non-vanishing vector field $f_{G}$; the latter is associated to $\tilde{\mathcal{X}}_{(G)}$ through the correspondence between local orders and cone fields on manifolds. The above constructions are compatible with Cartesian products, so the geometric model of a conservative program is lifted through $\beta_{G_1} \times \cdots \times \beta_{G_n}$ to a subset $M$ of the parallelized manifold $\mathcal{A}_{G_1} \times \cdots \times \mathcal{A}_{G_n}$. By assigning the suitable norm to each tangent space of $\mathcal{A}_{G_1} \times \cdots \times \mathcal{A}_{G_n}$, the length of every directed smooth path $\gamma$ on $M$, i.e. $\int |\gamma'(t)|_{\gamma(t)}\,dt$, corresponds to the execution time of the sequence of multi-instructions associated to $\gamma$. This induces a pseudometric $d_{\mathcal{A}}$ whose restrictions to sufficiently small open sets of $\mathcal{A}_{G_1} \times \cdots \times \mathcal{A}_{G_n}$ (we refer to the manifold topology, which is strictly finer than the pseudometric topology) are isometric to open subspaces of $\mathbb{R}^n$ with the $\alpha$-norm for some $\alpha \in [1,\infty]$. The transition maps of $\mathcal{A}_{G}$ are translations, so the representation of a tangent vector does not depend on the chart of $\mathcal{A}_{G}$ in which it is represented; consequently, differentiable maps between open subsets of $\mathcal{A}_{G_1} \times \cdots \times \mathcal{A}_{G_n}$ are handled as if they were maps between open subsets of $\mathbb{R}^n$. For every directed path $\gamma$ on $M$ (possibly the representation of a sequence $\sigma$ of multi-instructions), there is a shorter directed smooth path on $M$ that is arbitrarily close to $\gamma$, and that can replace $\gamma$ as a representation of $\sigma$.
In this paper, we propose a novel online informative path planner for 3-D modeling of unknown structures using micro aerial vehicles. Unlike the explore-then-exploit strategy, our planner handles exploration and coverage simultaneously and thus obtains complete and high-quality 3-D models. We first devise a set of evaluation metrics considering the perception constraints of the sensor for efficiently evaluating the coverage quality of the reconstructed surfaces. Then, the coverage quality is utilized to guide the subsequent informative path planning. Specifically, our hierarchical planner consists of two planning stages – a local coverage stage for inspecting surfaces with low coverage quality and a global exploration stage for moving the robot to unexplored regions at the global scale. The local coverage stage computes a coverage path that takes into account both the exploration and coverage objectives based on the estimated coverage quality and frontiers, and the global exploration stage maintains a sparse roadmap in the explored space to achieve fast global exploration. We conduct both simulated and real-world experiments to validate the proposed method. The results show that our planner outperforms state-of-the-art algorithms and, in particular, reduces the reconstruction error by at least 12.5% on average.
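As a toy illustration of how coverage quality and frontiers might jointly score candidate viewpoints in the local coverage stage, consider the sketch below; the utility weights, visibility test, and data layout are invented for the example, and the paper's actual metrics also model perception constraints such as viewing angle:

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class Surfel:
    pos: np.ndarray
    quality: float   # estimated coverage quality in [0, 1]

def viewpoint_utility(view_pos, surfels: List[Surfel], frontiers: List[np.ndarray],
                      sensor_range=5.0, w_cov=1.0, w_exp=0.5, lam=0.2,
                      travel_cost=0.0):
    """Toy utility for ranking candidate viewpoints.

    Rewards re-observing surfaces with low estimated coverage quality
    (coverage objective) and seeing frontier voxels within sensor range
    (exploration objective), minus a travel-cost penalty.
    """
    visible = lambda p: np.linalg.norm(p - view_pos) <= sensor_range
    cov_gain = sum(1.0 - s.quality for s in surfels if visible(s.pos))
    exp_gain = sum(1.0 for f in frontiers if visible(f))
    return w_cov * cov_gain + w_exp * exp_gain - lam * travel_cost
```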
Adaptation to climate change requires robust climate projections, yet the uncertainty in these projections performed by ensembles of Earth system models (ESMs) remains large. This is mainly due to uncertainties in the representation of subgrid-scale processes such as turbulence or convection that are partly alleviated at higher resolution. New developments in machine learning-based hybrid ESMs demonstrate great potential for systematically reduced errors compared to traditional ESMs. Building on the work of hybrid (physics + AI) ESMs, we here discuss the additional potential of further improving and accelerating climate models with quantum computing. We discuss how quantum computers could accelerate climate models by solving the underlying differential equations faster, how quantum machine learning could better represent subgrid-scale phenomena in ESMs even with currently available noisy intermediate-scale quantum devices, how quantum algorithms aimed at solving optimization problems could assist in tuning the many parameters in ESMs, a currently time-consuming and challenging process, and how quantum computers could aid in the analysis of climate models. We also discuss hurdles and obstacles facing current quantum computing paradigms. Strong interdisciplinary collaboration between climate scientists and quantum computing experts could help overcome these hurdles and harness the potential of quantum computing for this urgent topic.
Due to the ever-increasing complexity of technical products, the quantity of system requirements, which are typically expressed in natural language, is inevitably rising. Model-based formalization through the application of Model-based Systems Engineering is a common solution to cope with rising complexity. Grouping requirements into use cases forms the first step towards model-based requirements and improves the understanding of the system. To support this manual and subjective task, automation by artificial intelligence and natural language processing methods is needed. This contribution proposes a novel pipeline to derive use cases from natural language requirements, accounting for incomplete manual mappings and for the possibility that one requirement contributes to multiple use cases. The approach utilizes semi-supervised requirements graph generation with subsequent overlapping graph clustering. Each identified use case is described by keyphrases to increase accessibility for the user. Industrial requirement sets from the automotive industry are used to evaluate the pipeline in two application scenarios. The proposed pipeline overcomes limitations of prior work in practical application, which is emphasized by critical discussions with experts from the industry. The pipeline automatically generates proposals for use cases defined in the requirement set, forming the basis for use case diagrams.
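The abstract does not name the clustering algorithm; clique percolation is one standard way to obtain overlapping graph clusters, shown here purely as an illustration of the step in which a single requirement can fall into several use cases (the graph and its construction are hypothetical):

```python
import networkx as nx
from networkx.algorithms.community import k_clique_communities

# Hypothetical requirements graph: nodes are requirement ids, edges connect
# requirements whose texts are semantically similar (e.g., embedding cosine
# similarity above a threshold, optionally seeded by partial manual mappings).
G = nx.Graph()
G.add_edges_from([
    ("R1", "R2"), ("R2", "R3"), ("R1", "R3"),   # candidate use case A
    ("R3", "R4"), ("R4", "R5"), ("R3", "R5"),   # candidate use case B
])

# Clique percolation yields overlapping communities, so one requirement can
# contribute to several use cases.
use_cases = [set(c) for c in k_clique_communities(G, 3)]
print(use_cases)   # two overlapping clusters, with R3 belonging to both
```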
Data governance has emerged as a pivotal area of study over the past decade, yet despite its growing importance, a comprehensive analysis of the academic literature on this subject remains notably absent. This paper addresses this gap by presenting a systematic review of all academic publications on data governance from 2007 to 2024. By synthesizing insights from more than 3500 documents authored by more than 9000 researchers across various sources, this study offers a broad yet detailed perspective on the evolution of data governance research.
Designers often rely on their self-evaluations – either independently or using design tools – to make concept selection decisions. When evaluating designs for sustainability, novice designers, given their lack of experience, could demonstrate psychological distance from sustainability-related issues, leading to faulty concept evaluations. We aim to investigate the accuracy of novice designers’ self-evaluations of the sustainability of their solutions and the moderating role of (1) their trait empathy and (2) their beliefs, attitudes, and intentions toward sustainability on this accuracy. We conducted an experiment with first-year engineering students comprising a sustainable design activity. In the activity, participants evaluated the sustainability of their own designs, and these self-evaluations were compared against expert evaluations. First, participants’ self-evaluations were consistent with the expert evaluations on the following sustainable design heuristics: (1) longevity and (2) finding wholesome alternatives. Second, trait empathy moderated the accuracy of self-evaluations, with lower levels of fantasy and perspective-taking relating to more accurate self-evaluations. Finally, beliefs, attitudes, and intentions toward sustainability also moderated the accuracy of self-evaluations, with effects varying by sustainable design heuristic. Taken together, these findings suggest that novice designers’ individual differences (e.g., trait empathy) could moderate the accuracy of the evaluation of their designs in the context of sustainability.
New musical instruments of the electronic and digital eras have explored spatialisation through multidimensional speaker arrays. Many facets of 2D and 3D sound localisation have been investigated, often in tandem with immersive fixed-media compositions: spatial trajectory and panning; physics-based effects such as artificial acoustics, reverberation and Doppler shifts; and spatially derived synthesis methods. Within the realm of augmented spatial string instruments, the EV distinguishes itself through a unique realisation of the possibilities afforded by these technologies. Initially conceived as a tool for convolving the timbres of synthesised and acoustic string signals, the EV’s exploration of spatial sound has led to new experiments with timbre. Over time, additional sound-generation modules have been integrated, resulting in an increasingly versatile palette for immersive composition. Looking forward, the EV presents compelling opportunities for sonic innovation.