A significant challenge of structural health monitoring (SHM) is the lack of labeled data collected from damage states. Consequently, the collected data can be incomplete, making it difficult to undertake machine learning tasks that detect or predict the full range of damage states a structure may experience. Transfer learning is a helpful solution, where data from (source) structures containing damage labels can be used to transfer knowledge to (target) structures for which damage labels do not exist. Machine learning models are then developed that generalize to the target structure. In practical applications, it is unlikely that the source and the target structures contain the same damage states or experience the same environmental and operational conditions, which can significantly impact the collected data. This is the first study to explore the possibility of transfer learning for damage localization in SHM when the damage states and the environmental variations in the source and target datasets are disparate. Specifically, using several domain adaptation methods, this article localizes severe damage states at a target structure using labeled information from minor damage states at a source structure. By minimizing the distance between the marginal and conditional distributions of the source and the target structures, this article successfully localizes damage states of disparate severities under varying environmental and operational conditions. The effect of partial and universal domain adaptation, where the number of damage states in the source and target datasets differs, is also explored in order to mimic realistic industrial applications of these methods.
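For readers unfamiliar with distribution-distance minimization, the following is a minimal sketch of one common ingredient of such domain adaptation methods: the maximum mean discrepancy (MMD) between source and target feature distributions. The feature arrays and kernel bandwidth are illustrative assumptions, not the datasets or the specific algorithms used in the article.

```python
# Minimal sketch of marginal-distribution alignment via the maximum mean
# discrepancy (MMD). All names and values are illustrative assumptions.
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """Pairwise RBF kernel between rows of A and rows of B."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def mmd2(Xs, Xt, sigma=1.0):
    """Squared MMD between source features Xs and target features Xt."""
    k_ss = rbf_kernel(Xs, Xs, sigma).mean()
    k_tt = rbf_kernel(Xt, Xt, sigma).mean()
    k_st = rbf_kernel(Xs, Xt, sigma).mean()
    return k_ss + k_tt - 2.0 * k_st

# Example: features from a minor-damage source and a severe-damage target structure.
rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(100, 8))   # source-domain features (labeled)
Xt = rng.normal(0.5, 1.2, size=(80, 8))    # target-domain features (unlabeled)
print("squared MMD before adaptation:", mmd2(Xs, Xt))
```

A feature transformation that drives this quantity toward zero brings the two marginal distributions closer; the conditional distributions can be treated analogously once (pseudo-)labels are available for the target data.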
Design, like any social activity, greatly depends on human relationships for efficiency and sustainability. Collaborative design (co-design) in particular relies on strong interactions between members, as ideas and concepts become shared, moving from the personal (creation) to the interpersonal (co-creation). There is, then, a need to understand how interpersonal factors influence interactions in co-design, and this understanding can be achieved by drawing on research on intersubjectivity, the study of social interactions. A systematic literature review was conducted to identify and classify the different methods used to measure intersubjectivity and to examine how this knowledge could explain the influence of interpersonal factors on interactions in co-design. The review identified 66 methods, from which 4 main categories were derived. Furthermore, 115 articles were analysed and systematized in an online database, leading to a new understanding of the role of interpersonal factors in measuring the level of interaction in co-design. The results reveal a positive correlation, in which a rising level of interactivity is made possible by the formation and maintenance of co-creation, leading to a state of resonance where the experiences of individuals are closely related. This paper presents a state-of-the-art report on trends in the study of intersubjectivity through interpersonal factors and proposes some directions for designers and researchers interested in taking these factors into consideration in their next co-design situation.
Deep learning is becoming increasingly important in a technology-dominated world. However, building computational models that accurately represent linguistic structures is complex, as it involves in-depth knowledge of neural networks and an understanding of advanced mathematical concepts such as calculus and statistics. This book makes these complexities accessible to those from a humanities and social sciences background by providing a clear introduction to deep learning for natural language processing. It covers both theoretical and practical aspects, and assumes minimal knowledge of machine learning, explaining the theory behind natural language processing in an easy-to-read way. It includes pseudocode for the simpler algorithms discussed and actual Python code for the more complicated architectures, using modern deep learning libraries such as PyTorch and Hugging Face. Providing the necessary theoretical foundation and practical tools, this book will enable readers to immediately begin building real-world, practical natural language processing systems.
There are many exercises included at the ends of chapters in Parts I and II of the book. This appendix provides brief solutions, or at least answers, to most of these exercises.
We begin our journey into state estimation by considering systems that can be modelled using linear equations corrupted by Gaussian noise. While these linear-Gaussian systems are severe approximations of real robots, the mathematics is very amenable to straightforward analysis. We discuss the difference between Bayesian estimation and maximum a posteriori estimation in the context of batch trajectory estimation; these two approaches are effectively the same for linear systems, but this contrast is crucial to understanding the results for nonlinear systems later on. After introducing batch trajectory estimation, we show how the structure of the problem gives rise to sparsity in our equations that can be exploited to provide a very efficient solution. Indeed, the famous Rauch-Tung-Striebel smoother (whose forward pass is the Kalman filter) is equivalent to solving the batch trajectory problem. Several other avenues to the Kalman filter are also explored. Although much of the book focusses on discrete-time motion models for robots, we show how to begin with continuous-time models as well; in particular, we make the connection that batch continuous-time trajectory estimation is an example of Gaussian process regression, a popular tool from machine learning.
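As a concrete illustration of the recursive solution, here is a minimal sketch of the Kalman filter (the forward pass of the Rauch-Tung-Striebel smoother) for a scalar linear-Gaussian system; the model parameters and measurements are illustrative assumptions, not an example from the book.

```python
# Minimal sketch of the Kalman filter for the scalar linear-Gaussian system
# x_k = A x_{k-1} + w_k,  y_k = C x_k + n_k. All values are assumptions.
import numpy as np

A, C = 1.0, 1.0          # motion and observation models
Q, R = 0.1, 0.5          # process and measurement noise variances
x_hat, P = 0.0, 1.0      # initial state estimate and covariance

measurements = [1.1, 0.9, 1.3, 1.0]
for y in measurements:
    # Prediction step
    x_pred = A * x_hat
    P_pred = A * P * A + Q
    # Correction step
    K = P_pred * C / (C * P_pred * C + R)     # Kalman gain
    x_hat = x_pred + K * (y - C * x_pred)
    P = (1.0 - K * C) * P_pred
    print(f"estimate: {x_hat:.3f}, variance: {P:.3f}")
```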
This appendix contains a few extra derivations relating to rotations and poses that may be of interest to some enthusiastic readers. In particular, the eigen/Jordan decomposition of rotations and poses provides some deeper insight into these quantities that are ubiquitous in robotics.
Typical robots not only translate in the world but also rotate. This chapter serves as a primer on three-dimensional geometry, introducing such important geometric concepts as vectors, reference frames, coordinates, rotations, and poses (rotation and translation). We introduce kinematics, how geometry changes over time, with an eye towards describing robot motion models. We also present several common sensor models using our three-dimensional tools: camera, stereo camera, lidar, and inertial measurement unit.
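To make the frame and sensor-model ideas concrete, the sketch below expresses a landmark in a camera frame using a rotation and translation and then projects it with a pinhole camera model; all numerical values, including the intrinsics, are illustrative assumptions rather than the book's notation.

```python
# Small sketch: transform a world-frame landmark into a camera frame, then
# project it with a pinhole camera model. All values are assumptions.
import numpy as np

theta = np.pi / 6                          # camera rotated about the z-axis
C = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])   # rotation matrix
r = np.array([0.5, -0.2, 1.0])             # translation into the camera frame

p_world = np.array([2.0, 1.0, 4.0])        # landmark in the world frame
p_cam = C @ p_world + r                    # same landmark in the camera frame

fu, fv, cu, cv = 400.0, 400.0, 320.0, 240.0   # pinhole intrinsics (assumed)
u = fu * p_cam[0] / p_cam[2] + cu
v = fv * p_cam[1] / p_cam[2] + cv
print(f"pixel coordinates: ({u:.1f}, {v:.1f})")
```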
The final technical chapter returns to the idea of representing a robot trajectory as a continuous function of time, only now in three-dimensional space where the robot may translate and rotate. We provide a method to adapt our earlier continuous-time trajectory estimation to Lie groups that is practical and efficient. The chapter serves as a final example of pulling together many of the key ingredients of the book into a single problem: continuous-time estimation as Gaussian process regression, Lie groups to handle rotations, and simultaneous localization and mapping.
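As a toy illustration of the Gaussian process regression connection, the sketch below interpolates a one-dimensional "trajectory" between measurement times; the squared-exponential kernel and its hyperparameters are illustrative assumptions, not the motion priors developed in the book.

```python
# Tiny Gaussian process regression sketch: query a 1-D trajectory at times
# between measurements. Kernel and hyperparameters are assumptions.
import numpy as np

def kernel(t1, t2, length=1.0, variance=1.0):
    """Squared-exponential covariance between two arrays of times."""
    return variance * np.exp(-0.5 * (t1[:, None] - t2[None, :]) ** 2 / length ** 2)

t_train = np.array([0.0, 1.0, 2.0, 3.0])          # measurement times
y_train = np.array([0.0, 0.8, 0.9, 0.1])          # noisy trajectory values
noise = 0.01

t_query = np.linspace(0.0, 3.0, 7)                # query the trajectory anywhere
K = kernel(t_train, t_train) + noise * np.eye(len(t_train))
K_star = kernel(t_query, t_train)
mean = K_star @ np.linalg.solve(K, y_train)       # posterior mean at query times
print(np.round(mean, 3))
```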
Nonlinear systems provide additional challenges for robotic state estimation. We provide a derivation of the famous extended Kalman filter (EKF) and then go on to study several generalizations and extensions of recursive estimation that are commonly used: the Bayes filter, the iterated EKF, the particle filter, and the sigma-point Kalman filter. We return to batch estimation for nonlinear systems, which we connect more deeply to numerical optimization than in the linear-Gaussian chapter. We discuss the strengths and weaknesses of the various techniques presented and then introduce sliding-window filters as a compromise between recursive and batch methods. Finally, we discuss how continuous-time motion models can be employed in batch trajectory estimation for nonlinear systems.
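The following is a minimal sketch of a single EKF predict-correct step for a scalar system with the nonlinear measurement model y = x^2 + n; the motion model, noise variances, and measurements are illustrative assumptions, not a worked example from the chapter.

```python
# Minimal EKF sketch for a scalar state with nonlinear measurement y = x^2 + n.
# Model and noise values are assumptions chosen for illustration only.
import numpy as np

def ekf_step(x_hat, P, y, Q=0.1, R=0.2):
    # Prediction with an identity motion model (x_k = x_{k-1} + w_k)
    x_pred = x_hat
    P_pred = P + Q
    # Linearize the measurement model g(x) = x^2 about the predicted state
    G = 2.0 * x_pred                       # Jacobian dg/dx
    K = P_pred * G / (G * P_pred * G + R)  # Kalman gain
    x_new = x_pred + K * (y - x_pred ** 2)
    P_new = (1.0 - K * G) * P_pred
    return x_new, P_new

x_hat, P = 1.5, 1.0
for y in [4.1, 3.8, 4.3]:          # noisy measurements of x^2 with true x near 2
    x_hat, P = ekf_step(x_hat, P, y)
    print(f"estimate: {x_hat:.3f}, variance: {P:.3f}")
```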
Rotational state variables are a problem for our estimation tools from earlier chapters, which all assumed the state to be estimated was a vector in the sense of linear algebra. Rotations cannot be globally described as vectors and as such must be handled with care. This chapter re-examines rotations as an example of a Lie group, which has many useful properties despite not being a vector space. The main takeaway of the chapter is that in estimation we can use the Lie group structure to adapt our estimation tools to work with rotations and, by association, poses. The key is to consider small perturbations to rotations in the group's Lie algebra in order to make two tasks easier to handle: performing numerical optimization and representing uncertainty. The chapter can also serve as a useful reference for readers already familiar with the content.
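To illustrate the perturbation idea, the sketch below builds a rotation via the exponential map from so(3) to SO(3) and applies a small left perturbation expressed in the Lie algebra; the particular rotation and perturbation values are illustrative assumptions.

```python
# Sketch of the Lie-group machinery: exponential map from so(3) to SO(3) and a
# small left perturbation of a nominal rotation. Values are assumptions.
import numpy as np

def skew(phi):
    return np.array([[0, -phi[2], phi[1]],
                     [phi[2], 0, -phi[0]],
                     [-phi[1], phi[0], 0]])

def so3_exp(phi):
    """Exponential map: a 3-vector in the Lie algebra to a rotation matrix."""
    angle = np.linalg.norm(phi)
    if angle < 1e-12:
        return np.eye(3) + skew(phi)          # first-order approximation
    a = phi / angle
    A = skew(a)
    return np.eye(3) + np.sin(angle) * A + (1 - np.cos(angle)) * (A @ A)

R_nominal = so3_exp(np.array([0.0, 0.0, np.pi / 4]))        # nominal rotation
delta = np.array([0.01, -0.02, 0.005])                      # small perturbation in the Lie algebra
R_perturbed = so3_exp(delta) @ R_nominal                    # left-multiplied update
print(np.allclose(R_perturbed @ R_perturbed.T, np.eye(3)))  # still a valid rotation
```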
Following on the heels of the chapter on nonlinear estimation, this chapter focusses on some of the common pitfalls and failure modes of estimation techniques. We begin by discussing some key properties that we would like healthy estimators to have (e.g., unbiasedness and consistency) and how to measure these properties. We delve more deeply into biases and discuss how in some cases we can fold bias estimation right into our estimator, while in other cases we cannot. We touch briefly on data association (matching measurements to the right parts of models) and how to mitigate the effect of outlier measurements using robust estimation. We close with some methods to determine good measurement covariances for use in our estimators.
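As a small example of robust estimation, the sketch below down-weights outlying residuals with a Huber-style weight inside an iteratively reweighted least-squares loop; the data, threshold, and the choice of Huber cost are illustrative assumptions rather than the specific robust estimators discussed in the chapter.

```python
# Robust estimation sketch: Huber weighting inside iteratively reweighted
# least squares, so one outlier has limited influence. Values are assumptions.
import numpy as np

def huber_weight(residual, k=1.345):
    """Weight applied to each residual; large residuals get weight k/|r|."""
    r = np.abs(residual)
    return np.where(r <= k, 1.0, k / np.maximum(r, 1e-12))

# Estimate a constant from measurements contaminated by one large outlier.
y = np.array([1.0, 1.1, 0.9, 1.05, 8.0])
x = np.mean(y)                                 # initial (non-robust) estimate
for _ in range(10):                            # reweighted least-squares iterations
    w = huber_weight(y - x)
    x = np.sum(w * y) / np.sum(w)
print(f"robust estimate: {x:.3f} (plain mean is {np.mean(y):.3f})")
```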
This chapter opens with a brief history of estimation from astronomy, navigation at sea, and space exploration. It defines the problem of estimation and gives some modern sensor fusion examples. A description of how the book is organized and how to read it is provided. The book is compared to other great volumes on estimation and robotics in order to understand how it fits into the larger landscape.
A collision-free path planning method based on learning from demonstration (LfD) is proposed to address the challenge of cumbersome manual teaching operations caused by the complex yarn storage actions, variable mechanism positions, and limited workspace in preform weaving. First, extreme learning machines (ELM) are used to autonomously learn the teaching data of yarn storage, constructing the mapping relationship between the starting and ending points and the teaching path points, so that an imitation path with similar storage actions can be obtained for the starting and ending points of a new task. Second, an improved rapidly exploring random trees (IRRT) method with adaptive direction and step size is proposed to expand path points with high quality. Finally, taking the spatial guidance points of the imitation path as the target direction of IRRT, the expansion direction is biased toward the imitation path to obtain a collision-free path that satisfies the yarn storage action. The results of different yarn storage examples show that the ELM-IRRT method can plan the yarn storage path within 2 s to 5 s when the position of the mechanism changes in narrow spaces, avoiding tedious manual programming of robot movements, which demonstrates that the method is feasible and effective.
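For intuition about biasing tree expansion toward an imitation path, here is a simplified two-dimensional sketch of a rapidly exploring random tree whose sampling is occasionally drawn from guidance points; it is not the authors' IRRT implementation, and the obstacle, step size, and bias probability are assumptions.

```python
# Simplified 2-D RRT sketch with sampling biased toward an imitation path.
# Obstacle, step size, and bias probability are illustrative assumptions.
import random, math

obstacle = (5.0, 5.0, 1.5)                    # circular obstacle: (x, y, radius)
imitation_path = [(2.0, 2.0), (4.0, 6.5), (7.0, 7.5), (9.0, 9.0)]   # guidance points
start, goal, step = (0.0, 0.0), (9.0, 9.0), 0.5

def collision_free(p):
    ox, oy, r = obstacle
    return math.hypot(p[0] - ox, p[1] - oy) > r

def sample():
    # With some probability, sample a guidance point from the imitation path.
    if random.random() < 0.4:
        return random.choice(imitation_path)
    return (random.uniform(0, 10), random.uniform(0, 10))

tree = {start: None}                           # node -> parent
for _ in range(5000):
    q = sample()
    nearest = min(tree, key=lambda n: math.dist(n, q))
    d = math.dist(nearest, q)
    if d == 0:
        continue
    new = (nearest[0] + step * (q[0] - nearest[0]) / d,
           nearest[1] + step * (q[1] - nearest[1]) / d)
    if collision_free(new):
        tree[new] = nearest
        if math.dist(new, goal) < step:        # reached the goal region
            path, n = [], new
            while n is not None:               # backtrack through parents
                path.append(n)
                n = tree[n]
            print("path with", len(path), "waypoints found")
            break
```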