Propositional temporal logic over the real-number time flow is finitely axiomatisable, but its first-order counterpart is not recursively axiomatisable. We study the logic that combines the propositional axiomatisation with the usual axioms for first-order logic with identity, and develop an alternative “admissible” semantics for it, showing that it is strongly complete for admissible models over the reals. By contrast, there is no recursive axiomatisation of the first-order temporal logic of admissible models whose time flow is the integers, or any scattered linear ordering.
In rotorcraft atmospheric environment detection, reflecting the distribution of atmospheric pollutants realistically and completely requires sampling points spread throughout the entire three-dimensional space, and the cooperation of multiple unmanned aerial vehicles (multi-UAVs) can ensure real-time performance and increase operational efficiency. To address the problem of coordinated detection by multi-UAVs, the region division and global coverage path planning of the three-dimensional space to be detected are studied. A simulated annealing-whale optimization algorithm (SA-WOA) is proposed, which introduces adaptive weights with a Levy flight mechanism, improves the Metropolis criterion, and adds an adaptive tempering mechanism in the SA stage. Path smoothing is subsequently performed with the help of nonuniform rational B-spline (NURBS) curves. A comparison on the eil76 dataset shows that the path planned by SA-WOA is 10.15% shorter than that of WOA, 13.25% shorter than the SA result, and only 0.95% longer than the optimal path in the dataset. In terms of planning time, SA-WOA is similar in speed to WOA and 27.15% faster than SA, demonstrating good planning performance. A hardware system platform was designed and built, and environmental gas measurement experiments were conducted. The experimental results indicate that the proposed multi-UAV collaborative environment detection task planning method has practical value in the field of atmospheric environment detection.
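Since the abstract only names the SA-stage ingredients, the following minimal Python sketch illustrates two of them, a Levy-flight perturbation and the Metropolis acceptance criterion with geometric cooling. The quadratic cost function, step scale 0.1, and cooling rate are hypothetical stand-ins, not the paper's tuned SA-WOA.

```python
# Minimal sketch: Levy-flight moves inside a simulated-annealing loop.
import math
import random

def levy_step(beta=1.5):
    # Mantegna's algorithm for a heavy-tailed Levy-distributed step length.
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def anneal(cost, x0, t0=1.0, cooling=0.95, iters=1000):
    x, fx, t = x0, cost(x0), t0
    best_x, best_f = x, fx
    for _ in range(iters):
        cand = [xi + 0.1 * levy_step() for xi in x]  # Levy-flight move
        fc = cost(cand)
        # Metropolis criterion: always accept improvements; otherwise accept
        # a worse candidate with probability exp(-delta / T).
        if fc < fx or random.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling  # geometric cooling schedule
    return best_x, best_f

# Example: minimize a simple quadratic (illustrative only).
print(anneal(lambda v: sum(vi ** 2 for vi in v), [3.0, -2.0]))
```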
Collaborative robotics is a field of growing industrial interest, within which understanding the energetic behavior of manipulators is essential. In this work, we present the electro-mechanical modeling of the UR5 e-series robot through the identification of its dynamic and electrical parameters. With the identified robot model, it is then possible to compute and optimize the energy consumption of the robot during prescribed trajectories. The proposed model is derived from data acquired from the robot controller during bespoke experimental tests, using model identification procedures and the datasheets provided by the manipulator, motor, and gearbox manufacturers. The entire procedure does not require any additional sensors, so it can be easily replicated with an off-the-shelf manipulator and applied to other robots of the same family.
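As a hedged illustration of what such an identified model enables, the sketch below integrates mechanical power plus resistive (Joule) losses over a sampled trajectory. The torque, velocity, and current arrays and the motor resistance R_motor are hypothetical inputs; the paper's actual electro-mechanical model is richer than this.

```python
# Illustrative energy estimate over a sampled trajectory (not the paper's model).
import numpy as np

def trajectory_energy(tau, omega, current, R_motor, dt):
    # tau, omega, current: arrays sampled at interval dt (per joint, flattened).
    mech = np.sum(tau * omega) * dt               # mechanical power integral
    joule = np.sum(R_motor * current ** 2) * dt   # resistive (copper) losses
    return mech + joule

# Toy usage with made-up samples.
t = np.linspace(0, 1, 100)
print(trajectory_energy(np.sin(t), np.cos(t), 0.5 * np.ones_like(t), 0.3, t[1] - t[0]))
```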
Motion and constraint identification is a fundamental issue throughout the development of parallel mechanisms. Aiming at meaningful results obtained through a heuristic and visualizable process, this paper proposes a machine learning-based method for modeling motions and constraints and develops automatic software for mobility analysis. As a preliminary, the topology of a parallel mechanism is characterized by recognizable symbols and mapped to the motion of each component limb through a programming algorithm. A predictive model for motions and constraints, together with their physical meanings, is constructed based on a neural network. An increase in accuracy is obtained through a novel loss function that combines the network error with the error of the physical equation. Based on the predictive model, an automatic framework for the mobility analysis of parallel mechanisms is constructed. Software with a WebGL interface is developed, providing the results of mobility analysis together with a visualization of the process. Finally, five typical parallel mechanisms are taken as examples to verify the approach and its software. The method facilitates obtaining the motion/constraint and mobility of parallel mechanisms with both numerical and geometric features.
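The combined loss admits a short sketch. Below is a minimal PyTorch version of a loss that adds a physics-residual penalty to the ordinary data-fit term; the residual values and the weight lam are placeholders, since the paper's physical equations are not given in the abstract.

```python
# Sketch of a combined data-fit + physics-residual loss (illustrative weighting).
import torch

def combined_loss(pred, target, physics_residual, lam=0.1):
    data_term = torch.nn.functional.mse_loss(pred, target)  # network error
    physics_term = physics_residual.pow(2).mean()           # physical-equation error
    return data_term + lam * physics_term

# Toy usage: the residual would come from evaluating the mechanism's
# constraint equations on the network output (hypothetical values here).
pred = torch.tensor([1.0, 2.0])
target = torch.tensor([1.1, 1.9])
residual = torch.tensor([0.05, -0.02])
print(combined_loss(pred, target, residual))
```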
The early applications of Visual Simultaneous Localization and Mapping (VSLAM) technology focused primarily on static environments, relying on the static nature of the environment for map construction and localization. In practical applications, however, robots often encounter dynamic environments, such as city streets with moving objects, which can make it challenging for a robot to accurately understand its own position. This paper proposes a real-time localization and mapping method tailored for dynamic environments that effectively handles the interference of moving objects. Firstly, depth images are clustered and subdivided into sub-point clouds to obtain clearer local information; the concept of the sub-point cloud is a novel idea introduced in this paper. Secondly, when processing regular frames, we fully exploit the structural invariance of static sub-point clouds and their relative relationships: using results computed from sub-poses, we can effectively quantify the disparities between regular frames and reference frames, and thereby accurately detect dynamic areas within the regular frames. Furthermore, refining the dynamic areas of keyframes using historical observation data further enhances the robustness of the system. We conducted comprehensive experimental evaluations on challenging dynamic sequences from the TUM dataset and compared our approach with state-of-the-art dynamic VSLAM systems. The experimental results demonstrate that our method significantly enhances the accuracy and robustness of pose estimation. Additionally, we validated the effectiveness of the system in dynamic environments through real-world tests.
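To make the sub-point-cloud disparity test concrete, here is a speculative sketch of one way such a check could work: apply the estimated frame-to-frame pose to each reference sub-cloud and flag clusters whose mean residual exceeds a threshold. The clustering, matching, pose estimate (R, t), and threshold are all assumptions for illustration, not the paper's pipeline.

```python
# Speculative dynamic-region check on matched sub-point clouds.
import numpy as np

def flag_dynamic(sub_clouds_ref, sub_clouds_cur, R, t, thresh=0.05):
    # sub_clouds_*: lists of (N_i, 3) arrays of matched cluster points.
    flags = []
    for P_ref, P_cur in zip(sub_clouds_ref, sub_clouds_cur):
        # A static cluster should satisfy P_cur ~= R @ P_ref + t.
        residual = np.linalg.norm(P_cur - (P_ref @ R.T + t), axis=1).mean()
        flags.append(residual > thresh)  # True = likely dynamic region
    return flags
```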
High-immersion virtual reality (HiVR) attracts increasing attention among language learning researchers because of its potential to enhance language learning. Prior studies focused mainly on HiVR and linguistic knowledge acquisition, and little is known about HiVR and emotions in language learning. Foreign language speaking anxiety (FLSA) is a common emotion that inhibits language learning and use, so it is important to explore approaches to alleviate it. This study investigated the potential use of HiVR for coping with FLSA: 140 Chinese EFL learners were randomly assigned to four groups (35 students each), each with a different combination of learning environment (HiVR or classroom) and learning principle (situated learning or teacher-centred learning). Students’ pre- and post-test FLSA levels within and among the four groups were compared via t-tests and ANOVA. Participants’ descriptions of FLSA change and perceptions of the effects of HiVR on FLSA were integrated with the quantitative results for analysis. The integrated analysis showed that although most students perceived HiVR as a useful tool for alleviating FLSA, they found it difficult to transfer the reduced anxiety experienced in HiVR to real-life situations. The statistical results also showed that HiVR did not significantly influence students’ real-life FLSA. Most participants reported that HiVR offered them an authentic environment and enjoyable learning activities, which engaged them in learning, but the use of avatars in HiVR sometimes created an obstacle to communication. Implications for using HiVR technology to enhance foreign language learning are provided.
Deep learning (DL) has been widely used in bearing fault diagnosis. In particular, convolutional neural networks (CNNs) improve diagnosis accuracy by extracting discriminative fault features. However, CNNs lack an explicit mechanism for learning which fault characteristics in the input signal contribute to the diagnosis results. This article presents a new end-to-end deep framework called the multi-head self-attention convolutional neural network (MSA-CNN) for bearing fault diagnosis. Firstly, we adopt a data pre-processing method that directly converts one-dimensional (1D) original signals into two-dimensional (2D) grayscale images, which is simple to implement and preserves the complete information of the original signal. Secondly, multi-head self-attention (MSA) is constructed to aggregate global information and adaptively assign weights to the features of the input signal. Thirdly, a CNN with small-scale kernels extracts detailed local features. Finally, the learned high-level representations are fed into a fully connected (FC) layer for fault diagnosis. The performance of the MSA-CNN is validated on different datasets. The results show that the proposed MSA-CNN significantly improves fault diagnosis accuracy compared with other state-of-the-art methods and has excellent noise immunity.
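The 1D-to-2D pre-processing step is simple enough to sketch directly. The following Python snippet folds a signal segment into a square grayscale image; the 64x64 size and min-max normalization to [0, 255] are illustrative assumptions, not necessarily the paper's settings.

```python
# Sketch of folding a 1D signal segment into a square 2D grayscale image.
import numpy as np

def signal_to_image(segment: np.ndarray, side: int = 64) -> np.ndarray:
    x = segment[: side * side].astype(np.float64)
    x = (x - x.min()) / (x.max() - x.min() + 1e-12)  # scale to [0, 1]
    return np.round(255 * x).reshape(side, side).astype(np.uint8)

img = signal_to_image(np.random.randn(64 * 64))
print(img.shape, img.dtype)  # (64, 64) uint8
```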
In many applications, dimensionality reduction is important. Uses of dimensionality reduction include visualization, removing noise, and decreasing compute and memory requirements, such as for image compression. This chapter focuses on low-rank approximation of a matrix. There are theoretical models for why big matrices should be approximately low rank. Low-rank approximations are also used to compress large neural network models to reduce computation and storage. The chapter begins with the classic approach to approximating a matrix by a low-rank matrix, using a nonconvex formulation that has a remarkably simple singular value decomposition solution. It then applies this approach to the source localization application via the multidimensional scaling method and to the photometric stereo application. It then turns to convex formulations of low-rank approximation based on proximal operators that involve singular value shrinkage. It discusses methods for choosing the rank of the approximation, and describes the optimal shrinkage method called OptShrink. It discusses related dimensionality reduction methods including (linear) autoencoders and principal component analysis. It applies the methods to learning low-dimensional subspaces from training data for subspace-based classification problems. Finally, it extends the method to streaming applications with time-varying data. This chapter bridges the classical singular value decomposition tool with modern applications in signal processing and machine learning.
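Both formulations mentioned above admit a compact sketch: the nonconvex best rank-r approximation comes from truncating the SVD (the Eckart-Young result), while the convex formulations replace truncation with singular value shrinkage (the proximal operator of the nuclear norm). The rank r and threshold tau below are user choices.

```python
# Sketch: truncated-SVD low-rank approximation and singular value shrinkage.
import numpy as np

def low_rank_approx(A: np.ndarray, r: int) -> np.ndarray:
    # Best rank-r approximation in the Frobenius/spectral norm (Eckart-Young).
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

def sv_shrink(A: np.ndarray, tau: float) -> np.ndarray:
    # Proximal operator of the nuclear norm: soft-threshold singular values.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

A = np.random.randn(50, 30)
print(np.linalg.matrix_rank(low_rank_approx(A, 5)))  # 5
```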
An important operation in signal processing and machine learning is dimensionality reduction. There are many such methods, but the starting point is usually linear methods that map data to a lower-dimensional set called a subspace. When working with matrices, the notion of dimension is quantified by rank. This chapter reviews subspaces, span, dimension, rank, and nullspace. These linear algebra concepts are crucial to thoroughly understanding the singular value decomposition (SVD), a primary tool for the rest of the book (and beyond). The chapter concludes with a machine learning application, signal classification by nearest subspace, that builds on all the concepts of the chapter.
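A short sketch of the closing application under the usual setup: each class's subspace basis is taken from the leading left singular vectors of its training matrix, and a test vector is assigned to the class with the smallest projection residual. The per-class rank r is an assumed tuning parameter.

```python
# Sketch of nearest-subspace classification via per-class SVD bases.
import numpy as np

def class_basis(X: np.ndarray, r: int) -> np.ndarray:
    # Columns of X are training vectors for one class; return an orthonormal
    # basis for its r-dimensional subspace.
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :r]

def classify(x: np.ndarray, bases: list) -> int:
    # Residual of projecting x onto each class subspace; pick the smallest.
    residuals = [np.linalg.norm(x - U @ (U.T @ x)) for U in bases]
    return int(np.argmin(residuals))

rng = np.random.default_rng(0)
bases = [class_basis(rng.normal(size=(20, 10)), r=3) for _ in range(2)]
print(classify(rng.normal(size=20), bases))
```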
This chapter contains topics related to matrices with special structures that arise in many applications. It discusses companion matrices, a classic linear algebra topic. It constructs circulant matrices from a particular companion matrix and describes their signal processing applications. It discusses the closely related family of Toeplitz matrices. It describes the power iteration that is used later in the chapter for Markov chains. It discusses nonnegative matrices and their relationships to graphs, leading to the analysis of Markov chains. The chapter ends with two applications: Google’s PageRank method and spectral clustering using graph Laplacians.
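Power iteration and its PageRank application admit a compact sketch. The snippet below iterates a column-stochastic “Google matrix” built with the conventional damping factor 0.85; the 3-page link matrix is made up for illustration.

```python
# Sketch: power iteration on a damped, column-stochastic link matrix.
import numpy as np

def power_iteration(P: np.ndarray, iters: int = 100) -> np.ndarray:
    v = np.ones(P.shape[0]) / P.shape[0]
    for _ in range(iters):
        v = P @ v
        v /= v.sum()  # renormalize (optional for an exactly stochastic P)
    return v

# Hypothetical column-stochastic link matrix for 3 pages, damping d = 0.85.
L = np.array([[0.0, 0.5, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 0.5, 0.0]])
d = 0.85
G = d * L + (1 - d) / 3 * np.ones((3, 3))
print(power_iteration(G))  # stationary distribution = PageRank scores
```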
Services offered by genealogy companies are increasingly underpinned by computational remediation and algorithmic power. Users are encouraged to employ a variety of mobile web and app plug-ins to create progressively more sophisticated forms of synthetic media featuring their (often deceased) ancestors. As the promotion of deepfake and voice-synthesizing technologies intensifies within genealogical contexts – aggrandised as mechanisms for ‘bringing people back to life’ – we argue it is crucial that we critically examine these processes and the socio-technical infrastructures that underpin them, as well as their mnemonic impacts. In this article, we present a study of two AI-enabled services released by the genealogy company MyHeritage: Deep Nostalgia (launched 2020), and DeepStory (2022). We carry out a close critical reading of these services and the outputs they produce, which we understand as examples of ‘remediated memory’ (Kidd and Nieto McAvoy 2023) shaped by corporate interests. We examine the distribution of agency where the promotion by these platforms of unique and personalised experiences comes into tension with the propensity of algorithms to homogenise. The analysis intersects with nascent ethical debates about the exploitative and extractive qualities of machine learning. Our research unpacks the social and (techno-)material implications of these technologies, demonstrating an enduring individual and collective need to connect with our past(s), and to test and extend our memories and recollections through increasingly intense and proximate new media formats.
Many of the preceding chapters involved optimization formulations: linear least squares, Procrustes, low-rank approximation, multidimensional scaling. All of these have analytical solutions, like the pseudoinverse for minimum-norm least squares problems and the truncated singular value decomposition for low-rank approximation. But often we need iterative optimization algorithms, for example when no closed-form minimizer exists, or when the analytical solution requires too much computation and/or memory (e.g., the singular value decomposition for large problems). To solve an optimization problem via an iterative method, we start with some initial guess and then the algorithm produces a sequence of iterates that hopefully converges to a minimizer. This chapter describes the basics of gradient-based iterative optimization algorithms, including preconditioned gradient descent (PGD) for the linear LS problem. PGD uses a fixed step size, whereas preconditioned steepest descent uses a line search to determine the step size. The chapter then considers gradient descent and accelerated versions for general smooth convex functions. It applies gradient descent to the machine learning application of binary classification via logistic regression. Finally, it summarizes stochastic gradient descent.
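As an illustration of the application mentioned above, here is a minimal sketch of fixed-step gradient descent on the average logistic-regression cross-entropy loss. The step size, iteration count, and synthetic data are assumptions; a preconditioner or line search would replace the fixed lr in the PGD and steepest-descent variants.

```python
# Sketch: gradient descent for binary classification via logistic regression.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_gd(X, y, lr=0.1, iters=500):
    # y in {0, 1}; w minimizes the average cross-entropy loss.
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)  # gradient of the loss
        w -= lr * grad                              # fixed-step descent update
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)
print(logistic_gd(X, y))
```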
This chapter introduces matrix factorizations – somewhat like the reverse of matrix multiplication. It starts with the eigendecomposition of symmetric matrices, then generalizes to normal and asymmetric matrices. It introduces the basics of the singular value decomposition (SVD) of general matrices. It discusses a simple application of the SVD that uses the largest singular value of a matrix (the spectral norm), posed as an optimization problem, and then describes optimization problems related to eigenvalues and the smallest singular value. (The “real” SVD applications appear in subsequent chapters.) It discusses the special situations when one can relate the eigendecomposition and an SVD of a matrix, leading to the special class of positive (semi)definite matrices. Along the way there are quite a few small eigendecomposition and SVD examples.
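The spectral-norm optimization problem mentioned above can be sketched in a few lines: the largest singular value of A maximizes ||Ax|| over unit vectors x, which power iteration on AᵀA finds, and the result is checked against numpy's SVD. The iteration count is an arbitrary choice.

```python
# Sketch: spectral norm as an optimization, via power iteration on A^T A.
import numpy as np

def spectral_norm(A: np.ndarray, iters: int = 200) -> float:
    x = np.random.default_rng(0).normal(size=A.shape[1])
    for _ in range(iters):
        x = A.T @ (A @ x)          # one step of power iteration on A^T A
        x /= np.linalg.norm(x)     # keep x on the unit sphere
    return float(np.linalg.norm(A @ x))

A = np.random.default_rng(1).normal(size=(40, 30))
# The two values below should agree (largest singular value of A).
print(spectral_norm(A), np.linalg.svd(A, compute_uv=False)[0])
```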
At present, industrial scenes with sparse features and weak textures are widely encountered, and the three-dimensional reconstruction of such scenes is a recognized problem. Pressure pipelines are widely used in fields such as petroleum engineering, chemical engineering, and hydropower station engineering. However, there is no mature solution for the three-dimensional reconstruction of pressure pipes, mainly because the typical scenes in which pressure pipes are found also have relatively few features and textures. Traditional three-dimensional reconstruction algorithms based on feature extraction are largely ineffective for such feature-poor scenes. In view of the above problems, this paper proposes an improved interframe registration algorithm based on point cloud fitting with cylinder axis vector constraints. By incorporating a geometric feature parameter of the cylindrical pressure pipeline, specifically the axis vector of the cylinder, to constrain the traditional iterative closest point algorithm, the accuracy of point cloud registration can be improved in scenarios lacking features and textures, and some environmental uncertainties can be overcome. Finally, the proposed fitting-based point cloud registration algorithm with cylinder axis vector constraints is tested using actual laser point cloud data collected from pressure pipelines. The experimental results show that, under the same conditions, the proposed method achieves higher registration accuracy than other open-source point cloud registration algorithms. Moreover, integrating this algorithm into an open-source three-dimensional reconstruction framework leads to better reconstruction results.
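One hedged reading of the axis-vector constraint is sketched below: estimate each cloud's cylinder axis (here crudely, as the principal direction of the points) and pre-align the source axis with the target axis before a standard ICP refinement. The paper's cylinder fitting and constraint formulation are more sophisticated than this illustration.

```python
# Sketch: crude cylinder-axis estimation and axis pre-alignment before ICP.
import numpy as np

def principal_axis(P: np.ndarray) -> np.ndarray:
    # Leading right singular vector of the centered points approximates the
    # cylinder axis when the points sample a long pipe segment.
    _, _, Vt = np.linalg.svd(P - P.mean(axis=0), full_matrices=False)
    return Vt[0]

def axis_align(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    # Rotation taking unit vector a onto unit vector b (Rodrigues' formula);
    # assumes a != -b.
    v = np.cross(a, b)
    c = float(a @ b)
    K = np.array([[0, -v[2], v[1]],
                  [v[2], 0, -v[0]],
                  [-v[1], v[0], 0]])
    return np.eye(3) + K + K @ K / (1 + c)

# The resulting rotation would seed/constrain a standard ICP refinement.
```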
This article explores the nature and dynamics of mnemonic communities within the context of social media platforms and proposes to identify mnemonic communities using hashtag co-occurrence analysis. The article distinguishes between ‘explicit’ and ‘latent’ mnemonic communities, arguing that while some digital mnemonic communities may exhibit characteristics of offline communities, others exist latently as discursive spaces or semiospheres without direct awareness. On platforms like Instagram, hashtags function as semiotic markers, but also as user-chosen indexes to the content. As hashtags link the social and semantic aspects of community formation, hashtag co-occurrence analysis offers a robust framework for understanding and mapping these communities. This method makes it possible to detect and analyse patterns of hashtag use that suggest the presence of networked community structures which may not be apparent or conscious to the social media users themselves. Additionally, a metric is introduced for determining the degree of ‘latentness’ of a community, quantifying its internal cohesion relative to its external connections. The article demonstrates this approach by applying hashtag co-occurrence analysis to a dataset of Instagram posts tagged with #Juneteenth, a popular hashtag used to commemorate the ending of slavery in the United States. It identifies 87 mnemonic communities that reflect the diversity and complexity of how platforms facilitate memory-sharing practices and the role of semiotic markers in forming (latent) mnemonic networks.
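The two computational steps lend themselves to a small sketch: build a weighted hashtag co-occurrence graph from posts, then score a community's ‘latentness’ from its internal versus external edge weight. The toy posts and the exact ratio used here are illustrative assumptions; the article's metric may be defined differently.

```python
# Sketch: hashtag co-occurrence graph and a cohesion-based latentness ratio.
from itertools import combinations
from collections import Counter

posts = [["juneteenth", "freedom", "history"],
         ["juneteenth", "blackhistory"],
         ["freedom", "history"]]

cooc = Counter()
for tags in posts:
    for a, b in combinations(sorted(set(tags)), 2):
        cooc[(a, b)] += 1  # weighted co-occurrence edge

def latentness(community: set, edges: Counter) -> float:
    # Internal edge weight relative to total (internal + boundary) weight.
    internal = sum(w for (a, b), w in edges.items()
                   if a in community and b in community)
    external = sum(w for (a, b), w in edges.items()
                   if (a in community) ^ (b in community))
    return internal / (internal + external) if internal + external else 0.0

print(latentness({"freedom", "history"}, cooc))  # 0.5 on this toy data
```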