To reduce the influence of system parameters and load on the dynamic performance of the permanent magnet synchronous motor (PMSM) in cooperative robot joint modules, a practical model-based robust control method is proposed. It combines traditional error-based proportional-integral-derivative (PID) control and robust control with model-based control. We first set up the nominal controller using the dynamics model. To limit the influence of uncertainty on dynamic performance, a robust controller is established based on the Lyapunov method. The resulting control can be regarded as an improved PID control or a redesigned robust control. Compared with traditional control methods, it is simple to implement and effective in practice. Theoretical analysis proves that the controller guarantees the uniform boundedness and uniform ultimate boundedness of the system. In addition, the rapid controller prototyping platform cSPACE is built on the experimental setup, which avoids lengthy programming and debugging and offers considerable convenience for practical operation. Finally, numerical simulation and real-time experimental results are presented. Based on cSPACE and a PMSM in the joint module of a practical cooperative robot, the validity of the control design and the achievable control performance are verified.
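As a minimal numerical illustration of why a purely model-based PD law leaves a load-dependent steady-state error while an added integral (PID-like) compensation removes it, consider the following sketch of a one-degree-of-freedom motor model. The plant, gains, and load value are assumed for illustration and are not taken from the paper.

```python
# Sketch (assumed plant and gains, not the paper's controller): a 1-DOF
# motor/load model m*x'' + b*x' = u + d with unknown constant load d,
# controlled by a model-based PD law, optionally with integral compensation.
def simulate(ki, t_end=15.0, dt=1e-3):
    m, b, d = 1.0, 0.5, 2.0       # true plant parameters and unknown load
    m_hat, b_hat = 1.0, 0.5       # nominal model used by the controller
    kp, kd = 100.0, 20.0          # PD gains
    x = v = integ = 0.0
    x_des = 1.0                   # regulation target
    for _ in range(int(t_end / dt)):
        e, e_dot = x_des - x, -v
        integ += e * dt
        # Nominal model-based law plus optional integral term:
        u = b_hat * v + m_hat * (kp * e + kd * e_dot + ki * integ)
        a = (u + d - b * v) / m   # true plant acceleration
        v += a * dt               # semi-implicit Euler integration
        x += v * dt
    return x_des - x              # final tracking error

err_pd = simulate(ki=0.0)    # model-based PD only: error settles near d/kp
err_pid = simulate(ki=50.0)  # integral term drives the error toward zero
```

The PD-only error settles at magnitude d/kp (0.02 here), while the integral term compensates the unknown load, mirroring the motivation for augmenting a nominal model-based controller with PID-like and robust terms.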
This paper presents a mobile robot platform that performs both indoor and outdoor localization based on an intelligent low-cost depth–inertial fusion approach. The proposed approach uses depth-based localization data to enhance the accuracy of the inertial measurement unit (IMU) pose data through a depth–inertial fusion based on feedforward cascade correlation networks (CCNs). The aim is to correct the drift that accompanies the use of the IMU sensor by means of a depth camera, while maintaining the high frequency of the IMU sensor and the accuracy of the depth camera. The mobile robot dynamic states estimated by the proposed approach are deployed and examined through real-time autonomous navigation. It is shown that, using both the planned path and the continuous localization approach, the robot successfully controls its movement toward the destination. Several tests were conducted with different numbers of layers and percentages of the training set; the best performance is obtained with 12 layers and 80% of the pose data used as the training set for the CCN. The proposed framework is then compared to a solution based on fusing the information from an XSens IMU–GPS sensor with the Kobuki robot's built-in odometry. As the results demonstrate, the CCN achieved an enhanced performance with an average Euclidean error of 0.091 m, which is 56% lower than the error achieved by the artificial neural network.
In many complex practical optimization cases, the dominant characteristics of the problem are often not known a priori. There is therefore a need for general solvers, as it is not always possible to tailor a specialized approach to each application. The previously developed multilevel selection genetic algorithm (MLSGA) already shows good performance on a range of problems due to its diversity-first approach, which is rare among evolutionary algorithms. To increase the generality of its performance, this paper proposes utilizing multiple distinct evolutionary strategies simultaneously, similarly to algorithm selection, but with coevolutionary mechanisms between the subpopulations. This distinctive approach to coevolution provides less regular communication between subpopulations, with competition between collectives rather than individuals. This encourages the collectives to act more independently, creating a unique subregional search and leading to the development of coevolutionary MLSGA (cMLSGA). To test this methodology, nine genetic algorithms are selected to generate several variants of cMLSGA, which incorporate these approaches at the individual level. The mechanisms are tested on 100 different functions and benchmarked against nine state-of-the-art competitors to evaluate the generality of each approach. The results show that divergence in the working principles of the selected coevolutionary approaches is more important than their individual performance. Among the tested state of the art, the proposed methodology has the most uniform performance across divergent problem types, making it more likely to solve complex problems with limited knowledge of the search space, although it is outperformed by more specialized solvers on simpler benchmarking studies.
Higher-dimensional rewriting systems are tools to analyse the structure of formally reducing terms to normal forms, as well as to compare the different reduction paths that lead to those normal forms. This higher structure can be captured by finding a homotopy basis for the rewriting system. We show that the basic notions of confluence and wellfoundedness are sufficient to recursively build such a homotopy basis, with a construction reminiscent of an argument by Craig C. Squier. We then translate this construction to the setting of homotopy type theory, where managing equalities between paths is important in order to construct functions that are coherent with respect to higher dimensions. Finally, we apply the result to approach a series of open questions in homotopy type theory, such as the characterisation of the homotopy groups of the free group on a set and of the pushout of 1-types. This paper expands on our previous conference contribution, Coherence via Wellfoundedness, by laying out the construction in the language of higher-dimensional rewriting.
Many research topics in natural language processing (NLP), such as explanation generation, dialog modeling, or machine translation, require evaluation that goes beyond standard metrics like accuracy or F1 score toward a more human-centered approach. Therefore, understanding how to design user studies becomes increasingly important. However, few comprehensive resources exist on planning, conducting, and evaluating user studies for NLP, making it hard for researchers without prior experience in human evaluation to get started. In this paper, we summarize the most important aspects of user studies and their design and evaluation, providing direct links to NLP tasks and NLP-specific challenges where appropriate. We (i) outline the general study design, ethical considerations, and factors to consider for crowdsourcing, and (ii) discuss the particularities of user studies in NLP and provide starting points for selecting questionnaires, experimental designs, and evaluation methods tailored to specific NLP tasks. Additionally, we offer examples with accompanying statistical evaluation code to bridge the gap between theoretical guidelines and practical applications.
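As one concrete instance of the kind of statistical evaluation code the paper advocates pairing with design guidelines, here is a sketch of a paired sign-flip permutation test for comparing per-item human ratings of two NLP systems; the ratings, sample size, and function name are invented for illustration.

```python
# Hypothetical example (not from the paper): paired sign-flip permutation
# test on per-item Likert ratings of two systems' outputs.
import random

def paired_permutation_test(a, b, n_resamples=10000, seed=0):
    """Two-sided p-value for the mean paired difference between a and b."""
    rng = random.Random(seed)
    diffs = [x - y for x, y in zip(a, b)]
    observed = abs(sum(diffs) / len(diffs))
    hits = 0
    for _ in range(n_resamples):
        # Under H0 the sign of each paired difference is exchangeable.
        flipped = [d if rng.random() < 0.5 else -d for d in diffs]
        if abs(sum(flipped) / len(flipped)) >= observed:
            hits += 1
    return hits / n_resamples

# Ratings (1-5 Likert) of the same 12 items from two systems:
sys_a = [4, 5, 4, 4, 3, 5, 4, 4, 5, 4, 3, 4]
sys_b = [3, 3, 4, 2, 3, 4, 3, 3, 4, 3, 2, 3]
p = paired_permutation_test(sys_a, sys_b)
```

A permutation test like this makes no normality assumption about the ratings, which is one reason such tests are often recommended for small human-evaluation samples.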
Dynamic Network Actor Models (DyNAMs) assume that an observed sequence of relational events is the outcome of an actor-oriented decision process consisting of two decision levels. The first level represents the time until an actor initiates the next relational event, modeled by an exponential distribution with an actor-specific activity rate. The second level describes the choice of the receiver of the event, modeled by a conditional multinomial logit model. The DyNAM assumes that the parameters are constant across actors and contexts. This homogeneity assumption, albeit statistically and computationally convenient, is difficult to justify, e.g., in the presence of unobserved differences between actors or contexts. In this paper, we extend DyNAMs by including random-effects parameters that vary across actors or contexts, allowing us to control for unknown sources of heterogeneity. We illustrate the model by analyzing relational events among the users of an online community of aspiring and professional digital and graphic designers.
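The two decision levels can be made concrete with a small generative sketch (the parameterization and function names are ours, not the authors' implementation): level one draws each actor's exponential waiting time, and the minimum determines the next sender; level two draws the receiver from a conditional multinomial logit over the other actors.

```python
# Illustrative DyNAM-style generative process (assumed parameterization).
import math, random

def simulate_events(n_actors, rates, utility, n_events, seed=0):
    """rates[i]: activity rate of actor i; utility(i, j): receiver score."""
    rng = random.Random(seed)
    t, events = 0.0, []
    for _ in range(n_events):
        # Level 1: exponential waiting time per actor; the minimum wins.
        waits = [rng.expovariate(rates[i]) for i in range(n_actors)]
        sender = min(range(n_actors), key=lambda i: waits[i])
        t += waits[sender]
        # Level 2: conditional multinomial logit over possible receivers.
        others = [j for j in range(n_actors) if j != sender]
        weights = [math.exp(utility(sender, j)) for j in others]
        r, receiver = rng.random() * sum(weights), others[-1]
        for j, w in zip(others, weights):
            if r < w:
                receiver = j
                break
            r -= w
        events.append((t, sender, receiver))
    return events

# Four actors; actor 0 is a slightly more attractive receiver for everyone.
events = simulate_events(4, [1.0, 2.0, 0.5, 1.0],
                         lambda i, j: 0.5 if j == 0 else 0.0, 100)
```

The random-effects extension in the paper would, in this picture, replace the fixed rates and utility parameters with actor- or context-specific draws from a population distribution.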
Human-centered intelligent human–robot interaction can transcend the traditional keyboard and mouse and have the capacity to understand human communicative intentions by actively mining implicit human clues (e.g., identity information and emotional information) to meet individuals’ needs. Gait is a unique biometric feature that can provide reliable information to recognize emotions even when viewed from a distance. However, the insufficient amount and diversity of training data annotated with emotions severely hinder the application of gait emotion recognition. In this paper, we propose an adversarial learning framework for emotional gait dataset augmentation, with which a two-stage model can be trained to generate a number of synthetic emotional samples by separating identity and emotion representations from gait trajectories. To our knowledge, this is the first work to realize the mutual transformation between natural gait and emotional gait. Experimental results reveal that the synthetic gait samples generated by the proposed networks are rich in emotional information. As a result, the emotion classifier trained on the augmented dataset is competitive with state-of-the-art gait emotion recognition works.
A force sensorless impedance controller is proposed in this paper for robot manipulators without using force estimators. From the observation of the impedance control law, the force feedback term can be canceled if the inertia matrix in the target impedance is the same as the robot inertia matrix. However, the inertia matrix in the target impedance is almost always a constant matrix, while the robot inertia matrix is a function of the robot configuration; hence, they are not identical in general. A modification of the coefficient matrix for the contact force term in the target impedance is suggested in this paper to enable cancellation of the force feedback term in the impedance control law, so that a force sensorless impedance controller without force estimators can be obtained. The tracking performance in the free-space phase and the motion trajectory in the compliant-motion phase of the new design are almost the same as those of traditional impedance control. Modification of the inertia matrix in the target impedance results in small variations of the contact force, which is acceptable in practical applications. For robot manipulators containing uncertainties, an adaptive version of the new controller is also developed to give satisfactory performance without the need for force sensors. Rigorous mathematical justification of closed-loop stability is given in detail, and computer simulations are performed to verify the efficacy of the proposed design.
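The cancellation argument can be sketched in task-space notation (the symbols here are assumed for illustration, not taken from the paper). With robot dynamics $M_x(q)\ddot{x}+C_x(q,\dot{q})\dot{x}+g_x(q)=u+F$, tracking error $e=x-x_d$, and target impedance $M_d\ddot{e}+B_d\dot{e}+K_d e=F$, the standard impedance control law is

\[ u = M_x(q)\big[\ddot{x}_d + M_d^{-1}\big(F - B_d\dot{e} - K_d e\big)\big] + C_x(q,\dot{q})\dot{x} + g_x(q) - F, \]

in which the measured contact force $F$ enters only through the term $\big(M_x(q)M_d^{-1}-I\big)F$. This term vanishes exactly when $M_d=M_x(q)$; since $M_x$ varies with the configuration while $M_d$ is normally constant, modifying the coefficient of the force term in the target impedance, as the paper proposes, is what makes the cancellation achievable with a constant target inertia.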
In this paper, we mainly study the function spaces related to H-sober spaces. For an irreducible subset system H and $T_{0}$ spaces X and Y, it is proved that the following three conditions are equivalent: (1) the Scott space $\Sigma \mathcal O(X)$ of the lattice of all open sets of X is H-sober; (2) for every H-sober space Y, the function space $\mathbb{C}(X, Y)$ of all continuous mappings from X to Y equipped with the Isbell topology is H-sober; (3) for every H-sober space Y, the Isbell topology on $\mathbb{C}(X, Y)$ has property S with respect to H. One immediate corollary is that for a $T_{0}$ space X, Y is a d-space (resp., well-filtered space) iff the function space $\mathbb{C}(X, Y)$ equipped with the Isbell topology is a d-space (resp., well-filtered space). It is shown that for any $T_0$ space X for which the Scott space $\Sigma \mathcal O(X)$ is non-sober, the function space $\mathbb{C}(X, \Sigma 2)$ equipped with the Isbell topology is not sober. The function spaces $\mathbb{C}(X, Y)$ equipped with the Scott topology, the compact-open topology and the pointwise convergence topology are also discussed. Our study also leads to a number of questions, whose answers will deepen our understanding of the function spaces related to H-sober spaces.
This paper studies a Pareto-optimal reinsurance problem in which the contract is subject to default of the reinsurer. We assume that the reinsurer can invest a share of its wealth in a risky asset and that default occurs when the reinsurer's end-of-period wealth is insufficient to cover the indemnity. We show that without solvency regulation, the optimal indemnity function is of excess-of-loss form, regardless of the investment decision. Under the solvency regulation constraint, assuming the investment decision remains unchanged, the optimal indemnity function is characterized element-wise. Partial results are derived when both the indemnity function and the investment decision are affected by the solvency regulation. Numerical examples are provided to illustrate the implications of our results and the sensitivity of the solution to the model parameters.
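For intuition, an excess-of-loss indemnity pays the part of the loss above a deductible, possibly up to a cap; a toy sketch follows, with deductible, cap, and loss values chosen arbitrarily for illustration.

```python
# Toy excess-of-loss indemnity (parameters assumed, not from the paper).
def excess_of_loss(loss, deductible, limit=float("inf")):
    """Pay the loss in excess of the deductible, capped at the limit."""
    return min(max(loss - deductible, 0.0), limit)

payouts = [excess_of_loss(x, deductible=100.0, limit=250.0)
           for x in (50.0, 100.0, 180.0, 400.0)]
# Losses of 50 and 100 fall at or below the deductible; 180 pays 80;
# 400 would pay 300 but is capped at the 250 limit.
```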
Given a family $\mathcal{F}$ of bipartite graphs, the Zarankiewicz number $z(m,n,\mathcal{F})$ is the maximum number of edges in an $m$ by $n$ bipartite graph $G$ that does not contain any member of $\mathcal{F}$ as a subgraph (such a $G$ is called $\mathcal{F}$-free). For $1\leq \beta \lt \alpha \lt 2$, a family $\mathcal{F}$ of bipartite graphs is $(\alpha,\beta )$-smooth if for some $\rho \gt 0$ and every $m\leq n$, $z(m,n,\mathcal{F})=\rho m n^{\alpha -1}+O(n^\beta )$. Motivated by their work on a conjecture of Erdős and Simonovits on compactness and a classic result of Andrásfai, Erdős and Sós, Allen, Keevash, Sudakov and Verstraëte proved that for any $(\alpha,\beta )$-smooth family $\mathcal{F}$, there exists $k_0$ such that for all odd $k\geq k_0$ and sufficiently large $n$, any $n$-vertex $\mathcal{F}\cup \{C_k\}$-free graph with minimum degree at least $\rho (\frac{2n}{5}+o(n))^{\alpha -1}$ is bipartite. In this paper, we strengthen their result by showing that for every real $\delta \gt 0$, there exists $k_0$ such that for all odd $k\geq k_0$ and sufficiently large $n$, any $n$-vertex $\mathcal{F}\cup \{C_k\}$-free graph with minimum degree at least $\delta n^{\alpha -1}$ is bipartite. Furthermore, our result holds under a more relaxed notion of smoothness, which includes the families $\mathcal{F}$ consisting of the single graph $K_{s,t}$ when $t\gg s$. We also prove an analogous result for $C_{2\ell }$-free graphs for every $\ell \geq 2$, which complements a result of Keevash, Sudakov and Verstraëte.
Shakespeare education is being reimagined around the world. This book delves into the important role of collaborative projects in this extraordinary transformation. Over twenty innovative Shakespeare partnerships from the UK, US, Australia, New Zealand, the Middle East, Europe and South America are critically explored by their leaders and participants. Structured into thematic sections covering engagement with schools, universities, the public, the digital and performance, the chapters offer vivid insights into what it means to teach, learn and experience Shakespeare in collaboration with others. Diversity, equality, identity, incarceration, disability, community and culture are key factors in these initiatives, which together reveal how complex and humane Shakespeare education can be. Whether you are interested in practice or theory, this collection showcases an abundance of rich, inspiring and informative perspectives on Shakespeare education in our contemporary world.
At the forefront of cutting-edge technologies, this text provides a comprehensive treatment of a crucial network performance metric, ushering in new opportunities for rethinking the whole design of communication systems. Detailed exposition of the communication and network theoretic foundations of Age of Information (AoI) gives the reader a solid background, and discussion of the implications for signal processing and control theory sheds light on the important potential of recent research. The text includes extensive real-world applications of this vital metric, including caching, the Internet of Things (IoT), and energy harvesting networks. The far-reaching applications of AoI include networked monitoring systems, cyber-physical systems such as the IoT, and information-oriented systems and data analytics applications ranging from the stock market to social networks. The future of this exciting subject in 5G communication systems and beyond makes this a vital resource for graduate students, researchers and professionals.
New digital technologies, from AI-fired 'legal tech' tools to virtual proceedings, are transforming the legal system. But much of the debate surrounding legal tech has zoomed out to a nebulous future of 'robo-judges' and 'robo-lawyers.' This volume is an antidote. Zeroing in on the near- to medium-term, it provides a concrete, empirically minded synthesis of the impact of new digital technologies on litigation and access to justice. How far and fast can legal tech advance given regulatory, organizational, and technological constraints? How will new technologies affect lawyers and litigants, and how should procedural rules adapt? How can technology expand – or curtail – access to justice? And how must judicial administration change to promote healthy technological development and open courthouse doors for all? By engaging these essential questions, this volume helps to map the opportunities and the perils of a rapidly digitizing legal system – and provides grounded advice for a sensible path forward. This book is available as Open Access on Cambridge Core.
Multiphase segmentation of pore-scale features and identification of mineralogy from digital images of materials are critical for many applications in the natural resources sector. However, the materials involved (rocks, catalyst pellets, and synthetic alloys) have complex and unpredictable composition. Algorithms that can be extended to multiphase segmentation of images of these materials are relatively few and very human-intensive. Challenges lie in designing algorithms that are context-free, can function with less training data, and can handle the unpredictability of material composition. Semisupervised algorithms have shown success in classification in situations characterized by limited training data; they use unlabeled data in addition to labeled data to produce classifications, and the resulting segmentation can be more accurate than that of fully supervised learning approaches. This work proposes using a semisupervised clustering algorithm named Continuous Iterative Guided Spectral Class Rejection (CIGSCR) for multiphase segmentation of digital scans of materials. CIGSCR harnesses spectral cohesion, splitting the intensity histogram of the input image into clusters. This splitting provides the foundation for classification strategies that can be implemented as postprocessing steps to obtain the final segmentation, and one such classification strategy is presented. Micro-computed tomography scans of rocks are used to present the results. It is demonstrated that CIGSCR successfully distinguishes features up to the uniqueness of grayscale values and extracts features present in full image stacks (3D), including features not present in the training data. Results including instances of success and limitations are presented. Scalability to data sizes of $ \mathcal{O}\left({10}^9\right) $ voxels is briefly discussed.
Is it always beneficial to create a new relationship (have a new follower/friend) in a social network? This question can be formally stated as a property of the centrality measure that defines the importance of the actors of the network. Score monotonicity means that adding an arc increases the centrality score of the target of the arc; rank monotonicity means that adding an arc improves the importance of the target of the arc relative to the remaining nodes. It is known that most centralities are both score and rank monotone on directed, strongly connected graphs. In this paper, we study the problem of score and rank monotonicity for classical centrality measures in the case of undirected networks: in this case, we require that score, or relative importance, improves at both endpoints of the new edge. We show that, surprisingly, the situation in the undirected case is very different, and in particular that closeness, harmonic centrality, betweenness, eigenvector centrality, Seeley’s index, Katz’s index, and PageRank are not rank monotone; betweenness and PageRank are not even score monotone. In other words, while it is always a good thing to get a new follower, it is not always beneficial to get a new friend.
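The score-monotonicity notion can be illustrated with a small self-contained example (the graph is ours, not from the paper): the closeness centrality of an endpoint cannot drop when that endpoint gains an edge, since new edges can only shorten its distances to other nodes. The paper's point is that rank monotonicity, and for some measures even score monotonicity, can nevertheless fail on undirected graphs.

```python
# Closeness centrality via BFS on a small undirected graph (our example).
from collections import deque

def closeness(adj, s):
    """Classic closeness: (n - 1) / (sum of shortest-path distances from s)."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return (len(adj) - 1) / sum(dist.values())

# A path 0-1-2-3-4; then add the edge (0, 4) and recompute.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
before = closeness(adj, 0)   # distances 0,1,2,3,4 -> 4/10 = 0.4
adj[0].add(4); adj[4].add(0)
after = closeness(adj, 0)    # distances 0,1,2,2,1 -> 4/6
```

Here the endpoint's score rises from 0.4 to 2/3; the subtle failures the paper studies concern how the endpoint's rank relative to other nodes can still worsen.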
In engineering and architecture, different approaches have been developed that share the use of graph transformation to automate design processes or to search for design solutions by means of computational design synthesis. To give an overview of these approaches, we provide a review of articles published in the last decade. Forty-eight articles were reviewed to determine the similarities and differences of these approaches. Research fields in method development for the representation of design problems and the processing of graph transformations were identified, as well as applications of graph transformations in engineering, architecture, and shape grammars. Different approaches to documenting the vocabulary and the rules were examined. Finally, different approaches to rule application were analyzed. Based on the limitations found, future research directions are suggested.
It’s no secret that the commercial application of NLP technologies has exploded in recent years. From chatbots and virtual assistants to machine translation and sentiment analysis, NLP technologies are now being used in a wide variety of applications across a range of industries. With the increasing demand for technologies that can process human language, investors have been eager to get a piece of the action. In this article, we look at NLP startup funding over the past year, identifying the applications and domains that have received investment.