Despite recent breakthroughs in machine learning for natural language processing, Natural Language Inference (NLI) problems still constitute a challenge. To this end, we contribute a new dataset that focuses exclusively on the factivity phenomenon; our task, however, remains the same as in other NLI tasks, that is, prediction of entailment, contradiction, or neutral (ECN). In this paper, we describe the LingFeatured NLI corpus and present the results of analyses designed to characterize the factivity/non-factivity opposition in natural language. The dataset contains entirely natural language utterances in Polish and gathers 2432 verb-complement pairs and 309 unique verbs. It is based on the National Corpus of Polish (NKJP) and is a representative subcorpus with regard to the syntactic construction [V][że][cc]. We also present an extended version of the set (3035 sentences) containing more sentences with internal negations. We prepared deep learning benchmarks for both sets. We found that transformer BERT-based models working on sentences obtained relatively good results ($\approx 89\%$ F1 score on the base dataset). Although better results were achieved using linguistic features ($\approx 91\%$ F1 score on the base dataset), this model requires more human labor (humans in the loop) because the features were prepared manually by expert linguists. BERT-based models consuming only the input sentences thus capture most of the complexity of NLI/factivity. Complex cases in the phenomenon—for example, cases with entailment (E) and non-factive verbs—still remain an open issue for further research.
Building scenarios for training recognition skills in complex domains requires the addition of hard-to-detect cues and unexpected events. This chapter describes the Periphery Principle, which emphasizes the importance of including critical cues in nonobvious ways so trainees learn how to seek them, and the Perturbation Principle, which encourages training designers to incorporate unexpected events into training scenarios so trainees learn to adapt to novel situations. The chapter presents methods for identifying peripheral cues and important perturbations for a particular domain or task, and gives examples of critical cue inventories and complexity tables that can be useful tools for training designers.
Augmented reality offers the opportunity to increase the fidelity of training. This chapter describes three principles related to fidelity that augmented reality can effectively support in ways that are difficult for other training modalities. The first, the Sensory Fidelity Principle, describes how realistic cues are needed for perceptual skill development. Training designers often need to make decisions about which cues require high levels of fidelity; domain familiarization activities can help guide these decisions. According to the Scaling Fidelity Principle, virtual props should be represented close to their real-world size. This allows trainees to practice important physical skills, such as body positioning. The Assessment-Action Pairing Principle describes how being able to seamlessly assess a situation and act yields better transfer of training to on-the-job performance than part-task training approaches that separate assessment from acting.
Mental models are the internal representations that guide interactions with the world. Mental models are experience based and they inform an individual’s understanding of what is going on, how things work, and how a situation is likely to evolve. This chapter provides two principles for supporting the development of robust mental models in trainees. The Mental Model Articulation Principle emphasizes building training experiences that encourage learners to verbalize aspects of their mental models to identify flaws and gaps. The Many Variations Principle highlights the value of providing learners with a range of experiences with the intent of expanding their mental models to support performance in diverse conditions.
Augmented reality technology enables the creation of training that more closely resembles real-world environments without the cost and complexity of organizing large-scale training exercises in high-stakes domains that require recognition skills (e.g., military operations, emergency medicine). Augmented reality can be used to project virtual objects such as patients, medical equipment, colleagues, and terrain features onto any surface, transforming any space into a simulation center. Augmented reality can also be integrated into an existing simulation center. For example, a virtual patient can be mapped onto a physical manikin so learners can practice assessment skills on the highly tailorable virtual patient, and practice interventions on the physical manikin using the tools they use in their everyday work. This chapter sets the stage by describing how the author drew from their own experiences, reviewed scientific literature, and consulted with skilled instructors to articulate eleven design principles for creating augmented reality training.
This chapter describes two principles for supporting trainees. The Scaffolding Principle describes how support should be adapted to accommodate a trainee’s current skill to keep them in a “high challenge/high support” learning mode. The Reflection Principle describes ways in which training should encourage active reflection in trainees so they can learn how to constantly reflect on their own performance and apply new insights to future situations. The chapter provides examples of intelligent tutoring systems that employ adaptive scaffolding techniques, along with other types of learning applications. The chapter also discusses strategies for encouraging learners to reflect on their training experiences.
This paper proposes a sequence-to-sequence model for data-to-text generation, called DM-NLG, to generate natural language text from structured nonlinguistic input. Specifically, by adding a dynamic memory module to the attention-based sequence-to-sequence model, it can store the information that led to generating previous output words and use it to generate the next word. In this way, the decoder part of the model is aware of all previous decisions, and as a result, the generation of duplicate words or incomplete semantic concepts is prevented. To improve the quality of the sentences generated by the DM-NLG decoder, a postprocessing step is performed using pretrained language models. To prove the effectiveness of the DM-NLG model, we performed experiments on five different datasets and observed that our proposed model is able to reduce the slot error rate by 50% and improve the BLEU score by 10%, compared to state-of-the-art models.
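The exact definition of slot error rate varies between papers; a common formulation counts slot values that are missing from, or spuriously repeated in, the generated text. The sketch below uses that formulation with made-up E2E-style slot/value pairs, so it illustrates the metric rather than reproducing the paper's evaluation setup.

```python
def slot_error_rate(slots, text):
    """Fraction of slots whose value is missing from, or duplicated in,
    the generated text. `slots` is a list of (slot_name, value) pairs,
    each of which should be realized exactly once."""
    missing = sum(1 for _, value in slots if text.count(value) == 0)
    duplicated = sum(1 for _, value in slots if text.count(value) > 1)
    return (missing + duplicated) / len(slots)

# Made-up restaurant-domain slots and two candidate realizations.
slots = [("name", "Blue Spice"), ("food", "Italian"), ("area", "riverside")]
good = "Blue Spice serves Italian food by the riverside."
bad = "Blue Spice is at the riverside, a riverside venue."
```

Exact substring matching is the simplest check; practical scorers usually normalize case and allow lexical variants of a slot value.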
We define and develop two-level type theory (2LTT), a version of Martin-Löf type theory which combines two different type theories. We refer to them as the ‘inner’ and the ‘outer’ type theory. In our case of interest, the inner theory is homotopy type theory (HoTT), which may include univalent universes and higher inductive types. The outer theory is a traditional form of type theory validating uniqueness of identity proofs (UIP). One point of view on the outer theory is that it is an internalised meta-theory of the inner one. There are two motivations for 2LTT. Firstly, there are certain results about HoTT which are of meta-theoretic nature, such as the statement that semisimplicial types up to level n can be constructed in HoTT for any externally fixed natural number n. Such results cannot be expressed in HoTT itself, but they can be formalised and proved in 2LTT, where n will be a variable in the outer theory. This point of view is inspired by observations about conservativity of presheaf models. Secondly, 2LTT is a framework which is suitable for formulating additional axioms that one might want to add to HoTT. This idea is heavily inspired by Voevodsky’s Homotopy Type System (HTS), which constitutes one specific instance of a 2LTT. HTS has an axiom ensuring that the type of natural numbers behaves like the external natural numbers, which allows the construction of a universe of semisimplicial types. In 2LTT, this axiom can be assumed by postulating that the inner and outer natural numbers types are isomorphic. After defining 2LTT, we set up a collection of tools with the goal of making 2LTT a convenient language for future developments. As a first such application, we develop the theory of Reedy fibrant diagrams in the style of Shulman. Continuing this line of thought, we suggest a definition of $(\infty,1)$-category and give some examples.
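As a sketch of the axiom just mentioned, write $\mathbb{N}^s$ for the outer (strict) natural numbers and $\mathbb{N}$ for the inner ones (this notation is an assumption of the sketch, not fixed by the summary above). The postulate is that the canonical map from outer to inner naturals is invertible:
\[
  \varepsilon : \mathbb{N}^s \longrightarrow \mathbb{N},
  \qquad \text{axiom: } \varepsilon \text{ is an isomorphism,}
\]
so that every inner natural number is the image of a unique externally fixed one, mirroring the role of the corresponding axiom in HTS.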
The paper addresses a significant challenge in on-orbit robotic servicing and assembly: overcoming the saturation of force/torque on robot joint and spacecraft actuators during the post-capture stage, while controlling a target spacecraft with large uncontrolled angular and linear momenta. The authors propose a novel solution based on two robust and efficient control algorithms: Optimal Control Allocation (OCA) and Non-linear Model Predictive Control (NMPC). Both algorithms aim to minimize joint torques, spacecraft actuator moments, contact forces, and moments of a compound redundant system that includes a common payload (the target spacecraft) grasped by dual n-degree-of-freedom space robotic manipulators mounted on a chaser spacecraft. The OCA algorithm minimizes a quadratic cost function using only the current states and the system dynamics, whereas the NMPC also considers future state estimates and control inputs over a specified prediction horizon. The NMPC is computationally more involved but provides superior results in reducing joint torques. The literature to date on applying MPC to robotics mainly focuses on linear models; dual-arm coordination, however, is highly non-linear, and no MPC application to dual-arm coordination has been reported. The proposed discretized technique offers exact realizations of the full non-linear model of the dual-arm coordinating system with elegance and simplicity, and it is computationally very efficient. The computer simulation results show that the proposed algorithms work efficiently, and minimum torques, contact forces, and moments are realized. The developed algorithm is also very efficient in tracking problems.
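The core of quadratic control allocation can be sketched as a minimum-norm least-squares problem: given a linear map B from actuator efforts u to required generalized forces tau, choose the u with smallest squared norm satisfying B u = tau. The matrix and demand below are made-up illustrative numbers; the paper's OCA additionally handles weights, saturation limits, and the coupled dual-arm dynamics, which this sketch omits.

```python
import numpy as np

# Hypothetical allocation matrix: four actuator efforts produce
# two generalized forces/moments (a redundant actuation system).
B = np.array([[1.0, 0.5, 0.0, 1.0],
              [0.0, 1.0, 1.0, 0.5]])
tau = np.array([2.0, 1.0])  # required generalized force/moment

# Minimum-norm solution of B u = tau, i.e. argmin ||u||^2 s.t. B u = tau,
# obtained via the Moore-Penrose pseudoinverse.
u = np.linalg.pinv(B) @ tau
```

Because the system is redundant (more actuators than task directions), infinitely many u satisfy the demand; the pseudoinverse picks the one minimizing the quadratic effort cost, which is the sense in which torques are "minimized" here.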
The craniovertebral junction (CVJ) is one of the more complex parts of the spinal column. It provides mobility to the cranium and houses the spinal cord. In a healthy subject, the CVJ contributes 25% of the flexion–extension motion and 50% of the axial rotation of the neck. This work reports instrumentation development and results for evaluating implant performance in the stabilized CVJ after surgical procedures. Typically, bony parts of the vertebrae that compress the spinal cord are removed, and the CVJ is subsequently stabilized by the instrumenting implant. The pose of the cadaveric CVJ region is estimated using a monocular vision-based setup. The cervical spine’s first three vertebrae surround the CVJ area, where most cervical spine mobility originates. We aim to evaluate the performance of vision-based intervertebral motion estimation of the cadaver’s CVJ in the Indian population, particularly in older people. A series of tests were performed on the cadaver’s CVJ to evaluate the motion estimation performance of the vision system.
Automatic paraphrase detection is the task of measuring the semantic overlap between two given texts. A major hurdle in the development and evaluation of paraphrase detection approaches, particularly for South Asian languages like Urdu, is the inadequacy of standard evaluation resources. The very few available paraphrase corpora for these languages are manually created. As a result, they are constrained to smaller sizes and are not well suited to evaluating mainstream data-driven and deep neural network (DNN)-based approaches. Consequently, there is a need to develop semi- or fully automated corpus generation approaches for resource-scarce languages. There is currently no semi- or fully automatically generated sentence-level Urdu paraphrase corpus. Moreover, no study is available that localizes and compares approaches for Urdu paraphrase detection across mainstream deep neural architectures and pretrained language models.
This research study addresses this problem by presenting a semi-automatic pipeline for generating paraphrase corpora for Urdu. It also presents a corpus that is generated using the proposed approach. This corpus contains 3147 semi-automatically extracted Urdu sentence pairs that are manually tagged as paraphrased (854) and non-paraphrased (2293). Finally, this paper proposes two novel approaches based on DNNs for the task of paraphrase detection in Urdu text: Word Embeddings n-gram Overlap (henceforth called WENGO), and a modified approach, Deep Text Reuse and Paraphrase Plagiarism Detection (henceforth called D-TRAPPD). Both of these approaches have been evaluated on two related tasks: (i) paraphrase detection, and (ii) text reuse and plagiarism detection. The results from these evaluations revealed that D-TRAPPD ($F_1 = 96.80$ for paraphrase detection and $F_1 = 88.90$ for text reuse and plagiarism detection) outperformed WENGO ($F_1 = 81.64$ for paraphrase detection and $F_1 = 61.19$ for text reuse and plagiarism detection) as well as other state-of-the-art approaches for these two tasks. The corpus, models, and our implementations have been made freely available for download by the research community.
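The n-gram overlap idea underlying WENGO can be sketched at the surface level as a Jaccard similarity over word n-grams. Note the simplification: WENGO computes overlap in embedding space, whereas this toy version matches literal tokens, and the English sentences stand in for Urdu text purely for readability.

```python
def ngrams(tokens, n):
    """All contiguous word n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def ngram_overlap(a, b, n=2):
    """Jaccard overlap of word n-grams between two token lists: 1.0 for
    identical n-gram sets, 0.0 for disjoint ones."""
    set_a, set_b = set(ngrams(a, n)), set(ngrams(b, n))
    union = set_a | set_b
    return len(set_a & set_b) / len(union) if union else 0.0

# Toy near-paraphrase pair (English placeholders for Urdu sentences).
s1 = "the cat sat on the mat".split()
s2 = "the cat sat on a mat".split()
similarity = ngram_overlap(s1, s2)
```

A paraphrase detector built this way would threshold the similarity score; replacing exact token matches with embedding-based soft matches is what makes the approach robust to lexical variation.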
With the development of intelligent manufacturing, more and more nonstandard parts are used in high-precision assembly. The robotic assembly method based on attractive region in environment (ARIE) has been proven to perform well in high-precision assembly under limited robot system accuracy or sensing accuracy. However, for the assembly of nonstandard parts, especially nonconvex parts, the existing ARIE-based strategy lacks a targeted design. In the assembly process, the nonconvex structure may cause blocking problems, which lead to assembly failure when using that strategy. To solve this problem, this paper proposes a new assembly method that uses the geometric features of the constraint region, based on the concept of ARIE. First, the reasons for the possible blocking problem when assembling a class of nonconvex axisymmetric parts with the classic ARIE-based strategy are analyzed in detail. Second, a multi-step sliding strategy based on the theory of ARIE is proposed to solve the possible blocking problem in the assembly process. Third, impedance control is used to enable the peg to achieve the desired compliant motion in the proposed strategy. The improvement in success rate achieved by the proposed method is verified by a comparison experiment on small-clearance peg-in-hole assembly, where the structure of the peg is nonconvex and axisymmetric.
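The compliant motion mentioned above is typically obtained by making the commanded force behave like a virtual spring-damper pulling the peg toward a reference pose. The one-axis sketch below illustrates that law; the gains and geometry are made-up values, and a real peg-in-hole controller is multi-axis and usually includes inertia shaping as well.

```python
def impedance_force(x, x_ref, v, v_ref, k=500.0, d=40.0):
    """One-axis impedance law: stiffness k [N/m] and damping d [N*s/m]
    produce a corrective force toward the reference position/velocity."""
    return k * (x_ref - x) + d * (v_ref - v)

# Hypothetical instant: peg 2 mm short of the reference insertion depth,
# approaching at 10 mm/s while the reference is stationary.
f = impedance_force(x=0.048, x_ref=0.050, v=0.010, v_ref=0.0)
```

Because the damping term opposes the approach velocity, contact transients are absorbed rather than fought, which is what lets the peg slide along the constraint region instead of jamming.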
Robotic rovers equipped with articulated rocker-bogie suspension have aroused great interest since the Mars explorations; this interest has also extended to terrestrial applications such as the agriculture, military, and rescue fields. The suspension can be designed so that, when the rover is on flat and horizontal ground, the weight is evenly distributed among the wheels; in this way, all wheels have the same traction capability and offer the same rolling resistance. As operating conditions vary due to sloping ground, uneven ground surfaces, or different payload positions, the weight distribution can undergo considerable variations. This type of suspension is statically determinate with respect to weight but indeterminate with respect to traction forces; the traction control system aims to avoid wheel slippage. In this paper, the traction contribution that each wheel can provide to overcome a step obstacle is shown. Furthermore, the possibility of regulating the distribution of vertical loads among the wheels by adopting a torsion spring with adjustable preload, arranged between rocker and bogie, is evaluated. A suitable spring preload facilitates the initial phase of obstacle overcoming if the rover advances with the bogie forward. Numerical simulations show that, to increase the possibility of overcoming an obstacle, it is sufficient for the spring preload to reduce the vertical load on the front wheel; in any case, a higher load variation would not be advisable, as it could involve an excessive load difference among the wheels.
Wearable robots, sometimes known as exoskeletons, are remarkable devices for improving human strength, reducing fatigue, and restoring impaired mobility. The control of powered exoskeletons, on the other hand, is still a challenge, which necessitates the development of a technique to simulate exoskeleton–wearer interaction. This study uses a two-dimensional human skeletal model with a powered knee exoskeleton to predict the optimal lifting motion and assistive torque. For lifting motion prediction, an inverse dynamics optimization formulation is utilized. In addition, the electromechanical dynamics of the exoskeleton DC motor are modeled in the lifting optimization formulation. The design variables are human joint angle profiles and exoskeleton motor current profiles. The squared human joint torque is minimized subject to physical and lifting task constraints. The lifting optimization problem is then solved by the gradient-based sparse nonlinear optimizer (SNOPT). Furthermore, the optimal exoskeleton torque is implemented through a two-phase control strategy to provide optimal assistance in lifting. Experimental validations of the optimal control with 6 Nm and 16 Nm maximum assistive torque are presented. Both the 6 Nm and 16 Nm maximum optimal assistance of the exoskeleton reduce the mean values of vastus lateralis, biceps femoris, and latissimus dorsi muscle activations compared to lifting without the exoskeleton. However, the mean value of the vastus medialis activation increases by a small amount for the exoskeleton case, although its peak value is reduced. Finally, the experimental results demonstrate that the proposed lifting optimization formulation and control strategy are promising for powered knee exoskeletons in lifting tasks.
This paper focuses on the design, analysis, and multi-objective optimization of a novel 5-degrees-of-freedom (DOF) double-driven parallel mechanism. A novel 5-DOF parallel mechanism with two double-driven branch chains is proposed, which can serve as a machine tool. By installing two actuators on one branch chain, the proposed parallel mechanism can achieve 5-DOF motion of the moving platform with only three branch chains. Afterwards, an analytical solution for the inverse kinematics is derived. The 5$\times$5 homogeneous Jacobian matrix is obtained by transforming actuator velocities into linear velocities at three points on the moving platform. Meanwhile, the workspace, dexterity, and volume are analyzed based on the kinematic model. Ultimately, a stage-by-stage Pareto optimization method is proposed to solve the multi-objective optimization problem of this parallel mechanism. The optimization results show that the workspace, compactness, and dexterity of this mechanism can be improved efficiently.
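One reason a homogeneous (units-consistent) Jacobian matters is that it makes singular-value-based dexterity measures meaningful. A standard local dexterity index is the inverse condition number of the Jacobian, which lies in [0, 1]; the 5x5 matrix below is a made-up example, not the paper's kinematic model.

```python
import numpy as np

# Hypothetical 5x5 homogeneous Jacobian at one pose of a 5-DOF mechanism.
J = np.array([[1.0, 0.2, 0.0, 0.1, 0.0],
              [0.1, 1.1, 0.3, 0.0, 0.2],
              [0.0, 0.2, 0.9, 0.1, 0.0],
              [0.2, 0.0, 0.1, 1.2, 0.3],
              [0.0, 0.1, 0.0, 0.2, 1.0]])

# Local dexterity: ratio of smallest to largest singular value.
# 1 means isotropic velocity transmission; 0 means a singular pose.
sigma = np.linalg.svd(J, compute_uv=False)
dexterity = sigma.min() / sigma.max()
```

Optimizing this index over the workspace (e.g., its mean or minimum) is a common way to turn "dexterity" into one of the objectives in a Pareto optimization like the one described above.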
How can we provide guarantees of behaviour for autonomous systems such as driverless cars? This tutorial text, for professionals, researchers, and graduate students, explains how autonomous systems, from intelligent robots to driverless cars, can be programmed in ways that make them amenable to formal verification. The authors review specific definitions, applications, and the unique future potential of autonomous systems, along with their impact on safer decisions and ethical behaviour. Topics discussed include the use of rational cognitive agent programming from the Beliefs-Desires-Intentions paradigm to control autonomous systems, and the role of model-checking in verifying the properties of this decision-making component. Several case studies concerning both the verification of autonomous systems and extensions to the framework beyond the model-checking of agent decision-makers are included, along with complete tutorials for the use of the freely available verifiable cognitive agent toolkit Gwendolen, written in Java.