We present a methodology for the extraction of narrative information from a large corpus. The key idea is to transform the corpus into a network, formed by linking the key actors and objects of the narration, and then to analyse this network to extract information about their relations. By representing the information in a single network it is possible to infer relations between these entities, even when they have never been mentioned together. We discuss various types of information that can be extracted by our method, various ways to validate the extracted information, and two different application scenarios. Our methodology is highly scalable and addresses specific research needs in the social sciences.
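As a rough illustration of the idea (not the authors' pipeline), a minimal sketch in Python: build a co-occurrence network from entities already extracted per document, then read off indirect relations between entities that never appear together. The entity names and documents below are hypothetical placeholders.

# Minimal sketch: co-occurrence network of narrative actors/objects.
# Assumes entity extraction has already been done for each document.
from itertools import combinations
import networkx as nx

docs_entities = [            # hypothetical extracted entities per document
    ["Alice", "Bank"],
    ["Bank", "Bob"],
    ["Bob", "Court"],
]

G = nx.Graph()
for entities in docs_entities:
    for u, v in combinations(sorted(set(entities)), 2):
        w = G[u][v]["weight"] + 1 if G.has_edge(u, v) else 1
        G.add_edge(u, v, weight=w)

# Relations can be inferred even for entities never mentioned together,
# e.g. via paths through shared neighbours in the network.
print(nx.shortest_path(G, "Alice", "Court"))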
We discuss our outreach efforts to introduce school students to network science and explain why researchers who study networks should be involved in such outreach activities. We provide overviews of modules that we have designed for these efforts, comment on our successes and failures, and illustrate the potentially enormous impact of such outreach efforts.
Comparisons rank objects by their superiority or inferiority, and they can have major effects on a variety of evaluation processes. The Web facilitates qualitative and quantitative comparisons via online debates, discussion forums, product comparison sites, etc., and comparison analysis is becoming increasingly useful in many application areas. This study develops a method for classifying sentences in Korean text documents into several comparative types to facilitate their analysis. We divide our study into two tasks: (1) extracting comparative sentences from text documents and (2) classifying comparative sentences into seven types. In the first task, we examine a large number of actual comparative sentences, drawing on previous studies, and construct a lexicon of comparisons. Sentences that contain elements from the lexicon are regarded as comparative-sentence candidates. Next, we use machine learning techniques to eliminate non-comparative sentences from the candidates. In the second task, we roughly classify the comparative sentences using keywords and then apply a transformation-based learning method to correct initial classification errors. Experimental results show that our method could be suitable for practical use: we obtained an F1-score of 90.23% on the first task, an accuracy of 81.67% on the second task, and an overall accuracy of 88.59% for the integrated system combining both tasks.
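A minimal sketch of the candidate-extraction step of the first task. The lexicon entries and sentences are illustrative English placeholders (the study itself works on Korean text), and the authors' actual lexicon and machine-learning filter are not shown.

# Sketch of lexicon-based candidate extraction (task 1, first step).
comparison_lexicon = {"better", "worse", "than", "more", "less", "compared to"}

def is_candidate(sentence: str) -> bool:
    """A sentence containing any lexicon element becomes a comparative candidate."""
    s = sentence.lower()
    return any(term in s for term in comparison_lexicon)

sentences = [
    "Phone A has a better camera than phone B.",
    "The meeting starts at noon.",
]
candidates = [s for s in sentences if is_candidate(s)]
# A machine-learning filter would then remove non-comparative sentences from
# `candidates` before the seven-type classification of task 2.
print(candidates)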
Word sense disambiguation (WSD) is an open problem in natural language processing: a word may have several meanings, and WSD is the task of selecting the correct sense of a polysemous word based on its context. Proposed solutions are based on supervised and unsupervised learning methods. Most researchers in the area have focused on choosing the proper size of ‘n’ for the n-grams used in WSD. In this research, we take the concept further by using a variable ‘n’ and a variable-sized window, based on iterative patterns extracted from the text. We show that this type of sequential pattern is more effective than many other solutions for WSD. Using standard data mining algorithms on the extracted features, we significantly outperform most monolingual WSD solutions. The state-of-the-art results were obtained using external knowledge such as various translations of the same sentence; our method improves the accuracy of the multilingual system by more than 4 percent, even though we use only monolingual features.
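To illustrate the variable-size window idea only (this is not the authors' iterative-pattern mining algorithm), the following sketch enumerates all contiguous context windows of varying width around an ambiguous target word, which could then serve as candidate features.

# Illustrative sketch: variable-width context windows around a target word.
def context_windows(tokens, target_index, max_width=4):
    """Return all contiguous windows containing the target, up to max_width tokens."""
    windows = []
    n = len(tokens)
    for width in range(2, max_width + 1):
        for start in range(max(0, target_index - width + 1),
                           min(target_index, n - width) + 1):
            windows.append(tuple(tokens[start:start + width]))
    return windows

tokens = "he sat on the river bank watching the water".split()
print(context_windows(tokens, tokens.index("bank")))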
Most statistical machine translation systems rely on word alignments to extract translation rules. This approach suffers from a practical problem: even one spurious word-alignment link can prevent desirable translation rules from being extracted. To address this issue, this paper presents two approaches, referred to as the sub-tree alignment and phrase-based forced decoding methods, to automatically learn translation span alignments from parallel data. We then improve translation rule extraction by deleting spurious links and inserting new links based on bilingual translation span correspondences. Comparison experiments demonstrate the effectiveness of the proposed approaches.
Navigating along a set of programmed points in a completely unknown environment is a challenging task that depends mostly on the way the robot perceives and symbolizes the environment and on the decisions it takes to avoid obstacles while it moves towards subsequent goals. Tenacity and Traversability (T2)-based strategies [1] have proved highly effective for reactive navigation, extending the benefits of the artificial potential field method to complex situations such as trapping zones or mazes. This paper presents a new approach to reactive mobile robot behavior control which governs the actions to be performed to avoid unexpected obstacles while the robot executes a mission between several defined sites. This new strategy combines the T2 principles for escaping trapping zones with additional criteria based on the Nearness Diagram (ND) strategy [13] for moving in cluttered or densely occupied scenarios. Success in a complete set of experiments, using a mobile robot equipped with a single camera, shows the wide range of environmental conditions in which the strategy can be applied.
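For background on the classic artificial potential field method that T2 extends (this sketch is not the T2 or ND strategy itself, and all gains and distances are illustrative), one descent step combines an attractive force toward the goal with repulsive forces from nearby obstacles.

# Background sketch: one step of a classic attractive/repulsive potential field.
import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5, d0=1.0, step=0.05):
    force = k_att * (goal - pos)                      # attractive component
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 1e-9 < d < d0:                             # repulsion only within range d0
            force += k_rep * (1.0 / d - 1.0 / d0) / d**2 * (diff / d)
    # Trapping zones arise where these terms cancel; T2 adds criteria to escape them.
    return pos + step * force / max(np.linalg.norm(force), 1e-9)

pos, goal = np.array([0.0, 0.0]), np.array([5.0, 5.0])
obstacles = [np.array([2.5, 2.5])]
for _ in range(3):
    pos = apf_step(pos, goal, obstacles)
print(pos)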
In recent decades, most work in the literature has been devoted to studying argumentation formalisms that focus on a defeat relation among arguments. Recently, the study of a support relation between arguments has regained attention among researchers; the bulk of the research has been centered on the study of support within the context of abstract argumentation, considering support as an explicit interaction between arguments. However, there exist other approaches that take support into account in a different setting. This article surveys several interpretations of the notion of support proposed in the literature, such as deductive support, necessary support, evidential support, subargument, and backing, among others. The aim is to provide a comprehensive study in which similarities and differences among these interpretations are highlighted, and to discuss how they are addressed by different argumentation formalisms.
Automata theory lies at the foundation of computer science, and is vital to a theoretical understanding of how computers work and what constitutes formal methods. This treatise gives a rigorous account of the topic and illuminates its real meaning by looking at the subject in a variety of ways. The first part of the book is organised around notions of rationality and recognisability. The second part deals with relations between words realised by finite automata, which not only exemplifies automata theory but also illustrates the variety of its methods and its fields of application. Many exercises are included, ranging from those that test the reader, to those that state technical results, to those that extend ideas presented in the text. Solutions or answers to many of these are included in the book.
This Handbook offers an unparalleled view of wireless personal area networking technologies and their associated protocols. It lifts the lid on their growing adoption within the consumer electronics, home automation, sports, health and well-being markets. Bluetooth low energy, ZigBee, EnOcean and ANT+ are comprehensively covered, along with other WPAN technologies including NFC, Wi-Fi, Bluetooth classic and high speed, and WHDI. It also features 802.11ac, the Internet of Things, Wireless USB, WiGig and WirelessHD. The Handbook shows how white space radio, cellular and Femtocells have inadvertently blurred the boundaries between personal and wide area communications, creating disruptive topologies through technology convergence. It explores how pervasive WAN technologies have spawned a new generation of consumers through the Lawnmower Man Effect and explains how our personal space has become integral to social media streams, including Twitter, Facebook and Pinterest. An essential read for students, software engineers and developers, product planners, technical marketers and analysts.
This book explores the history of hypertext, an influential concept that forms the underlying structure of the World Wide Web and innumerable software applications. Barnet tells both the human and the technological story by weaving together contemporary literature and her exclusive interviews with those at the forefront of hypertext innovation, tracing its evolutionary roots back to the analogue machine imagined by Vannevar Bush in 1945.
Together, Sets and Proofs and its sister volume Models and Computability will provide readers with a comprehensive guide to mathematical logic. All the authors are leaders in their fields and are drawn from the invited speakers at 'Logic Colloquium '97' (the major international meeting of the Association for Symbolic Logic). It is expected that the breadth and timeliness of these two volumes will prove an invaluable and unique resource for specialists, post-graduate researchers, and the informed and interested nonspecialist.
This paper proposes a new and effective kinematics method based on the dynamic visual window (DVW) for a surgical robot equipped with two instrument arms and one laparoscope arm, enabling doctors to operate according to their visual habits in the laparoscopic visual environment. It solves the problem of maintaining consistency between the doctor's operations, guided by the visual window's feedback, and the master–slave operations of the surgical robot. The kinematics models of the surgical robotic arms are established, and new kinematics methods based on the DVW of the laparoscope and instrument arms are derived from their inverse kinematics with respect to the visual coordinate system. Finally, the proposed kinematics method is verified by simulation experiments based on the theoretical algorithm and the mechanism model; multiple sets of simulation data are presented to illustrate the correctness and feasibility of the new method.
The tensor product (G1,G2) of a graph G1 and a pointed graph G2 (containing one distinguished edge) is obtained by identifying each edge of G1 with the distinguished edge of a separate copy of G2, and then removing the identified edges. A formula to compute the Tutte polynomial of a tensor product of graphs was originally given by Brylawski. This formula was recently generalized to coloured graphs and the generalized Tutte polynomial introduced by Bollobás and Riordan. In this paper we generalize the coloured tensor product formula to relative Tutte polynomials of relative graphs, containing zero edges to which the usual deletion/contraction rules do not apply. As we have shown in a recent paper, relative Tutte polynomials may be used to compute the Jones polynomial of a virtual knot.
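For orientation, the usual deletion/contraction rules mentioned above (which fail for the zero edges of a relative graph) read, in the ordinary non-relative case:
\[
T(G; x, y) =
\begin{cases}
x\, T(G/e; x, y) & \text{if $e$ is a bridge,}\\
y\, T(G \setminus e; x, y) & \text{if $e$ is a loop,}\\
T(G \setminus e; x, y) + T(G/e; x, y) & \text{otherwise,}
\end{cases}
\]
with $T(G; x, y) = 1$ when $G$ has no edges. Roughly speaking, Brylawski's tensor product formula evaluates $T(G_1)$ at new arguments determined by the pointed graph $G_2$.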
Agent programming languages have often avoided the use of automated (first principles or hierarchical) planners in favour of predefined plan/recipe libraries for computational efficiency reasons. This allows for very efficient agent reasoning cycles, but limits the autonomy and flexibility of the resulting agents, oftentimes with deleterious effects on the agent's performance. Planning agents can, for instance, synthesise a new plan to achieve a goal for which no predefined recipe worked, or plan to make viable the precondition of a recipe belonging to a goal being pursued. Recent work on integrating automated planning with belief-desire-intention (BDI)-style agent architectures has yielded a number of systems and programming languages that exploit the efficiency of standard BDI reasoning, as well as the flexibility of generating new recipes at runtime. In this paper, we survey these efforts and point out directions for future work.
To obtain more human-like walking with lower energy consumption, a foot rotation phase is included in the single support phase of a 3D biped robot, in which the stance heel lifts from the ground and the stance foot rotates about the toe. Since there is no actuation at the toe, a walking phase of the robot is composed of a fully actuated phase and an under-actuated phase. The objective of this paper is to present an asymptotically stable walking controller that integrates these two phases. To get around the under-actuation issue, a strictly monotonic parameter of the robot, rather than time, is used to parameterize the reference trajectory. The overall control law consists of a zero moment point (ZMP) controller, a swing ankle rotation controller and a partial joint angles controller. The ZMP controller guarantees that the ZMP follows the desired ZMP. The swing ankle rotation controller ensures a flat-foot impact at the end of the swing phase. Each of these controllers imposes two constraints on the joint accelerations. In order to determine all the desired joint accelerations from the control law, a partial joint angles controller is implemented; the word "partial" emphasizes that not all the joint angles can be controlled. The outputs controlled by the partial joint angles controller are defined as a linear combination of all the joint angles. The most important question addressed in this paper is how this linear combination should be defined in order to ensure walking stability. The stability of the walking gait under closed-loop control is evaluated with the linearization of the restricted Poincaré map of the hybrid zero dynamics. Finally, simulation results validate the effectiveness of the control law even in the presence of initial errors and modelling errors.
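As background for the ZMP controller (stated here under the common single-mass, constant-height approximation, not the paper's full multibody model), the zero moment point of a robot whose centre of mass moves at constant height $z_c$ is
\[
p_x = x_c - \frac{z_c}{g}\,\ddot{x}_c, \qquad p_y = y_c - \frac{z_c}{g}\,\ddot{y}_c,
\]
where $(x_c, y_c)$ is the horizontal position of the centre of mass and $g$ is the gravitational acceleration; a ZMP controller keeps $(p_x, p_y)$ tracking a desired reference inside the support polygon of the stance foot.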
A robot mapping procedure using a modified speeded-up robust feature (SURF) is proposed for building persistent maps with visual landmarks in robot simultaneous localization and mapping (SLAM). SURFs are scale-invariant features that automatically recover the scale and orientation of image features in different scenes. However, the SURF method was not originally designed for applications in dynamic environments, and the repeatability of the detected SURFs is reduced by dynamic effects. This study investigates and modifies the SURF algorithms to improve their robustness in representing visual landmarks in robot SLAM systems. Several modifications are proposed, including the orientation representation of features, the vector dimension of the feature description, and the number of features detected in an image. The concept of sparse representation is also used to describe the environmental map and to reduce the computational complexity of the extended Kalman filter (EKF) used for state estimation. Effective procedures for data association and map management for SURFs in SLAM are also designed to improve the accuracy of robot state estimation. Experiments were performed on an actual system with binocular vision sensors to validate the feasibility and effectiveness of the proposed algorithms; the examples include the evaluation of state estimation using EKF SLAM and the implementation of indoor SLAM. In the experiments, the performance of the modified SURF algorithms was compared with that of the original SURF algorithms. The results confirm that the modified SURF provides better repeatability and robustness for representing landmarks in visual SLAM systems.
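To illustrate the data-association step only (this is a generic nearest-neighbour distance-ratio test, not the authors' modified SURF or their map-management procedure), newly detected descriptors can be matched against stored landmark descriptors as follows; the descriptor dimensions and arrays are placeholders.

# Generic descriptor-based data association for visual landmarks.
import numpy as np

def associate(map_desc: np.ndarray, new_desc: np.ndarray, ratio: float = 0.7):
    """Return (new_index, map_index) pairs whose best match is clearly better
    than the second best; unmatched features would become new landmarks."""
    pairs = []
    for i, d in enumerate(new_desc):
        dists = np.linalg.norm(map_desc - d, axis=1)
        best, second = np.argsort(dists)[:2]
        if dists[best] < ratio * dists[second]:
            pairs.append((i, int(best)))
    return pairs

map_desc = np.random.rand(50, 64)   # 64-D descriptors, as in standard SURF
new_desc = np.random.rand(20, 64)
print(associate(map_desc, new_desc))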
In this paper we study the weak approximation of $E[\phi(x(T))]$ by $E[\phi(y(T))]$, where $x(T)$ is the solution of a stochastic differential delay equation and $y(T)$ is defined by the Euler scheme. For $\phi \in C_b^3$, Buckwar, Kuske, Mohammed and Shardlow (‘Weak convergence of the Euler scheme for stochastic differential delay equations’, LMS J. Comput. Math. 11 (2008) 60–69) have shown that the Euler scheme has weak order of convergence $1$. Here we prove that the same result holds when $\phi$ is only assumed to be measurable and bounded, under an additional non-degeneracy condition.
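For context, a standard formulation of the delay equation and its Euler scheme (given here only as background, not as a restatement of the paper's precise assumptions) is
\[
\mathrm{d}x(t) = f\bigl(x(t), x(t-\tau)\bigr)\,\mathrm{d}t + g\bigl(x(t), x(t-\tau)\bigr)\,\mathrm{d}W(t), \qquad t \in [0, T],
\]
with a prescribed initial segment on $[-\tau, 0]$. On the grid $t_n = nh$ with $h = \tau/m$, the Euler scheme reads
\[
y_{n+1} = y_n + f(y_n, y_{n-m})\,h + g(y_n, y_{n-m})\,\Delta W_n, \qquad \Delta W_n = W(t_{n+1}) - W(t_n),
\]
where $y_{n-m}$ is read from the initial segment when $n < m$. Weak order $1$ then means $\lvert E[\phi(x(T))] - E[\phi(y(T))]\rvert \le C h$.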
In the original article [LMS J. Comput. Math. 15 (2012) 71–83], the authors use a discrete form of the Itô formula, developed by Appleby, Berkolaiko and Rodkina [Stochastics 81 (2009) no. 2, 99–127], to show that the almost sure asymptotic stability of a particular two-dimensional test system is preserved when the discretisation step size is small. In this Corrigendum, we identify an implicit assumption in the original proof of the discrete Itô formula that, left unaddressed, would preclude its application to the test system of interest. We resolve this problem by reproving the relevant part of the discrete Itô formula in such a way that confirms its applicability to our test equation. Thus, we reaffirm the main results and conclusions of the original article.
Denote by $\mathbb{Q}(\sqrt{-m})$, with $m$ a square-free positive integer, an imaginary quadratic number field, and by $\mathcal{O}_{-m}$ its ring of integers. The Bianchi groups are the groups $\mathrm{SL}_2(\mathcal{O}_{-m})$. In the literature, so far there have been no examples of $p$-torsion in the integral homology of the full Bianchi groups, for $p$ a prime greater than the order of elements of finite order in the Bianchi group, which is at most 6. However, extending the scope of the computations, we can observe examples of torsion in the integral homology of the quotient space, at prime numbers as high as, for instance, $p = 80\,737$ at the discriminant $-1747$.