We generalize Reimer's Inequality [6] (a.k.a. the BKR Inequality or the van den Berg–Kesten Conjecture [1]) to the setting of finite distributive lattices.
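For reference, the classical statement on the discrete cube, which the paper extends to finite distributive lattices (our paraphrase of the standard form, not the paper's lattice-theoretic statement): for events $A, B \subseteq \{0,1\}^n$ under a product measure,

$$\Pr(A \,\square\, B) \;\le\; \Pr(A)\,\Pr(B),$$

where $A \square B$, the disjoint occurrence of $A$ and $B$, is the set of outcomes $\omega$ admitting disjoint index sets $S_A, S_B$ such that every outcome agreeing with $\omega$ on $S_A$ lies in $A$ and every outcome agreeing with $\omega$ on $S_B$ lies in $B$.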
A novel path planner is presented for the local path planning of a single robot (denoted R) in a complicated dynamic environment. A series of attractive points is computed from attractive segments to guide R along a shorter path. Each attractive segment is obtained using full environmental knowledge and is, in general, reused over several sampling times. A motion controller, designed around artificial moments and a robot model with a principal motion direction line (PMD line), drives R toward the attractive points while keeping it away from obstacles. Attractive and repulsive moments are designed that, in general, only turn R's PMD line toward attractive points and away from obstacles, since in most cases R moves along its PMD line at full speed. Because of the guidance of the attractive points and R's full-speed motion, global convergence is guaranteed. Simulations indicate that the proposed path planner meets real-time requirements while optimizing R's traveling path.
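A minimal sketch of how an artificial-moment steering rule of this kind might look; the function names, gains, and moment formulas below are illustrative assumptions, not the authors' controller.

```python
import math

# Illustrative sketch of an artificial-moment steering rule (assumed form,
# not the published controller): attractive moments turn the robot's
# principal motion direction (PMD) line toward the current attractive
# point; repulsive moments turn it away from nearby obstacles.

def angle_to(px, py, qx, qy):
    """Bearing from point p to point q."""
    return math.atan2(qy - py, qx - px)

def wrap(a):
    """Wrap an angle into (-pi, pi]."""
    return math.atan2(math.sin(a), math.cos(a))

def steering_moment(robot, attractive_pt, obstacles,
                    k_att=1.0, k_rep=0.5, influence=2.0):
    x, y, heading = robot
    # Attractive moment: proportional to the angle between the PMD line
    # and the direction of the attractive point.
    m = k_att * wrap(angle_to(x, y, *attractive_pt) - heading)
    # Repulsive moments: turn the PMD line away from close obstacles,
    # weighted by proximity and by how directly ahead the obstacle lies.
    for ox, oy in obstacles:
        d = math.hypot(ox - x, oy - y)
        if d < influence:
            err = wrap(angle_to(x, y, ox, oy) - heading)
            m -= k_rep * (influence - d) * math.copysign(1.0, err) \
                 * (1.0 - abs(err) / math.pi)
    return m  # turn-rate command; R advances at full speed along its PMD line
```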
The design of control laws for flexible manipulators is known to be a challenging problem when a conventional actuator, i.e., a motor with a gear, is used. This is due to the nonlinear friction of the actuator, which causes a torque dead zone and stick-slip behavior, thereby hampering the performance of the control system. The torque needed to attenuate the vibrations, although calculated by the control law, is consumed by the friction inside the actuator, rendering it ineffective for controlling the flexible structure. Nonlinear friction varies with the operating conditions of the actuator, so a compensation mechanism based on fixed friction models cannot always maintain good performance. This study proposes a new control strategy that uses a wavelet network for friction compensation. Experimental results obtained with a flexible manipulator attest to the good performance of the proposed control law.
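For background, a wavelet network approximates an unknown function (here, the friction torque to be compensated) as a weighted sum of dilated and translated copies of a mother wavelet; the form below is the standard one and is only a sketch of the kind of model such a scheme employs, not the paper's specific network:

$$\hat{\tau}_f(x) \;=\; \sum_{i=1}^{N} w_i\, \psi\!\left(\frac{x - t_i}{s_i}\right),$$

where $\psi$ is the mother wavelet, $t_i$ and $s_i$ are translation and dilation parameters, and the weights $w_i$ (and possibly $t_i$, $s_i$) are adapted online so that $\hat{\tau}_f$ tracks the friction torque.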
This paper presents a motion planning method for a simple wheeled robot in two cases: (i) where translational and rotational speeds are arbitrary, and (ii) where the robot is constrained to move forwards at unit speed. The motions are generated by formulating a constrained optimal control problem on the Special Euclidean group SE(2). An application of Pontryagin's maximum principle for arbitrary speeds yields an optimal Hamiltonian which is completely integrable in terms of Jacobi elliptic functions. In the unit speed case, the rotational velocity is described in terms of elliptic integrals, and the expression for the position is reduced to quadratures. Reachable sets are defined in the arbitrary speed case, and a numerical plot of the time-limited reachable sets is presented for the unit speed case. The resulting analytical functions for the position and orientation of the robot can be parametrically optimised to match prescribed target states within the reachable sets. The method is shown to be easily adapted to obstacle avoidance for static obstacles in a known environment.
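For concreteness, the underlying kinematics on SE(2) are those of the standard unicycle model (a textbook form; the paper's cost functional and boundary conditions are not reproduced here):

$$\dot{x} = v\cos\theta, \qquad \dot{y} = v\sin\theta, \qquad \dot{\theta} = \omega,$$

with translational speed $v$ and turning rate $\omega$ as controls; case (i) optimizes over both $v$ and $\omega$, while case (ii) fixes $v = 1$ and optimizes over $\omega$ alone.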
Series elastic actuators have beneficial properties for some robot applications. Several recent implementations place the compliant element in alternative locations to improve instrumentation design. In this paper we use a class 1 versus class 2 lever model and energy-port methods to demonstrate that these alternative placements should still be classified as series elastic actuators. We also note that the compliance of proximal series elastic actuators is reflected through an augmented gear ratio that depends on the nominal gear ratio; the augmentation is significant for small gear ratios and approaches unity for large ones. This reflected compliance is shown to differ depending on the sign of the gear ratio. We demonstrate that although the reflected compliance is only marginally influenced by the magnitude of the gear ratio, there are several notable differences, particularly at small gear ratios.
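As background (a textbook relation, not the paper's augmented-gear-ratio result): an ideal transmission with ratio $N$ scales torque by $N$ and deflection by $1/N$, so a spring of stiffness $k$ on the proximal side appears at the output as

$$k_{\mathrm{out}} = N^{2} k,$$

which is why the placement of the compliant element relative to the gearing matters for the effective series compliance.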
In this paper the design and operation of a 2-degree-of-freedom leg–wheel hybrid mobile robot are presented. A prototype of a low-cost and easy-to-use system, capable of straight walking and steering with only two actuators, has been designed and built. Simulation and experimental tests have been carried out to verify the engineering feasibility and operation of the proposed solution. The designed robot can be used for applications such as surveillance and inspection of disaster sites.
In this paper the tip-over stability of mobile robots during manipulation with redundant arms is investigated in real time. A new fast-converging algorithm, called Circles Of INitialization (COIN), is proposed to calculate globally optimal postures of redundant serial manipulators. The algorithm is capable of trajectory following, redundancy resolution, and tip-over prevention for mobile robots during eccentric manipulation tasks. The proposed algorithm employs a priori training data generated from an exhaustive resolution of the arm's redundancy along a single direction in the manipulator's workspace. This data is shown to provide an educated initial guess that enables COIN to converge swiftly to the global optimum for any other task in the workspace. Simulations demonstrate the capabilities of COIN and further highlight its convergence speed relative to existing global search algorithms.
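The general pattern is to seed a local optimizer with the precomputed posture whose task point is closest to the new task. The sketch below illustrates that pattern only; COIN's actual posture cost, training procedure, and search structure are not reproduced here, and the cost function is an assumed placeholder.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative sketch (assumed structure, not the published COIN code):
# precomputed pairs (task point, optimal joint posture) act as a lookup
# table of educated initial guesses for a local optimizer.

def solve_posture(task, training_tasks, training_postures, cost):
    """Return a locally refined optimal posture for `task`.

    training_tasks    : (M, d) array of task points solved offline
    training_postures : (M, n) array of corresponding optimal postures
    cost              : callable cost(q, task) -> float (e.g. a tip-over
                        stability margin plus tracking error; assumed)
    """
    # Seed with the stored posture of the nearest precomputed task.
    i = np.argmin(np.linalg.norm(training_tasks - task, axis=1))
    q0 = training_postures[i]
    # Refine locally; a good seed keeps this fast and near the global optimum.
    res = minimize(cost, q0, args=(task,), method="BFGS")
    return res.x
```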
We generalise the standard construction of realizability models (specifically, of categories of assemblies) to a wide class of computability structures, which is broad enough to embrace models of computation such as labelled transition systems and process algebras. We consider a general notion of simulation between such computability structures, and show how these simulations correspond precisely to certain functors between the realizability models. Furthermore, we show that our class of computability structures has good closure properties – in particular, it is ‘cartesian closed’ in a slightly relaxed sense. Finally, we investigate some important subclasses of computability structures and of simulations between them. We suggest that our 2-category of computability structures and simulations may offer a useful framework for investigating questions of computational power, abstraction and simulability for a wide range of models.
The algebraic theory of automata was created by Schützenberger and Chomsky over 50 years ago and there has since been a great deal of development. Classical work on the theory of noncommutative power series has more recently been augmented by connections to areas such as representation theory, combinatorial mathematics and theoretical computer science. This book presents a modern account of the subject and its applications to an audience of graduate students and researchers. The algebraic approach allows the theory to be developed in a general form of wide applicability. For example, number-theoretic results can now be more fully explored, in addition to applications in automata theory, codes and non-commutative algebra. Much material, for example Schützenberger's theorem on polynomially bounded rational series, appears here for the first time in book form. This is an excellent resource and reference for all those working in algebra, theoretical computer science and their areas of overlap.
A series of important applications of combinatorics on words has emerged with the development of computerized text and string processing. The aim of this volume, the third in a trilogy, is to present a unified treatment of some of the major fields of applications. After an introduction that sets the scene and gathers together the basic facts, there follow chapters in which applications are considered in detail. The areas covered include core algorithms for text processing, natural language processing, speech processing, bioinformatics, and areas of applied mathematics such as combinatorial enumeration and fractal analysis. No special prerequisites are needed, and no familiarity with the application areas or with the material covered by the previous volumes is required. The breadth of application, combined with the inclusion of problems and algorithms and a complete bibliography, will make this book ideal for graduate students and professionals in mathematics, computer science, biology and linguistics.
This collection of papers presents a series of in-depth examinations of a variety of advanced topics related to Boolean functions and expressions. The chapters are written by some of the most prominent experts in their respective fields and cover topics ranging from algebra and propositional logic to learning theory, cryptography, computational complexity, electrical engineering, and reliability theory. Beyond the diversity of the questions raised and investigated in different chapters, a remarkable feature of the collection is the common thread created by the fundamental language, concepts, models, and tools provided by Boolean theory. Many readers will be surprised to discover the countless links between seemingly remote topics discussed in various chapters of the book. This text will help them draw on such connections to further their understanding of their own scientific discipline and to explore new avenues for research.
The second volume of this comprehensive treatise focusses on Buchberger theory and its application to the algorithmic view of commutative algebra. In contrast to other works, the presentation here is based on the intrinsic linear algebra structure of Groebner bases, and thus elementary considerations lead easily to the state of the art in issues of implementation. The same language describes the applications of Groebner technology to the central problems of commutative algebra. The book can also be used as a reference on elementary ideal theory and a source for the state of the art in its algorithmization. Aiming to provide a complete survey on Groebner bases and their applications, the author also includes advanced aspects of Buchberger theory, such as the complexity of the algorithm, Galligo's theorem, the optimality of degrevlex, the Gianni-Kalkbrener theorem, the FGLM algorithm, and so on. Thus it will be essential for all workers in commutative algebra, computational algebra and algebraic geometry.
A systematic program design method can help developers ensure the correctness and performance of programs while minimizing the development cost. This book describes a method that starts with a clear specification of a computation and derives an efficient implementation by step-wise program analysis and transformations. The method applies to problems specified in imperative, database, functional, logic and object-oriented programming languages with different data, control and module abstractions. Designed for courses or self-study, this book includes numerous exercises and examples that require minimal computer science background, making it accessible to novices. Experienced practitioners and researchers will appreciate the detailed examples in a wide range of application areas including hardware design, image processing, access control, query optimization and program analysis. The last section of the book points out directions for future studies.
If you've been searching for a way to get up to speed on IEEE 802.11n and 802.11ac WLAN standards without having to wade through the entire specification, then look no further. This comprehensive overview describes the underlying principles, implementation details and key enhancing features of 802.11n and 802.11ac. For many of these features the authors outline the motivation and history behind their adoption into the standard. A detailed discussion of key throughput, robustness, and reliability enhancing features (such as MIMO, multi-user MIMO, 40/80/160 MHz channels, transmit beamforming and packet aggregation) is given, plus clear summaries of issues surrounding legacy interoperability and coexistence. Now updated and significantly revised, this 2nd edition contains new material on 802.11ac throughput, including revised chapters on MAC and interoperability, plus new chapters on 802.11ac PHY and multi-user MIMO. An ideal reference for designers of WLAN equipment, network managers, and researchers in the field of wireless communications.
The semantics of logic programs was originally described in terms of two-valued logic. Soon, however, it was realised that three-valued logic had some natural advantages, as it provides distinct values not only for truth and falsehood but also for “undefined”. The three-valued semantics proposed by Fitting (Fitting, M. 1985. A Kripke–Kleene semantics for logic programs. Journal of Logic Programming 2, 4, 295–312) and Kunen (Kunen, K. 1987. Negation in logic programming. Journal of Logic Programming 4, 4, 289–308) are closely related to what is computed by a logic program, the third truth value being associated with non-termination. A different three-valued semantics, proposed by Naish, shared much with those of Fitting and Kunen but incorporated allowances for programmer intent, the third truth value being associated with underspecification. Naish used an (apparently) novel “arrow” operator to relate the intended meaning of left and right sides of predicate definitions. In this paper we suggest that the additional truth values of Fitting/Kunen and Naish are best viewed as duals. We use Belnap's four-valued logic (Belnap, N. D. 1977. A useful four-valued logic. In Modern Uses of Multiple-Valued Logic, J. M. Dunn and G. Epstein, Eds. D. Reidel, Dordrecht, Netherlands, 8–37), also used elsewhere by Fitting, to unify the two three-valued approaches. The truth values are arranged in a bilattice, which supports the classical ordering on truth values as well as the “information ordering”. We note that the “arrow” operator of Naish (and our four-valued extension) is essentially the information ordering, whereas the classical arrow denotes the truth ordering. This allows us to shed new light on many aspects of logic programming, including program analysis, type and mode systems, declarative debugging and the relationships between specifications and programs, and successive execution states of a program.
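As a concrete illustration of the two orderings on Belnap's four truth values (a standard presentation of the bilattice FOUR, not code from the paper):

```python
# Belnap's four truth values: F (false), T (true),
# N (neither / undefined), B (both / overdetermined).
# Standard presentation of the bilattice FOUR; illustrative only.

F, T, N, B = "F", "T", "N", "B"

# Truth ordering <=_t:  F <= N <= T and F <= B <= T  (N, B incomparable).
TRUTH_RANK = {F: 0, N: 1, B: 1, T: 2}

def leq_truth(a, b):
    if {a, b} == {N, B}:          # N and B are incomparable in <=_t
        return False
    return TRUTH_RANK[a] <= TRUTH_RANK[b]

# Information ordering <=_k:  N <= F <= B and N <= T <= B  (F, T incomparable).
INFO_RANK = {N: 0, F: 1, T: 1, B: 2}

def leq_info(a, b):
    if {a, b} == {F, T}:          # F and T are incomparable in <=_k
        return False
    return INFO_RANK[a] <= INFO_RANK[b]

assert leq_truth(F, T) and leq_info(N, B)
```

On this reading, Naish's "arrow" relates values in the information ordering (`leq_info`), whereas the classical arrow relates them in the truth ordering (`leq_truth`).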
Let $k$ be a locally compact complete field with respect to a discrete valuation $v$. Let $\mathcal{O}$ be the valuation ring, $\mathfrak{m}$ the maximal ideal and $F(x) \in \mathcal{O}[x]$ a monic separable polynomial of degree $n$. Let $\delta = v(\mathrm{Disc}(F))$. The Montes algorithm computes an OM factorization of $F$. The single-factor lifting algorithm derives from this data a factorization of $F \bmod \mathfrak{m}^{\nu}$, for a prescribed precision $\nu$. In this paper we find a new estimate for the complexity of the Montes algorithm, leading to an estimate of $O(n^{2+\epsilon} + n^{1+\epsilon}\delta^{2+\epsilon} + n^{2}\nu^{1+\epsilon})$ word operations for the complexity of computing a factorization of $F \bmod \mathfrak{m}^{\nu}$, assuming that the residue field of $k$ is small.
This is a book about names and symmetry in the part of computer science that has to do with programming languages. Although symmetry plays an important role in many branches of mathematics and physics, its relevance to computer science may not be so clear to the reader. This introduction explains the computer science motivation for a theory of names based upon symmetry and provides a guide to what follows.
Atomic names
Names are used in many different ways in computer systems and in the formal languages used to describe and construct them. This book is exclusively concerned with what Needham calls ‘pure names’:
A pure name is nothing but a bit-pattern that is an identifier, and is only useful for comparing for identity with other such bit-patterns – which includes looking up in tables to find other information. The intended contrast is with names which yield information by examination of the names themselves, whether by reading the text of the name or otherwise. […] like most good things in computer science, pure names help by putting in an extra stage of indirection; but they are not much good for anything else.
(Needham, 1989, p. 90)
We prefer to use the adjective ‘atomic’ rather than ‘pure’, because for names of this kind, internal structure is irrelevant; their only relevant attribute is their identity. Although such names may not be much good for anything other than indirection, that one thing is a hugely important and very characteristic aspect of computer science.
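To make the idea concrete, here is a minimal sketch (ours, not from the book) of atomic names in Python: each name is an opaque token whose only observable property is its identity, and fresh names can always be generated.

```python
import itertools

# A minimal model of atomic ("pure") names: opaque tokens that support
# nothing but identity comparison and fresh generation. Illustrative
# sketch, not code from the book.

class Name:
    _counter = itertools.count()

    def __init__(self):
        # The serial number is an internal detail used only for printing;
        # programs compare names solely via identity.
        self._id = next(Name._counter)

    def __eq__(self, other):
        return self is other          # identity is the only attribute

    def __hash__(self):
        return id(self)               # usable as a dictionary key

    def __repr__(self):
        return f"<name {self._id}>"

a, b = Name(), Name()
assert a == a and a != b              # distinct fresh names never coincide
table = {a: "payload"}                # indirection: look up other information
```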
This paper presents a study on the interpretation and bracketing of noun compounds (‘NCs’) based on lexical semantics. Our primary goal is to develop a method to automatically interpret NCs through the use of semantic relations. Our NC interpretation method matches each NC against tagged NCs using lexical similarity measures derived from WordNet. We apply the interpretation method to both two- and three-term NC interpretation based on semantic roles. Finally, we demonstrate that our NC interpretation method can boost the coverage and accuracy of NC bracketing.
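For illustration only (an assumed nearest-neighbour setup using NLTK's WordNet interface, not the authors' system): a new compound can be scored against tagged training compounds by WordNet similarity of the corresponding terms, copying the relation of the best match.

```python
from nltk.corpus import wordnet as wn

# Illustrative sketch (not the authors' system): interpret a two-term noun
# compound by finding the most WordNet-similar tagged training compound
# and copying its semantic relation.

def word_sim(w1, w2):
    """Max Wu-Palmer similarity over noun senses; 0.0 if unrelated."""
    sims = [s1.wup_similarity(s2) or 0.0
            for s1 in wn.synsets(w1, pos=wn.NOUN)
            for s2 in wn.synsets(w2, pos=wn.NOUN)]
    return max(sims, default=0.0)

def interpret(nc, tagged_ncs):
    """tagged_ncs: list of ((modifier, head), relation) training pairs."""
    def score(entry):
        (mod, head), _rel = entry
        return word_sim(nc[0], mod) * word_sim(nc[1], head)
    _, relation = max(tagged_ncs, key=score)
    return relation

# e.g. interpret(("apple", "pie"), [(("chicken", "soup"), "MATERIAL"), ...])
```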