The standard version of the Gaussian algorithm for lattice reduction in dimension 2 is analysed. It is found that, when applied to random inputs in a continuous model, the complexity is constant on average, its probability distribution decays geometrically, and the dynamics are characterized by a conditional invariant measure. The proofs make use of connections between lattice reduction, continued fractions, continuants, and functional operators. Analysis in the discrete model and detailed numerical data are also presented.
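As a point of reference, here is a minimal sketch of the standard swap-and-translate formulation of dimension-2 Gaussian (Lagrange) reduction; the function name and the integer-vector setting are ours, and the paper's precise variant and cost model may differ.

```python
# A minimal sketch of the Gaussian (Lagrange) reduction algorithm for
# rank-2 lattices, the procedure whose average-case behaviour the paper
# analyses. Vector names and the integer-arithmetic setting are ours.

def gauss_reduce(u, v):
    """Reduce a lattice basis (u, v) of Z^2 so that |u| <= |v| and the
    Gram coefficient <u, v>/<u, u> lies in [-1/2, 1/2]."""
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]

    if dot(u, u) > dot(v, v):
        u, v = v, u
    while True:
        # Integer translation: subtract the nearest-integer multiple of u.
        m = round(dot(u, v) / dot(u, u))
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if dot(v, v) >= dot(u, u):
            return u, v
        u, v = v, u

print(gauss_reduce((1, 0), (7, 1)))   # -> ((1, 0), (0, 1)), a reduced basis
```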
We present a lexical platform developed for the Spanish language. It achieves portability across different computer systems and efficiency in terms of both speed and lexical coverage. A model for the full treatment of Spanish inflectional morphology for verbs, nouns and adjectives is presented. This model permits word formation based solely on morpheme concatenation, driven by a feature-based unification grammar. The run-time lexicon is a collection of allomorphs for both stems and endings. Although not yet tested on them, the model should also be suitable for other Romance and highly inflected languages. A formalism is also described for encoding a lemma-based lexical source, well suited to expressing linguistic generalizations: inheritance classes, lemma encoding, morpho-graphemic allomorphy rules and limited type-checking. From this source base, we can automatically generate an allomorph-indexed dictionary adequate for efficient retrieval and processing. A set of software tools has been implemented around this formalism: lexical base augmenting aids, lexical compilers to build run-time dictionaries and access libraries for them, feature manipulation libraries, unification and pseudo-unification modules, morphological processors, a parsing system, etc. Software interfaces among the different modules and tools are cleanly defined to ease software integration and flexible tool combination. Directions for accessing our e-mail and web demonstration prototypes are also provided. Figures are given comparing the lexical coverage of our platform with that of some popular spelling checkers.
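A toy sketch of what an allomorph-indexed run-time lexicon with concatenative word formation might look like follows; the entries, feature names and the simple unification below are invented for illustration and do not reproduce the platform's actual formalism.

```python
# A minimal, hypothetical sketch of allomorph-indexed lookup with
# concatenative word formation, in the spirit of the run-time lexicon
# described above. Entries and features are illustrative only.

STEMS = {
    "cant":  {"lemma": "cantar", "cat": "verb", "conj": 1},
    "cuent": {"lemma": "contar", "cat": "verb", "conj": 1},  # diphthongized allomorph
}
ENDINGS = {
    "o":    {"cat": "verb", "conj": 1, "person": 1, "number": "sg", "tense": "pres"},
    "amos": {"cat": "verb", "conj": 1, "person": 1, "number": "pl", "tense": "pres"},
}

def unify(a, b):
    """Feature unification: fail on conflicting atomic values."""
    out = dict(a)
    for k, v in b.items():
        if k in out and out[k] != v:
            return None
        out[k] = v
    return out

def analyse(word):
    """Split a word into stem + ending allomorphs and unify their features."""
    results = []
    for i in range(1, len(word)):
        stem, ending = word[:i], word[i:]
        if stem in STEMS and ending in ENDINGS:
            fs = unify(STEMS[stem], ENDINGS[ending])
            if fs is not None:
                results.append(fs)
    return results

print(analyse("canto"))   # stem "cant" + ending "o" -> 1st person sg present
```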
The two preceding issues within Vol. 2 of Organised Sound have focused upon the time and frequency domains, so it is perhaps appropriate that we now consider the analysis–synthesis paradigm which inevitably unites them. More specifically, the analysis and synthesis theme brings together discussions of the time and frequency domains under the umbrella of professional musical practice within the musicological, compositional, performance or technological realms. It is a theme which is inextricably linked to the working practice of many within the field, and one which has particular resonance for musical work.
A major task of organic chemistry is the synthesis of compounds of interest (“targets”) from available starting compounds. We address here the generation by computer of the best synthesis routes, or sequences of reactions, to such compounds. The number of possible routes in the search space is enormous and so the reasoning is examined in some detail, leading to a protocol for the operation of our program, SYNGEN, which has been under development for some years. It requires a new representation of the data – molecular structures and their reactions – which is digital in format and quite unlike the usual graphical depictions of organic chemistry. This is presented as a problem in artificial intelligence, and every effort has been made to present it with a minimum background of organic chemistry required of the reader.
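To illustrate the flavour of route generation, here is a toy backward search from a target to available starting materials; the reaction table and compound names are invented, and SYNGEN's digital structure representation is not reproduced.

```python
# A toy sketch of backward (retrosynthetic) route generation of the kind
# SYNGEN automates: search from the target back to available starting
# materials. All molecule names and "reactions" here are invented.

# Each retro-step maps a product to one tuple of precursors per route.
RETRO_STEPS = {
    "target":         [("intermediate_A", "reagent_1"), ("intermediate_B",)],
    "intermediate_A": [("start_1", "start_2")],
    "intermediate_B": [("start_3",)],
}
AVAILABLE = {"reagent_1", "start_1", "start_2", "start_3"}

def routes(compound, depth=5):
    """Return a list of route trees; a leaf is an available compound."""
    if compound in AVAILABLE:
        return [compound]
    if depth == 0 or compound not in RETRO_STEPS:
        return []
    found = []
    for precursors in RETRO_STEPS[compound]:
        sub = [routes(p, depth - 1) for p in precursors]
        if all(sub):   # every precursor must itself be reachable
            # for brevity, keep only the first route found per precursor
            found.append((compound, tuple(s[0] for s in sub)))
    return found

print(routes("target"))   # two alternative routes back to starting materials
```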
A relationship between a new and an old graph invariant is established. The first invariant is connected to the ‘sandglass conjecture’ of [1]. The second one is graph entropy, an information theoretic functional, which is already known to be relevant in several combinatorial contexts.
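For orientation, the graph entropy referred to is Körner's information-theoretic functional; a standard definition (supplied here for the reader, not quoted from the paper) is

\[
  H(G, P) \;=\; \min_{\substack{X \in Y \\ Y \in S(G)}} I(X; Y),
\]

where S(G) is the family of independent sets of G, the minimum runs over joint distributions of (X, Y) with the vertex X distributed according to P and X ∈ Y with probability one, and I(X; Y) denotes mutual information.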
The development of a knowledge-based design support system is a lengthy and costly process, because various computational techniques necessary for intelligent design support are not readily available in a knowledge-based environment. The systematisation of design knowledge needs combined effort from designers and knowledge engineers. Existing knowledge-based system development tools offer limited support for intelligent design, which requires sophisticated knowledge engineering techniques in terms of knowledge representation, inference, control, truth maintenance and learning. In this paper, a knowledge-based architecture for intelligent design support is described. The existing knowledge-based design system architectures are reviewed first. Five key issues in intelligent design support using knowledge engineering techniques, i.e. design knowledge representation, structure of the design knowledge base, intelligent control of the design process, consistency and context management of design knowledge, and modelling of design collaboration, are then discussed. These discussions provide a basis for the description of a knowledge-based design support system architecture which has been implemented in a Lisp-based environment and tested in two different domains. The current application of this architecture in the development of a design support system in the domain of mechanical engineering design at the Cambridge Engineering Design Centre is presented and evaluated.
Let G = (V, E) be a simple connected graph of order |V| = n ≥ 2 and minimum degree δ, and let 2 ≤ s ≤ n. We define two parameters, the s-average distance μs(G) and the s-average nearest neighbour distance Λs(G), with respect to each of which V contains an extremal subset X of order s with vertices ‘as spread out as possible’ in G. We compute the exact values of both parameters when G is the cycle Cn, and show how to obtain the corresponding optimal arrangements of X. Sharp upper and lower bounds are then established for Λs(G), as functions of s, n and δ, and the extremal graphs described.
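The exact normalizations of μs(G) and Λs(G) are defined in the paper; as an illustration of the underlying quantity, the brute-force sketch below maximizes the average pairwise distance over s-subsets of the cycle Cn, where equally spaced subsets are among the maximizers.

```python
# A brute-force sketch of a "vertices as spread out as possible" criterion:
# the average pairwise distance of an s-subset, maximized over all subsets
# of the cycle C_n. The paper's exact normalizations of mu_s(G) and
# Lambda_s(G) may differ; this assumes a simple average over pairs.
from itertools import combinations

def cycle_dist(i, j, n):
    """Graph distance between vertices i and j on the cycle C_n."""
    d = abs(i - j)
    return min(d, n - d)

def avg_pairwise_distance(X, n):
    pairs = list(combinations(X, 2))
    return sum(cycle_dist(i, j, n) for i, j in pairs) / len(pairs)

def best_subset(n, s):
    """Maximize the average pairwise distance over all s-subsets of C_n."""
    return max(combinations(range(n), s),
               key=lambda X: avg_pairwise_distance(X, n))

n, s = 12, 4
X = best_subset(n, s)
print(X, avg_pairwise_distance(X, n))  # one maximizing subset and its value
```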
Artificial intelligence research in temporal reasoning focuses on designing automated solutions to complex problems in computation involving time. TIME-97, the 4th International Workshop on Temporal Representation and Reasoning, held in Daytona Beach, Florida — like the three workshops that preceded it — had the objective of creating an international forum for the exchange of information among the many researchers and knowledge engineers who are developing and applying techniques in temporal reasoning.
Over the past years, two main approaches to computational intelligence have emerged: the symbolic and the non-symbolic approach. Perhaps the most prominent methods within the symbolic approach are based on logic. Logical methods exhibit a series of desirable properties:
• Transparent representation of meaning.
• Precise understanding of the meaning of statements (semantics).
• Sound reasoning methods.
• Explanation capabilities.
A special session on logical methods for computational intelligence was held at the 3rd Joint Conference on Information Sciences. The field of computational logic is so broad that it is impossible to review the main developments in an article. Therefore, in the following we will restrict attention to two areas that turned out to be the focus of the special session: automated reasoning, and reasoning with incomplete and changing information.
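As a minimal taste of the automated-reasoning side, the sketch below implements propositional resolution on clauses; the clause encoding is ours and purely illustrative.

```python
# A minimal sketch of one core automated-reasoning operation: propositional
# resolution on clauses. Clauses are frozensets of literals; a negative
# literal is written ("not", p). The encoding is illustrative only.

def negate(lit):
    return lit[1] if isinstance(lit, tuple) else ("not", lit)

def resolve(c1, c2):
    """Return all resolvents of two clauses."""
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return out

# From (p or q) and (not p or r), resolution derives (q or r).
c1 = frozenset({"p", "q"})
c2 = frozenset({("not", "p"), "r"})
print(resolve(c1, c2))   # [frozenset({'q', 'r'})]
```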
For off-line programming to work, systematic methods must be developed to account for non-ideal performance of the parts and devices in the manufacturing cell. Although much of the literature focuses on robot inaccuracy, this paper considers practical methods for the tool control frame (TCF) calibration and rigid-body compensation required to close the inverse kinematics loop for target driven tasks.
In contrast to contemporary estimation methods, a closed-form, easily automated solution is introduced for calibrating the position and orientation (pose) of orthogonal end-effectors when the distal robot joint is revolute. This paper also considers methods for measuring and compensating the small rigid-body perturbations that result from non-repeatable part delivery systems or from geometric distortion. These methods are designed to eliminate rθ error from the rigid-body prediction and can be conducted in real time. Without accurate TCF calibration and rigid-body compensation, even the most accurate robot will fail to complete an off-line programmed task if the task tolerances are stringent.
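The paper's closed-form TCF calibration itself is not reproduced here; as a standard stand-in with the same closed-form character, the sketch below fits a rigid-body rotation and translation to measured versus nominal points via the Kabsch/SVD method, the kind of computation involved in compensating small rigid-body part-delivery perturbations.

```python
# Closed-form rigid-body fit (Kabsch/SVD method): recover a rotation R and
# translation t mapping nominal points P onto measured points Q. This is a
# standard technique, shown for illustration; it is not the paper's own
# TCF calibration procedure.
import numpy as np

def rigid_fit(P, Q):
    """Least-squares R, t with Q ~ R @ P + t, for 3xN point sets P, Q."""
    p0, q0 = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - p0) @ (Q - q0).T
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = q0 - R @ p0
    return R, t

# Nominal part points, and the same points after a small rigid perturbation.
P = np.array([[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]], dtype=float)
angle = 0.01
Rz = np.array([[np.cos(angle), -np.sin(angle), 0],
               [np.sin(angle),  np.cos(angle), 0],
               [0, 0, 1]])
Q = Rz @ P + np.array([[0.002], [0.001], [0.0]])
R, t = rigid_fit(P, Q)
print(np.allclose(R, Rz), t.ravel())    # True [0.002 0.001 0.   ]
```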
This paper deals with the kinematic synthesis of manipulators. A new method based on distributed solving is used to determine the dimensional parameters of a general manipulator able to reach a set of given tasks specified by orientation and position. First, a general Distributed Solving Method (DSM) is presented in three steps: the problem statement, the formulation of the objective functions, and the determination of minimum parameter values. Then, this method is applied to solve the synthesis of the Denavit and Hartenberg set of parameters of a manipulator with a given kinematic structure. In this case, the kind and number of joints are specified, and a set of constraints is included, such as joint limits, ranges of dimensional parameters and geometric obstacle avoidance. We show that if the Denavit and Hartenberg (DH) parameters are known, the synthesis problem reduces to an inverse kinematic problem. We also show how the problem of robot base placement can be solved by the same method. A general algorithm is given for solving the synthesis problem for all kinds of manipulators. The main contribution of this paper is a general method for the kinematic synthesis of all kinds of manipulators; examples are presented for a six-degree-of-freedom manipulator in a cluttered environment.
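Since the synthesis operates on Denavit and Hartenberg parameters, a minimal sketch of the standard DH link transform and of forward kinematics by chaining links may help; the example DH table is arbitrary and not taken from the paper.

```python
# The standard Denavit-Hartenberg (DH) link transform and forward
# kinematics by composing links. The two-link example values are ours.
import numpy as np

def dh_transform(theta, d, a, alpha):
    """DH link transform: rotation theta about z, offset d along z,
    length a along x, twist alpha about x."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_rows):
    """Compose link transforms base-to-tip; returns the end-effector pose."""
    T = np.eye(4)
    for row in dh_rows:
        T = T @ dh_transform(*row)
    return T

# Two-link planar arm: (theta, d, a, alpha) per joint.
pose = forward_kinematics([(np.pi / 4, 0.0, 1.0, 0.0),
                           (np.pi / 4, 0.0, 1.0, 0.0)])
print(pose[:3, 3])   # end-effector position, here [0.707..., 1.707..., 0]
```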
This paper is part II of a paper published in the previous issue of Robotica. This part proceeds from the assumption that 3D features have been classified as either a plane, a 2D corner of type I or II, or a 3D corner using the Maximum Likelihood Estimator. The locations of the 3D features are derived here from the results of the Maximum Likelihood Estimation. Experimental results characterising the ultrasonic sensor and its application to a robot localisation problem are also presented.
This paper is concerned with mobile robot reactive navigation in an unknown cluttered environment, based on a neural network and fuzzy logic. Reactive navigation is a mapping between sensory data and commands, without planning. The task is to provide steering commands that let a mobile robot avoid collisions with obstacles. The authors explain how to partition the robot's current perceptual space using an ART neural network, and then how to build a 3-dimensional fuzzy controller for mobile robot reactive navigation. The results presented, whether experimental or simulated, show that the method is well adapted to this type of problem.
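A heavily simplified sketch of the fuzzy-control half follows: two distance inputs, triangular membership functions and centroid defuzzification. The membership functions and rule table are invented; the paper's controller is 3-dimensional, with its input space partitioned adaptively by an ART network.

```python
# A toy fuzzy steering controller: obstacle distances on the left and right
# are fuzzified with triangular memberships, a small rule table fires, and
# the steering command is the weighted (centroid) average of consequents.

def tri(x, a, b, c):
    """Triangular membership function peaking at b on support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def steer(left_dist, right_dist):
    """Positive output steers right, negative steers left (illustrative)."""
    near_l, far_l = tri(left_dist, 0, 0.2, 1.0), tri(left_dist, 0.2, 1.0, 2.0)
    near_r, far_r = tri(right_dist, 0, 0.2, 1.0), tri(right_dist, 0.2, 1.0, 2.0)
    # Rule table: (firing strength, steering consequent in [-1, 1]).
    rules = [
        (min(near_l, far_r),  +0.8),   # obstacle on the left -> turn right
        (min(far_l, near_r),  -0.8),   # obstacle on the right -> turn left
        (min(near_l, near_r),  0.0),   # both near -> hold course (toy choice)
        (min(far_l, far_r),    0.0),   # clear on both sides -> go straight
    ]
    total = sum(w for w, _ in rules)
    return sum(w * c for w, c in rules) / total if total else 0.0

print(steer(0.3, 1.5))   # obstacle close on the left -> steer right (> 0)
```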
The effects of body geometry on walking machine performance have been investigated, and a body design procedure proposed. The relationships between static workspace, body geometry and installed joint torques have been derived. A body design procedure that uses these data is then described, and two design examples discussed. The procedure results in a body geometry which minimises the installed joint torques, and hence the machine weight, for the desired workspace area.
In this paper we present the algebraic-λ-cube, an extension of Barendregt's λ-cube with first- and higher-order algebraic rewriting. We show that strong normalization is a modular property of all the systems in the algebraic-λ-cube, provided that the first-order rewrite rules are non-duplicating and the higher-order rules satisfy the general schema of Jouannaud and Okada. We also prove that local confluence is a modular property of all the systems in the algebraic-λ-cube, provided that the higher-order rules do not introduce critical pairs. This property and the strong normalization result imply the modularity of confluence.
Sound modelling is an important part of the analysis–synthesis process since it combines sound processing and algorithmic synthesis within the same formalism. Its aim is to make sound simulators by synthesis methods based on signal models or physical models, the parameters of which are directly extracted from the analysis of natural sounds. In this article the successive steps for making such systems are described. These are numerical synthesis and sound generation methods, analysis of natural sounds, particularly time–frequency and time–scale (wavelet) representations, extraction of pertinent parameters, and the determination of the correspondence between these parameters and those corresponding to the synthesis models. Additive synthesis, nonlinear synthesis, and waveguide synthesis are discussed.
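As a concrete example of the first method mentioned, here is a minimal additive-synthesis sketch in which a sound is rebuilt as a sum of sinusoidal partials; in an analysis–synthesis system the partial frequencies and amplitudes would be extracted from a natural sound, whereas the data below is invented.

```python
# Minimal additive synthesis: a sum of sinusoidal partials under a simple
# decay envelope. The partial list is invented; a real system would obtain
# it from time-frequency analysis of a natural sound.
import numpy as np

def additive(partials, duration, sr=44100):
    """Sum sinusoidal partials; each partial is (frequency_hz, amplitude)."""
    t = np.arange(int(duration * sr)) / sr
    signal = sum(a * np.sin(2 * np.pi * f * t) for f, a in partials)
    envelope = np.exp(-3.0 * t)            # simple global decay envelope
    return signal * envelope

# A crudely bell-like spectrum: inharmonic partials with decreasing weight.
sound = additive([(220.0, 1.0), (446.0, 0.5), (917.0, 0.25)], duration=2.0)
print(sound.shape, float(abs(sound).max()))
```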
In this paper, a systematic algorithm, the Parallel Scheme, is proposed to solve the multiple-goal problem of redundant manipulators. It can adaptively adjust all the weighting factors of the corresponding goals according to their individual needs. Therefore, conflicts due to redundancy competition are resolved. To upgrade the redundancy-release capability of the original Parallel Scheme, the Improved Parallel Scheme is also developed, using an exponential function to adjust the weightings. The Improved Parallel Scheme is shown to improve both the redundancy-release capability and the weighting-adjustment capability while retaining all the merits of the original Parallel Scheme.
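The Parallel Scheme itself is not reproduced here; as a sketch of the kind of computation such schemes adapt, the code below combines several goals for a redundant manipulator through weighted damped least squares and adjusts the weights with a toy exponential rule, echoing the Improved Parallel Scheme's use of an exponential function.

```python
# Weighted multi-goal redundancy resolution (illustrative, not the authors'
# Parallel Scheme): each goal i has a task Jacobian J_i, desired task
# velocity dx_i and weight w_i; joint velocities dq minimize the weighted,
# damped sum of squared task errors.
import numpy as np

def weighted_joint_velocity(jacobians, dxs, weights, damping=1e-6):
    """Solve min over dq of sum_i w_i * ||J_i dq - dx_i||^2 (+ damping)."""
    n = jacobians[0].shape[1]
    A = damping * np.eye(n)
    b = np.zeros(n)
    for J, dx, w in zip(jacobians, dxs, weights):
        A += w * J.T @ J
        b += w * J.T @ dx
    return np.linalg.solve(A, b)

def update_weights(errors, beta=2.0):
    """Toy exponential weighting: goals with larger error get more weight."""
    w = np.exp(beta * np.asarray(errors))
    return w / w.sum()

# Two goals competing for a 4-DOF redundant arm (random toy Jacobians).
rng = np.random.default_rng(0)
J1, J2 = rng.standard_normal((2, 4)), rng.standard_normal((2, 4))
dx1, dx2 = np.array([0.1, 0.0]), np.array([0.0, -0.2])
w = update_weights([0.05, 0.20])          # second goal lags -> weighted up
dq = weighted_joint_velocity([J1, J2], [dx1, dx2], w)
print(w, dq)
```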