In this paper, the problem of real-time computation of a manipulator dynamic model is considered. To reduce the number of operations required to compute the dynamic model, an approximate model is introduced. A relative error criterion is also proposed, which enables one to determine the computing periods of the various parts of the manipulator's dynamic model.
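The abstract does not give the criterion itself, but the underlying idea of trading accuracy for computation by recomputing slowly varying dynamic terms less often can be sketched roughly as follows. The function names, the error measure, and the threshold rule here are hypothetical illustrations, not the paper's actual formulas.

```python
import numpy as np

def relative_error(approx, exact, eps=1e-12):
    # Hypothetical relative-error measure between an approximate and an
    # exact dynamic-model term (e.g. a gravity or inertia vector).
    return np.linalg.norm(approx - exact) / (np.linalg.norm(exact) + eps)

def choose_period(err, base_dt, tol=0.01, max_factor=16):
    # Illustrative rule: lengthen a term's recomputation period (in powers
    # of two of the base control period) while its projected relative
    # error stays below the tolerance.
    factor = 1
    while factor < max_factor and err * factor < tol:
        factor *= 2
    return factor * base_dt
```

Under this toy rule, a term accruing 0.1% relative error per base period of 1 ms would be recomputed only every 16 ms, while a fast-varying term exceeding the tolerance keeps the base period.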
The Space Station Alpha is the most significant international space project of this century and the largest international technology development project ever undertaken. Space robot manipulators will be a substantial part of the space station and will perform tasks such as assembly and maintenance of the station. The robot manipulators therefore need very sophisticated real-time control capability for gross and fine motions (i.e. compliant motions) during various operations. Moreover, the proposed dual-arm robot system servicing the Space Station requires automated motion coordination, synchronization of the arms, and controlled mechanical interaction with the fixed and moving objects involved in various tasks.
An iterative learning control method is proposed for a class of non-linear dynamic systems with uncertain parameters. The method, in which a non-linear system model is used, employs the model algorithmic control concept in the iteration sequence. A sufficient condition for convergence is provided. The method is then shown to be applicable to continuous-path control of a robot manipulator.
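The abstract does not reproduce the update law. As a point of reference, the simplest P-type iterative learning control update, u_{k+1}(t) = u_k(t) + γ·e_k(t), can be demonstrated on a toy linear plant; the paper's method, based on model algorithmic control for non-linear systems, is more elaborate than this sketch.

```python
import numpy as np

def ilc_update(u, e, gamma=0.5):
    # P-type ILC: next trial's input is this trial's input plus a
    # learning gain times this trial's tracking error.
    return u + gamma * e

def run_trial(u):
    # Toy first-order plant y[t+1] = 0.9*y[t] + u[t], started from rest.
    y = np.zeros(len(u) + 1)
    for t in range(len(u)):
        y[t + 1] = 0.9 * y[t] + u[t]
    return y[1:]

ref = np.linspace(0.1, 1.0, 10)   # reference trajectory to track
u = np.zeros(10)
for k in range(50):
    e = ref - run_trial(u)        # tracking error on trial k
    u = ilc_update(u, e)          # learn from the whole trial at once
e_final = np.max(np.abs(ref - run_trial(u)))
```

Because the same trajectory is repeated trial after trial, the error contracts from one trial to the next, and after a few dozen trials the toy plant tracks the reference almost exactly.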
Intelligent control of mobile systems allows for hierarchical structures that utilize sensory data with various levels of accuracy. This paper discusses a rule-based approach to the control problem. The assumed inexactness in the world description is represented by fuzzy memberships, and the state space is discretized into a linguistic vocabulary. Experimentally derived fuzzy motion control rules are then used in a fuzzy inference mechanism to produce the final control command for the robot actuators. Finally, the developed algorithm is tested in real-time control applications.
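As a minimal illustration of the fuzzy-inference pattern the abstract describes (linguistic terms, a rule base, defuzzification), the sketch below maps a heading error to a steering command. The membership functions and rule consequents are invented for illustration and are not taken from the paper.

```python
def tri(x, a, b, c):
    # Triangular membership function with feet at a and c, peak at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_steer(heading_error):
    # Hypothetical rule base: error to the left -> steer right, and vice
    # versa, combined by centroid (weighted-average) defuzzification.
    mu = {
        "neg": tri(heading_error, -2.0, -1.0, 0.0),
        "zero": tri(heading_error, -1.0, 0.0, 1.0),
        "pos": tri(heading_error, 0.0, 1.0, 2.0),
    }
    out = {"neg": 1.0, "zero": 0.0, "pos": -1.0}  # consequent commands
    num = sum(mu[t] * out[t] for t in mu)
    den = sum(mu.values())
    return num / den if den > 0 else 0.0
```

A zero heading error yields a zero command, while errors to either side produce smoothly graded corrective steering, which is the behaviour a discretized linguistic vocabulary buys over a bang-bang rule.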
This paper presents a mathematical model of a robot with one degree of freedom and a numerical investigation of its dynamics in a particular parameter scan that lies close to the upper boundary of the estimates for the rigidity and friction parameters, while the length parameter L is treated as a free control parameter. In this L-scan, the quasiperiodic and frequency-locked solutions, their pattern, and their order of appearance are studied in the interval from the parameter range of immediate engineering significance to the point where transient chaos appears. In particular, a fractal-type multiple splitting of Arnold tongues is found in the parameter region bordering the range of engineering significance.
Robotic vision is concerned with providing, primarily through image sensory data acquisition and analysis, the basis for planning robotic manipulator actions upon and within a restricted world of solid objects. Ideally, its function should correspond to the human visual system's capacity to guide hand/eye coordination or body/eye navigation tasks. Fundamental to the notion of functionality in a 3D space partially filled with solid objects is the requirement to appreciate the depth dimension from a particular viewpoint. Human vision abounds with depth cues derivable from imagery, and many of these have been the subjects of study for robotic vision application. However, direct range recovery using time-of-flight methods (ultrasonic or light) has distinct advantages for robotics, and it is easy to justify these alternative approaches despite (and maybe even because of) their independence from visual cues.
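The direct range recovery mentioned here rests on the elementary time-of-flight relation R = c·t/2, where t is the round-trip time of the emitted pulse (for ultrasound, the speed of sound replaces c). A one-line sketch:

```python
C = 299_792_458.0  # speed of light in m/s

def tof_range(round_trip_time_s):
    # The pulse travels out and back, so halve the total path length.
    return C * round_trip_time_s / 2.0
```

A 10 ns round trip thus corresponds to roughly 1.5 m of range, which shows why light-based time-of-flight sensing demands sub-nanosecond timing resolution for centimetre accuracy.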
This paper presents work in progress in the Computer Vision and Robotics Laboratory at The Australian National University towards implementing a robotic hand/eye coordination system with applicability in the scene domain of brightly coloured, simply shaped objects with relatively untextured surfaces in arbitrary three-dimensional configurations. The advantages of using directly acquired range data (via a laser time-of-flight range scanner) to enhance the scene segmentation phase of analysis are emphasised, and fairly convincing results are presented. Actual vision-driven manipulation has not yet been developed, but plans towards this end are included.
Control laws are described for stabilizing the motion of a robot and its force of interaction with the environment, with a prescribed quality of transient processes with respect to position, in the presence of constraints on control, motion, and interaction force. The robustness of these laws to parametric perturbations, and their stability with respect to initial and external perturbations and measuring-sensor errors, are proven.