The optimum control of an industrial robot can be achieved by splitting the problem into two tasks: off-line programming of an optimum path, followed by on-line path tracking.
The aim of this paper is to address the numerical solution of the optimum path planning problem. Because of its mixed nature, the problem can be expressed either in terms of Cartesian coordinates or at the joint level.
Whatever the approach adopted, the optimum path planning problem can be formulated as the problem of minimizing the overall time (taken as objective function) subject to behavior and side constraints arising from physical limitations and deviation error bounds. The paper proposes a very general optimization algorithm to solve this problem, which is based on the concept of mixed approximation.
A numerical application is presented which demonstrates the computational efficiency of the proposed algorithm.
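To make the formulation concrete, the following is a minimal, illustrative sketch of a time-minimization problem over a discretised path with velocity and acceleration limits, solved with SciPy. It is not the mixed-approximation algorithm of the paper; the path lengths, limits, and solver choice are assumptions for illustration only.

```python
# Minimal sketch of time-optimal traversal of a fixed, discretised path
# (NOT the paper's mixed-approximation algorithm). Assumptions: a 1-D path
# of N equal segments, simple velocity and acceleration limits; the
# decision variables are the per-segment traversal times.
import numpy as np
from scipy.optimize import minimize

N = 20          # number of path segments (assumed)
ds = 0.05       # segment length in metres (assumed)
v_max = 0.5     # velocity limit (assumed)
a_max = 2.0     # acceleration limit (assumed)

def total_time(t):
    return np.sum(t)

def accel_constraints(t):
    # average segment velocities and finite-difference accelerations
    v = ds / t
    a = np.diff(v) / (0.5 * (t[:-1] + t[1:]))
    # feasible when a_max - |a| >= 0
    return a_max - np.abs(a)

t0 = np.full(N, 2 * ds / v_max)                 # conservative initial guess
res = minimize(
    total_time, t0, method="SLSQP",
    bounds=[(ds / v_max, None)] * N,            # enforces v <= v_max
    constraints=[{"type": "ineq", "fun": accel_constraints}],
)
print("minimum traversal time:", res.fun)
```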
The work presented here describes the control strategy of two cooperating robots; a two-finger hand is an example of such a system. The control method allows one robot to control the position of the contact point while the other robot controls the contact force. The stability of the two manipulators has been investigated using unstructured models of their dynamic behavior. For the pair to be stable, there must be some initial compliance in at least one robot; this compliance can be provided by a non-zero sensitivity function in the tracking controller or by a passive compliant element such as an RCC.
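As an illustration of this division of labour, here is a minimal single-axis sketch in which one controller servoes the contact-point position with a PD law while the other regulates the contact force through a compliant correction. The gains, the elastic contact model, and the scalar setting are assumptions, not the paper's controller.

```python
# Illustrative sketch (not the paper's controller): one robot servoes the
# contact-point position with a PD law, the other regulates contact force
# through a compliant (integral) correction. Scalar, single-axis toy model;
# all gains, stiffness values, and the contact model are assumptions.
kp, kd = 100.0, 20.0      # position PD gains (assumed)
kf = 0.0001               # force-error integration gain (assumed)
k_contact = 5000.0        # contact stiffness used in the toy model (assumed)

def position_command(x, x_dot, x_des):
    """Acceleration command for the position-controlling robot."""
    return kp * (x_des - x) - kd * x_dot

def force_command(x_cmd, f_measured, f_des):
    """Position offset for the force-controlling robot (compliance)."""
    return x_cmd + kf * (f_des - f_measured)

# toy simulation of the force loop alone, pressing against a stiff contact
x_cmd, f_des = 0.0, 10.0
for _ in range(200):
    f_measured = k_contact * x_cmd          # elastic contact (assumed)
    x_cmd = force_command(x_cmd, f_measured, f_des)
print("steady-state contact force:", k_contact * x_cmd)
```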
The Jacobian of serial robot-arms is examined, and the matrix of cofactors of a singular Jacobian is presented as a means of explaining the physical nature of special configurations. Because the columns of both these matrices are screw coordinates, screw theory is central to a proper understanding. ‘Realistic’ robot-arms are seen to behave in ways that can be explained not by particularizing from a general formulation but rather by carefully interpreting the relevant special screw systems from the outset. Higher singularities (with more than one freedom loss) are then touched upon.
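A minimal numerical illustration (an assumed example, not drawn from the paper): for a planar 2R arm the Jacobian loses rank when the elbow is fully extended, and the matrix of cofactors of that singular Jacobian exposes the lost direction, since the Jacobian multiplied by its adjugate equals det(J) times the identity, which vanishes at the singularity.

```python
# Illustrative sketch: the 2x2 Jacobian of a planar 2R arm becomes singular
# when the elbow is fully extended; at that configuration the cofactor
# (adjugate) matrix exposes the direction in which freedom is lost.
import numpy as np

l1, l2 = 1.0, 0.8   # link lengths (assumed)

def jacobian(q1, q2):
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def cofactor_matrix(J):
    # 2x2 case: adj(J) = [[J22, -J12], [-J21, J11]]; cofactors = adj(J).T
    return np.array([[ J[1, 1], -J[1, 0]],
                     [-J[0, 1],  J[0, 0]]])

J = jacobian(0.3, 0.0)                    # elbow fully extended -> singular
print("det(J) =", np.linalg.det(J))       # ~0: one freedom is lost
C = cofactor_matrix(J)
print("cofactor matrix:\n", C)
print("J @ adj(J) =\n", J @ C.T)          # equals det(J) * I, hence ~0 here
```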
In this paper we present a series of haptic exploratory procedures, or EPs, implemented for a multi-fingered, articulated, sensate robot hand. These EPs are designed to extract specific tactile and kinesthetic information from an object through their purposive invocation by an intelligent robotic system. Taken together, they form an active robotic touch-perception system to be used both for extracting information about the environment for internal representation and for acquiring grasps for manipulation. The theory and structure of this robotic haptic system are based upon models of human haptic exploration and information processing.
The haptic system presented utilizes an integrated robotic system consisting of a PUMA 560 robot arm, a JPL/Stanford robot hand with joint-torque sensing in the fingers, a wrist force/torque sensor, and a 256-element, spatially resolved fingertip tactile array. We describe the EPs implemented for this system and provide experimental results which illustrate how they function and how the information they extract may be used. In addition to the sensate hand and arm, the robot also incorporates structured-lighting vision and a Prolog-based reasoning system capable of grasp generation and object categorization. We present a set of simple tasks which show how both grasping and recognition may be enhanced by the addition of active touch perception.
A specially designed system for movement monitoring is presented here. The system has a two-level architecture. At the first level, a hardware processor analyses in real time the images provided by a set of standard TV cameras and, using a technique based on the convolution operator, recognizes in each frame objects that have a specific shape. The coordinates of these objects are fed to a computer, the second level of the system, which analyses the movement of these objects with the aid of a set of rules representing knowledge of the context. The system was extensively tested in the field and the main results are reported.
The whole system can work as a controlling device in robotics, as a general real-time image processor, or as an automatic movement analyser in biomechanics and in orthopedic and neurological medicine.
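As a rough software analogue of the first-level processing (for illustration only; the original system used a dedicated hardware processor), the sketch below detects a marker of known shape in a frame by 2-D convolution with a template. The template, threshold, and toy frame are assumptions.

```python
# Illustrative sketch: frame-by-frame detection of a marker of known shape
# by 2-D convolution with a template, using SciPy. Not the original
# hardware implementation; threshold and template are assumptions.
import numpy as np
from scipy.signal import convolve2d

def detect_markers(frame, template, threshold=0.9):
    """Return (row, col) coordinates where the template response peaks."""
    # flipping the kernel turns convolution into correlation with the template
    response = convolve2d(frame, template[::-1, ::-1], mode="same")
    response /= response.max() + 1e-12
    return np.argwhere(response >= threshold)

# toy example: a bright 5x5 square marker placed in a noisy frame
rng = np.random.default_rng(0)
frame = rng.normal(0.0, 0.1, (120, 160))
frame[40:45, 70:75] += 1.0
template = np.ones((5, 5))
print(detect_markers(frame, template))     # coordinates near (42, 72)
```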
The basic robot control technique is model-based computed-torque control, which is known to suffer performance degradation due to model uncertainties. Adding a neural network (NN) controller to the control system is one effective way to compensate for the ill effects of these uncertainties. In this paper a systematic study of NN controllers for a robot manipulator under a unified computed-torque control framework is presented. Both feedforward and feedback NN control schemes are studied and compared using a common back-propagation training algorithm. Effects on system performance of different choices of NN input types, hidden neurons, weight update rates, and initial weight values are also investigated. Extensive simulation studies for trajectory tracking are carried out and compared with other established robot control schemes.
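The sketch below shows the general structure such a scheme can take: a computed-torque law built from an inexact nominal model, with a small one-hidden-layer network adding a corrective torque. The model matrices, network size, and input choice are placeholders, not the paper's design, and the network is left untrained here.

```python
# Conceptual sketch (assumed structure, not the paper's controller): a
# computed-torque law for an n-joint arm with a small feedforward network
# added to compensate for model error. Dynamics and weights are placeholders.
import numpy as np

n = 2                                   # number of joints (assumed)
Kp = np.diag([100.0, 100.0])            # PD gains (assumed)
Kd = np.diag([20.0, 20.0])

def nominal_dynamics(q, qd):
    """Assumed (inexact) model: inertia matrix and bias torques."""
    M_hat = np.diag([1.0, 0.5])         # placeholder inertia model
    h_hat = 0.1 * qd                    # placeholder Coriolis/friction/gravity
    return M_hat, h_hat

def nn_compensation(x, W1, b1, W2, b2):
    """One-hidden-layer network producing a corrective torque."""
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2

def computed_torque(q, qd, q_des, qd_des, qdd_des, nn_params):
    M_hat, h_hat = nominal_dynamics(q, qd)
    # outer PD loop on the tracking error plus the desired acceleration
    a = qdd_des + Kp @ (q_des - q) + Kd @ (qd_des - qd)
    tau_model = M_hat @ a + h_hat
    # NN input: joint state and desired trajectory (one common choice)
    x = np.concatenate([q, qd, q_des, qd_des])
    return tau_model + nn_compensation(x, *nn_params)

# random, untrained network weights just to make the sketch executable
rng = np.random.default_rng(1)
params = (rng.normal(0, 0.1, (8, 4 * n)), np.zeros(8),
          rng.normal(0, 0.1, (n, 8)), np.zeros(n))
tau = computed_torque(np.zeros(n), np.zeros(n),
                      np.array([0.5, -0.3]), np.zeros(n), np.zeros(n), params)
print("commanded torque:", tau)
```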
A partial review of some efforts in robotics research is presented. We identify two broad categories of work: one characterised by application-driven experimental engineering, the other by a more ‘scientific’ approach based on testing theoretical models through implementation. We argue that although the former represents some of the best practical results obtained to date, this experiment-first, theory-later approach does not contribute to a homogeneous body of knowledge. If robotics is to make measured progress, sound theoretical ground is needed. We argue for a task-specific paradigm for future theoretical work founded on formal models. To this end, we present a general analysis of a sensory robotic system and identify key elements that must be defined in any formal model before we can decide what sensory information is useful for a given task.
Python is an object-oriented programming language, which means that it provides features that support object-oriented programming.
It is not easy to define object-oriented programming, but we have already seen some of its characteristics:
Programs are made up of object definitions and function definitions, and most of the computation is expressed in terms of operations on objects.
Each object definition corresponds to some object or concept in the real world, and the functions that operate on that object correspond to the ways real-world objects interact.
For example, the Time class defined in Chapter 16 corresponds to the way people record the time of day, and the functions we defined correspond to the kinds of things people do with times. Similarly, the Point and Rectangle classes correspond to the mathematical concepts of a point and a rectangle.
So far, we have not taken advantage of the features Python provides to support object-oriented programming. These features are not strictly necessary; most of them provide alternative syntax for things we have already done. But in many cases, the alternative is more concise and more accurately conveys the structure of the program.
For example, in the Time program, there is no obvious connection between the class definition and the function definitions that follow. With some examination, it is apparent that every function takes at least one Time object as an argument.
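A sketch along the lines of the Chapter 16 program: a bare Time class definition followed by ordinary functions, each taking at least one Time object as an argument. The exact function bodies here are a reconstruction for illustration, not a quotation.

```python
# A function-based Time program: the class holds only attributes, and the
# functions that operate on Time objects stand apart from the class.
class Time:
    """Represents the time of day."""

def print_time(t):
    print('%.2d:%.2d:%.2d' % (t.hour, t.minute, t.second))

def time_to_int(t):
    minutes = t.hour * 60 + t.minute
    return minutes * 60 + t.second

def int_to_time(seconds):
    time = Time()
    minutes, time.second = divmod(seconds, 60)
    time.hour, time.minute = divmod(minutes, 60)
    return time

def add_time(t1, t2):
    seconds = time_to_int(t1) + time_to_int(t2)
    return int_to_time(seconds)

start = Time()
start.hour, start.minute, start.second = 9, 45, 0
duration = Time()
duration.hour, duration.minute, duration.second = 1, 35, 0
print_time(add_time(start, duration))    # 11:20:00
```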
In this paper, an adaptive learning (A-L) control scheme is proposed for cooperation of two manipulators handling a rigid object with model uncertainties. For robots performing repetitive cooperating tasks, their operations are decomposed into two modes, the single operational mode and the repetitive operational mode, on which the A-L controller is based. In the single operational mode, the controller is a learning-based adaptive controller in which the robotic parameters are updated using information from the previous operation. In the repetitive operational mode, the controller is a model-based iterative learning controller. The advantage of the A-L controller lies in the fact that it can improve transient performance as the robots repeat operations, with fast learning convergence. Simulation results confirm that the A-L algorithm is effective in controlling two cooperating robots with model uncertainties.
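As a much-simplified stand-in for the repetitive operational mode (an assumption for illustration, not the paper's A-L law), the sketch below applies a basic iterative learning update to a toy first-order plant: after each repetition of the task, the feedforward input is corrected with the previous repetition's tracking error, and the error shrinks from trial to trial.

```python
# Illustrative P/D-type iterative learning update on a toy plant (assumed,
# simplified stand-in for the paper's A-L scheme). The plant, trajectory,
# and learning gain are all assumptions.
import numpy as np

T, dt = 100, 0.01
t = np.arange(T) * dt
y_des = np.sin(2 * np.pi * t)           # desired repetitive trajectory (assumed)
gamma = 0.8                             # learning gain (assumed)

def run_trial(u):
    """First-order toy plant y' = -y + u, integrated with Euler steps."""
    y = np.zeros(T)
    for k in range(T - 1):
        y[k + 1] = y[k] + dt * (-y[k] + u[k])
    return y

u = np.zeros(T)
for trial in range(10):
    e = y_des - run_trial(u)
    # correct u(k) with the next-step error from the previous repetition
    u[:-1] += (gamma / dt) * e[1:]
    print(f"trial {trial}: max |error| = {np.max(np.abs(e)):.4f}")
```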