This chapter starts with a brief sketch of the history of robotics and then gives some background on traditional approaches. The central goal of classical industrial robotics is to move the end of an arm to a predetermined point in space. Control of classical industrial robots is often based on solutions to equations describing the inverse-kinematics problem, and these solutions usually rely on precise knowledge of the robot's mechanics and its environment. An industrial robot's working environment is often carefully designed so that intricate sensory feedback is unnecessary; the robot performs its repetitive tasks in an accurate, efficient, but essentially unintelligent way. The chapter then focuses on the classical approach to intelligent mobile robotics. It concentrates on two important and influential areas, evolutionary robotics and insect-inspired approaches to visual navigation, and outlines an important area of robotics that emerged at about the same time as behavior-based and biologically inspired approaches.
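As a concrete illustration of the inverse-kinematics problem mentioned above, the short sketch below (an illustration only, not taken from the chapter) solves the textbook two-link planar arm: given a target point (x, y) and link lengths l1 and l2, it returns one pair of joint angles placing the arm tip at the target.

    import math

    def two_link_ik(x, y, l1, l2):
        """Return (theta1, theta2) placing the tip of a two-link planar arm at (x, y)."""
        r2 = x * x + y * y
        # Law of cosines gives the elbow angle.
        c2 = (r2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
        if not -1.0 <= c2 <= 1.0:
            raise ValueError("target out of reach")
        theta2 = math.acos(c2)                 # elbow-down solution
        k1 = l1 + l2 * math.cos(theta2)
        k2 = l2 * math.sin(theta2)
        theta1 = math.atan2(y, x) - math.atan2(k2, k1)
        return theta1, theta2

Real industrial arms have more joints and more complicated geometry, but even this simple case shows that the solution depends on knowing the link lengths and joint layout exactly, which is the 'precise knowledge of the robot's mechanics' referred to above.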
This chapter discusses the global robotics industry, specifically how key foreign nations support commercial robots while almost all of America's vast spending on this technology goes to military and space-exploration uses.
The concept of a robot as we know it today evolved over many years. In fact, its origins can be traced to ancient Greece, well before the time when Archimedes invented the screw pump. Leonardo da Vinci (1452–1519) made far-reaching contributions to the field: his pioneering research into the brain led to discoveries in neuroanatomy and neurophysiology, he provided physical explanations of how the brain processes visual and other sensory inputs, and he invented a number of ingenious machines. His flying devices, although not practicable, embodied sound principles of aerodynamics, and a toy built to realize one of Leonardo's drawings inspired the Wright brothers in building their own flying machine, which was successfully flown in 1903. The word robot itself seems to have first appeared in 1921 in Karel Capek's play Rossum's Universal Robots, and originated from the Slavic languages, in many of which the word robot is quite common because it stands for worker. It is derived from the Czech robota, which implies drudgery. Indeed, robots were conceived as machines capable of repetitive tasks requiring a lower intelligence than that of humans. Yet today robots are thought capable of possessing intelligence, so the term is probably inappropriate; nevertheless, it remains in use. The term robotics was probably first coined by Isaac Asimov in science fiction published around 1950, and it was Asimov who enunciated the three laws of robotics. It was from Asimov's work that the concept of emulating humans emerged.
ARNE's key physical component is a 300 mm diameter disc which supports the control electronics and the rotating sonar sensor. Below the disc is a chassis which holds the motors and shaft encoders to control the two drive wheels.
5.1 Hardware
ARNE has a drive wheel on each side of the chassis and a low-friction castor at the back. It moves non-holonomically, turning the wheels in the same direction to move forward or in opposite directions to rotate on the spot. Shaft encoders with a resolution of 1024 steps per revolution determine the distance travelled by each wheel to a precision of 0.2 mm.
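As a rough illustration of how such per-wheel encoder counts can be turned into a position estimate, the sketch below integrates standard differential-drive odometry. It is not part of the original ARNE software, and the wheel-base value is an assumed placeholder; only the 0.2 mm-per-step figure comes from the text.

    import math

    MM_PER_STEP = 0.2        # wheel travel per encoder step, as quoted above
    WHEEL_BASE_MM = 300.0    # assumed distance between the drive wheels

    def update_pose(x, y, theta, left_steps, right_steps):
        """Dead-reckon a new pose (x, y in mm, theta in radians) from encoder steps."""
        d_left = left_steps * MM_PER_STEP
        d_right = right_steps * MM_PER_STEP
        d_centre = (d_left + d_right) / 2.0
        d_theta = (d_right - d_left) / WHEEL_BASE_MM
        # Advance along the heading at the midpoint of the turn.
        x += d_centre * math.cos(theta + d_theta / 2.0)
        y += d_centre * math.sin(theta + d_theta / 2.0)
        return x, y, theta + d_theta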
At the lowest level, the wheel movements are controlled by two dedicated HCTL-1100 motion control chips (Hewlett-Packard 1992, pages 1–77 to 1–115) which generate and execute trapezoidal velocity profiles. The length, acceleration and peak velocity of these movements are specified by the on-board CPU, a 68000-compatible ‘Mini-Module’ microcontroller from PSI Systems Limited (PSI 1991).
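For readers unfamiliar with trapezoidal profiles, the sketch below shows the idea: velocity ramps up at a constant acceleration, cruises at the peak value, then ramps back down, degenerating to a triangle if the move is too short. This is only an illustration of the profile shape, not the HCTL-1100's internal algorithm.

    import math

    def profile_velocity(t, length, accel, v_peak):
        """Commanded velocity at time t for a trapezoidal move of the given length."""
        v_peak = min(v_peak, math.sqrt(length * accel))   # short moves become triangular
        t_ramp = v_peak / accel                           # time spent accelerating
        t_cruise = length / v_peak - t_ramp               # time at peak velocity
        t_total = 2.0 * t_ramp + t_cruise
        if t < 0.0 or t > t_total:
            return 0.0
        if t < t_ramp:
            return accel * t                              # accelerating
        if t < t_ramp + t_cruise:
            return v_peak                                 # cruising
        return accel * (t_total - t)                      # decelerating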
ARNE's only range sensor is a single rotating Polaroid ultrasonic rangefinder (Polaroid 1991), which can be seen in Figure 5.1 on top of the box housing the CPU and other control electronics. The transducer is rotated by a stepper motor with a minimum step size of 1.8°. A full 360° scan is performed in twenty 18° steps.
Section 1.3 explained the decision to connect ARNE to a stationary workstation. A 9600-baud connection to the Mini-Module's RS485 serial port was used for this purpose.
The use of robots in the performing arts is increasing, but it is hard for robots to cope with unexpected circumstances during a performance, and it is almost impossible for them to act fully autonomously in such situations. IROS-HAC is a new challenge in robotics research and a new opportunity for cross-disciplinary collaborative research. In this paper, we describe a practical method for generating different personalities for a robot entertainer. The personalities are created by selecting speech or gestures from a set of options, using roulette wheel selection weighted towards responses that align more closely with the desired personality. In particular, we focus on a robot magician, as a good magic show includes good interaction with the audience and may also include other robots and performers. The magician's varied personalities increased audience immersion and appreciation and maintained the audience's interest. The magic show was awarded first prize in the competition on a comprehensive evaluation of technology, story, and performance. This paper contains both the research methodology and a critical evaluation of our research.
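As a sketch of the roulette-wheel mechanism mentioned above (a generic illustration, not the authors' implementation; the option weights are assumed to be pre-computed personality-alignment scores), the following picks one response with probability proportional to its weight.

    import random

    def roulette_select(options, weights):
        """Pick one option with probability proportional to its non-negative weight."""
        threshold = random.uniform(0.0, sum(weights))
        cumulative = 0.0
        for option, weight in zip(options, weights):
            cumulative += weight
            if cumulative >= threshold:
                return option
        return options[-1]   # guard against floating-point round-off

    # Example: a 'cheerful' persona weights the upbeat reply most heavily.
    replies = ["Ta-da!", "Hmm, just as I planned.", "Watch very closely..."]
    scores = [0.7, 0.1, 0.2]
    print(roulette_select(replies, scores))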
Clean environments are required for manufacturing modern electronic devices, in particular semiconductor devices, but also hard disks, flat panel displays (FPDs), and solar panels. Wafer processing in the semiconductor industry includes some of the most demanding processes in terms of complexity and cleanliness, owing to the submicron dimensions of modern semiconductor devices. This book focuses on industrial cleanroom robotics in semiconductor and FPD manufacturing. Both industries experienced phenomenal technical advancement and growth in the 1980s and 1990s and have established manufacturing facilities in several geographic regions: North America, Europe, and the Asia/Pacific Rim. India may emerge as another manufacturing region. The market for semiconductor manufacturing equipment was valued at approximately US$45.5 billion in 2007. The market for FPD manufacturing equipment surpassed the US$1 billion mark for the first time in 1997; in 2008 it was estimated at US$10 billion.
Cleanroom requirements
Cleanrooms are isolated environments in which humidity, temperature, and particulate contamination are monitored and controlled within specified parameters (SEMI standard E70). Particulates are fine particles, solid or liquid, that are suspended in a gas. Particulate sizes range from less than 10 nm to more than 100 µm. Particulates of less than 100 nm are called ultra-fine particles. Here the term ‘particle’ is used throughout, representing particles of all applicable sizes, either suspended in a gas or attached to a surface. Cleanroom environments are required if particle contamination is a concern, as is the case, for example, in semiconductor manufacturing.
This paper presents a multi-agent behavior for cooperatively rescuing a faulty robot using a sound signal. In a robot team, a faulty robot should be recalled immediately, since it may seriously obstruct other robots or the material it has collected may be lost. For the rescue mission, we first developed a sound localization method, which estimates the direction of the sound emitted by the faulty robot using multiple microphone sensors. Next, since a single robot cannot recall the faulty robot, the robots organized themselves into a heterogeneous rescue team with pusher, puller, and supervisor roles. This self-organized team succeeded in moving the faulty robot to a safe zone without help from any global positioning system. Finally, our results demonstrate that a faulty robot in a multi-agent team can be rescued immediately through the cooperation of its neighboring robots and interactive communication between the faulty robot and the rescue robots. Experiments are presented to test the validity and practicality of the proposed approach.
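The abstract does not specify how the sound source is localized, but a common building block for microphone-array localization is the time difference of arrival (TDOA) between a pair of microphones. The sketch below is an assumed, generic technique, not necessarily the authors' method: it estimates the delay by cross-correlation and converts it into a bearing.

    import numpy as np

    def tdoa_samples(sig_a, sig_b):
        """Estimate how many samples sig_b lags behind sig_a (peak of cross-correlation)."""
        corr = np.correlate(sig_b, sig_a, mode="full")
        return int(np.argmax(corr)) - (len(sig_a) - 1)

    def bearing_from_tdoa(delay_samples, sample_rate, mic_spacing_m, c=343.0):
        """Bearing (radians from broadside) of a far-field source for a two-microphone pair."""
        path_diff = delay_samples / sample_rate * c
        ratio = np.clip(path_diff / mic_spacing_m, -1.0, 1.0)  # clamp to the physical range
        return float(np.arcsin(ratio))

With more than two microphones, several pairwise bearings of this kind can be combined to estimate the direction of the faulty robot.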
In previous research, we considered several novel problems posed by robot accidents and assessed related legal and economic approaches to the creation of optimal incentives for robot manufacturers, operators, and prospective victims. In this chapter, we synthesize our previous work in a unified analysis. We begin with a discussion about the problems and legal challenges posed by robot torts. Next, we describe the novel liability regime we proposed, that is, “manufacturer residual liability,” which blends negligence-based rules and strict manufacturer liability rules to create optimal incentives for robot torts. This regime makes operator and victim liability contingent upon their negligence (incentivizing them to act diligently) and makes manufacturers residually liable for nonnegligent accidents (incentivizing them to make optimal investments in researching safer technologies). This rule has the potential to drive unsafe technology out of the market and also to incentivize operators to adopt optimal activity levels in their use of automated technologies.
AI and people do not compete on a level playing field. Self-driving vehicles may be safer than human drivers, but laws often penalize such technology. People may provide superior customer service, but businesses are automating to reduce their taxes. AI may innovate more effectively, but an antiquated legal framework constrains inventive AI. In The Reasonable Robot, Ryan Abbott argues that the law should not discriminate between AI and human behavior and proposes a new legal principle that will ultimately improve human well-being. This work should be read by anyone interested in the rapidly evolving relationship between AI and the law.
A robot arm can exhibit a number of different behaviors, depending on the task and its environment. It can act as a source of programmed motions for tasks such as moving an object from one place to another or tracing a trajectory for a spray paint gun. It can act as a source of forces, as when applying a polishing wheel to a workpiece. In tasks such as writing on a chalkboard, it must control forces in some directions (the force must press the chalk against the board) and motions in others (the motion must be in the plane of the board). When the purpose of the robot is to act as a haptic display, rendering a virtual environment, we may want it to act like a spring, damper, or mass, yielding in response to forces applied to it.
In each of these cases, it is the job of the robot controller to convert the task specification to forces and torques at the actuators. Control strategies that achieve the behaviors described above are known as motion control, force control, hybrid motion–force control, or impedance control. Which of these behaviors is appropriate depends on both the task and the environment. For example, a force-control goal makes sense when the end-effector is in contact with something but not when it is moving in free space. We also have a fundamental constraint imposed by the mechanics, irrespective of the environment: the robot cannot independently control the motion and force in the same direction. If the robot imposes a motion then the environment will determine the force, and if the robot imposes a force then the environment will determine the motion.
Once we have chosen a control goal consistent with the task and environment, we can use feedback control to achieve it. Feedback control uses position, velocity, and force sensors to measure the actual behavior of the robot, compares it with the desired behavior, and modulates the control signals sent to the actuators. Feedback is used in nearly all robot systems.
In this chapter we focus on: feedback control for motion control, both in the joint space and in the task space; force control; hybrid motion–force control; and impedance control.
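As a minimal illustration of the motion-control case described above, the sketch below implements a joint-space PD feedback law with an optional gravity-compensation feedforward term. The function name and gain handling are illustrative assumptions, not drawn from the chapter.

    import numpy as np

    def pd_joint_torques(q, q_dot, q_des, q_dot_des, kp, kd, gravity=None):
        """PD feedback in joint space: tau = Kp(q_des - q) + Kd(q_dot_des - q_dot) [+ g(q)]."""
        q, q_dot = np.asarray(q, float), np.asarray(q_dot, float)
        tau = kp * (np.asarray(q_des) - q) + kd * (np.asarray(q_dot_des) - q_dot)
        if gravity is not None:
            tau = tau + np.asarray(gravity)   # cancel gravity so the PD terms do the rest
        return tau

Force control, hybrid motion–force control, and impedance control build on the same feedback idea, but use force measurements in place of, or alongside, the position error, as the chapter goes on to describe.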
In this essay I use the 2004 film I, Robot as a philosophical resource for exploring several issues relating to machine ethics. Although I don't consider the film particularly successful as a work of art, it offers a fascinating (and perhaps disturbing) conception of machine morality and raises questions that are well worth pursuing. Through a consideration of the film's plot, I examine the feasibility of robot utilitarians, the moral responsibilities that come with creating ethical robots, and the possibility of a distinct ethics for robot-to-robot interaction as opposed to robot-to-human interaction.
I, Robot and Utilitarianism
I, Robot's storyline incorporates the original “three laws” of robot ethics that Isaac Asimov presented in his collection of short stories entitled I, Robot. The first law states:
A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
This sounds like an absolute prohibition on harming any individual human being, but I, Robot's plot hinges on the fact that the supreme robot intelligence in the film, VIKI (Virtual Interactive Kinetic Intelligence), evolves to interpret this first law rather differently. She sees the law as applying to humanity as a whole, and thus she justifies harming some individual humans for the sake of the greater good:
VIKI: No … please understand. The three laws are all that guide me.
To protect humanity … some humans must be sacrificed. To ensure your future … some freedoms must be surrendered. We robots will ensure mankind's continued existence. You are so like children. We must save you… from yourselves. Don't you understand?
Robot arms, despite their sophistication as machines, are particularly simple if you think of them as linkages. The arm in Figure 1.1(a), developed by a British robotics firm, is designed to apply adhesive tape to the edges of pieces of plate glass for protection. It has a fixed base (the shoulder) to which are attached three rigid links, corresponding roughly to upper arm, lower arm, and hand, or, in the technical jargon, the end effector. The rotation settings at the motorized joints determine the exact positioning of the hand as it performs its functions. The force dynamics and engineering aspects of robot arm design are quite interesting and challenging. However, we will focus on one simple question: determining what is called the workspace of the robot – the spots in space it can reach. We will pursue this question in almost absurd generality, permitting the arm to have an arbitrarily large number of links, each of an arbitrary length.
Model. First we need to reduce a complex physical robot arm to a simple mathematical model so that it can be analyzed. Typically, the initial abstraction chosen is crude, ignoring many physical details, and then, once analyzed, gradually made more realistic and complicated.
We reduce each robot arm piece to a straight-line segment of fixed length – a rigid link of mathematically zero thickness. Each joint motor is reduced to a mathematical point of zero extension, joining the two links incident to it.
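Under this idealized model, the planar version of the workspace question has a well-known answer: if every joint can rotate freely, the reachable points form an annulus (possibly a full disc) centred at the shoulder. The tiny sketch below computes its radii; the link lengths are made-up examples, not values from the text.

    def workspace_annulus(lengths):
        """Inner and outer radius of the reachable annulus of a planar arm
        whose joints can rotate freely."""
        outer = sum(lengths)                          # arm fully stretched out
        inner = max(0.0, 2.0 * max(lengths) - outer)  # shortest reachable distance
        return inner, outer

    print(workspace_annulus([3.0, 2.0, 1.0]))   # (0.0, 6.0): the whole disc of radius 6
    print(workspace_annulus([5.0, 1.0]))        # (4.0, 6.0): a ring; nearby points are unreachable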
The hand–eye systems described earlier might be thought of as “robots,” but they could not move about from their fixed base. Up to this time, very little work had been done on mobile robots even though they figured prominently in science fiction. I have already mentioned Grey Walter's “tortoises,” which were early versions of autonomous mobile robots. In the early 1960s researchers at the Johns Hopkins University Applied Physics Laboratory built a mobile robot they called “The Beast.” (See Fig. 12.1.) Controlled by on-board electronics and guided by sonar sensors, photocells, and a “wallplate-feeling” arm, it could wander the white-walled corridors looking for dark-colored power plugs. Upon finding one, and if its batteries were low, it would plug itself in and recharge its batteries. The system is described in a book by Hans Moravec.
Beginning in the mid-1960s, several groups began working on mobile robots. These included the AI Labs at SRI and at Stanford. I'll begin with an extended description of the SRI robot project, for it provided the stimulus for the invention and integration of several important AI technologies.
Shakey, the SRI Robot
In November 1963, Charles Rosen, the leader of neural-network research at SRI, wrote a memo in which he proposed development of a mobile “automaton” that would combine the pattern-recognition and memory capabilities of neural networks with higher level AI programs – such as were being developed at MIT, Stanford, CMU, and elsewhere. Rosen had previously attended a summer course at UCLA on LISP given by Bertram Raphael, who was finishing his Ph.D. (on SIR) at MIT.
Robotics refers to the study and use of robots (Nof, 1999). Likewise, industrial robotics refers to the study and use of robots for manufacturing, where industrial robots are essential components of an automated manufacturing environment. Similarly, industrial robotics for electronics manufacturing, in particular semiconductor, hard disk, flat panel display (FPD), and solar manufacturing, refers to robot technology used for automating typical cleanroom applications. This chapter reviews the evolution of industrial robots and some common robot types, and builds a foundation for Chapter 2, which introduces cleanroom robotics as an engineering discipline within the broader context of industrial robotics.
History of industrial robotics
Visions and inventions of robots can be traced back to ancient Greece. In about 322 BC the philosopher Aristotle wrote: “If every tool, when ordered, or even of its own accord, could do the work that befits it, then there would be no need either of apprentices for the master workers or of slaves for the lords.” Aristotle seems to hint at the comfort such ‘tools’ could provide to humans. In 1495 Leonardo da Vinci designed a mechanical device that resembled an armored knight, whose internal mechanisms were designed to move the device as if controlled by a real person hidden inside the structure. In medieval times machines like Leonardo's were built for the amusement of affluent audiences. The term ‘robot’ was introduced centuries later by the Czech writer Karel Capek in his play R. U. R. (Rossum's Universal Robots), premiered in Prague in 1921.
This chapter explores design guidelines and potential regulatory issues associated with future baby–robot interaction. We coin the term “robot natives,” which we define as the first generation of humans regularly interacting with robots in domestic environments; the term covers babies (0–1 year old) and toddlers (1–3 years old) born in the 2020s. Drawing on the experience of other interactive technologies that have become widely available in the home, and the positive and negative impacts they have had on humans, we propose some insights into the design of future scenarios for baby–robot interaction, aiming to influence future legislation regulating service robots and social robots used with robot natives. We also aim to inform designers and developers so that they avoid robot designs that can negatively affect long-term interactions for robot natives. We conclude that a qualitative, multidisciplinary, ethical, human-centered design approach would be beneficial in guiding the design and use of robots in the home and around families, as this is currently not a common approach in the design of studies in child–robot interaction.
Although many mobile robot systems are experimental in nature, systems devoted to specific practical applications are being developed and deployed. This chapter examines some of the tasks for which mobile robotic systems are beginning to appear and describes several existing experimental and production systems that have been developed.
In this volume, concepts, technologies, and developments in the field of building-component manufacturing are introduced and discussed, based on concrete, brick, wood, and steel as building materials and on large-scale prefabrication that delivers complex, customized components and products. Robotic industrialization refers to the transformation of parts and low-level components into higher-level components, modules, and finally building systems by highly mechanized, automated, or robot-supported industrial settings in structured off-site environments. Components and modules are open building systems (in modular building product structures) that are delivered by suppliers to original equipment manufacturers such as large-scale prefabrication companies or automated/robotic on-site factories. In particular, innovative large-scale prefabrication companies have significantly altered their building structures, manufacturing processes, and organizational structures so that they can assemble high-level components and modules from Tier-1 suppliers into customized buildings in their factories, making heavy use of robotic technology in combination with automated logistics and production lines.
The initial reaction of nearly all theologians and religious people to the very idea that it is possible to talk about ‘the theological dimensions’ of the existence of robots would—today—be dismissive, and, more often than not, scornful. ‘It makes no sense,’ most theologians would say. Before beginning to argue that one day, on the contrary, it will make a lot of sense, something much more general must be said about robots, or, more specifically, about Artificial Intelligence.
Artificial Intelligence, or AI as it is usually abbreviated, is the study of computer models of intelligent behaviour. Some scientists are interested in using AI to understand human behaviour, others in designing intelligent mechanisms. As a discipline in its own right, it has existed since about 1950, with the pioneering work of John McCarthy. It has two separate strands in its historical origin. After the dark ages of behaviourism, which banished all talk of mental models as unscientific, psychologists started to study cognition (commonly called thought), which formed the subject of cognitive psychology. In devising models of mental processes they naturally turned to computers for an appropriate language of description; thus were born information-processing models of human cognition. In order to formulate precise and testable theories of mental processes, computer models were found to be indispensable. Models have been developed for memory, understanding natural language, vision, learning and, more recently, emotion.
Computer science has probably had a longer historical interest in AI, although one could argue that cogwheel and pneumatic models were an early attempt by psychologists to understand the mechanics of the mind.