One of the most amazing aspects of brain function is that free will and consciousness emerge from the simple elemental functions of neurons. How do a hundred billion neurons produce global functions such as intention, mind, and consciousness? Just as gathering a billion people does not in itself make a civilized society, the brain is not merely a collection of neurons; there must be rules of relation and principles of action. I have been interested for many years in the neurodynamics of situated cognition and contextual decision making, particularly focusing on synchronization mechanisms in the brain. Neural synchronization is well known in spinal motor coordination (e.g. central pattern generators, CPGs), in circadian rhythms, and in EEG recordings of human brain activity during mental tasks. Synchronized population activity plays functional roles in memory formation and in the context-dependent use of personal experiences in animal models. However, those experiments and models have dealt with a specific brain circuit under fixed conditions, or at least less attention has been given to an embodied view, in which the brain, body, and environment form a single closed loop. The embodied view is the natural setting for a brain functioning in the real world. I have recently become interested in building an online, on-demand experimental platform that links a robotic body with its neurodynamics. This platform is implemented on a remote computer and gives us the advantage of studying brain functions in a dynamic environment, and of offering qualitative analyses of behavioral time, in contradistinction to neuronal time or mental time. This chapter relates past work to present work in an informal way that might be uncommon in journal papers. Taking advantage of this opportunity, I will use informal speech and explanations, as well as personal anecdotes, to guide the reader through important trends and perspectives in this topic. 
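The synchronization phenomena mentioned above can be illustrated with a standard toy model: the Kuramoto model of coupled phase oscillators, in which each oscillator's phase is pulled toward its neighbors'. This is a minimal sketch for intuition only, not the platform described in the chapter; all names (`kuramoto_step`, `order_parameter`) and parameter values are our own choices.

```python
import math
import random

def kuramoto_step(phases, omegas, K, dt):
    """One Euler step of the Kuramoto model: each oscillator drifts at its
    natural frequency and is pulled toward the mean phase of the population."""
    n = len(phases)
    new_phases = []
    for i in range(n):
        coupling = sum(math.sin(phases[j] - phases[i]) for j in range(n)) / n
        new_phases.append(phases[i] + dt * (omegas[i] + K * coupling))
    return new_phases

def order_parameter(phases):
    """Degree of synchrony r in [0, 1]: ~0 for incoherent phases, 1 for full lock."""
    n = len(phases)
    re = sum(math.cos(p) for p in phases) / n
    im = sum(math.sin(p) for p in phases) / n
    return math.hypot(re, im)

random.seed(0)
n = 20
phases = [random.uniform(0.0, 2.0 * math.pi) for _ in range(n)]  # incoherent start
omegas = [random.gauss(1.0, 0.1) for _ in range(n)]              # similar frequencies
r0 = order_parameter(phases)
for _ in range(1500):
    phases = kuramoto_step(phases, omegas, K=2.0, dt=0.01)
r1 = order_parameter(phases)  # coupling well above threshold, so r1 approaches 1
```

With the coupling strength K well above the synchronization threshold for this narrow frequency distribution, an initially incoherent population locks into near-perfect synchrony, which is the population-level effect the EEG and CPG literature refers to.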
Section 12.1 introduces artificial systems that make a commitment to biology, and argues for biologically inspired robotics from the viewpoint of living systems. Section 12.2 overviews the multiple memory systems of the brain in terms of conscious awareness. Section 12.3 describes robotic methodologies that use the neural dynamics of oscillatory components to enable online decision making in cooperation with involuntary motor control, and discusses requirements for future work. Section 12.4 summarizes key concepts and future perspectives.
After several decades of developmental research on intelligent robotics in our lab, we began to focus on realizing mammalian adaptability functions for our upper-body humanoid robot ISAC (Intelligent Soft Arm Control), described in Kawamura et al. (2000, 2004). Mammalian adaptability is highly desirable in a robot, because mammals are singularly adaptable goal-directed agents: they learn from experience with a distinctive degree of flexibility and richness that assures goal accomplishment by a very high proportion of individuals. Most engineering solutions currently used in robot designs do not have this level of learning and adaptation. Thus, in the future, robot capability will be substantially advanced once robots can actively seek goal-directed experiences and learn new tasks in dynamic and challenging environments.
Seeking inspiration for how to achieve this goal, we look to the mammalian brain; in particular, to the structural and functional commonalities observed across mammalian species. From rodents to humans, mammals share many neural mechanisms and control processes relevant to adaptability. Mammals typically accomplish goals in a timely fashion, in situations ranging from the familiar to the new and challenging. Moreover, mammals learn how to function effectively with few innate capabilities and with little or no supervision of their learning. Although many gaps remain in our knowledge of what makes the human brain distinctively capable, enough seems to be known about the mammalian brain as a whole to inform architectural analysis and embodied modeling of mammalian brains.
Programming autonomous robots for unstructured and new environments has been a huge challenge. The various modules are difficult to program, and so is the coordination among modules and motors. Neuroanatomical studies have suggested that the brain uses similar mechanisms to coordinate different sensory modalities (e.g. visual and auditory) and different motor modalities (e.g. arms, legs, and the vocal tract). Autonomous mental development (AMD), as presented in this chapter, models the brain not only as an information processor (e.g. brain regions and their interconnections), but also in terms of the causality of its development (e.g. why each region does what it does), via sensorimotor interactions with the robot’s internal and external environments. The mechanisms of AMD suggest that the function of each brain region is not statically preset before birth by the genome, but is instead the emergent consequence of its interconnections with other brain regions through lifetime experience. The experience of interaction shapes not only what each region does, but also how different regions cooperate; the latter seems harder to program than a static function. As a general-purpose model of sensorimotor systems, this chapter describes the developmental program for the visuomotor system of a developmental robot. Based on brain-inspired mechanisms, the developmental program enables a network to wire itself and to adapt “on the fly” using bottom-up signals from sensors and top-down signals from externally supervised or self-supervised acting activities. These simple mechanisms are sufficient for the neuromorphic Where What Network 1 (WWN-1) to demonstrate small-scale but practical-grade performance on the two highly intertwined problems of vision, attention and recognition, in the presence of complex backgrounds.
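The interplay of bottom-up and top-down signals can be made concrete with a deliberately simplified sketch: a winner-take-all choice over candidate locations whose scores blend sensor-driven saliency with task-driven bias. This is an illustration of the general principle only, not the WWN-1 architecture; the function `attend` and the blending weight `alpha` are our own invented names.

```python
def attend(bottom_up, top_down, alpha=0.5):
    """Winner-take-all attention over candidate locations.

    bottom_up: sensor-driven saliency per location (e.g. feature responses).
    top_down:  task-driven relevance per location (e.g. "look for the cup").
    alpha:     how strongly bottom-up evidence dominates (1.0 = purely bottom-up).
    Returns the index of the location with the highest combined score.
    """
    scores = [alpha * b + (1.0 - alpha) * t for b, t in zip(bottom_up, top_down)]
    return max(range(len(scores)), key=scores.__getitem__)

saliency = [0.2, 0.9, 0.4]  # bottom-up: location 1 is visually most salient
bias     = [0.1, 0.0, 1.0]  # top-down: the task favors location 2
loc = attend(saliency, bias)  # blended score selects location 2
```

With a purely bottom-up setting (`alpha=1.0`) the salient distractor at location 1 would win instead; the top-down signal is what redirects attention toward the task-relevant location.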
Rats are superior to the most advanced robots when it comes to creating and exploiting spatial representations. A wild rat can have a foraging range of hundreds of meters, possibly kilometers, and yet the rodent can unerringly return to its home after each foraging mission, and return to profitable foraging locations at a later date (Davis et al., 1948). The rat runs through undergrowth and pipes with few distal landmarks, along paths where the visual, textural, and olfactory appearance constantly change (Hardy and Taylor, 1980; Recht, 1988). Despite these challenges the rat builds, maintains, and exploits internal representations of large areas of the real world throughout its two- to three-year lifetime. While algorithms exist that allow robots to build maps, the questions of how to maintain those maps and how to handle change in appearance over time remain open.
The robotic approach to map building has been dominated by algorithms that optimize the geometry of the map based on measurements of distances to features. These measurements are taken with range-measuring devices such as laser range finders or ultrasound sensors, and in some cases from estimates of depth based on visual information. The features are incorporated into the map based on previous readings of other features in view and on estimates of self-motion. The algorithms explicitly model the uncertainty in measurements of range and of self-motion, and use probability theory to find optimal solutions for the geometric configuration of the map features (Dissanayake et al., 2001; Thrun and Leonard, 2008). Some of the results from the application of these algorithms have been impressive, ranging from three-dimensional maps of large urban structures (Thrun and Montemerlo, 2006) to natural environments (Montemerlo et al., 2003).
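The probabilistic core of these mapping algorithms can be reduced to one operation: fusing two uncertain Gaussian estimates of the same feature's position by precision weighting, so that the more certain reading counts for more. The following is a minimal one-dimensional sketch of that update, not any particular SLAM system; the function name `fuse` and the numbers are ours.

```python
def fuse(mean_a, var_a, mean_b, var_b):
    """Precision-weighted fusion of two Gaussian estimates of the same quantity.

    The combined mean weights each estimate by the other's variance, and the
    combined variance is always smaller than either input variance.
    """
    w = var_b / (var_a + var_b)           # weight on estimate a
    mean = w * mean_a + (1.0 - w) * mean_b
    var = (var_a * var_b) / (var_a + var_b)
    return mean, var

# Two noisy range readings to the same landmark from a known robot pose:
est = (10.2, 1.0)            # first reading: 10.2 m, variance 1.0
est = fuse(*est, 9.8, 0.5)   # second, more precise reading: 9.8 m, variance 0.5
# The fused mean lies closer to the more precise reading, and the
# fused variance (1/3) is smaller than either measurement's variance.
```

Full SLAM systems apply this same principle jointly over all features and the robot pose, with correlated uncertainties, but the direction of every update is the one shown here.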
An increasing number of projects worldwide are investigating the possibility of including robots in assessment and therapy practices for individuals with autism. There are two major reasons for considering this possibility: the special interest of autistic people in robots and electronic tools, and the rapid developments in multidisciplinary studies on the nature of social interaction and on autism as atypical social behavior.
Several branches of the social sciences and neurosciences, which aim to understand the social brain, advocate the perspective that social behaviors (e.g. shared attention, turn taking, and imitation) have evolved as an additional functionality of a general sensorimotor system for action. The basic feature of this system is the existence of a common representation between perception for action and the action itself. An extended social brain system facilitates processing of emotional stimuli, empathy, and perspective taking.
We can easily manipulate a variety of objects with our hands. When exploring an object, we gather rich sensory information through both haptics and vision. The haptic and visual information obtained through such exploration is, in turn, key for realizing dexterous manipulation. Reproducing such codevelopment of sensing and adaptive/dexterous manipulation by a robotic hand is one of the ultimate goals of robotics, and further, it would be essential for understanding human object recognition and manipulation.
Although many robotic hands have been developed, their performance is far inferior to that of human hands. One reason for this gap may be differences in grasping strategies. Historically, research on robotic hands has mainly focused on pinching manipulation (e.g. Nagai and Yoshikawa, 1993), because the analysis is easy under point-contact conditions. Based on this analysis, roboticists applied control schemes using force/touch sensors at the fingertips (Kaneko et al., 2007; Liu et al., 2008). Since the contact points are restricted to the fingertips, it is easy for the robot to calculate how it grasps an object (e.g. a holding polygon) and how much force it should exert, based on friction analysis. However, the resulting grasp is very brittle, since a slip at just one of the contacting fingertips may lead to dropping the object.
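The friction analysis behind such fingertip grasps rests on the Coulomb friction cone: a point contact holds only while its tangential load stays within mu times the normal force. This is a minimal sketch of that condition and of the required grip force for an idealized symmetric two-finger pinch; the function names and values are illustrative, not from the cited work.

```python
def within_friction_cone(f_normal, f_tangential, mu):
    """Coulomb friction: a point contact does not slip iff |f_t| <= mu * f_n."""
    return f_normal > 0.0 and abs(f_tangential) <= mu * f_normal

def min_grip_force(weight, mu, n_fingers=2):
    """Minimum normal force per fingertip for an idealized symmetric pinch,
    where the object's weight is shared equally as the tangential load."""
    return weight / (mu * n_fingers)

# Holding a 1 kg object (~9.8 N) with two fingertips and mu = 0.5:
required = min_grip_force(9.8, 0.5, n_fingers=2)   # 9.8 N per fingertip
holds = within_friction_cone(10.0, 4.9, mu=0.5)    # squeezing at 10 N holds
```

The brittleness noted above is visible directly in the inequality: any disturbance that pushes one contact's tangential force outside its cone breaks the slip condition at that fingertip, and with only point contacts there is no remaining surface to catch the object.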
Within the past few decades, the nature of consciousness has become a central issue in neuroscience, and it is increasingly the focus of both theoretical and empirical work. Studying consciousness is vital to developing an understanding of human perception and behavior, of our relationships with one another, and of our relationships with other potentially conscious animals. Although the study of consciousness through the construction of artificial models is a recent innovation, the advantages of such an approach are clear. First, models allow us to investigate consciousness in ways that are currently not feasible using human subjects or other animals. Second, an artifact that exhibits the necessary and sufficient properties of consciousness may conceivably be the forerunner of a new and very useful class of neuromorphic robots.
A model of consciousness must take into account current theories of its biological bases. Although the field of artificial consciousness is a new one, it is striking how little attention has been given to modeling mechanisms. Instead, great – and perhaps undue – emphasis has been placed on purely phenomenological models. Many of these models are strongly reductionist in aim and fail to specify neural mechanisms.
The genesis for this book came about from a series of conversations, over a period of several years, between Jeff Krichmar and Hiro Wagatsuma. Initially, these conversations began when Krichmar was at The Neurosciences Institute in San Diego and Wagatsuma was at the Riken Brain Science Institute near Tokyo. They included discussions at each other’s institutes, several conversations and workshops at conferences, and an inspiring trip to a Robotics Exhibition at the National Museum of Nature and Science in Tokyo. In these conversations, we realized that we shared a passion for understanding the inner workings of the brain through computational neuroscience and embodied models. Moreover, we realized that: (1) there was a small, but growing, community of like-minded individuals around the world, and (2) there was a need to publicize this line of research to attract more scientists to this young field. Therefore, we contacted many of the top researchers around the world in Neuromorphic and Brain-Based Robotics. The requirements were that the researchers should be interested in some aspect of the brain sciences, and were using robotic devices as an experimental tool to further our understanding of the brain. We have been thrilled at the positive response. We know we have not included everyone in this field and apologize for any omissions. However, we feel that the contributed chapters in this book are representative of the most important areas in this line of research, and that they represent the state-of-the-art in the field at this time. We sincerely hope this book will inspire and attract a new generation of neuromorphic and brain-based roboticists.
The ethical challenges of robot development were dramatically thrust onto center stage with Asimov’s book I, Robot in 1950, where the three “Laws of Robotics” first appeared in a short story. The “laws” assume that robots are (or will be) capable of perception and reasoning and will have intelligence comparable to a child, if not better, and in addition that they will remain subservient to humans. Thus, the first law reads:
“A robot may not injure a human being, or, through inaction, allow a human being to come to harm.”
Clearly, in these days when military robots are used to kill humans, this law is (perhaps regrettably) obsolete. However, it still raises fundamental questions about the relationship between humans and robots, especially when the robots are capable of exerting lethal force. Asimov’s law also suffers from the complexities of designing machines with a sense of morality. As one of several possible approaches to control their behavior, robots could be equipped with specialized software that would ensure that they conform to the “Laws of War” and the “Rules of Engagement” of a particular conflict. After realistic simulations and testing, such software controls perhaps would not prevent all unethical behaviors, but they would ensure that robots behave at least as ethically as human soldiers do (Arkin, 2009) (though this is still an inadequate solution for many critics).
Today, military robots are autonomous in navigation capabilities, but most depend on remote humans to “pull the trigger” which releases a missile or other weapon. Research in neuromorphic and brain-based robotics may hold the key to significantly more advanced artificial intelligence and robotics, perhaps to the point where we would entrust ordinary attack decisions to robots. But what are the moral issues we ought to consider before giving machines the ability to make such life-or-death decisions?
The aim of this chapter is to present an ethical landscape for humans and autonomous robots in the future of a physicalistic world, touching mainly on a framework for robot ethics rather than on the concrete ethical problems possibly caused by recent robot technologies. It may be difficult to find sufficient answers to ethical problems such as those raised by future military robots unless we understand exactly what the autonomy of autonomous robots implies for robot ethics. This chapter presupposes that this “autonomy” should be understood as “being able to make intentional decisions from the internal state, and to doubt and reject any rule,” a definition which requires robots to have at least a minimal folk psychology in terms of desire and belief. And if any agent has a minimal folk psychology, we would have to say that it potentially has the same “rights and duties” as we humans with a fully fledged folk psychology, because our ethics would cover any agent insofar as it is regarded as having a folk psychology – even in Daniel C. Dennett’s intentional stance (Dennett, 1987). We can see the lack of autonomy in this sense in Asimov’s famous laws (Asimov, 2000), cited by Bekey et al. in Chapter 14 of this volume, which could be interpreted as the rules that any autonomous robots of the future would have to obey (see Section 14.3).
The analysis of particular telencephalic systems has led to derivation of algorithmic statements of their operation, which have grown to include communicating systems from sensory to motor and back. Like the brain circuits from which they are derived, these algorithms (e.g. Granger, 2006) perform and learn from experience. Their perception and action capabilities are often initially tested in simulated environments, which are more controllable and repeatable than robot tests, but it is widely recognized that even the most carefully devised simulated environments typically fail to transfer well to real-world settings.
Robot testing raises the specter of engineering requirements and programming minutiae, as well as sheer cost, and lack of standardization of robot platforms. For brain-derived learning systems, the primary desideratum of a robot is not that it have advanced pinpoint motor control, nor extensive scripted or preprogrammed behaviors. Rather, if the goal is to study how the robot can acquire new knowledge via actions, sensing results of actions, and incremental learning over time, as children do, then relatively simple motor capabilities will suffice when combined with high-acuity sensors (sight, sound, touch) and powerful onboard processors.
This chapter discusses how cognitive developmental robotics (CDR) can bring about a paradigm shift in science and technology. A synthetic approach is revisited as a candidate for this paradigm shift, and CDR is reviewed from this viewpoint. A transdisciplinary approach appears to be a necessary condition, and how to represent and design “subjectivity” seems to be an essential issue.
It is no wonder that new scientific findings depend on the most advanced technologies. A typical example is brain-imaging technologies such as fMRI, PET, EEG, and NIRS, which have been developed to expand the observation of neural activity from static local images to dynamic, global behavior, and have therefore been revealing new mysteries of brain functionality. Such advanced technologies are presumed to be mere supporting tools for biological analysis, but could they also be a means of inventing new science?
From hardware and software to kernels and envelopes
At the beginning of robotics research, robots were seen as physical platforms on which different behavioral programs could be run, similar to the hardware and software parts of a computer. However, recent advances in developmental robotics allow us to consider a reversed paradigm, in which a single piece of software, called a kernel, is capable of exploring and controlling many different sensorimotor spaces, called envelopes. In this chapter, we review studies we have previously published on kernels and envelopes, retrace the history of this concept shift, and discuss its consequences for robotic design as well as for developmental psychology and the brain sciences.