An increasing number of projects worldwide are investigating the possibility of including robots in assessment and therapy practices for individuals with autism. There are two major reasons for considering this possibility: the special interest of autistic people in robots and electronic tools, and the rapid developments in multidisciplinary studies on the nature of social interaction and on autism as atypical social behavior.
Several branches of the social sciences and neurosciences, which aim to understand the social brain, advocate the perspective that social behaviors (e.g. shared attention, turn taking, and imitation) have evolved as an additional functionality of a general sensorimotor system for action. The basic feature of this system is the existence of a common representation between perception for action and the action itself. An extended social brain system facilitates processing of emotional stimuli, empathy, and perspective taking.
We can easily manipulate a variety of objects with our hands. When exploring an object, we gather rich sensory information through both haptics and vision. The haptic and visual information obtained through such exploration is, in turn, key for realizing dexterous manipulation. Reproducing such codevelopment of sensing and adaptive/dexterous manipulation by a robotic hand is one of the ultimate goals of robotics, and further, it would be essential for understanding human object recognition and manipulation.
Although many robotic hands have been developed, their performance is far inferior to that of human hands. One reason for this performance difference may be differences in grasping strategies. Historically, research on robotic hands has mainly focused on pinching manipulation (e.g. Nagai and Yoshikawa, 1993) because the analysis was easy under point-contact conditions. Based on this analysis, roboticists applied control schemes using force/touch sensors at the fingertips (Kaneko et al., 2007; Liu et al., 2008). Since the contact points are restricted to the fingertips, it is easy for the robot to calculate how it grasps an object (e.g. a holding polygon) and how much force it should exert, based on friction analysis. However, the resultant grasping is very brittle, since a slip at just one of the contacting fingertips may lead to dropping the object.
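The friction analysis mentioned above can be illustrated with a minimal sketch (my own, not from the chapter): under a Coulomb friction model, a fingertip contact holds only while the tangential load stays inside the friction cone, which directly gives the minimum grip force for a two-fingered pinch. The function names and numbers are illustrative.

```python
# Minimal Coulomb-friction sketch for a two-fingered pinch grasp.
# All names and values here are illustrative assumptions.

def min_grip_force(weight, mu, n_fingers=2):
    """Smallest normal force per fingertip that prevents slip.

    Each fingertip carries weight / n_fingers of tangential load,
    and Coulomb friction supports at most mu * normal_force of it.
    """
    return weight / (n_fingers * mu)

def slips(normal_force, tangential_force, mu):
    """True if the contact violates the friction cone |T| <= mu * N."""
    return abs(tangential_force) > mu * normal_force

# A 1 kg object (~9.8 N) pinched between two fingertips with mu = 0.5:
w = 9.8
n_min = min_grip_force(w, mu=0.5)          # 9.8 N per fingertip
print(n_min)
print(slips(n_min * 0.9, w / 2, mu=0.5))   # gripping too lightly -> slip
```

The brittleness noted above is visible in the model: the grasp fails as soon as any single contact leaves its friction cone.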
Within the past few decades, the nature of consciousness has become a central issue in neuroscience, and it is increasingly the focus of both theoretical and empirical work. Studying consciousness is vital to developing an understanding of human perception and behavior, of our relationships with one another, and of our relationships with other potentially conscious animals. Although the study of consciousness through the construction of artificial models is a recent innovation, the advantages of such an approach are clear. First, models allow us to investigate consciousness in ways that are currently not feasible using human subjects or other animals. Second, an artifact that exhibits the necessary and sufficient properties of consciousness may conceivably be the forerunner of a new and very useful class of neuromorphic robots.
A model of consciousness must take into account current theories of its biological bases. Although the field of artificial consciousness is a new one, it is striking how little attention has been given to modeling mechanisms. Instead, great – and perhaps undue – emphasis has been placed on purely phenomenological models. Many of these models are strongly reductionist in aim and fail to specify neural mechanisms.
This book grew out of a series of conversations, over a period of several years, between Jeff Krichmar and Hiro Wagatsuma. Initially, these conversations began when Krichmar was at The Neurosciences Institute in San Diego and Wagatsuma was at the Riken Brain Science Institute near Tokyo. They included discussions at each other’s institutes, several conversations and workshops at conferences, and an inspiring trip to a Robotics Exhibition at the National Museum of Nature and Science in Tokyo. In these conversations, we realized that we shared a passion for understanding the inner workings of the brain through computational neuroscience and embodied models. Moreover, we realized that: (1) there was a small, but growing, community of like-minded individuals around the world, and (2) there was a need to publicize this line of research to attract more scientists to this young field. Therefore, we contacted many of the top researchers around the world in Neuromorphic and Brain-Based Robotics. The requirements were that the researchers should be interested in some aspect of the brain sciences, and be using robotic devices as an experimental tool to further our understanding of the brain. We have been thrilled at the positive response. We know we have not included everyone in this field and apologize for any omissions. However, we feel that the contributed chapters in this book are representative of the most important areas in this line of research, and that they represent the state of the art in the field at this time. We sincerely hope this book will inspire and attract a new generation of neuromorphic and brain-based roboticists.
The ethical challenges of robot development were dramatically thrust onto center stage with Asimov’s book I, Robot in 1950, where the three “Laws of Robotics” first appeared in a short story. The “laws” assume that robots are (or will be) capable of perception and reasoning and will have intelligence comparable to a child, if not better, and in addition that they will remain subservient to humans. Thus, the first law reads:
“A robot may not injure a human being, or, through inaction, allow a human being to come to harm.”
Clearly, in these days when military robots are used to kill humans, this law is (perhaps regrettably) obsolete. However, it still raises fundamental questions about the relationship between humans and robots, especially when the robots are capable of exerting lethal force. Asimov’s law also suffers from the complexities of designing machines with a sense of morality. As one of several possible approaches to control their behavior, robots could be equipped with specialized software that would ensure that they conform to the “Laws of War” and the “Rules of Engagement” of a particular conflict. After realistic simulations and testing, such software controls perhaps would not prevent all unethical behaviors, but they would ensure that robots behave at least as ethically as human soldiers do (Arkin, 2009) (though this is still an inadequate solution for many critics).
Today, military robots are autonomous in navigation capabilities, but most depend on remote humans to “pull the trigger” which releases a missile or other weapon. Research in neuromorphic and brain-based robotics may hold the key to significantly more advanced artificial intelligence and robotics, perhaps to the point where we would entrust ordinary attack decisions to robots. But what are the moral issues we ought to consider before giving machines the ability to make such life-or-death decisions?
The aim of this chapter is to present an ethical landscape for humans and autonomous robots in the future of a physicalistic world, touching mainly on a framework of robot ethics rather than on the concrete ethical problems possibly caused by recent robot technologies. It might be difficult to find sufficient answers to such ethical problems as those posed by future military robots unless we understand what the autonomy in autonomous robots exactly implies for robot ethics. This chapter presupposes that this “autonomy” should be understood as “being able to make intentional decisions from the internal state, and to doubt and reject any rule,” a definition which requires robots to have at least a minimal folk psychology in terms of desire and belief. And if any agent has a minimal folk psychology, we would have to say that it potentially has the same “rights and duties” as we humans with a fully fledged folk psychology, because our ethics would cover any agent so far as it is regarded as having a folk psychology – even in Daniel C. Dennett’s intentional stance (Dennett, 1987). We can see the lack of autonomy in this sense in Asimov’s famous laws (Asimov, 2000), cited by Bekey et al. in Chapter 14 of this volume, which could be interpreted as the rules that any autonomous robots of the future would have to obey (see Section 14.3).
The analysis of particular telencephalic systems has led to derivation of algorithmic statements of their operation, which have grown to include communicating systems from sensory to motor and back. Like the brain circuits from which they are derived, these algorithms (e.g. Granger, 2006) perform and learn from experience. Their perception and action capabilities are often initially tested in simulated environments, which are more controllable and repeatable than robot tests, but it is widely recognized that even the most carefully devised simulated environments typically fail to transfer well to real-world settings.
Robot testing raises the specter of engineering requirements and programming minutiae, as well as sheer cost, and lack of standardization of robot platforms. For brain-derived learning systems, the primary desideratum of a robot is not that it have advanced pinpoint motor control, nor extensive scripted or preprogrammed behaviors. Rather, if the goal is to study how the robot can acquire new knowledge via actions, sensing results of actions, and incremental learning over time, as children do, then relatively simple motor capabilities will suffice when combined with high-acuity sensors (sight, sound, touch) and powerful onboard processors.
This chapter discusses how cognitive developmental robotics (CDR) can bring about a paradigm shift in science and technology. A synthetic approach is revisited as a candidate for this paradigm shift, and CDR is reviewed from this viewpoint. A transdisciplinary approach appears to be a necessary condition, and how to represent and design “subjectivity” emerges as an essential issue.
It is no wonder that new scientific findings depend on the most advanced technologies. A typical example is brain-imaging technologies such as fMRI, PET, EEG, and NIRS, which have expanded the observation of neural activity from static local images to dynamic, global behavior, and have thereby been revealing new mysteries of brain functionality. Such advanced technologies are usually presumed to be mere supporting tools for biological analysis, but could they themselves become a means of inventing new science?
From hardware and software to kernels and envelopes
At the beginning of robotics research, robots were seen as physical platforms on which different behavioral programs could be run, much like the hardware and software parts of a computer. However, recent advances in developmental robotics allow us to consider a reversed paradigm in which a single piece of software, called a kernel, is capable of exploring and controlling many different sensorimotor spaces, called envelopes. In this chapter, we review studies we have previously published on kernels and envelopes, retrace the history of this conceptual shift, and discuss its consequences for robotic design as well as for developmental psychology and the brain sciences.
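The kernel/envelope distinction can be made concrete with a toy sketch (my own illustration, not code from the chapter): one body-agnostic "kernel" explores two different "envelopes" through a common motor-command/sensor interface. Every class and mapping here is an invented example.

```python
# Toy illustration of the kernel/envelope idea: a single exploring
# "kernel" drives two different sensorimotor "envelopes".
import random

class Envelope:
    """A sensorimotor space: maps a motor command to a sensory reading."""
    def __init__(self, name, transfer):
        self.name = name
        self.transfer = transfer  # body-specific sensorimotor mapping

    def act(self, command):
        return self.transfer(command)

class Kernel:
    """Body-agnostic explorer: tries random commands, keeps the best."""
    def explore(self, envelope, target, n_trials=200):
        best_cmd, best_err = None, float("inf")
        for _ in range(n_trials):
            cmd = random.uniform(-1.0, 1.0)
            err = abs(envelope.act(cmd) - target)
            if err < best_err:
                best_cmd, best_err = cmd, err
        return best_cmd, best_err

# The same kernel controls two very different "bodies":
arm = Envelope("arm", lambda u: 2.0 * u)        # linear actuator
wheel = Envelope("wheel", lambda u: u ** 3)     # nonlinear drive
kernel = Kernel()
for env in (arm, wheel):
    cmd, err = kernel.explore(env, target=0.5)
    print(env.name, round(cmd, 2), round(err, 3))
```

The point of the sketch is only that nothing in `Kernel` refers to a particular body: swapping the envelope changes the sensorimotor space, not the exploring software.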
The goal of an fMRI data analysis is to analyze each voxel's time series to see whether the BOLD signal changes in response to some manipulation. For example, if a stimulus was repeatedly presented to a subject in a blocked fashion, following the trend shown in the red line in the top panel of Figure 5.1, we would search for voxel time series that match this pattern, such as the BOLD signal shown in blue. The tool used to fit and detect this variation is the general linear model (GLM), where the BOLD time series plays the role of dependent variable, and the independent variables in the model reflect the expected BOLD stimulus timecourses. Observe, though, that the square-wave predictor in red doesn't follow the BOLD data very well, due to the sluggish response of the physiology. This leads to one major focus of this chapter: using our understanding of the BOLD response to create GLM predictors that model the BOLD signal as accurately as possible. The other focus is modeling and accounting for BOLD noise and other sources of variation in fMRI time series.
Throughout this chapter the models being discussed will refer to modeling the BOLD signal in a single voxel in the brain. Such a voxel-by-voxel approach is known as a mass univariate data analysis, in contrast to a multivariate approach (see Chapters 8 and 9 for uses of multivariate models).
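The mass univariate approach fits the same design matrix independently at every voxel, which for ordinary least squares collapses into a single matrix equation. A small sketch (illustrative, with invented dimensions):

```python
# Mass univariate sketch: one design matrix X fit to every voxel's
# time series at once. Sizes and signals are invented for the example.
import numpy as np

rng = np.random.default_rng(1)
n_scans, n_voxels = 100, 5000

# Design matrix: intercept plus one (here random) task regressor.
X = np.column_stack([np.ones(n_scans), rng.normal(size=n_scans)])

# Data matrix Y: one column of simulated BOLD signal per voxel.
true_betas = rng.normal(size=(2, n_voxels))
Y = X @ true_betas + rng.normal(0, 0.5, size=(n_scans, n_voxels))

# One least-squares solve estimates a beta map for every voxel.
betas = np.linalg.pinv(X) @ Y        # shape (2, n_voxels)
print(betas.shape)
```

Each column of `betas` is estimated exactly as if that voxel had been fit on its own, which is what "mass univariate" means in practice.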
The amount of computation that is performed, and data that are produced, in the process of fMRI research can be quite astounding. For a laboratory with multiple researchers, it becomes critical to ensure that a common scheme is used to organize the data; for example, when a student leaves a laboratory, the PI may still need to determine which data were used for a particular analysis reported in a paper in order to perform additional analyses. In this appendix, we discuss some practices that help researchers meet the computational needs of fMRI research and keep the data deluge under control, particularly as they move toward developing a research group or laboratory with multiple researchers performing data analysis.
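One concrete form such a common scheme can take is a fixed per-subject directory layout, so that every analysis script can iterate over subjects without special cases. The sketch below is hypothetical (the study and directory names are invented, not the book's recommendation):

```python
# Hypothetical per-subject layout illustrating one consistent scheme.
import os

subjects = ["sub01", "sub02"]
per_subject = ["anat", "func", "behav"]

# Every subject gets the same subdirectories...
for sub in subjects:
    for d in per_subject:
        os.makedirs(os.path.join("mystudy", sub, d), exist_ok=True)
# ...plus shared locations for code and group-level results.
for d in ["scripts", "group_results"]:
    os.makedirs(os.path.join("mystudy", d), exist_ok=True)

# Analysis scripts can then loop over subjects uniformly:
for sub in subjects:
    print("would analyze", os.path.join("mystudy", sub, "func"))
```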
Computing for fMRI analysis
The power of today's computers means that almost all of the data analysis methods discussed in this book can be performed on a standard desktop machine. Given this, one model for organization of a laboratory is what we might call “just a bunch of workstations” (JBOW). Under this model, each member of the research group has his or her own workstation on which to perform analyses. This model has the benefit of requiring little in the way of specialized hardware, system administration, or user training. Thus, one can get started very quickly with analysis.
One of the oldest debates in the history of neuroscience centers on the localization of function in the brain; that is, whether specific mental functions are localized to specific brain regions or instead rely more diffusely upon the entire brain (Finger, 1994). The concept of localization first arose from work by Franz Gall and the phrenologists, who attempted to localize mental functions to specific brain regions based on the shape of the skull. Although Gall was an outstanding neuroscientist (Zola-Morgan, 1995), he was wrong in his assumption about how the skull relates to the brain, and phrenology was in the end taken over by charlatans. In the early twentieth century, researchers such as Karl Lashley argued against localization of function, on the basis of research showing that cortical lesions in rats had relatively global effects on behavior. However, across the twentieth century the pendulum shifted toward a localizationist view, such that most neuroscientists now agree that there is at least some degree of localization of mental function. At the same time, the function of each of these regions must be integrated in order to achieve coherent mental function and behavior. These concepts have been referred to as functional specialization and functional integration, respectively (Friston, 1994).
Today, nearly all neuroimaging studies are centered on functional localization. However, there is increasing recognition that neuroimaging research must take functional integration seriously to fully explain brain function (Friston, 2005; McIntosh, 2000).
The goal of this book is to provide the reader with a solid background in the techniques used for processing and analysis of functional magnetic resonance imaging (fMRI) data.
A brief overview of fMRI
Since its development in the early 1990s, fMRI has taken the scientific world by storm. This growth is easy to see from the plot of the number of papers that mention the technique in the PubMed database of biomedical literature, shown in Figure 1.1. Back in 1996 it was possible to sit down and read the entirety of the fMRI literature in a week, whereas now it is barely feasible to read all of the fMRI papers that were published in the previous week! The reason for this explosion in interest is that fMRI provides an unprecedented ability to safely and noninvasively image brain activity with very good spatial resolution and relatively good temporal resolution compared to previous methods such as positron emission tomography (PET).
Blood flow and neuronal activity
The most common method of fMRI takes advantage of the fact that when neurons in the brain become active, the amount of blood flowing through that area is increased. This phenomenon has been known for more than 100 years, though the mechanisms that cause it remain only partly understood. What is particularly interesting is that the amount of blood that is sent to the area is more than is needed to replenish the oxygen that is used by the activity of the cells.
In some cases fMRI data are collected from an individual with the goal of understanding that single person; for example, when fMRI is used to plan surgery to remove a tumor. However, in most cases, we wish to generalize across individuals to make claims about brain function that apply to our species more broadly. This requires that data be integrated across individuals; however, individual brains are highly variable in their size and shape, which requires that they first be transformed so that they are aligned with one another. The process of spatially transforming data into a common space for analysis is known as intersubject registration or spatial normalization.
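In its simplest form, the spatial transformation is an affine transform: a 4×4 matrix in homogeneous coordinates that maps each point in one brain's space into the common template space. A minimal sketch (the matrix values are invented for illustration):

```python
# Affine spatial-normalization sketch: map an (x, y, z) point into a
# template space via a 4x4 matrix. The parameters here are invented.
import numpy as np

# Example transform: scale by 1.1 and translate 2 mm along x.
affine = np.array([
    [1.1, 0.0, 0.0, 2.0],
    [0.0, 1.1, 0.0, 0.0],
    [0.0, 0.0, 1.1, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

def to_template(xyz, affine):
    """Map an (x, y, z) point into template space."""
    x, y, z = xyz
    out = affine @ np.array([x, y, z, 1.0])
    return out[:3]

# x coordinate maps to 1.1 * 10 + 2 = 13:
print(to_template((10.0, 0.0, 0.0), affine))
```

A full 12-parameter affine (translations, rotations, scales, shears) can only match brains at a gross level; nonlinear warps refine the alignment locally, but the bookkeeping of mapping coordinates through a transform is the same.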
In this chapter we will assume some familiarity with neuroanatomy; for those without experience in this domain, we discuss a number of useful atlases in Section 10.2. Portions of this chapter were adapted from Devlin & Poldrack (2007).
Anatomical variability
At a gross level, the human brain shows remarkable consistency in its overall structure across individuals, although it can vary widely in its size and shape. With the exception of those suffering genetic disorders of brain development, every human has a brain that has two hemispheres joined by a corpus callosum whose shape diverges relatively little across individuals. A set of major sulcal landmarks (such as the central sulcus, sylvian fissure, and cingulate sulcus) are present in virtually every individual, as are a very consistent set of deep brain structures such as the basal ganglia.
The dimensionality of fMRI data is so large that, in order to understand the data, it is necessary to use visualization tools that make it easier to see the larger patterns in the data. Parts of this chapter are adapted from Devlin & Poldrack (2007) and Poldrack (2007).
Visualizing activation data
It is most useful to visualize fMRI data using a tool that provides simultaneous viewing in all three canonical orientations (see Figure 10.1); such viewing is available in all of the major analysis packages.
Because we wish to view the activation data overlaid on brain anatomy, it is necessary to choose an anatomical image to serve as an underlay. This anatomical image should be as faithful as possible to the functional image being overlaid. When viewing an individual participant's activation, the most accurate representation is obtained by overlaying the statistical maps onto that individual's own anatomical scan coregistered to the functional data. When viewing activation from a group analysis, the underlay should reflect the anatomical variability in the group as well as the smoothing that has been applied to the fMRI data. Overlaying the activation on the anatomical image from a single subject implies a degree of anatomical precision that is not actually present in the functional data. Instead, the activation should be visualized on an average structural image from the group coregistered to the functional data, preferably after applying the same amount of spatial smoothing as was applied to the functional data.
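The three canonical orientations are just orthogonal slices through one crosshair voxel of a 3D volume, which is easy to see in a minimal sketch (my own, not the book's software; the volume here is random fake data):

```python
# Extract the three canonical orthogonal views through one voxel
# of a 3D volume with NumPy. The volume and crosshair are invented.
import numpy as np

vol = np.random.default_rng(2).normal(size=(64, 64, 40))  # fake volume
i, j, k = 32, 32, 20                                       # crosshair voxel

sagittal = vol[i, :, :]   # fixed x
coronal  = vol[:, j, :]   # fixed y
axial    = vol[:, :, k]   # fixed z

print(sagittal.shape, coronal.shape, axial.shape)
```

A viewer simply renders these three 2D arrays side by side, with the statistical map alpha-blended over the chosen anatomical underlay.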