Parental scaffolding as a bootstrapping mechanism for learning grasp affordances and imitation skills
- Emre Ugur, Yukie Nagai, Hande Celikkanat, Erhan Oztop
-
- Article
-
Parental scaffolding is an important mechanism that speeds up infant sensorimotor development. Infants pay stronger attention to the features of objects highlighted by parents, and thanks to caregivers' support their manipulation skills develop earlier than they would in isolation. Parents are known to modify infant-directed actions in a style often called “motionese”. The features associated with motionese are amplification, repetition and simplification in caregivers' movements, often accompanied by increased social signalling. In this paper, we extend our previously developed affordance learning framework to enable our hand-arm robot, equipped with a range camera, to benefit from parental scaffolding and motionese. We first present our results on how parental scaffolding can be used to guide the robot's learning and to modify its crude action execution, speeding up the learning of complex skills. For this purpose, an interactive caregiver-infant scenario was realized with our robotic setup, which allowed the caregiver to modify the robot's ongoing reach-and-grasp movement through physical interaction. This enabled the caregiver to make the robot grasp the target object, which in turn could be used by the robot to learn the grasping skill. In addition, we show how parental scaffolding can speed up imitation learning. We present the details of our work that takes the robot beyond simple goal-level imitation, making it a better imitator with the help of motionese.
9 - Models for the control of grasping
- Edited by Dennis A. Nowak, Joachim Hermsdörfer
-
- Book:
- Sensorimotor Control of Grasping
- Published online:
- 23 December 2009
- Print publication:
- 25 June 2009, pp 110-124
-
- Chapter
-
Summary
This chapter underlines the multifaceted nature of reach-and-grasp behavior by reviewing several computational models, each focusing on selected features of reach-to-grasp movements. An abstract meta-model is proposed that subsumes previous modeling efforts and points toward the need for computational models that embrace all facets of reaching and grasping behavior.
Introduction
Hand transport and hand (pre)shaping are the basic components of primate grasping. Differing views on their dependence and coordination lead to different explanations of how humans control grasping. One view holds that the two components are controlled independently but coordinated so as to achieve a secure grasp. The alternative view is that the hand and the arm are treated as a single limb and governed by a single control mechanism. This distinction is not sharp in theory, but it becomes a concrete choice for a control engineer who must actually implement a grasp controller. The experimental findings to date favor the view that human grasping involves independent but coordinated control of the arm and the hand (see Jeannerod et al., 1998; see also Chapter 10). However, reports against this view do exist: it has been suggested that human grasping is a generalized reaching movement in which the digits move so as to bring the fingers to their targets on the object surface (Smeets & Brenner, 1999, 2001). Although both control mechanisms are theoretically viable, from a computational viewpoint the former is more likely.
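The "independent but coordinated" view can be sketched in a few lines of code. The following is a minimal illustrative simulation, not taken from the chapter: transport and grip aperture are driven by two separate controllers that share nothing except a normalized movement phase, which is what coordinates them. All profiles and parameter values (target distance, object size, aperture overshoot, peak-aperture phase) are assumptions chosen for illustration.

```python
def transport(phase, start=0.0, target=0.30):
    """Arm transport: smooth approach to the target (metres), using a
    minimum-jerk-like time scaling. phase runs from 0.0 to 1.0."""
    s = 10 * phase**3 - 15 * phase**4 + 6 * phase**5
    return start + (target - start) * s

def aperture(phase, obj_size=0.05, overshoot=1.6):
    """Hand preshape: grip aperture (metres) opens beyond object size,
    peaks late in the reach, then closes to object size at contact."""
    peak_phase = 0.7  # illustrative: maximum aperture occurs late in the reach
    if phase <= peak_phase:
        return obj_size * overshoot * (phase / peak_phase)
    return obj_size * (overshoot - (overshoot - 1.0) * (phase - peak_phase) / (1.0 - peak_phase))

# The two controllers never read each other's state; a shared phase
# variable is the only source of coordination.
trajectory = [(transport(p / 10), aperture(p / 10)) for p in range(11)]
```

A unified-limb controller, by contrast, would compute arm and digit commands from one state vector and one cost, so a single function would replace both of the above.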
12 - The development of grasping and the mirror system
-
- By Erhan Oztop, JST-ICORP Computational Brain Project, ATR Computational Neuroscience Laboratories, Kyoto 619-0288, Japan, Michael A. Arbib, Computer Science Department, Neuroscience Program and USC Brain Project, University of Southern California, Los Angeles, CA 90089, USA, Nina Bradley, Department of Biokinesiology and Physical Therapy, University of Southern California, Los Angeles, CA 90033, USA
- Edited by Michael A. Arbib, University of Southern California
-
- Book:
- Action to Language via the Mirror Neuron System
- Published online:
- 01 September 2009
- Print publication:
- 07 September 2006, pp 397-423
-
- Chapter
-
Summary
Introduction: a mirror system perspective on grasp development
Neonates and young infants are innately compelled to move their arms, the range of possible spontaneous movements being biologically constrained by anatomy, environmental forces, and social opportunity. Over the first 9 postnatal months, reaching movements are transformed as infants establish an array of goal-directed behaviors, master basic sensorimotor skills to act on those goals, and acquire sufficient knowledge of interesting objects to preplan goal-directed grasping. In monkeys, it appears that the neural circuit for the control of grasping also functions to understand the manual actions of other primates and humans (Arbib, Chapter 1, this volume). Within the grasp circuitry, “mirror neuron” activity encodes both the manual actions executed by the monkey and the observed goal-directed actions of others. Recent imaging studies indicate that a mirror neuron network linking observation and execution functions may also exist in humans. However, the link between grasp development and mirror system development remains largely unexplored. To address this, we will build models based both on behavioral data concerning the course of development of reaching in human infants and on neurophysiological data concerning mirror neurons and related circuitry in macaque monkeys.
In humans, the foundation for reaching may begin as early as 10–15 weeks of fetal development when fetuses make hand contact with the face and exhibit preferential sucking of the right thumb (de Vries et al., 1982; Hepper et al., 1991).