Genuinely broad in scope, each handbook in this series provides a complete state-of-the-field overview of a major sub-discipline within language study, law, education and psychological science research.
This chapter gives an overview of the relation between indexicality, deixis, and space in gesture from a semiotic and a linguistic point of view. Directive pointing gestures are not the only type of cospeech gesture that contributes to deixis. Iconic gestures that form part of the multimodal utterance may instantiate the targets to be pointed at and function as the deictic object of the deictic relation. In turn, they may be interpreted as signs that stand for something else. A Peircean approach combined with a Bühlerian one, as suggested in this chapter, not only allows for a tertium comparationis with respect to the modality of the deictic and indexical signs under investigation. It also provides us with tools for representing semiotic processes like complex sign concatenation (e.g. deixis at signs vs. deixis at non-signs; deixis at metonymies or metaphors) as well as the collaborative creation of deictic space (sphere-like, map-like, screen-like; separated or shared) in multimodal interaction. The proposed schema of four semiotic subfields of space substantiates the view that space has to be thought of as a dynamic process of semiosis, not as a static entity.
Research in conversational hand gesturing shows an array of philosophical senses of intersubjectivity. Gesturing is interpersonally rational, as demonstrated in studies linking gesturing to common ground achievements and effects and to markings of communicative intent. Gesturing is an ecological and interactional activity through which copresent interlocutors codetermine their own social and environmental relatings, building as well as attending to a shared world. Gesturing is an intercorporeal experience central to what it means to live as linguistic bodies. Taken together, research indicates that hand gesturing, even as a variegated phenomenon, offers insight into how language works. The full story of intersubjectivity and attendant features of recognition, interpretation, normativity, conventionality, and reference begins and ends with actual bodies interacting. As these matters concern the core of pragmatic philosophy, gesture research has radical relevance for all language theorists. An enactive approach to intersubjectivity and language offers a framework for making this case.
The classical approach to gesture and sign language analysis focuses on the forms and locations of the hands. This constitutes an external point of view on the gesturing subject. The kinesiological approach presented in this chapter looks at gesture from the inside out, at how it is produced, taking a first-person perspective. This involves a physiological description of the parts of the body that are moving (the segments) and the joints at which they can move (providing the degrees of freedom of movement). This type of analysis allows for distinctions such as the proper movement of a segment versus displacement caused by the movement of another segment. Movement is distinguished according to muscular properties such as flexion versus extension, abduction versus adduction, exterior versus interior rotation, and supination versus pronation. The propagation of movement in the body is considered in terms of its flow across connected segments of the body, from more proximal to more distal segments or vice versa. These distinctions differentiate the functions of gestures (e.g. showing that you don’t care vs. expressing negation) and the meanings of signs in a sign language.
This chapter offers a toolbox of Methods for Gesture Analysis (MGA). Developed in the context of research on emerging protolinguistic structures in cospeech gestures, the present version of MGA differs from earlier publications (Bressem, Ladewig, Müller 2013; Bressem 2013) in offering sets of tools for gesture analysis that adapt flexibly to different research questions. Essential starting points for MGA are an understanding of hand gestures as temporal forms embedded in a dynamically unfolding context and an understanding of context that itself varies with the adopted framework. The baseline for any chosen tool is a microanalysis that entails some account of the form of the gesture (as temporal form), i.e. ‘form analysis’, and some analysis of how a gesture, a sequence of gestures, or a multimodal sequence is placed in a given temporally unfolding context-of-use, i.e. ‘context analysis’. Macroanalyses of gesture dynamics are briefly introduced. MGA offers a toolbox with a flexible set of tools that encourages critical reflection on the insight that can be gained from analyzing gestures in multimodal communication and interaction.
This chapter presents the role and use of gesture in first language development and its integration in the child’s multimodal communicative system. It includes an overview of theories and methods that have triggered and facilitated the study of gestures in language development. The main issues are illustrated with detailed analyses of examples extracted from longitudinal data in English and French. The human communication system develops in a space of shared meanings in which adults socialize children into language in situated activities; consequently, this overview highlights the crucial role of caregivers in child–adult interactions. We first focus on the role of gestures in adults’ communicative input and then follow children’s development into the use of the adults’ multimodal communicative system. At the end of the developmental process, speech becomes clearly predominant but is both complemented and supplemented by other semiotic resources according to variables such as linguistic context, situation, interlocutor, activity, or discourse genre. Children learn to master the dynamic multimodal communicative system used around them and with them in their daily interactions.
As there are many different methods of linguistic analysis, there are many different ways of approaching gesture analysis. This chapter gives a selective overview of the current state of the art in gesture coding and annotation systems. It opens with a discussion of the difference between coding and annotation, before it considers aims and challenges in gesture coding and annotation. Afterward, the chapter reviews existing systems and reflects on the interrelation between subject, research question, and coding and annotation system. The chapter emphasizes that coding and annotation systems are always influenced by the particular theoretical framework in which they are situated. Accordingly, similar to the analysis of language, a theory-neutral analysis of gestures is not possible. Rather, theoretical assumptions influence subjects, aspects, and levels of analysis and as such also make themselves visible in annotation systems. This will be illustrated by exemplary research topics in gesture studies: language, language development, cognition, interaction, and human–machine interaction. The account of the individual systems thereby does not aim at an extensive discussion, but rather focuses on their general logic for answering their particular research question. Here, differences between systems addressing the same research topic (e.g. language) as well as differences across research topics (e.g. language vs. interaction) will be explored. The chapter closes with some considerations on possible future developments.
This chapter reviews the study of variation in gesture and its theoretical underpinnings in the field of gesture studies. It questions the use of culture, language, or nationality as the default unit of analysis in studies of gesture variation. Drawing on theoretical developments in sociolinguistics and recent anthropological analyses of gesture, it argues for the possibility that social factors and divisions other than linguistic/cultural boundaries may provide a more robust and comprehensive theoretical account for variation in gesture.
We explore multimodal communication in robot agents and focus on communicative gesturing as a means to improve naturalness in human–robot interactions and to create shared context between the user and the robot. We discuss challenges related to accurate timing and acute perception of the partner’s gestures, so as to support appropriate presentation of the message and understanding of the partner’s speech. We also discuss how such conversational behavior can be modelled for a robot agent in context-aware dialogue modelling. The chapter discusses technologies and the building of models for appropriate and adequate gesturing in HRI and presents some experimental research that addresses the challenges. The aim of the research is to gain a better understanding of the gesture modality in HRI as well as to explore innovative solutions to improve human well-being and quality of life in contemporary society. The chapter draws examples from the AICO corpus, which was collected for the purposes of comparative gaze and gesture studies between human–human and human–robot interactions.
In this chapter I discuss the role of motion-tracking technology in the study of gesture, both from a production perspective and with respect to how gestures support comprehension. I first give an overview of motion-tracking technologies in order to provide a starting point for researchers currently using or interested in using motion tracking. Next, I discuss how motion tracking has been employed in the past to understand gesture production and comprehension, as well as how it can be utilized for more complex experiments, including virtual reality. This is not meant as a comprehensive review of the field of motion tracking, but rather a source of inspiration for how such methodologies can be employed in order to tackle relevant research questions. The chapter concludes with suggestions for how to build on previous research by asking new, previously inaccessible questions, and for how motion-tracking technology can be used to move toward a more replicable and quantitative study of gesture.
Gestures associated with negation have become a well-defined area for gesture studies research. The chapter offers an overview of this area, identifies distinct empirical lines of enquiry, and highlights their contribution to aspects of linguistic and embodiment theory. After relating a surge of interest in this topic to the notion of recurrent gestures (but not restricted to it), the chapter offers a visualization of the widespread geographical coverage of studies of gestures associated with negation, then distils a set of common observations concerning the form, organizational properties, and functions of such gestures. This area of research is then further thematized by exploring distinct chains of studies that have adopted linguistic, cognitive-semantic, functional, psycholinguistic, comparative, and cultural perspectives to analyze the gestural expression of negation. Studies of gestures associated with negation are shown to have played a vital role in shaping understandings of the multimodality of grammar, the embodiment of cognition, and the relations between gestures and sign.
The chapter presents and discusses empirical data on the neuropsychology of gesture production. The focus of this chapter is on the specific contributions of the right and left hemispheres to the generation of gestures. Since the respective neuroscientific method has a substantial impact on the study results, and different methodologies can even entail apparently opposing results concerning gesture production, different neuropsychological methods, their paradigms, and their limitations are presented in detail. Studies of spontaneous gesture production evidence a substantial contribution of the right hemisphere to gesture production, while studies of gesture production on command show a relevant role of the left hemisphere. Gestures that are generated in association with right-hemispheric functions such as spatial cognition, nonverbal emotional expression, and global and metaphorical thinking appear to be generated in the right hemisphere, while gestures that are linked to tool-use praxis are generated in the left hemisphere. The findings further provide a neuropsychological basis for understanding the complementarity, but also the dissociation, between gestural and verbal messages.
This chapter concerns the use of manual gestures in human–computer interaction (HCI) and user experience research (UX research). Our goal is to empower gesture researchers to conduct meaningful research in these fields. We therefore give special focus to the similarities and differences between HCI research, UX research, and gesture studies when it comes to theoretical framework, relevant research questions, empirical methods, and use cases, i.e. the contexts in which gesture control can be used. As part of this, we touch on the role of various gesture-detecting technologies in conducting this kind of research. The chapter ends with our suggestions for the opportunities gesture researchers have to extend this body of knowledge and add value to the implementation and instantiation of systems with gesture control.
Emblematic gestures (or emblems) have several denominations in the literature (for instance autonomous, quotable, semiotic, folkloric or symbolic gestures). Emblems are culture-bound gestures; they differ interculturally and intraculturally, both among different cultural and linguistic areas, and among individuals and social groups within the same culture. These gestures are easily translated into verbal language, they are quotable; they are equivalent to utterances, and in many cases, they have names. Typical emblems are used – alongside or without words – for greetings, insults or mockery, to indicate places or people (deictics), to refer to the state of a person (to be drunk, to be asleep …), to give interpersonal orders or to represent actions (to eat, to drink, etc.). Many emblems show a clear perlocutionary component (to offer, to threaten, to promise or to swear …). The tradition in the study of emblems has always emphasized their autonomy from speech (they are interpretable with a high level of context independence). Moreover, emblematic capacity can be regarded as associated with illocutionary force, which is one of the most characteristic features of these units.
Iconic aspects of postures and hand movements have long been a central issue in gesture research. A speaker’s body may become a dynamic, viewpointed ‘icon’ (Peirce 1960) of someone or something else, or hands may create iconic signs. Recent research on iconicity in spoken and signed languages has (re)established its constitutive role in language (e.g. Jakobson 1990) and more broadly in multimodal interaction, which naturally includes iconic manual gestures and full-body enactments. Peircean semiotics are combined with cognitive linguistic accounts to demonstrate the role of iconicity in embodied conceptual and linguistic structures and to account for modality-specific manifestations of iconicity in gesture. We provide an overview of gestural modes of representation and techniques of depiction and exemplify the ways in which iconicity interacts with other semiotic principles, such as indexicality, viewpoint, and metonymy. The chapter also highlights empirical research into gestural iconicity as it relates to language acquisition, development, and processing, language and cognition, and the fields of computation and robotics.
A growth point captures the moment of speaking, taking a first-person view. It is thought in language, imbued with mental/social energy, and unpacked into a sentence. It is not a translation of gesture into speech. It is a process of processes. One is the psychological predicate (a notion from Vygotsky), a differentiation of context for what is newsworthy, the growth point’s core meaning – the context reshaped into a field of equivalents to make the differentiation meaningful. The core meaning has dual semiosis – opposite semiotic modes – a global-synthetic gesture and analytic-segmented speech, synchronized and coexpressive of the core. The gesture phases foster the synchronization. Cohesive threads to other growth points (a “catchment”) enrich it. A dialectic provides the growth point’s unpacking – the gesture becoming the thesis, the coexpressive speech the antithesis. Jointly, they create the dialectic synthesis. The dialectic synthesis and the unpacking are the same summoned construction-plus-gesture. The growth point, its processes fulfilled, inhabits the speaker’s being, taking up a position in the world of meaning it has created (a conception from Merleau-Ponty).