This chapter theorises the embodiment of timbral gesture in electronic dance music (EDM) as a convergence point between the vexed categories of affect and meaning. It is argued that timbre is inseparable from gesture in the listening experience and that the embodiment of synthesised gestures affords listeners new ways of experiencing their body-minds by exercising their perceptual agency through sonic prosthesis. In social EDM settings, the heightened potential for entrainment to both the music and other co-participants, together with the established role of entrainment in facilitating social bonding, suggests that the timbral gestures of EDM could be key to fostering intersubjectivity among those present. Considering this, the imaginative embodiment of timbral gestures is shown to constitute a necessary first step towards the communal rationalisation of the EDM experience and the social emergence of musical meaning.
Research on spoken languages has shown that response particles may indicate the truth of a previous utterance or the polarity of the response. In responses to negative antecedents, the two functions come apart and particles become ambiguous. We present the first quantitative study on response strategies in sign languages by discussing data from a production experiment in German Sign Language (Deutsche Gebärdensprache; DGS). The results indicate that DGS does not exploit the potential of simultaneous manual and nonmanual strategies to disambiguate responses. Still, the type of articulator influences the choice of response element. We propose an optimality-theoretic model to account for the role of articulator type, the disambiguation potential, and the morphosyntax of response elements in DGS.
Indicating verbs can be directed toward locations in space associated with their arguments. The primary debate about these verbs is whether this directionality is akin to grammatical agreement or whether it represents a fusion of both morphemic and gestural elements. To move the debate forward, more empirical evidence is needed. We consider linguistic and social factors in 1,436 indicating-verb tokens from the BSL Corpus. Results reveal that modification is not obligatory and that patient modification is conditioned by several factors, such as constructed action. We argue that our results provide some support for the claim that indicating verbs represent a fusion of morphemic and gestural elements.
Precision turn-taking may constitute a crucial part of the human endowment for communication. If so, it should be implemented similarly across language modalities, as in signed vs. spoken language. Here, in the first experimental study of turn-end prediction in sign language, we find support for the idea that signed language, like spoken language, involves turn-type prediction and turn-end anticipation. In both cases, turns like questions that elicit specific responses accelerate anticipation. We also show remarkable cross-modality predictive capacity: nonsigners anticipate signed turn ends surprisingly well. Finally, we show that despite nonsigners' ability to intuitively predict signed turn ends, early native signers do it much better by using their access to linguistic signals (here, question markers). As shown in prior work, question formation facilitates prediction, and age of sign language acquisition affects accuracy. The study thus sheds light on the kinds of features that may facilitate turn-taking universally, and those that are language-specific.
Serial verb constructions have often been said to refer to single conceptual events. However, evidence to support this claim has been elusive. This article introduces co-speech gestures as a new way of investigating the relationship. The alignment patterns of gestures with serial verb constructions and other complex clauses were compared in Avatime (Ka-Togo, Kwa, Niger-Congo). Serial verb constructions tended to occur with single gestures overlapping the entire construction. In contrast, other complex clauses were more likely to be accompanied by distinct gestures overlapping individual verbs. This pattern of alignment suggests that serial verb constructions are in fact used to describe single events.
Previous studies on a variety of languages have demonstrated that manual gesture is temporally aligned with prosodic prominence. However, the majority of these studies have been conducted on languages with word-level stress. In this paper, we investigate the alignment of manual beat gestures to speech in local varieties of Standard Indonesian, a language whose word prosodic system has been the subject of conflicting claims. We focus on the varieties of Indonesian spoken in the eastern part of the archipelago and Java. Our findings reveal that there is a strong tendency to align gesture to penultimate syllables in the eastern variety and a tendency to align gesture to final syllables in the Javanese variety. Additionally, while the eastern patterns appear to be word based, the Javanese pattern shows evidence of being phrase based. Surprisingly, the penultimate syllable emerges as a gestural anchor in the eastern variety even for two of the three speakers who showed little to no regular prosodic prominence on this syllable. This suggests that gestural alignment may serve to uncover prosodic anchors even when they are not employed by the phonology proper.
The theoretical position that time principally enters mental life through the portal of perceptual organization and particularly through the formation of groups is introduced. The key concept of Gestalt, that the whole is other than the sum of the parts, is illustrated through both spatial and temporal examples. Particular emphasis is placed on the emergent properties of groups experienced in music.
All animal species seem to have some sort of communication system that is (largely or completely) innate. What is the nature of such systems? We will only have space to look at a few examples, which will show that some species use very complex systems. We can then ask, assuming that the human language capacity consists of several cognitive submodules, whether it is the case that some of those modules are shared with the innate communication capacities of other species. As we have seen, in recent years Chomsky has argued that the language capacity that is uniquely human (being specific to the domain of language) is the ability to form recursive structure. This has led to research to find out whether other animal species can also “handle” recursive patterns either in their communication systems or in other cognitive systems.
The study of individuals with hippocampal damage and amnesia provides a compelling opportunity to directly test the contribution of declarative memory to communication and language. Over the past two decades, we have documented disruptions in discourse and conversation, as well as in more basic aspects of language, in individuals with hippocampal amnesia, including at the word, phrase, and sentence levels, across offline and online language processing tasks. This work highlights the critical contribution of hippocampal-dependent memory to language and communication and suggests that hippocampal damage or dysfunction is a risk factor for a range of language and communicative disruptions even in the absence of frank amnesia or aphasia. This work also raises questions about the reality and utility of the historical distinction between communication and language in defining cognitive-communication disorders, as individuals with isolated memory impairments show deficits that cut across both communication and language.
When thinking about emotional expressions, most would probably envision facial expressions (e.g., smiling, scowling) or vocalizations (e.g., crying, laughter). Here we focus on the emotional postures and movements of the body – an important, but fairly understudied, signal for emotion perception. During emotional episodes, humans often position and move their bodies in consistent ways that may (or may not) signal their underlying feelings and future actions. We briefly review the historical antecedents of this literature, as well as current knowledge on the neural processing, developmental trajectory, and cultural differences in the emotional perception of body language. We continue by examining the role of the body as a contextualizing agent for disambiguating facial expressions, as well as their inverse relationship – from faces to bodies. Future directions and speculations about how this emerging field may evolve are discussed.
With their shallow reliefs, depictions of contorted movement, and a historically inflected formal style, first century BCE and CE Neo-Attic reliefs are distinct among Greek and Roman relief sculpture. Primarily made for an elite Roman audience, the reliefs invoke stylistic techniques from different periods of Greek art and creatively combine figural types taken from earlier objects. The scenes are also characterized by a sense of spacelessness, established by the representation of figures, objects, and landscapes in shallow relief and by the frequent distorted play with depth and space. By considering a select number of examples, this chapter argues that the reliefs’ formal elements work together to evoke multiple temporalities and spaces, so that the distinct time and space created by and in these reliefs allowed them to become powerful sites of contact. In connecting their audience with an idealized past that takes place in a generic space, the reliefs offered viewers the opportunity not only to engage visually with the past temporalities of Archaic and Classical Greece, but also to become immersed in them by sharing the same space as the stylized figures, who could slip from their timeless and spaceless background to the Roman world in which they were displayed.
This Element in Construction Grammar addresses one of its hottest topics and asks: is the unimodal conception of Construction Grammar as a model of linguistic knowledge at odds with the usage-based thesis and the multimodality of language use? Are constructions verbal, i.e. unimodal form-meaning pairings, or are they, or at least are some of them, multimodal in nature? And, more fundamentally, how do we know? These questions have been debated quite controversially over the past few years. This Element presents the current state of research within the field, paying special attention to the arguments that are put forward in favour and against the uni-/multimodal nature of constructions and the various case studies that have been conducted. Although significant progress has been made over the years, the debate points towards a need for a diversification of the questions asked, the data studied, and the methods used to analyse these data.
Early language development has rarely been studied in hearing children with deaf parents who are exposed to both a spoken and a signed language (bimodal bilinguals). This study presents longitudinal data on early communication and vocabulary development in a group of 31 hearing infants exposed to British Sign Language (BSL) and spoken English, at 6 months, 15 months, 24 months and 7 years, in comparison with monolinguals (exposed to English) and unimodal bilinguals (exposed to two spoken languages). No differences were observed in early communication or vocabulary development between bimodal bilinguals and monolinguals, but greater early communicative skills in infancy were found in bimodal bilinguals compared to unimodal bilinguals. Within the bimodal bilingual group, BSL and English vocabulary sizes were positively related. These data provide a healthy picture of early language acquisition in those learning a spoken and signed language simultaneously from birth.
Communicative interaction forms the core of human experience. In this fascinating book Levinson, one of the world's leading scholars in the field, explores how human communicative interaction is structured, the demands it puts on our cognitive processing, and how its system evolved out of continuities with other primate systems. It celebrates the role of the 'interaction engine' which drives our social interaction, not only in human life, but also in the evolution of our species – showing how exchanges such as words, glances, laughter and face-to-face encounters bring us our greatest and most difficult experiences, and have come to define what it means to be human. It draws extensively on the author's fieldwork with speakers across multiple cultures and communities, and was inspired by his own experiences during the Covid lockdown, when humans were starved of the very social interaction that shapes our lives. This title is also available as open access on Cambridge Core.
This chapter provides a tour of several additional forms of human language communication apart from spoken language. Visual speech (which also contributes to audiovisual speech) requires not only visual cortex, but regions such as posterior temporal sulcus which may help integrate signals across modality. Nonverbal communication, including productions such as crying or laughter, relates to activity in the superior temporal lobes but also in other regions including the cingulate cortex and insula. Reading and the ability to decode written language highlight portions of the visual system, including the ventral occipitotemporal cortex (often referred to as the visual word form area, or VWFA). Learning to read is a complex process that involves written language, knowledge of speech sounds, and motivation. Co-speech gestures are present in children’s language development and can convey semantic information alongside spoken language; integration of such semantic gestures involves left inferior frontal gyrus and premotor cortex.
In research literature and works of art, the textual gap of Mary’s bodily action, implicit in Jesus’ phrase μή μου ἅπτου (John 20.17b), is frequently filled either with a proskynesis or a standing embrace. Against the background of Judith Butler’s theory of gesture, this article analyses attempts at filling in the gaps in the text. The notion of gesture as bodily quotation helps to interpret Mary and Jesus not as counterparts, but as a performative unit enacting continuity and difference after Jesus’ death. The reading offered in this article focuses on the interaction between bodies, and it undermines the dichotomy between speech and body, man and woman, heaven and earth. This article examines exegetical interpretations of Mary’s gesture, alongside artistic interpretations, to show that the way the textual gap is filled is significant because gestures are significant.
Prosody and gesture are two known cues for expressing information structure by emphasising new or important elements in spoken discourse while attenuating given information. Applying this potentially multimodal form-meaning mapping to a foreign language may be difficult for learners. This study investigates how native speakers and language learners use prosodic prominence and head gestures to differentiate levels of givenness.
Twenty-five Catalan learners of French and 19 native French speakers were video-recorded during a short spontaneous narrative task. Participants’ oral productions were annotated for information status, perceived prominence, pitch accents, and head gesture types. Results show that given information in French is multimodally less marked than newer information and is accordingly perceived as less prominent. Our findings indicate that Catalan learners of French mark given information more frequently than native speakers and may transfer their use of low pitch accents to their second language (L2). The data also show that the use of head gestures depends on the presence of prosodic marking, calling into question the assumption that prosody and gesture have balanced functional roles. Finally, the type of head gesture does not appear to play a significant role in marking information status.
Religious worship is an embodied act, consisting not of words alone, but of words and gestures. But what did early modern English Protestants think they were doing when they went through the motions of worship? In Protestant Bodies, Arnold Hunt argues that the English Reformation was a gestural reformation that redefined the postures and motions of the body. Drawing on a rich array of primary sources, he shows how gestures inherited from the medieval liturgy took on new meanings within a drastically altered ritual landscape, and became central to the enforcement of religious uniformity in the sixteenth and seventeenth centuries. Protestant Bodies presents a challenging new interpretation of the English Reformation as a series of experiments in shaping and remaking the body, both individual and collective, with consequences that still persist today.
This chapter presents the current state of research in multimodal Construction Grammar with a focus on co-speech gestures. We trace the origins of the idea that constructions may have to be (re-)conceptualized as multimodal form–meaning pairs, deriving from the inherently multimodal nature of language use and the usage-based model, which attributes to language use a primordial role in language acquisition. The issue of whether constructions are actually multimodal is contested. We present two current positions in the field. The first one argues that a construction should only count as multimodal if gestures are mandatory parts of that construction. Other, more meaning-centered, approaches rely less on obligatoriness and frequency of gestural (co-)occurrences and either depart from a recurrent gesture to explore the verbal constructions it combines with or focus on a given meaning, for example, negation, and explore its multimodal conceptualization in discourse. The chapter concludes with a plea for more case studies and for the need to develop large-scale annotated corpora and apply statistical methods beyond measuring mere frequency of co-occurrence.
We present an overview of constructional approaches to signed languages, beginning with a brief history and the pioneering work of William C. Stokoe. We then discuss construction morphology as an alternative to prior analyses of sign structure that posited a set of non-compositional lexical signs and a distinct set of classifier signs. Instead, signs are seen as composed of morphological schemas containing both specific and schematic aspects of form and meaning. Grammatical construction approaches are reviewed next, including the marking of argument structure on verbs in American Sign Language (ASL). Constructional approaches have been applied to the issue of the relation between sign and gesture across a variety of expressions. This work often concludes that signs and gesture interact in complex ways. In the final section, we present an extended discussion of several grammatical and discourse phenomena using a constructional analysis based on Cognitive Grammar. The data come from Argentine Sign Language (LSA) and include pointing constructions, agreement constructions, antecedent-anaphor relations, and constructions presenting point of view in reported narrative.