The chapter addresses: (1) an overview of the Positivity Principle; (2) the theoretical rationale for the Positivity Principle; (3) the empirical rationale for the Positivity Principle; (4) boundary conditions for the Positivity Principle; and (5) applications of the Positivity Principle.
The temporal signatures that characterize speech – especially its prosodic qualities – are observable in the movements of the hands and bodies of its speakers. A neurobiological account of these prosodic rhythms is thus likely to benefit from insights into the neural coding principles underlying co-speech gestures. Here we consider whether the vestibular system, a sensory system that encodes movements of the body, contributes to prosodic processing. Careful review of the vestibular system’s anatomy and physiology, its role in dynamic attention and active inference, its relevance for the perception and production of rhythmic sound sequences, and its involvement in vocalization all point to a potential role for vestibular codes in the neural tracking of speech. Noting that the kinematics and time course of co-speech movements closely mirror prosodic fluctuations in spoken language, we propose that the vestibular system cooperates with other afferent networks to encode and decode prosodic features in multimodal discourse and possibly in the processing of speech presented unimodally.
In 1960, William Stokoe provided evidence that signs were not simply gestures; that signs, like spoken words, have abstract internal structures. His discovery provided the first justification for treating sign languages as real languages rather than gesture systems. Subsequent analyses of signs remained consistent with Stokoe’s analysis in that they excluded nonlexical gesturing. Some signs, however, appear to require noncategorical (gestural) specifications for directions and locations. Indicating signs, for example, can be articulated toward people and things in an unlimited number of directions. In addition, depicting verbs appear to be capable of depicting entities at an unlimited number of locations ahead of the signer. Chapter 1 explains how analysts incorporated morphemic spatial loci into their fully morphemic, nongestural analyses of these signs. These analyses were later challenged with partly lexical, partly gestural analyses. This was followed by an increased interest in how signers gesture. As a precursor to justifying the partly lexical, partly gestural analyses, the chapter proposes definitions of gesture and depiction that are applicable to the analyses in the book.
Indicating and depicting are widely understood to be fundamental, meaningful components of everyday spoken language discourse: a speaker's arms and hands are free to indicate and depict because they do not articulate words. In contrast, a signer's arms and hands do articulate signs. For this reason, linguists studying sign languages have overwhelmingly concluded that signers do not indicate and depict as a part of signed articulations. This book demonstrates that signers do, however, indicate - by incorporating non-lexical gestures into their articulations of individual signs. Fully illustrated throughout, it also shows that signers create depictions in numerous ways through conceptualizations, in which the hands, other parts of the body, and parts of the space ahead of the signer depict things. By establishing that indicating and depicting are also fundamental, meaningful aspects of sign language discourse, this book is essential reading for researchers and students of sign linguistics and gesture studies.
Gesture and speech form a tightly integrated system in first language (L1). We know less about the gesture-speech system in second language (L2) production, particularly with respect to speaker proficiency and discourse context. In this study, we focused on the speech and gestures produced by adult Persian (L1)-English (L2) bilinguals with high or low L2 proficiency and English native speakers (n = 22/group). We asked whether speaker proficiency (native, high, low) and discourse context (narratives, explanations) influence the amount, diversity and complexity of speech and gesture production. Our results showed an effect of context, with greater production of speech and gesture in narratives than explanations across proficiency levels. More importantly, we found an effect of proficiency – with lower speech complexity coupled with greater gesture complexity in bilinguals with low proficiency, particularly in the explanation context – suggesting a compensatory role for gesture among bilinguals with low L2 proficiency in more demanding communicative contexts.
This chapter analyses Wittgenstein’s transitional period and his shift from logical to linguistic models of context. Centred on his work in the early 1930s and on his ‘Remarks on Frazer’s Golden Bough’, it shows how Wittgenstein moved from seeing context as singular logic to viewing it as multiple ‘logical spaces’ or ‘grammars’. This shift prefigures later anthropological moves away from formal systems while retaining some commitment to structure through language as model.
This chapter theorises the embodiment of timbral gesture in electronic dance music (EDM) as a convergence point between the vexed categories of affect and meaning. It is argued that timbre is inseparable from gesture in the listening experience and that the embodiment of synthesised gestures affords listeners new ways of experiencing their body-minds by exercising their perceptual agency through sonic prosthesis. In social EDM settings, the heightened potential for entrainment to both the music and other co-participants, together with the established role of entrainment in facilitating social bonding, suggests that the timbral gestures of EDM could be key to fostering intersubjectivity among those present. Considering this, the imaginative embodiment of timbral gestures is shown to constitute a necessary first step towards the communal rationalisation of the EDM experience and the social emergence of musical meaning.
Research on spoken languages has shown that response particles may indicate the truth of a previous utterance or the polarity of the response. In responses to negative antecedents, the two functions come apart and particles become ambiguous. We present the first quantitative study on response strategies in sign languages by discussing data from a production experiment in German Sign Language (Deutsche Gebärdensprache; DGS). The results indicate that DGS does not exploit the potential of simultaneous manual and nonmanual strategies to disambiguate responses. Still, the type of articulator influences the choice of response element. We propose an optimality-theoretic model to account for the role of articulator type, the disambiguation potential, and the morphosyntax of response elements in DGS.
Indicating verbs can be directed toward locations in space associated with their arguments. The primary debate about these verbs is whether this directionality is akin to grammatical agreement or whether it represents a fusion of both morphemic and gestural elements. To move the debate forward, more empirical evidence is needed. We consider linguistic and social factors in 1,436 indicating-verb tokens from the BSL Corpus. Results reveal that modification is not obligatory and that patient modification is conditioned by several factors, such as constructed action. We argue that our results provide some support for the claim that indicating verbs represent a fusion of morphemic and gestural elements.
Precision turn-taking may constitute a crucial part of the human endowment for communication. If so, it should be implemented similarly across language modalities, as in signed vs. spoken language. Here, in the first experimental study of turn-end prediction in sign language, we find support for the idea that signed language, like spoken language, involves turn-type prediction and turn-end anticipation. In both cases, turns like questions that elicit specific responses accelerate anticipation. We also show remarkable cross-modality predictive capacity: nonsigners anticipate signed turn ends surprisingly well. Finally, we show that despite nonsigners' ability to intuitively predict signed turn ends, early native signers do it much better by using their access to linguistic signals (here, question markers). As shown in prior work, question formation facilitates prediction, and age of sign language acquisition affects accuracy. The study thus sheds light on the kinds of features that may facilitate turn-taking universally, and those that are language-specific.
Serial verb constructions have often been said to refer to single conceptual events. However, evidence to support this claim has been elusive. This article introduces co-speech gestures as a new way of investigating the relationship. The alignment patterns of gestures with serial verb constructions and other complex clauses were compared in Avatime (Ka-Togo, Kwa, Niger-Congo). Serial verb constructions tended to occur with single gestures overlapping the entire construction. In contrast, other complex clauses were more likely to be accompanied by distinct gestures overlapping individual verbs. This pattern of alignment suggests that serial verb constructions are in fact used to describe single events.
Previous studies on a variety of languages have demonstrated that manual gesture is temporally aligned with prosodic prominence. However, the majority of these studies have been conducted on languages with word-level stress. In this paper, we investigate the alignment of manual beat gestures to speech in local varieties of Standard Indonesian, a language whose word prosodic system has been the subject of conflicting claims. We focus on the varieties of Indonesian spoken in the eastern part of the archipelago and Java. Our findings reveal that there is a strong tendency to align gesture to penultimate syllables in the eastern variety and a tendency to align gesture to final syllables in the Javanese variety. Additionally, while the eastern patterns appear to be word based, the Javanese pattern shows evidence of being phrase based. Surprisingly, the penultimate syllable emerges as a gestural anchor in the eastern variety even for two of the three speakers who showed little to no regular prosodic prominence on this syllable. This suggests that gestural alignment may serve to uncover prosodic anchors even when they are not employed by the phonology proper.
The theoretical position that time principally enters mental life through the portal of perceptual organization and particularly through the formation of groups is introduced. The key concept of Gestalt, that the whole is other than the sum of the parts, is illustrated through both spatial and temporal examples. Particular emphasis is placed on the emergent properties of groups experienced in music.
All animal species seem to have some sort of communication system that is (largely or completely) innate. What is the nature of such systems? We will only have space to look at a few examples, which will show that some species use very complex systems. We can then ask, assuming that the human language capacity consists of several cognitive submodules, whether it is the case that some of those modules are shared with the innate communication capacities of other species. As we have seen, in recent years Chomsky has argued that the language capacity that is uniquely human (being specific to the domain of language) is the ability to form recursive structure. This has led to research to find out whether other animal species can also “handle” recursive patterns either in their communication systems or in other cognitive systems.
The study of individuals with hippocampal damage and amnesia provides a compelling opportunity to directly test the contribution of declarative memory to communication and language. Over the past two decades, we have documented disruptions in discourse and conversation, as well as in more basic aspects of language, in individuals with hippocampal amnesia, including at the word, phrase, and sentence level across offline and online language processing tasks. This work highlights the critical contribution of hippocampal-dependent memory to language and communication and suggests that hippocampal damage or dysfunction is a risk factor for a range of language and communicative disruptions even in the absence of frank amnesia or aphasia. This work also raises questions about the reality and utility of the historical distinction between communication and language in defining cognitive-communication disorders, as individuals with isolated memory impairments show deficits that cut across both communication and language.
When thinking about emotional expressions, most would probably envision facial expressions (e.g., smiling, scowling) or vocalizations (e.g., crying, laughter). Here we focus on the emotional postures and movements of the body – an important, but fairly understudied, signal for emotion perception. During emotional episodes, humans often position and move their bodies in consistent ways that may (or may not) signal their underlying feelings and future actions. We briefly review the historical antecedents of this literature, as well as current knowledge on the neural processing, developmental trajectory, and cultural differences in the emotional perception of body language. We continue by examining the role of the body as a contextualizing agent for disambiguating facial expressions, as well as their inverse relationship – from faces to bodies. Future directions and speculations about how this emerging field may evolve are discussed.
With their shallow reliefs, depictions of contorted movement, and a historically inflected formal style, first century BCE and CE Neo-Attic reliefs are distinct among Greek and Roman relief sculpture. Primarily made for an elite Roman audience, the reliefs invoke stylistic techniques from different periods of Greek art and creatively combine figural types taken from earlier objects. The scenes are also characterized by a sense of spacelessness, established by the representation of figures, objects, and landscapes in shallow relief and by the frequent distorted play with depth and space. By considering a select number of examples, this chapter argues that the reliefs’ formal elements work together to evoke multiple temporalities and spaces, so that the distinct time and space created by and in these reliefs allowed them to become powerful sites of contact. In connecting their audience with an idealized past that takes place in a generic space, the reliefs offered viewers the opportunity not only to engage visually with the past temporalities of Archaic and Classical Greece, but also to become immersed in them by sharing the same space as the stylized figures, who could slip from their timeless and spaceless background to the Roman world in which they were displayed.
This Element in Construction Grammar addresses one of its hottest topics and asks: is the unimodal conception of Construction Grammar as a model of linguistic knowledge at odds with the usage-based thesis and the multimodality of language use? Are constructions verbal, i.e. unimodal form-meaning pairings, or are they, or at least are some of them, multimodal in nature? And, more fundamentally, how do we know? These questions have been debated quite controversially over the past few years. This Element presents the current state of research within the field, paying special attention to the arguments that are put forward in favour and against the uni-/multimodal nature of constructions and the various case studies that have been conducted. Although significant progress has been made over the years, the debate points towards a need for a diversification of the questions asked, the data studied, and the methods used to analyse these data.
Early language development has rarely been studied in hearing children with deaf parents who are exposed to both a spoken and a signed language (bimodal bilinguals). This study presents longitudinal data of early communication and vocabulary development in a group of 31 hearing infants exposed to British Sign Language (BSL) and spoken English, at 6 months, 15 months, 24 months and 7 years, in comparison with monolinguals (exposed to English) and unimodal bilinguals (exposed to two spoken languages). No differences were observed in early communication or vocabulary development between bimodal bilinguals and monolinguals, but greater early communicative skills in infancy were found in bimodal bilinguals compared to unimodal bilinguals. Within the bimodal bilingual group, BSL and English vocabulary sizes were positively related. These data provide a healthy picture of early language acquisition in those learning a spoken and signed language simultaneously from birth.