Several authors express doubts that the ongoing accumulation of data in language research will ever coalesce into a coherent theory of spoken language. Some point to a basic problem: an ontological incommensurability between the basic concepts of language analysis and instrumental observations. What is the source of this incommensurability? More importantly, why does the problem persist despite advances in instrumental techniques extending to neuroimaging?
The epilogue serves to recall the essential problem of the ontological incommensurability between instrumental evidence at various levels of observation and writing-induced concepts of language analysis and theory. An illustration is used to summarize how this essential problem facing language science can be addressed.
Research on semantics has largely focused on lexemes in experiments that control for context effects. Several limitations of this approach are discussed, along with the ongoing debate between embodied and disembodied viewpoints on the nature of semantic representations. While studies of vocabulary development support a grounding of early semantics in the sensory experiences that accompany speech, a lexico-semantic approach fails to acknowledge that meaning in utterances is not formatted in terms of words as it is in texts and dictionaries. An examination of polysynthetic languages shows that a reference to the sensory chunking of speech is essential in grounding semantic representations in a biological process of utterance segmentation. Moreover, whereas an embodied approach focuses on activations of dictionary-like meanings, a demonstration is provided showing that listeners activate episodes of speech acts. On the issue of the nature of semantic representations, debates in research on lexico-semantics overlook that neural entrainment and cross-frequency coupling constitute a grounding mechanism that can bind multimodal experiences to structures of motor speech.
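As an illustrative aside, one common way to quantify cross-frequency coupling of the kind invoked here is a phase-amplitude coupling index: the amplitude of a faster rhythm is tested for modulation by the phase of a slower rhythm. The minimal Python sketch below uses a mean-vector-length estimate; the band limits, filter order, and variable names (e.g., a recorded trace `signal`) are illustrative assumptions, not the specific analysis described in the chapter.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def phase_amplitude_coupling(signal, fs, phase_band, amp_band):
    """Mean-vector-length estimate of phase-amplitude coupling: how strongly
    the amplitude of a fast rhythm is modulated by the phase of a slow rhythm."""
    def bandpass(x, lo, hi):
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return filtfilt(b, a, x)

    phase = np.angle(hilbert(bandpass(signal, *phase_band)))  # slow-band phase
    amp = np.abs(hilbert(bandpass(signal, *amp_band)))        # fast-band amplitude
    # Length of the amplitude-weighted mean phase vector, normalized by mean amplitude
    return np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp)

# Hypothetical usage: delta-phase / theta-amplitude coupling in a 250 Hz trace
# mi = phase_amplitude_coupling(signal, fs=250, phase_band=(0.5, 3), amp_band=(4, 10))
```

A value near zero indicates no systematic coupling; larger values indicate that fast-band amplitude clusters at particular slow-band phases.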
It has long been established through instrumental methods that conventional units of linguistic analysis are not present in speech. Yet the tendency has been to overlook instrumental records and to focus instead on indirect observations, generally involving a reference to writing signs, in maintaining orthographic concepts. This has been the case in the historical debate on the existence of letter-like phonemes. Indirect evidence, from transcribed spoonerisms to the invention of the Greek alphabet, has been interpreted as reflecting an awareness of phonemes even in illiterate speakers. Some claim that an awareness of letters arises from an innate competence for language. A critical review reveals that much of this indirect evidence is inherently circular: a reference to alphabet signs cannot validly serve to investigate an awareness of letter-like entities. Several authors have criticized the alphabetic centrism of the assumption that alphabet systems reflect an innate awareness of phonemes. Similar problems arise with other writing-induced concepts that have guided interpretations of results and experimental designs.
A critical review of models of speech-motor control serves to illustrate the ongoing problem of elaborating an interface between speech and concepts of phonemes or “syllables” as groups of phonemes. The problem extends to neuro- and psycho-linguistic models. In addressing the issue of “the interface that never was,” several lines of evidence are presented that demonstrate syllable-size cycles as basic sequencing units of articulation, muscle activation, and representation in sequence memory. The evidence also suggests that a conceptualization of speech in terms of linguistic-type +/– features, taken to be timed in letter-like bundles, has oriented models but fundamentally misrepresents both the timing and the graded control of muscles and articulatory motions. Instrumental records of graded control support coherent syllable-size cycles as basic sequencing units, and evidence is discussed with a view to how syllable-internal timing can relate to intrinsic properties of relaxing tissues, which do not imply a sequential control of closing and opening motions within a motion cycle.
In the latter half of the twentieth century, language theory adopted a formalist approach that focused on sentence generation. While generative models were originally intended as computational accounts of sentence grammaticality, they were later taken to reflect an inborn competence and a “language learning device.” Yet formal grammars were worked out from analyses of script using orthographic units and categories. Several problems arose in bootstrapping grammatical categories in the mind to units like words and phrases, which entailed a shoehorning of concepts of syntax onto children's speech. While theory-driven perception research sought to confirm children’s recognition of words, studies of speech corpora indicate that young children do not manipulate words as separate units but rather chunked clusters of items. In documenting these problems, recent evidence is discussed showing that syntactic category information may not be represented as properties of words in the brain, undermining the hypothesis of a language acquisition device.
The entrainment of neural oscillations to attributes of signals provides a key principle by which one can evaluate how the brain interfaces with structures of motor speech. For many authors, the frequency-specific entrainment of delta (< 3 Hz) and theta (4–10 Hz) oscillations to groups and syllable-size energy modulations defines processing frames. However, there is little agreement on the type of information that is processed in these frames. A review is provided of diverging views on the role of entrainment and of controversial claims that oscillations entrain to non-sensory units like words and phrases. A critical experiment is presented showing that, whereas theta oscillations entrain to acoustic attributes even in sequences of tones, delta entrains specifically to signature marks of chunking in speech stimuli regardless of whether the stimuli are meaningful utterances or meaningless series of syllables. By this evidence, delta waves do not entrain primarily to putative syntactic units but more generally to chunks of articulated sounds, which is consistent with a body of evidence demonstrating that chunking is a domain-general principle involved in the processing of motor sequences.
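To make concrete how entrainment in these bands is typically quantified, the sketch below estimates the coherence between the amplitude envelope of a speech recording and a simultaneously recorded neural signal, averaged within a delta or theta band. This is a minimal illustration under stated assumptions (both signals resampled to one common rate, hypothetical variable names such as `speech` and `eeg`), not the analysis pipeline of the experiment described above.

```python
import numpy as np
from scipy.signal import hilbert, coherence

def band_coherence(speech, neural, fs, lo, hi):
    """Mean coherence between the speech amplitude envelope and a neural
    signal within a frequency band (e.g. delta: 0.5-3 Hz, theta: 4-10 Hz)."""
    envelope = np.abs(hilbert(speech))               # slow energy modulations of speech
    f, coh = coherence(envelope, neural, fs=fs, nperseg=int(8 * fs))
    band = (f >= lo) & (f <= hi)                     # keep only the band of interest
    return coh[band].mean()

# Hypothetical usage with recordings resampled to a common 250 Hz rate:
# delta_coh = band_coherence(speech, eeg, fs=250, lo=0.5, hi=3.0)
# theta_coh = band_coherence(speech, eeg, fs=250, lo=4.0, hi=10.0)
```

Comparing such band-limited coherence across stimulus types (utterances, syllable series, tone sequences) is one way the frequency-specific claims above can be tested.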
Utterances are communicative acts. They bear observable structures that relate to constraints on actions and on the processing of sequences of actions. In viewing utterances this way, rather than as sentences on a page, it is essential to consider that oral communication rests on a basic speaker–listener parity, which is achieved through motor-sensory coupling. This coupling applies not only to articulatory-acoustic features but also, at a basic level, to multimodal information that binds to structures of motor speech and serves to constitute semantic representations. Research on motor-sensory coupling is discussed with a focus on the adaptation of couplings with speech experience. These adaptations entail different types of learning, including reinforcement, supervised, and Hebbian learning, that relate cortical and subcortical processes. Whereas motor-sensory coupling at cortical levels is well known, an outline of proposals is provided bearing on the role of subcortical systems. A process of neural entrainment is presented as a pivotal principle by which multisensory information couples to structures of motor speech.
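For readers unfamiliar with the Hebbian component of such coupling, the toy Python sketch below shows the generic rule: connections between co-active sensory inputs and motor-speech units are strengthened in proportion to their joint activity. The array sizes, learning rate, and activity values are illustrative assumptions only and do not correspond to any model proposed in the chapter.

```python
import numpy as np

def hebbian_update(w, pre, post, lr=0.01):
    """One Hebbian step: weights linking co-active sensory (pre) and
    motor-speech (post) units grow with their joint activity."""
    return w + lr * np.outer(post, pre)

# Hypothetical toy trial: 3 multimodal features co-occurring with 2 motor units
w = np.zeros((2, 3))
sensory = np.array([1.0, 0.0, 1.0])   # multimodal input present on this trial
motor = np.array([0.5, 1.0])          # motor-speech activity on the same trial
w = hebbian_update(w, sensory, motor)  # co-active pairs gain weight
```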
Constraints on processes of motor speech shape syllable-size cycles and chunked sequences of these cycles. Since these units involve pulses of air, they are also structured in terms of respiratory functions. Mechanisms of speech breathing are discussed with a view to how they constrain the number of syllables and morphemes within breath units of speech, or “utterances.” Results are presented showing that a standard measure of “mean length of utterance,” which involves morpheme counts and is viewed as an index of grammatical development, is influenced by the growth of respiratory volumes. This suggests that developing combinatorial assemblies within utterances are linked to maturational changes in motor speech. The chapter concludes with a summary of how observable structures of spoken language address a writing bias that comes from analyzing speech in terms of words and sentences on a page. Compared to these culture-specific concepts, syllable-size cycles, chunks, and utterances as breath units of speech constitute biologically based structures of spoken language and language processing.
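For reference, the standard measure mentioned above is computed as the total number of morphemes in a speech sample divided by the number of utterances. The short Python sketch below shows this arithmetic; the example utterances and their morpheme segmentation are hypothetical and serve only to illustrate the calculation.

```python
def mean_length_of_utterance(utterances):
    """Mean length of utterance (MLU) in morphemes: total morpheme count
    divided by the number of utterances in the sample. Each utterance is
    given here as a pre-segmented list of morphemes."""
    total_morphemes = sum(len(u) for u in utterances)
    return total_morphemes / len(utterances)

# Hypothetical two-utterance sample ("doggie run-ing", "want more juice"):
sample = [["doggie", "run", "-ing"], ["want", "more", "juice"]]
print(mean_length_of_utterance(sample))  # -> 3.0
```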