An increasing number of studies report that different forms of rhythmic stimulation influence linguistic task performance. First, this chapter describes the extent to which the construction of a tree-like structure, in which lower-level units are combined into higher-level constituents, could be subserved by similar mechanisms in linguistic syntax and in rhythm. Second, we review and categorize rhythmic stimulation findings according to the temporal delay between the rhythmic stimulation and the linguistic task it influences, the precise relationship between the rhythmic and linguistic stimuli used, and the nature of the linguistic task. Lastly, the chapter discusses which categories of rhythmic stimulation effects can be interpreted within a framework based on a shared cognitive system responsible for hierarchical structure building.
Recent studies have shown that neural activity tracks the syntactic structure of phrases and sentences in connected speech. This work has sparked intense debate, with some researchers aiming to account for the effect in terms of the overt or imposed prosodic properties of the speech signal. In this chapter, we present four types of arguments against attempts to explain putatively syntactic tracking effects in prosodic terms. The most important limitation of such prosodic accounts is that they are architecturally incomplete, as prosodic information does not arise in speech autonomously. Prosodic and syntactic structure are interrelated, so prosodic cues are informative about the intended syntactic analysis, and syntactic information can be used to aid speech perception. Rather than trying to attribute neural tracking effects exclusively to one linguistic component, we consider it more fruitful to think about ways in which the interaction between the components drives the neural signal.
Inflectional morphology refers to the mapping from grammatical information to surface forms, which are typically realized as morphemes. This mapping often exhibits fusion, where several abstract features are expressed in a single morpheme that cannot be decomposed into meaningful parts. Here, we discuss crosslinguistic generalizations of morphological fusion. We argue that fusion reflects principles of efficient processing, as formalized by the memory–surprisal tradeoff (Hahn, Degen, & Futrell 2021), which is based on information-theoretic models of language processing from psycholinguistics. We first show that the existence of fusion itself can, in some situations, lead communicative codes to be more efficient under our processing model. In particular, we show via simulation that fusion of highly correlated features is more efficient for processing, whereas agglutination is more efficient when features are less correlated. We next discuss crosslinguistic patterns of fusion in real languages. First, we analyze well-known generalizations about features that are commonly fused across languages (e.g. tense, aspect, and mood), as well as a typological pattern regarding suppletion. In both cases, we find that the universals we study tend to reflect a tendency toward more efficient structure under our model of language processing. Finally, we use paradigm and frequency data from four languages to study informational fusion, a gradable measure of fusion defined in Rathi et al. 2021. We find that informational fusion is higher when features are highly correlated, which suggests that gradable fusion is also influenced by optimization for the memory–surprisal tradeoff.
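The information-theoretic intuition behind this result can be illustrated with a toy calculation (a sketch, not the authors' actual simulation; all distributions below are invented). Encoding two features jointly in a fused morpheme costs at most the joint entropy H(A,B), whereas encoding them with separate agglutinative morphemes costs H(A) + H(B); the saving from fusion equals the mutual information I(A;B), which is large only when the features are correlated:

```python
import math

def H(dist):
    """Shannon entropy (bits) of a probability distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

def joint_entropies(joint):
    """Given P(a, b) as a dict {(a, b): p}, return H(A), H(B), H(A,B), I(A;B)."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0) + p
        pb[b] = pb.get(b, 0) + p
    ha, hb, hab = H(pa.values()), H(pb.values()), H(joint.values())
    return ha, hb, hab, ha + hb - hab

# Two binary features that usually co-vary (e.g. correlated tense/aspect values):
correlated = {(0, 0): 0.45, (1, 1): 0.45, (0, 1): 0.05, (1, 0): 0.05}
# The same two features, statistically independent:
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}

for name, joint in [("correlated", correlated), ("independent", independent)]:
    ha, hb, hab, mi = joint_entropies(joint)
    # Fused code costs H(A,B); agglutinative code costs H(A) + H(B).
    # The fusion saving is their difference, the mutual information I(A;B).
    print(f"{name}: fused={hab:.3f} bits, separate={ha + hb:.3f} bits, saving={mi:.3f} bits")
```

For the correlated case the fused code is cheaper by about half a bit per form; for independent features the saving is exactly zero, which is the toy analogue of agglutination being preferable when features are less correlated.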
The present study investigated the effects of (in)congruence between a referent’s lifetime (alive vs. dead) and verb tense during language processing, assessing to what extent these effects are modulated by the source of referent-lifetime knowledge. A referent’s lifetime status (dead vs. alive) was conveyed either via a known famous (Experiment 1) or unknown (Experiment 2) name, or was primed non-linguistically via a photograph of a known famous referent (Experiment 3). The findings suggest that referent-lifetime information influenced the processing of verb tense across the different context sources, but not at the earliest point possible (the verb). Instead, lifetime-tense congruence effects emerged two words later (Experiments 1 and 2), or in the sentence-final region (Experiment 3). The presence and size of congruence effects were graded by lifetime context: larger effects were elicited in Experiment 1 than in Experiment 2 in both tenses, with significant effects in the present perfect condition only in Experiment 3. In all, referent-lifetime status modulated tense processing in the expected direction, but whether effects emerged in post-verb regions or at the sentence end varied with how referent-lifetime knowledge was accessed. This temporal variability needs to be considered when accommodating context effects in processing accounts.
Across the world's languages there is a well-known, fairly strong asymmetry in the affixation of grammatical material: suffixes considerably outnumber prefixes in typological databases. This article argues that prosody, specifically prosodic phrasing, plays an important part in bringing about this asymmetry. Prosodic word and phrase boundaries occur after a clitic function word preceding its lexical host frequently enough to impede the fusion required for affixhood. Conversely, prosodic boundaries rarely, if ever, occur between a lexical host and a clitic function word following it. Prosody therefore does not impede the fusion process between lexical hosts and postposed function words, which consequently become affixes more easily.
Evidence for the asymmetry in prosodic phrasing is provided from two sources: disfluencies, and ditropic cliticization, that is, the fact that grammatical proclitics may be phonological enclitics (i.e. phrased with a preceding host), but grammatical enclitics are never phonological proclitics. Earlier explanations for the suffixing preference have neglected prosody almost completely and have thus also missed the related asymmetry in ditropic cliticization. More importantly, the evidence from prosodic phrasing suggests a new avenue for explaining the suffixing preference. The asymmetry in prosodic phrasing, which, according to the hypothesis proposed here, is a major factor underlying the suffixing preference, has a natural basis in the mechanics of turn-taking as well as in the mechanics of speech production.
This experimental study explored how adopting a deceptive stance affects linguistic processes during real-time production of multi-sentence texts in speaking and writing. Language production involves planning, monitoring and editing – processes that give rise to and are shaped by fluctuations in processing demands. Deception is assumed to influence these processes as speakers and writers manage competing communicative goals: to be coherent while concealing the truth. Narratives were elicited by asking participants to account for events from four short films: two truthful and two deceitful, in both speaking and writing. In speaking, deception decreased the total number of pauses, but in longer deceptive texts, pausing instead increased, suggesting adaptive adjustments to regulate overt cues to lying. In writing, deception decreased text revisions and altered pause behaviour, suggesting that writers modified their production patterns when altering information. Together, these findings suggest that deceptive language production involves shifts in planning, monitoring and editing processes that manifest differently across modalities: while speech shows suppression of pauses, writing reveals subtle changes in revision and pausing behaviour. These results highlight modality-specific signatures of deception and demonstrate how speakers and writers dynamically adapt their language production processes to align with communicative intent.
This study investigates whether metaphors and similes are processed the same way or not. Comparison accounts of metaphor claim that metaphors and similes use the same cognitive mechanisms because metaphors are implicit similes, while Categorization accounts claim that the two figures of speech require different cognitive mechanisms. It is unclear which position has the most support. We address this by introducing the distinction between single and extended metaphors to this debate. Several experiments have shown that a metaphor preceded by another metaphor is read faster than a single metaphor. If similes in extended and non-extended contexts display a similar processing difference, this would support views saying that metaphors and similes are processed the same way. If not, it would be more in line with the view that they are processed differently. Using an eye-tracking reading paradigm, we find that the difference between processing single and extended metaphors does not hold in the case of simile comprehension. This is more compatible with Categorization accounts than with Comparison accounts; if the cognitive mechanism behind metaphor and simile processing is the same, we would expect there to be a comparable processing difference between metaphors and similes in the single and extended conditions.
Linguistic illusions are cases where we systematically misunderstand, misinterpret, or fail to notice anomalies in the linguistic input, despite our competencies. Revealing fresh insights into how the mind represents and processes language, this book provides a comprehensive overview of research on this phenomenon, with a focus on agreement attraction, the most widely studied linguistic illusion. Integrating experimental, computational, and formal methods, it shows how the systematic study of linguistic illusions offers new insights into the cognitive architecture of language and language processing mechanisms. It synthesizes past findings and proposals, offers new experimental and computational data, and identifies directions for future research, helping readers navigate the rapidly growing body of research and conflicting findings. With clear explanations and cross-disciplinary appeal, it is an invaluable guide for both seasoned researchers and newcomers seeking to deepen their understanding of language processing, and a vital resource for advancing the field.
Multi-word expressions (MWEs) are fixed, conventional strings of language (e.g. idioms, collocations, binomials, proverbs) which have been found to be widespread in language use. Research has shown that MWEs exhibit an online processing advantage over control phrases for first language (L1) and second language (L2) speakers. While this line of research has helped us better understand the nature of MWEs and the factors that may influence their processing in real time, several gaps remain that future research should address. In this piece, we focus on four main topics related to the online processing of MWEs: (1) comprehension of MWEs by L1 and L2 speakers, (2) production of MWEs by L1 and L2 speakers, (3) the processing of modified MWEs by L1 and L2 speakers, and (4) the processing of MWEs by L1 children. Under each topic, we propose nine research tasks that will further advance our understanding of MWE processing in real time. We conclude with the relevance of MWE processing research to L2 teaching and learning.
Previous research has shown that motor information influences visual and semantic tasks. However, little is known about the specific influence of structural, action-relevant information on language processing. In the current study, participants were instructed to observe a graspable prime object (e.g., a frying pan) that could be presented with its action-relevant component (its handle) oriented either toward the left or toward the right. Subsequently, they performed a property verification task on a following target word, which could describe an action-relevant (e.g., handle) or action-irrelevant (e.g., ceramic) characteristic of the just-encountered object. They were required to respond with a keypress on either the same side as the depicted action-relevant component of the prime object (the compatible key) or the opposite side (the incompatible key). Results show that property verification judgements for action-relevant words were faster in the spatially compatible condition than in the spatially incompatible condition, whereas judgements for action-irrelevant target words were not affected by spatial compatibility. These findings suggest that spatialized object properties are not mandatorily linked to manual response biases. Rather, this link seems to be modulated by trial-by-trial changes in conceptual focus.
Transformer-based large language models are receiving considerable attention because of their ability to analyse scientific literature. Small language models (SLMs), however, also have potential in this area, as they have smaller compute footprints and allow users to keep data in-house. Here, we quantitatively evaluate the ability of SLMs to (i) score references according to project-specific relevance and (ii) extract and structure data from unstructured sources (scientific abstracts). By comparing SLMs’ outputs against those of a human on hundreds of abstracts, we found (i) that SLMs can effectively filter literature and extract structured information relatively accurately (error rates as low as 10%), though not with perfect yield (as low as 50% in some cases), (ii) that there are tradeoffs between accuracy, model size and computing requirements, and (iii) that clearly written abstracts are needed to support accurate data extraction. We recommend advanced prompt engineering techniques, full-text resources and model distillation as future directions.
This chapter explores the unique relationship between music and individuals with autism spectrum disorder (ASD). It highlights the remarkable musical abilities often found in people with autism, contrasting with their challenges in social interaction and communication. Research shows that music can serve as a bridge, facilitating social interaction and emotional expression for those on the spectrum. Brain imaging studies reveal how brain regions typically associated with language processing are activated in autistic individuals when they engage with music. This suggests that music may offer an alternative pathway for communication and emotional understanding. The chapter also discusses the therapeutic applications of music for individuals with autism, such as auditory-motor mapping training (AMMT), which has shown promise in improving verbal communication and social skills. Music therapy can also foster emotional expression, social connection, and a sense of belonging. The chapter concludes by emphasizing the importance of understanding and embracing the individual’s musical preferences and strengths in order to support their development and well-being.
In this chapter we address the question of whether or not language acquisition is largely implicit in nature. After reviewing key constructs (e.g., explicit and implicit knowledge, explicit and implicit processing/learning, intentional and unintentional learning), we discuss the major positions currently under scrutiny in the field: (1) Explicit learning is necessary; (2) explicit learning is beneficial; (3) explicit learning does little to nothing (i.e., acquisition is largely if not exclusively implicit in nature). A key issue in this chapter is how one defines “language” and how one construes “input processing.” We will review how definitions of these constructs color the researcher’s perspective on the issues.
This chapter offers an overview of the essentials of the cognitive-functional view adopted in the book, and situates the approach in the wider field of language studies, notably relative to the cognitive linguistic and functional linguistic traditions. It also introduces the general theoretical issues of concern in the book: it highlights the importance in language research of an active concern with conceptualization, and of assuming a dynamic relationship between conceptual and linguistic structures and processes (i.e. between meaning and form), and it points out the different practices in this regard in the strands of cognitive and functional linguistics. Finally, it presents the rudiments of a model called Functional Procedural Grammar, which serves as a guide and blackboard throughout the book and is elaborated further in the course of it.
This chapter returns to the theoretical concerns of the study, and to the principles at the heart of a cognitive-functional approach to modeling the cognitive processes in language use. Central are the basic principles of depth and dynamism, and the three issues emerging from them when comparing cognitive and traditional functionalist approaches in current linguistics: the (non)concern with conceptualization in linguistic analysis, the processual vs. representationalist concept of grammar, and the complex meaning-form relationship. The chapter sums up and reflects on what the analyses of the attitudinal and other semantic and functional dimensions in the preceding chapters have shown with respect to these principles and issues. Moreover, it uses these insights to dwell on wider implications, beyond the analysis of the qualificational dimensions, for our understanding of the cognitive systems involved in language use.
Processability Theory (PT) is a psycholinguistic theory of second language acquisition. The theory builds on the fundamental assumption that learners can acquire only those linguistic forms and functions which they can process. Therefore, PT is based on the architecture of the human language processor. PT is implemented in a theory of grammar that is compatible with the basic design of the language processor. This Element gives a concise introduction to the psycholinguistic core of PT, showing that PT offers an explanation of language development and variation based on processing constraints that are specified for typologically different languages and that apply to first and second language acquisition, albeit in different ways. Processing constraints also delineate transfer from the first language and the effect of formal intervention. This Element also covers the main branches of research in the PT framework and provides an introduction to the methodology used in PT-based research.
Information processing is a process of uncertainty resolution. Information-theoretic constructs such as surprisal and entropy reflect the fine-grained probabilistic knowledge that people accumulate over time, and they explain the degree of processing difficulty people encounter, for example when comprehending language. Processing difficulty and cognitive effort are, in turn, a direct reflection of predictability.
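The two constructs can be made concrete with a minimal sketch (the next-word probabilities below are invented for illustration): the surprisal of a word is the negative log probability of that word in context, and entropy is the expected surprisal over the whole next-word distribution, i.e. the comprehender's uncertainty before the word arrives:

```python
import math

def surprisal(p):
    """Surprisal in bits of an event with probability p: -log2(p)."""
    return -math.log2(p)

def entropy(dist):
    """Expected surprisal (Shannon entropy) of a next-word distribution."""
    return sum(p * surprisal(p) for p in dist.values() if p > 0)

# Hypothetical next-word probabilities after "the dog chased the ...":
next_word = {"cat": 0.6, "ball": 0.3, "philosopher": 0.1}

print(surprisal(next_word["cat"]))          # low surprisal: predictable word
print(surprisal(next_word["philosopher"]))  # high surprisal: unpredictable word
print(entropy(next_word))                   # uncertainty before the word is seen
```

On surprisal-based accounts, the high-surprisal continuation is the one predicted to incur greater processing difficulty, which is the link between predictability and effort that the passage describes.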
How do we understand any sentence, from the most ordinary to the most creative? The traditional assumption is that we rely on formal rules combining words (compositionality). However, psycho- and neuro-linguistic studies point to a linguistic representation model that aligns with the assumptions of Construction Grammar: there is no sharp boundary between stored sequences and productive patterns. Evidence suggests that interpretation alternates compositional (incremental) and noncompositional (global) strategies. Accordingly, systematic processes of language productivity are explainable by analogical inferences rather than compositional operations: novel expressions are understood 'on the fly' by analogy with familiar ones. This Element discusses compositionality, alternative mechanisms in language processing, and explains why Construction Grammar is the most suitable approach for formalizing language comprehension.
The nature and processing of semantic illusions (SIs; when speakers fail to notice an anomalous word in a sentence whose context is otherwise perfectly aligned with world knowledge) have been studied largely during first language comprehension. Although this issue is not free of controversy, findings support the Node Structure Theory, according to which SIs are a phonological and/or semantic priming effect arising from phonological and/or semantic links between the correct and the anomalous word. However, the question of whether the same underlying mechanisms can be found in bilinguals, and whether the effect is modulated by age of language acquisition (AoA) and language dominance, remains unexplored. The aim of this study was to examine this issue in sequential European Portuguese-German bilinguals (and their respective control groups) using a self-paced reading paradigm. The sentences’ language, AoA (early vs. late), and type of target word used (correct vs. anomalous) were manipulated. Results showed the occurrence of SIs, independently of language and AoA. The findings therefore suggest that SIs arise from a semantic overlap between critical words and are processed similarly in L1 and L2.