Genuinely broad in scope, each handbook in this series provides a complete state-of-the-field overview of a major sub-discipline within language study, law, education and psychological science research.
This chapter examines how morphology has been implicated in studies of comparative syntax. A major theme is how different theories define morphology and how such definitions relate to research in morphological theory. I first look at what it might mean to reduce syntactic variation to morphology or the lexicon. While some well-known approaches have relatively little to do with morphology as understood in morphological theory, one of them provides a plausible way of encoding variation in the features of syntactic terminals. I then ask what Distributed Morphology adds to the study of universals and variation, focusing on the PF interface and showing that there are possible universals in this part of the grammar, but they must be sought at an appropriate level of abstraction. Finally, I examine a conception of PF that arises in some Minimalist discussions where it is posited that apparent syntactic variation is driven by the need to have syntactic structures connect with language-external systems. If correct, this would mean that much of what has been analyzed as part of the syntax is actually part of the PF component. However, how this can be investigated empirically remains an open question.
In this chapter I first present the original conception of head-movement, as developed in the 1980s by Travis (1984), Koopman (1984), and Baker (1988), along with some of the empirical domains where the operation was applied. I then present the critique of head-movement put forward in Chomsky (2001: 37-38), followed by a discussion of some of the alternatives developed subsequently. Next, I move to a consideration of how head-movement might be integrated with Chomsky’s (2013, 2015) Labeling Algorithm, suggesting that this approach is consistent with a variant of the proposals in Roberts (2010). Finally, I will attempt to develop a new proposal based on Moro and Roberts (2022).
This chapter describes the syntactic structure and cross-linguistic variation of coordinate structures, with special attention to the nature of the coordinating conjunction, resolution rules for agreement with coordinate subjects, the opacity of coordinate structures, and signs of asymmetry within coordinate structures. The chapter also discusses the derivation of coordinate structures within generative grammar, and calls into question the commonly held view that coordinate structures are headed by the coordinating conjunction.
This chapter provides an overview of the various approaches to theoretical modeling of syntactic variation in human language. Three main types of syntactic variation, i.e., word order variation, silence and doubling, are described and then used to demonstrate how different generative theories capture these phenomena. The generative theories discussed include Transformational Generative Grammar (TGG), Government and Binding (GB) Theory, Minimalism (including Optimality Theory), Nanosyntax, Distributed Morphology, and post-GB Parameter Theory (including parameter hierarchies and the Universal Spine Hypothesis). Taking into account a great many different languages and dialects, the chapter shows that strong progress has been made in our understanding of the nature, range, and limits of syntactic variation.
This chapter introduces the reader to the field of comparative syntax and to the handbook. It briefly addresses the history and the future of the field and describes its research goals. The second part of the chapter provides an overview of the chapters, with brief summaries.
While syntactic rules are clearly conditioned by constituent structure, it is less clear whether such rules refer to linear order. In this chapter, I explore several apparently syntactic phenomena in which linear order appears to play a role and discuss to what extent the data warrant a syntactic or an extra-syntactic account. The phenomena in question are (i) mirror-image effects in word order variation, (ii) left–right asymmetries in syntactic dependencies like movement, (iii) harmony within extended projections (the tendency for dependents to appear on the same side of a lexical head), (iv) harmony between extended projections (the tendency for extended projections to share left- or right-headedness), and (v) compactness (the tendency for the verb and the object to be linearly adjacent in SVO languages but not in SOV ones). At least some of the data may allow an extra-grammatical account, in terms of acquisition (harmony) or parsing (compactness and left–right asymmetries in dependencies). However, whether such an account merely motivates grammatical constraints that refer to linear order or makes such constraints superfluous is hard to determine.
This chapter discusses phenomena associated with the Voice head responsible for introducing external arguments in a sentence. We suggest that Voice is not a special type of functional head, but rather has a set of properties that are shared by other functional heads: e.g., whether the head is present or absent, whether it requires a specifier, assigns case, or introduces a thematic interpretation. The defining feature of Voice is that it is the highest argument-introducing head, sitting at the boundary between the verb phrase and the inflectional domain. We also highlight some of the challenges in cross-linguistic comparison of Voice phenomena, including the fact that the Voice head may be realized in various forms, including affixes, light verbs, reflexive pronouns, and sets of phi-features. We argue that a precise understanding of Voice can lead to a more fine-grained set of questions that can be applied to other phenomena in syntax, and that uncovering more details about the properties of what is and isn’t attributable to Voice can help us identify the fundamental building blocks that languages use to express the presence, absence, and interpretation of arguments in general.
This chapter discusses a number of acquisition studies, involving a variety of languages and different experimental methods, looking at both universal and language-specific properties.
On the universal side, children’s interpretation of scope-ambiguous sentences and their acquisition of word order are discussed in the light of mechanisms relying on prosody and/or knowledge of universal properties of function vs. content words. On the language-specific side, the influence of the L1 in early second language acquisition is debated, as is children’s interpretation of “double negative” sentences. The studies show that children’s knowledge of language is structure-dependent; that infants are sensitive to content and function words and use them to begin acquiring the lexicon and the syntax of their language; and that toddlers are sensitive to the order of words in their language before they combine words themselves. Furthermore, the acquisition of agreement morphemes may progress differently in first and early second language acquisition, as a function of the learners’ L1.
Within the domains of morphosyntax and syntax, sign languages have been shown to share many interesting properties with spoken languages. At the same time, it has been demonstrated that sign languages are not a homogeneous group. Rather, they differ from each other structurally and, what is more, the attested differences often align with typological patterns that have been identified based on the study of spoken languages. In this chapter, we offer a discussion of selected syntactic phenomena that have been studied from a cross-modal perspective, drawing data from a wide variety of sign languages. We address linearization issues (e.g., constituent order), wh-questions, and various types of complex sentences, and for each of these topics we evaluate to what extent it is shaped by the affordances of the visual-spatial modality. The general picture that emerges is that sign languages – notwithstanding certain modality-specific characteristics – generally exhibit structural complexity and variation fully on a par with spoken languages. This, in turn, strongly suggests that formal models developed for spoken languages can and should be applied to sign languages.
This chapter is about the comparative syntax of adpositions. We suggest that the category P is universal, and argue that in addition to functional adpositions, there are also lexical Ps. We make our way through the extended projections of P from bottom to top, and argue that the composition of the extended PP parallels that of the extended VP. We show that in addition to PP-internal projections for grammatical aspect (coding place vs. path distinctions) and for event aspect (coding boundedness distinctions), there are also projections dedicated to deixis and to discourse particles. The maximal extension of the PP provides an escape hatch for extraction, much like Spec, CP in the clause. We also discuss the relationship between Figure, Ground, and Theme, suggesting that Figure and Theme should be kept distinct. Finally, we suggest that cross-linguistically there are two major ways of building PPs: they can bottom out in a lexical P or in a relational noun. In the latter case, the Ground is merged as a possessor of the relational noun, and PPs show morphological parallels with possessive structures.
An active area of research in psycholinguistics concerns the cognitive mechanisms that are used to form syntactic dependencies in real-time sentence processing. Comprehenders make skilled use of working-memory resources to incrementally store and update syntactic structures (backward dependency formation) and use fine-grained probabilistic knowledge to anticipate the ways in which syntactic dependencies will eventually be resolved (forward dependency formation). While this broad picture is generally acknowledged, exactly how these processes play out in typologically diverse languages remains underexplored. In this chapter, we describe the various ways in which different grammatical systems pose diverse challenges for this general picture of sentence processing and present experimental research that addresses how core processes of backward and forward dependency formation play out in typologically diverse contexts. We argue that comparative sentence processing research presents evidence for shared cognitive mechanisms used for dependency formation across languages, but also points to several ways in which current theories need to be expanded to capture cross-linguistic variation.
This chapter re-examines cross-linguistic reflexes and effects of successive cyclic movement in Ā-dependencies, in particular wh-questions. It takes the overview papers of Den Dikken (2017) and Van Urk (2020) as a starting point and provides an update as well as new discussions. Assuming the standard Minimalist program with CP and vP as phases, it considers both CP-level and vP-level potential effects of successive cyclic movement. We note that there are essentially no effects that are exhibited at both the CP and vP levels. Furthermore, the CP-level effects are quite different from the vP-level effects, with the former related to phrasal material left behind at intermediate CPs. In Section 4 we discuss alternative views of some of the vP-level effects, which suggest a potential source of the difference between CP-level and vP-level effects.
Generative syntax aims to characterize knowledge of language in computational terms: What is the nature of the computational system in an individual’s mind that underlies their linguistic abilities? This chapter explores syntactic variation from this computational vantage point, making use of concepts that have emerged from the rich body of work in the study of grammar formalisms. The perspective we adopt focuses on syntactic dependency patterns and the relationship between these patterns and constituency. We show how this allows for grammatical phenomena to be characterized in terms of formal properties of the ways in which they require constituents to be combined: whether the constituents are continuous or discontinuous in the surface string; whether this surface discontinuity is bounded or unbounded; whether discontinuous constituents are interleaved in a manner that has been termed "well-nested"; and whether constituents are copied in the surface string. These properties speak to the nature of the grammatical device necessary to characterize a given dependency and allow us to provide a new kind of cross-linguistic typology in explicitly computational terms.
In this chapter, we address questions related to cross-linguistic variation in the organization of features and their morphological expression in words (or morphemes), focusing on implications for syntax. The chapter brings together topics pertaining to both words and features, two types of primitives that are not the same for most syntactic frameworks. The questions that guide the content of the chapter are: Is there a universal feature hierarchy? How are these features organized? How are they expressed? Even though there is no uniform answer to each of these questions, there seem to be clear tendencies in both domains, features and word-building, in relation to syntax. However, the limits of variation and the details of specific issues are still active areas of investigation. Topics that are discussed in relation to these questions are the status of features as binary or privative; the bundling of features in one or multiple heads in the clausal spine; the expression of a morphosyntactic feature as an affix, clitic, or freestanding element; and polysynthetic languages.
An A-dependency is usually characterized in terms of a relationship between different syntactic positions that is based on case, agreement, theta-role assignment, or binding. This chapter starts with an examination of the defining properties of A-dependencies, focusing on case and agreement as potential driving forces for the creation of such dependencies, and the debate surrounding these topics. We then explore the cross-linguistic variation in the syntax of A-dependencies, illustrating it with dependencies that cross a clausal boundary. Specifically, we examine different instances of raising and control, as well as the theories formulated to account for such phenomena.