In order to understand the subtleties of sentence structure, it is necessary to understand how phrases are built from the words they contain and how phrases are combined into larger phrases and sentences. It is also necessary to understand what can happen to phrases and sentences after they are built – namely, parts of them can be moved and deleted. Movement and deletion take place under particular restrictions, and speakers “know” these restrictions, apparently without being taught. All languages share these fundamental structural properties, but the principles that underlie them are broad enough to allow considerable differences among languages. The chapter includes a sampling of these differences.
GOALS
The goals of this chapter are to:
explain how to conduct an analysis of the sentence structure of English
explain how the structure of sentences is represented in modern syntactic theory
explain the concept “poverty of the stimulus”
explain the notions “language organ” and “Universal Grammar”
present examples of subtle restrictions that limit the ways in which sentences can be constructed and interpreted
present a few examples of differences in sentence structure in languages from around the world
One of the central facts about human language that strikes us right away is its immense variability. There are anywhere from 5,000 to 8,000 languages in the world today, and all of those with more than a handful of speakers contain at least some subvarieties. In addition, languages exhibit variation over time, as any modern speaker of English who has ever tried to read Shakespeare, or even watch an old Hollywood movie, can tell you. In this chapter, our focus is on variation within languages, or dialect variation. We will explore the different types of dialect variation: regional, social class, ethnic, gender, and stylistic. We will look at patterns of variation on a number of different levels of language, including words and their meanings (lexical variation), pronunciations (phonological variation), sentence structures (syntactic variation or morphosyntactic variation), and even conventions for language use (pragmatic variation or discourse variation). We will also take a look at the intricate bond between dialect variation at any given moment in time and language variation over time, or language change. Finally, we consider the fate of dialect variation in an era of increasing globalization and find some surprising conclusions. Most of our examples of dialect variation are from English; however, nearly all languages are rich with variation. Before we begin, we must take a close look at some common beliefs about the nature of dialect variation which, upon closer inspection of actual data on dialect variation, turn out to be mistaken.
“History is universal and basic,” a history professor said during a faculty meeting. “It’s about every event that involves all people at all times and in all places.” “Yes,” observed his colleague from linguistics, “but how would you record and interpret that history without language?” Indeed, it is hard to imagine how there could even be history without language, without a means to pass a record of what has happened from one generation to the next through retold stories and sagas, even before written records. Much of the history (and prehistory) of the human species consists of the development and adaptation of various tools to meet a broad range of needs: think of the wheel, the domestication of animals, the steam engine, computers, and the internet. The development and refinement of these and all other tools could not have been accomplished without language.
The human capacity for self-awareness and abstract thought is facilitated by language, if not dependent upon it. The ability to transfer complex information, to discuss the meaning of events and possible outcomes of alternative actions, to share feelings and ideas – all these are impossible without language. The origins of language are shrouded in obscurity, but archaeological records suggest that communication with language emerged about 200,000 years ago. The ability of individuals to model the world for themselves and to communicate using language was probably the single most advantageous evolutionary adaptation of the human species.
How children acquire language has long intrigued scholars and nonscholars alike. Parents of young children are often amazed at how quickly their babies move from cooing and babbling to forceful one- and two-word demands. Linguists and psychologists, in turn, have been interested in understanding the stages and mechanisms by which all children become competent users of language (or, in most of the world, languages) by age three or four. Indeed, children from all backgrounds, and under diverse learning conditions, tend to pass through similar phases in learning their mother tongues. Researchers are interested in exploring these processes to gain a better understanding of how children accomplish this remarkable feat, but also because the nature of these processes holds important implications for larger debates in the field of linguistics. Understanding the mechanisms of how children acquire their language(s) can shed important light on the nature of language, as well as on the nature of human learning.
This chapter will first explore how researchers gather data on child language acquisition. We will review three approaches to child language acquisition research: parental diaries, observational studies, and experimental studies. After briefly discussing the advantages and disadvantages of these research approaches, we’ll turn to some of the actual data. Specifically, we will look at the major milestones of language development in phonology, semantics, morphology, and syntax. We’ll also examine the extent to which these processes differ depending on the cultural context and the language being acquired. Lastly, after reviewing some of the major research findings, we will consider how researchers explain these data, outlining the major theoretical positions in the hotly debated field of child language acquisition.
The goal of having a computer understand and communicate in a human language has long been a dream of science fiction. In recent years part of this dream has become reality as scientists have developed computer programs that can understand and learn aspects of human languages, in both written and spoken form. These systems are also capable, to varying degrees, of translating between languages. The methods these programs use derive from both linguistics and computer science, and they reveal a relationship between the patterns found in human languages and in mathematical languages. These programs can sift through large online samples of everyday language (called corpora), counting how often particular forms actually occur in everyday use. This allows linguistic rules to be weighted based on such statistics, and these weights are used by the programs to identify more likely linguistic analyses of ambiguous utterances. The success of such programs also suggests that humans might learn by induction based on statistical regularities in their experience. The practical tools developed by computational linguists can be used to carry out linguistic analyses on a larger scale than ever before. Even more significantly, these tools can radically change the way we acquire and communicate information. All these factors make computational linguistics an intellectually lively, exciting, and influential area of study.
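The corpus-counting idea described above can be illustrated with a minimal sketch. The tiny corpus and the bigram-frequency "weights" below are toy stand-ins for illustration only, not an actual system: real computational linguistics tools count over millions of sentences and weight full grammatical rules, not just adjacent word pairs.

```python
from collections import Counter

# Toy corpus standing in for a large collection of everyday language.
corpus = [
    "time flies like an arrow",
    "fruit flies like a banana",
    "the old man the boats",
    "time flies when you are having fun",
]

# Count how often each bigram (pair of adjacent words) occurs.
bigram_counts = Counter()
for sentence in corpus:
    words = sentence.split()
    for first, second in zip(words, words[1:]):
        bigram_counts[(first, second)] += 1

# Relative frequencies serve as simple weights: given an ambiguous
# utterance, the more frequent pattern suggests the more likely analysis.
total = sum(bigram_counts.values())

def weight(bigram):
    return bigram_counts[bigram] / total

# "flies like" occurs twice in this corpus, so it outweighs
# a bigram observed only once.
print(weight(("flies", "like")) > weight(("the", "old")))  # True
```

The same induction-from-frequency logic scales up: replace the toy corpus with a large text collection and the bigrams with weighted grammar rules, and the program can rank competing parses of an ambiguous sentence by how well each matches the statistics of everyday usage.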
Language is rooted in the biology of the brain. Therefore, understanding the biology of language is critical to understanding language itself. The vast majority of research on the biological bases of language has focused on brain anatomy, mainly attempting to identify which parts of the brain underlie which language functions, such as lexical or grammatical abilities. However, the biology of language can also be investigated at many other levels, from cells to molecules to genes. Moreover, the study of all these aspects of biology must be complemented by and integrated with psycholinguistic studies of how language is actually used as we speak and understand; with investigations of language acquisition; and with the theoretical linguistic examination of the architecture of language. Taken together, these approaches provide insight into the ultimate goal of understanding the biocognitive or neurocognitive bases of language. This in turn should eventually shed light on higher-level aspects of language use, such as discourse, historical change, dialect variation, and reading and writing, as well as important applications like language teaching and computer models of language.