In drawing, as in spoken language, children's “reception strategies” or understanding of graphics outruns their productive capacities by a substantial margin. Although one might hesitate to characterize children's drawings on the whole as deficient or impoverished, it is reasonable to ask why they do not move more rapidly toward a diverse interpretation of reality in line with their growing sophistication in perceiving and understanding the world.
The possible role of stereotypes
One possible explanation is that graphic inventiveness is restricted by the ubiquity of those public stereotypes that children spontaneously adopt or have forced upon them. Certainly stereotypes and other influences from external models are found in children's drawings, but they are not as pervasive as casual inspection might suggest. Even when children are in close and continuous contact with one another's drawings, there is considerable variety in their representation of even the most common objects. The drawings in Fig. 8.1 are the products of a single class of Australian public school children aged 5.6 to 7 years, drawn on the same day. The variety of forms is striking, and this is typical of much of their drawing. Even when such children do adopt stereotyped formulas, they not infrequently include their own personal versions side by side with the “public” versions, indicating that when stereotypes have been incorporated into a child's repertoire, they do not necessarily monopolize production.
It is never easy to prove in any graphic context that a failure in performance is due directly to a failure in perceptual analysis. There will always be a storage stage, however brief, between attending and acting during which information may be lost or distorted. Even when people are copying, they are continually switching backward and forward from inspecting the model to monitoring their own performance, and when the model is removed before drawing, as it is in many of these studies, storage must necessarily intervene.
Building a case for defective perceptual analysis in drawing is not made any easier by the ability of subjects to recognize, with relative accuracy, designs that they cannot draw. Even when working from memory, they can usually select the design they are attempting to produce from a set of distractors in an “identity parade,” even when the distractors are more like the model than their own completed versions are. Subjects may fail to copy a design when given generous time to inspect it, yet it is inconceivable that they would fail to recognize differences between the model and even quite subtle variants of it under such inspection conditions.
Why, then, raise defective perceptual analysis as a serious issue? The first reason is that the nature of errors in drawing and copying suggests that something associated with perception is often involved.
The studies of Jones and Stanton on repetitive drawings demonstrated that once children have developed a strategy of representation, they tend to retain it in subsequent drawings even after they have been provided with additional data about the objects they are depicting, data that would normally affect their mode of attack quite fundamentally had a graphic strategy not already been developed in their minds or put into action. These experiments provide a model for the emergence of relatively stable individual graphic versions of objects by children and at the same time help to explain why children's drawings are often not as sophisticated as their knowledge. The logic is that children tend to lose their flexibility in portraying things in the very act of generating early versions of them. That is not to say that children's drawings never change, but rather that their drawings often evolve by the modulation or amendment of existing devices, rather than through a revolutionary rethinking of the basic representational strategy.
Where do innovations occur in drawings?
One prediction we can make from this is that when innovations do appear in children's drawings, they will tend to occur not in the initial strokes of the drawings, but late in the sequence. To establish empirically whether this was in fact a fair inference, I undertook to collect a substantial corpus of repeated drawings from a group of primary school children.
Two illustrations in Eng's The Psychology of Drawing (1954) depict an assemblage of dolls drawn by a girl of 5.11 years. Most of the dolls wear cloaks, which Eng says have their origins in an error that became stylized into what she calls a “formula.” There are no details given of the construction process beyond the fact that in the initial drawing a basic triangular female figure was accidentally elaborated by two long sloping lines that were later “discovered” by the child to represent a cloak.
A close examination of the dolls drawn 3 and 10 days after the initial accidental creation of the cape motif reveals that the child used at least six different construction strategies to produce what are visually rather similar figures. Fig. 10.1(a) and (b) show the relevant parts of the two drawings, and below them in Fig. 10.1(c) are diagrams that expand the microstructure of the figures to show the various procedures.
Eng does not comment on this feature of the drawing, and when the variability first caught my attention, I assumed that if it was a real effect (as opposed to an error in reproduction, which seems unlikely), it surely represented a graphic oddity. Yet these illustrations now seem to crystallize very well what I wish to document more fully – the production of similar and distinctive graphic products by a variety of different means.
This book is an attempt to explain language structure and its relation to language cognition and acquisition. By starting at the beginning and reformulating the questions asked by current linguistic theories, we analyze the essential features of language and language users. From this base we construct a theory that avoids the complexity of many contemporary language theories and requires few presuppositions about innate language abilities. At the same time the theory is able to explain a great variety of language structures and much of the data of human language performance.
Because we start with fundamental questions about the nature of language, the book makes language science accessible to the nonspecialist. In order to provide a background for understanding our theory and its implications, we consider major contemporary theories of language in detail and evaluate related claims about language. These discussions provide a framework for evaluating our own claims. We hope that publication of this book will interest others in our approach and enlist their help in finding theoretical and empirical consequences of the theory.
Frederik Pohl and Cyril Kornbluth, when asked of a jointly authored work “Who did what?”, are reported to have replied that one of them wrote the nouns and the other the verbs. When we are asked the same question, we answer that one of us wrote the vowels and the other the consonants.
It is a privilege for me to introduce readers to what I look upon as a gem of scientific theory construction in the field of language, a new and refreshing theory of syntax and syntax acquisition that should command the attention of all those concerned with language and its learning. Moulton and Robinson provide a new and satisfying orientation to many of the issues that have occupied linguists and psycholinguists since the advent of the Chomskyan revolution more than two decades ago. Their theory draws upon the best features of modern behavioral and cognitive approaches.
There are at least three respects in which the Moulton-Robinson theory is distinctive.
First, it emphasizes the language learner's or user's powers of conceptualization as a basis for the organization of language structure, and stresses the role of the nonlinguistic environment in helping language learners to acquire that structure as it is realized in a particular language. These matters have been too long neglected in standard linguistic and psycholinguistic theories.
Second, it is explicit about what kinds of grammatical objects or entities are involved in language use and acquisition. These grammatical objects are structures embodied in the authors' highly original “orrery” and “syntax crystal” models, based on their notions of “scope” and “dependency.” The authors specify how these structures are manifested in particular constructions and arrangements in the English language – their “syntax modules.”
In the preceding chapter, we described and criticized the transformational and case models of underlying conceptual structure. Then we introduced the orrery model of underlying structure and compared it to the transformational and case models. Table 3.1 summarizes these comparisons. In this chapter we are going to present experimental evidence that the features characterizing the orrery are supported by data on human language performance. That is, we shall support our claim that underlying structure is hierarchical and unordered, contains dependency information, and is lean.
First we discuss the distinction between structure and function as it pertains to experimental verification of models of language. Then we review some evidence indicating that underlying structure is organized hierarchically rather than heterarchically. These findings favor the orrery and transformational models over the LNR version of case theory. Next we describe the work of Levelt, which supports hierarchical organization provided that the feature of dependency is present in the structure. These data favor the orrery model alone. Next we describe an experiment by Weisberg suggesting that left-to-right sequential information is not rigidly fixed in underlying structure. This finding favors the orrery model and the LNR version of case theory and makes difficulties for both the standard and case versions of transformational theory.
Next we examine the lean-rich dimension, comparing the orrery to the case model. All other things being equal, a simpler model that explains the same thing is better than a more complex one.
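To make these four features concrete before turning to the evidence, here is one way such a structure could be written down. This is a rough, purely illustrative sketch in Python, with labels and an example sentence of our own choosing; it is not the orrery notation itself.

```python
# A hypothetical illustration of what "hierarchical, unordered,
# dependency-marked, and lean" can mean as a data structure.
# The labels and the example sentence are invented for this sketch.

from dataclasses import dataclass, field


@dataclass(frozen=True)
class Concept:
    label: str                     # the concept itself; "lean" here means the node stores
                                   # no word order, no inflections, no duplicated information
    dependents: frozenset = field(default_factory=frozenset)   # unordered: a set, not a list


# "The dog chased the cat" as a hierarchy of concepts: the verb concept is the
# head, and the noun concepts depend on it.  No left-to-right sequence is
# recorded anywhere in the structure.  (How agent and patient are told apart
# is deliberately left out of this sketch.)
idea = Concept("chase", frozenset({
    Concept("dog", frozenset({Concept("definite")})),
    Concept("cat", frozenset({Concept("definite")})),
}))
```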
This appendix is divided into two parts. First we provide a list of the modules given in Chapter 5 along with a description of the kinds of connections they can make and the grammatical constructions they allow. This list will be divided into four sections for heuristic purposes, and also to provide a method for testing the modules – the syntax crystal game. The second part describes additional grammatical constructions not mentioned in Chapter 5 but considered important for theories of grammar. We provide syntax crystal modules for these constructions and discuss how they can be handled.
It is important to point out that all the modules we describe are supposed to show only how the constructions can be handled by modules. We make no claim that grammatical constructions must be handled in exactly this manner. Our goal is only to show what the syntax crystal model can explain, how it can explain it, and occasionally, what it has difficulty explaining. We are saying: Given this sort of rule, with these sorts of properties, here are some of the constructions it can handle. We use English, as we have mentioned before, because it is a very complex and rich language. But we believe that the principles of the syntax crystal can be used to construct modules that will work for other languages.
In the preceding chapter we introduced the syntax crystal model, demonstrating how it worked and how it could be used to generate and parse natural language strings. We showed how the model could handle many of the linguistic structures that have been used to illustrate the power of transformational grammars. We argued that the local rules of the syntax crystal were more accessible to explanation at the cognitive level. And especially important, the syntactic rules of the model use a minimum of essential features and thus are easy to map onto the conceptual structure underlying language strings.
In this chapter we are going to show how the syntax crystal model makes it easier to explain the acquisition of syntax. With very few assumptions, the model demonstrates how the information used to construct the complex hierarchical structures of syntax can be gradually acquired using a simple mediated learning principle. In advocating a model to account for syntax acquisition by learning theory principles, we are moving counter to the nativism allied with transformational theories. Therefore, we are going to divide this chapter into two parts. The first part opens the territory for a learning account of syntax acquisition by examining and challenging the view held by Chomsky and others that there are innate, specifically linguistic mechanisms for syntax. The second part discusses the acquisition of syntax using the syntax crystal model.
In this chapter we will first discuss Chomsky's criteria for the adequacy of a theory of grammar and how they are related to (1) the use of computers in linguistic research, (2) the assumption of an ideal language user, (3) the role of linguistic universals in a theory, and (4) the connection between syntax and semantics. We will look at some of the ways to compare and evaluate theories of syntax. Then different accounts of syntax will be examined and compared. In the next chapter we will discuss our own model of syntactic operations and compare it with those discussed here.
Chomsky's criteria for an adequate theory
Noam Chomsky (1957) proposed three criteria that an adequate theory of grammar should satisfy. By grammar, he meant both syntax and morphophonemics, but we will be concerned only with syntax. Theories of syntax in their present stage of development are far from providing a complete account of all language; instead they propose programs or directions for research, setting out the kinds of rules that would be necessary for a complete account. No theory actually satisfies Chomsky's three criteria, but many justify their programs in terms of the likelihood of doing so. As we shall see, the interpretation of how the criteria are to be met differs from theory to theory.
The criteria are:
A theory of grammar should provide a set of rules that can generate all and only syntactically correct strings.
The rules should also generate correct structural descriptions for each grammatical string.
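The first two criteria can be made concrete with a toy example. The sketch below uses a hypothetical three-rule grammar of our own devising; it is only an illustration of what generating all and only the permitted strings, each with a structural description, amounts to, not a grammar proposed by Chomsky.

```python
# A toy generator: it enumerates every string the rules allow -- and no others --
# and pairs each string with a structural description (a labeled tree).
# The grammar itself is hypothetical and deliberately tiny.

from itertools import product

RULES = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "dog"], ["the", "cat"]],
    "VP": [["sleeps"], ["chases", "NP"]],
}


def generate(symbol):
    """Yield (string, structural_description) pairs derivable from `symbol`."""
    if symbol not in RULES:                      # a terminal word
        yield symbol, symbol
        return
    for expansion in RULES[symbol]:
        for parts in product(*(list(generate(d)) for d in expansion)):
            words = " ".join(p[0] for p in parts)
            tree = (symbol, [p[1] for p in parts])
            yield words, tree


for sentence, description in generate("S"):
    print(sentence, "->", description)
# e.g.  the dog chases the cat ->
#       ('S', [('NP', ['the', 'dog']), ('VP', ['chases', ('NP', ['the', 'cat'])])])
```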
This book is an attempt to answer the following questions: What are the essential features of language that permit a sentence or string of words to convey a complex idea? And what are the essential features of language users that enable them to produce and understand meaningful strings of words and to learn how to do so? The heart of these problems is syntax, and our answers constitute a new theory of syntax and syntax acquisition.
The goal of a theory of syntax
A theory of syntax must explain how someone can express a complex idea by organizing units of language into an appropriate pattern that conveys the idea, and how another person is able to identify from the language pattern, not only the concepts expressed by the individual units of language, but the relationships among the concepts that make up the idea. A theory of syntax should also explain what essential properties of language and of language users allow this method of encoding to be learned.
In trying to identify the essential features of a phenomenon, a good theory tries to represent the phenomenon as simply as possible without doing injustice to the complexity it is trying to explain. Actual syntax use (and its learning) may involve many redundant operations in order to increase speed and reliability. A theory of syntax will not be directly interested in all the properties and processes that may be involved in syntax use and acquisition, but in those that must be involved – the minimally essential features for syntax to work and to be learned.
The Syntax CRYstal Parser (SCRYP) is a computer program that uses syntax crystal rules to parse sentences of a natural language. The procedures developed for the parser constitute a set of heuristics for the bottom-up, as opposed to top-down, analysis of sentence structure. We believe such procedures reflect some of the processes that take place in human language comprehension. This appendix will outline some advantages the parser has over other procedures as well as present a description of the program itself. An important consideration is the applicability of SCRYP to descriptions of human language performance. For this reason, we emphasize parsimony of description and the applicability of structural and procedural assumptions to human language performance.
Local rules and global procedures
Parsers for descriptive and interpretive grammars
The problem of constructing a parser to derive syntactic descriptions of sentences on the basis of word order and inflections is largely the problem of selecting and implementing appropriate global procedures to coordinate the operations of local rules (see Chapters 1 and 4). Various types of local rules may be used in global procedures. The rules specify the elements of syntactic form; the global procedures specify how the rules are applied.
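As a rough sketch of this division of labor (not the SCRYP program or its actual rule format), the fragment below pairs a handful of hypothetical local rules with one global procedure that repeatedly scans the string and applies whichever rule matches, bottom-up.

```python
# Hypothetical local rules: each looks only at two adjacent constituents and
# names the larger constituent they form.
LOCAL_RULES = {
    ("Det", "N"): "NP",
    ("V", "NP"): "VP",
    ("NP", "VP"): "S",
}

LEXICON = {"the": "Det", "dog": "N", "cat": "N", "chased": "V"}


def parse_bottom_up(words):
    """Global procedure: start from the lexical categories and keep applying
    any local rule that matches adjacent nodes until no rule applies."""
    nodes = [(LEXICON[w], w) for w in words]               # (category, subtree) pairs
    changed = True
    while changed:
        changed = False
        for i in range(len(nodes) - 1):
            pair = (nodes[i][0], nodes[i + 1][0])
            if pair in LOCAL_RULES:
                parent = LOCAL_RULES[pair]
                nodes[i:i + 2] = [(parent, (parent, nodes[i], nodes[i + 1]))]
                changed = True
                break
    return nodes                                           # a single "S" node if fully parsed


print(parse_bottom_up("the dog chased the cat".split()))
```

A real parser needs far more machinery (ambiguity, backtracking, inflections), but the sketch shows where the two kinds of knowledge live: the local rules state what can combine; the global procedure decides when and where to try them.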
In the preceding chapter, we looked at some of the properties of language strings. We examined the relations among morphemes that are available to carry information about the relations of concepts. We considered several lines of argument converging on the conclusion that language strings have an organization beyond their sequential structure. The organization is hierarchical, that is, organized at several different levels simultaneously, and the organization can be recursive, which allows parts of the organizing principle to be used over and over again.
The organization does not appear explicitly in language strings, but its existence can be inferred from the strings and how they are handled by language users. No contemporary account of the psychology of language or linguistics denies such organization (although a few may ignore it, e.g., Winokur, 1976).
The organization is the basis for the syntactic code that enables language strings to carry information about the relation of concepts. To understand a string, the language user must convert the sequence of morphemes (inflected lexemes plus function words) into something from which the idea may be extracted. When expressing an idea, the language user reverses the process. The cognitive operations involved can be described as the syntactic mechanism of the language user. A description of and a model for this mechanism are given in Chapter 5.
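As a purely illustrative example of this hierarchical, recursive organization, the fragment below writes one ordinary noun phrase as a nested constituent structure; the category labels are the standard textbook ones, not a notation taken from Chapter 5.

```python
# "the house on the hill": a noun phrase containing a prepositional phrase
# that in turn contains another noun phrase -- the same organizing unit (NP)
# reused inside itself, which is what recursion means here.
np = ("NP",
      ("Det", "the"),
      ("N", "house"),
      ("PP",
       ("P", "on"),
       ("NP",
        ("Det", "the"),
        ("N", "hill"))))


def depth(node):
    """Hierarchical organization: constituents nest, so the structure has a depth
    that the flat word string does not show."""
    if isinstance(node, str):
        return 0
    return 1 + max(depth(child) for child in node[1:])


print(depth(np))   # 4 -- none of this nesting is visible in the word sequence itself
```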
In this chapter we are going to study the contents of the second box in the flow chart of Figure 1.4: underlying conceptual structure.