On standard views of sentence processing, meaning construction results largely from parsing. A string is decomposed into representations of syntactic or structural information, and of lexical semantics, or word meanings. The meaning construction system on such a view consists of two discrete components, one devoted to word meaning and one devoted to syntactic structure. Parsing is accomplished by combining these two sorts of information to assemble a context-invariant meaning. On such approaches, pragmatics is knowledge that enables speakers to adapt context-invariant meanings to the situation at hand. In contrast, Part I outlines a constructivist comprehension system in which contextual and background knowledge do more than merely clarify the application of context-invariant meanings. Rather, meaning emerges from the integration of linguistic and nonlinguistic knowledge, as meaning and background are intimately intertwined.
However, the interdependence of meaning and background presents the language user with a profound challenge. She must determine which background assumptions are relevant at a given time, and which should be ignored. In fact, the challenge becomes quite poignant when we realize that background assumptions differ from context to context, and can even conflict. For example, note the following exchange between an interviewer and a famous Shakespearean actor.
Metaphor has historically been portrayed as colorful language – aesthetically pleasing but without cognitive import (Hobbes, 1965; Quintilian, 1921–1933). However, in recent years, cognitive semanticists such as Lakoff & Johnson (1980), Sweetser (1990), and Turner (1991) have argued that metaphor is, in fact, a pervasive phenomenon in everyday language and, moreover, that it represents the output of a cognitive process by which we understand one domain in terms of another. Cognitive linguists define metaphor as reference to one domain (known as the target, theme, or base domain) with vocabulary more commonly associated with another domain (known as the source, phoros, or vehicle). On this construal, metaphoric language is the manifestation of conceptual structure organized by a cross-domain mapping: a systematic set of correspondences between the source and target that result from mapping frames or cognitive models across domains.
On this view, known as conceptual metaphor theory, a speaker invokes a metaphor whenever she refers to one domain, such as verbal argumentation, with vocabulary from another domain, such as physical combat. Conceptual metaphor theory is motivated by the existence of linguistic data like the following (from Lakoff & Johnson, 1980: 4), in which argument is discussed in terms that might just as well be applied to war:
As I scan the kitchen, I see a table, chairs, cupboards, a toaster, and a sink piled high with dishes. On the counter is a jar of baby food, half-empty, with a spoon still sticking out of it. While for most of us scanning a room for its contents is a trivial, seemingly effortless activity, it is in fact an immense computational achievement. The brain is faced with the formidable task of transforming the information in the light that hits our two-dimensional retina into perceptual information that allows us to navigate the three-dimensional world. Moreover, while visual perception requires extracting shape properties and spatial relations, what we perceive are not just shapes, but tables and chairs. Similarly, on the baby food jar we see a drawing of a baby, and not just flecks of ink on paper. Moreover, in spite of the many objective differences between them, we understand the baby in the drawing to be similar in some respects to the real baby sleeping in the other room.
Interpretation always involves the integration of physical information with knowledge at multiple levels of representation. Moreover, the approach the cognitive semanticist adopts toward meaning in natural language suggests commonalities in the interpretation of language and general processes involved in the comprehension of objects, actions, and events in the world around us. Consider again our interpretation of the picture on a jar of baby food. Understanding the label involves knowledge about babies, baby food, and perhaps even the Gerber company. Normally, we interpret the smiling baby as being happy and healthy as a result of eating the baby food.
In Part II, we will see the importance of conceptual blending for the integration of knowledge structures and the development of new concepts. Conceptual blending is a set of noncompositional processes in which the imaginative processes of meaning construction are invoked to produce emergent structure (Fauconnier & Turner, 1998). Although blending is frequently employed for sophisticated feats of reasoning, its intermediate products are cognitive models whose plausibility spans the gamut from chimerical, to merely bizarre, to downright trite. Analyses in Chapters 5 through 7 show how cognitive models built in blended spaces can yield productive inferences in spite of, and sometimes even because of, their strange properties.
TRASHCAN BASKETBALL
Imagine a scenario in which two college students are up late studying for an exam. Suddenly one crumples up a piece of paper and heaves it at the wastepaper basket. As the two begin to shoot the “ball” at the “basket,” the game of trashcan basketball is born. Because it involves the integration of knowledge structures from different domains, trashcan basketball can be seen as the product of conceptual blending. In conceptual blending, frames from established domains (known as inputs) are combined to yield a hybrid frame (a blend or blended model) composed of structure from each of the inputs, as well as unique structure of its own. For example, in trashcan basketball, the input domains are trash disposal and (conventional) basketball, and the resultant blend incorporates a bit of both domains. Moreover, emergent structure – that is, properties of trashcan basketball that differ from properties of the input domains – need not be explicitly computed, but arises from affordances in the environment.
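The combinatorics described here – two input frames contributing selected structure to a hybrid that also carries emergent structure of its own – can be sketched schematically. This is a toy illustration, not a claim about Fauconnier and Turner's formalism; all frame keys and values are invented for the trashcan basketball example.

```python
def blend(input1: dict, input2: dict, emergent: dict) -> dict:
    """Toy sketch of conceptual blending: the blended frame inherits
    structure from each input frame, and emergent structure (present
    in neither input) overrides or extends what is inherited."""
    blended = {}
    blended.update(input1)
    blended.update(input2)   # on conflict, the second input wins here
    blended.update(emergent)
    return blended

# Invented frames for the two input domains.
trash_disposal = {"projectile": "crumpled paper", "goal": "wastebasket"}
basketball = {"scoring": "shot through hoop", "players": 2}

# Emergent structure: a scoring convention belonging to neither input.
trashcan_basketball = blend(
    trash_disposal,
    basketball,
    {"scoring": "paper lands in wastebasket"},
)
```

The resulting frame keeps the projectile and goal from trash disposal, the players and competitive scoring from basketball, and a scoring rule that exists in neither input domain alone.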
In this article, I address the remarks made in Fodor and Lepore's article, “The Emptiness of the Lexicon: Critical Reflections on James Pustejovsky's The Generative Lexicon,” regarding the research program outlined in Pustejovsky (1995). My response focuses on two themes: Fodor and Lepore's misreadings and misinterpretations of the substance as well as the details of the theory, and the generally negative and unconstructive view of the study of semantics and natural language meaning inherent in their approach.
Methodological Preliminaries
I would like to address the remarks made in Fodor and Lepore's (henceforth, FL) article, “The Emptiness of the Lexicon: Critical Reflections on James Pustejovsky's The Generative Lexicon” (in this volume), regarding the research program outlined in Pustejovsky (1995). My response focuses on two themes: FL's misreadings and misinterpretations of the substance as well as the details of the book, and the generally misguided and unconstructive view of the study of semantics and natural language meaning inherent in their approach.
In contrast to this approach, I have proposed a framework, Generative Lexicon Theory, that faces the empirically hard problems of how words can have different meanings in different contexts, how new senses can emerge compositionally, and how semantic types predictably map to syntactic forms in language. The theory accomplishes this by means of a semantic typing system encoding generative factors, called “qualia structures,” into each lexical item. Operating over these structures are compositional rules incorporating specific devices for capturing the contextual determination of an expression's meaning.
Lexicography is often considered orthogonal to theoretical linguistics. In this paper, we show that this is a highly misguided view. As in other sciences, careful and large-scale empirical investigation is a necessary step for testing, improving, and expanding a theoretical framework. We present results from the development of the Italian semantic lexicon in the framework of the SIMPLE project, which implements major aspects of Generative Lexicon theory. This paper focuses on the semantic properties of abstract nouns, as they are conceptually more difficult to describe. For this reason, they are a good testbed for any semantic theory. The methodology – which has been developed to satisfy the requirements of building large lexicons – is more than a simple interface or a lexicographer's auxiliary tool. Rather, it reveals how a real implementation greatly contributes to the underlying theory.
Introduction
“Unlike the mental grammar, the mental dictionary has had no cachet. It seems like nothing more than a humdrum list of words, each transcribed into the head by dull-witted rote memorization. In the preface to his Dictionary, Samuel Johnson wrote: “It is the fate of those who dwell at the lower employments of life, to be rather driven by the fear of evil, than attracted by the prospect of good; to be exposed to censure, without hope of praise; to be disgraced by miscarriage, or punished for neglect, where success would have been without applause, and diligence without reward. Among these unhappy mortals is the writer of dictionaries.” Johnson's own dictionary defines lexicographer as “a harmless drudge, that busies himself in tracing the original, and detailing the signification of words. […] we will see that the stereotype is unfair. The world of words is just as wondrous as the world of syntax, or even more so” (Pinker, 1995, 126-127).
In this paper, we first outline some elements related to sense variation and to sense delimitation within the perspective of the Generative Lexicon. We then develop the case of adjectival modification and a few forms of sense variations, metaphors, and metonymies for verbs and show that, in some cases, the Qualia structure can be combined with or replaced by a small number of rules, which seem to capture more adequately the relationships between the predicate and one of its arguments. We focus on the Telic role of the Qualia structure, which seems to be the most productive role to model sense variations.
Introduction
Investigations within the generative perspective aim at modeling, by means of a small number of rules, principles, and constraints, linguistic phenomena (whether morphological, syntactic, or semantic) at a high level of abstraction, a level that seems appropriate for research on multilingualism and language learning. These works, among other things, attempt to model a certain form of “creativity” in language: from a limited number of linguistic resources, a potentially infinite set of surface forms can be generated.
Among works within the generative perspective, let us concentrate on the Generative Lexicon (Pustejovsky, 1991, 1995), which has in recent years become established as one of the most innovative perspectives in lexical semantics. This approach introduces an abstract model radically opposed to “flat” sense enumeration lexicons. This approach, which is now well known, is based (1) on the close cooperation of three lexical semantic structures: the argument structure (including selectional restrictions), the aspectual structure, and the Qualia structure; (2) on a detailed type theory and a type coercion procedure; and (3) on a refined theory of compositionality.
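The interaction of these structures can be made concrete with a small sketch: a lexical entry carrying the four qualia roles of Pustejovsky (1995), and a toy type coercion in which an event-selecting verb such as “begin” recovers an event from the noun's Telic role (so “begin a novel” is read as “begin reading a novel”). The data structure, the role values, and the coercion rule are simplified illustrations, not the theory's actual formalism.

```python
from dataclasses import dataclass

@dataclass
class Qualia:
    """The four qualia roles of Generative Lexicon theory
    (Pustejovsky, 1995); values here are illustrative strings."""
    formal: str        # what kind of thing it is
    constitutive: str  # what it is made of, its parts
    telic: str         # its purpose or function
    agentive: str      # how it comes into being

@dataclass
class LexicalEntry:
    lemma: str
    arg_structure: str  # selectional restrictions, greatly simplified
    qualia: Qualia

# A toy entry for "novel"; the role values are invented for illustration.
novel = LexicalEntry(
    lemma="novel",
    arg_structure="x: physical_object . information",
    qualia=Qualia(
        formal="book",
        constitutive="narrative",
        telic="read",          # reading is what novels are for
        agentive="write",      # writing is how novels come about
    ),
)

def coerce(verb: str, entry: LexicalEntry) -> str:
    """Toy type coercion: an event-selecting verb recovers an
    event from the noun's Telic role rather than failing to compose."""
    if verb in ("begin", "finish", "enjoy"):
        return f"{verb} {entry.qualia.telic}ing the {entry.lemma}"
    return f"{verb} the {entry.lemma}"

print(coerce("begin", novel))  # begin reading the novel
```

The point of the sketch is only that sense variation falls out of structure in the entry plus a general rule, rather than being enumerated sense by sense.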
Because partitive constructions (“a grain of rice,” “a cup of coffee”) are one of the ways of individuating referents in many languages, there is a need to establish the mechanisms for obtaining appropriate representations of such compounds by composing partitive nouns and noun phrases that denote a whole of reference. Furthermore, some partitives are nouns that show a polymorphic behavior. Namely, depending on the context, they may be interpreted either as object referents or as partitives (e.g., “cup,” container vs. containee interpretations). This work discusses these and other related issues and posits representations intended to account for such phenomena in Spanish. The main claim is that the semantics of partitives largely depends on constitutive information. More specifically, the semantics of portions seems to be grounded on a generalized construal of any entity as being made of some stuff.
Introduction
A semiotic need, which probably any existing or conceivable language has to satisfy, is the capability of referring to entities as discrete individual units. The mechanism by which languages extract individuals from a continuum of reference is usually called individuation (cf., e.g., Lyons, 1977).
Languages have different mechanisms to convey information about individuation. English uses deictic reference (“this”), pronouns (“him”), or a range of noun-headed constructions (“two trees,” “a branch,” “many cups of coffee,” “this team,” “a group of people,” “a head of cattle”).
The paper argues that Fodor and Lepore (1998) are misguided in their attack on Pustejovsky's Generative Lexicon, largely because their argument rests on a traditional, but implausible and discredited, view of the lexicon on which it is effectively empty of content. That view stands in a long line of explaining word meaning (a) by ostension and then (b) by means of a vacuous symbol in a lexicon, often the word itself after typographic transmogrification. Both (a) and (b) share the mistaken belief that to a word there must correspond a simple entity that is its meaning. I then turn to the semantic rules that Pustejovsky uses and argue first that, although they have novel features, they belong to a well-established Artificial Intelligence tradition of explaining meaning by reference to structures that mention other structures assigned to words that may occur in close proximity to the first. It is argued that Fodor and Lepore's claim that there cannot be such rules is without foundation; indeed, systems using such rules have proved their practical worth in computational systems. The justification for such rules descends from a line of argument, whose high points were probably Wittgenstein and Quine, that meaning is to be understood not by simple links to the world, ostensive or otherwise, but by the relationship of whole cultural representational structures to each other and to the world as a whole.
In this paper, we offer a novel analysis of metaphors, which attempts to capture both the conventional constraints on their meaning and the ways in which information in the discourse context contributes to their interpretation in context. We use lexical rules in a constraint-based grammar for the former task, and a formal semantics of discourse, in which coherence constraints are defined in terms of discourse structure, for the latter. The two frameworks are linked together to produce an analysis of metaphor that both defines what is linguistically possible and accounts for the ways in which pragmatic clues from domain knowledge and rhetorical structure influence the meaning of metaphor in context.
Introduction
This paper focuses on metaphor and the interpretation of metaphor in a discourse setting. We propose constraints on metaphor interpretation in terms of linguistic structures. Specifically, the constraints are based on a particular conception of the lexicon, where lexical entries have rich internal structure, and derivational processes or productivity between word senses are captured in a formal, systematic way (e.g., Copestake and Briscoe, 1995; Pustejovsky, 1995). By constraining metaphors in terms of these linguistic structures, we show that their interpretation is not purely a psychological association problem (cf. Lakoff and Johnson, 1980), or purely subjective (e.g., Davidson, 1984). Recent accounts of metaphor within philosophy have not given systematic accounts of this sort (e.g., Black, 1962; Hesse, 1966; Searle, 1979). We leave open the question of whether their insights are compatible with the theory proposed here.
This chapter discusses condensed meaning in the EuroWordNet project. In this project, several wordnets for different languages are combined in a multilingual database. The matching of the meanings across the wordnets makes it necessary to account for polysemy in a generative way and to establish a notion of equivalence at a more global level. Finally, we will describe an attempt to set up a more fundamental ontology, which is linked to the meanings in the wordnets as derived complex types. The multilingual design of the EuroWordNet database makes it possible to specify how the lexicon of each language uniquely maps onto these condensed types.
Introduction
The aim of EuroWordNet is to develop a multilingual database with wordnets in several European languages: English, Dutch, Italian, Spanish, French, German, Czech, and Estonian. Each language-specific wordnet is structured along the same lines as WordNet (Miller et al., 1990): Synonyms are grouped into synsets, which are related by means of basic semantic relations such as hyponymy (e.g., between “car” and “vehicle”) or meronymy relations (e.g., between “car” and “wheel”). By means of these relations all words are interconnected, constituting a huge network or wordnet. Because the lexicalization of concepts is different across languages, each wordnet in the EuroWordNet database represents an autonomous and unique system of language-internal relations. This means that each wordnet represents the lexical semantic relations between the lexicalized words and expressions of the language only: No artificial classifications (such as External-Body-Part, InanimateObject) are introduced to impose some structuring of the hierarchy (Vossen, 1998; Vossen and Bloksma, 1998).
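The relational design described here – synsets connected by typed relations such as hyponymy and meronymy, with each language's wordnet forming an autonomous graph – can be sketched in a few lines. The class name, API, and the example synsets below are invented for illustration; they do not reflect the actual EuroWordNet database format.

```python
from collections import defaultdict

class ToyWordnet:
    """Minimal synset graph in the spirit of the relations described
    for WordNet/EuroWordNet: words linked by typed semantic relations."""

    def __init__(self):
        # (relation, source) -> set of targets
        self.relations = defaultdict(set)

    def add(self, relation: str, source: str, target: str) -> None:
        self.relations[(relation, source)].add(target)

    def related(self, relation: str, source: str) -> set:
        return self.relations[(relation, source)]

wn = ToyWordnet()
wn.add("hyponym_of", "car", "vehicle")      # a car is a kind of vehicle
wn.add("hyponym_of", "bicycle", "vehicle")  # so is a bicycle
wn.add("meronym", "car", "wheel")           # a wheel is a part of a car

print(wn.related("hyponym_of", "car"))  # {'vehicle'}
```

A separate graph per language, built from that language's lexicalized units only, corresponds to the autonomy of each wordnet in the multilingual database; cross-wordnet equivalence would then be a mapping between graphs rather than structure inside any one of them.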
The common thread that connects the first three papers is that, contrary to a widely held view, metonymy and metaphor can be studied systematically. The contributors present and analyze various data sets that are taken to fall outside of the traditional areas of knowledge investigated in contemporary linguistic research. Our goal is to show that a subset of phenomena that have been labeled lexical idiosyncrasies (sense shift phenomena) and world/pragmatic knowledge effects (metonymy and metaphor) actually constitute a privileged window on the nature of lexical knowledge.
The papers by Julius Moravcsik and by Nicholas Asher and Alex Lascarides present detailed studies of metaphorical expressions. Their arguments are presented in a way that makes clear how a particular data set may significantly affect the methodology for carrying out the study of the mental lexicon. In particular, while the lexicon can provide the key to understanding metaphor, in turn, this use of language represents actual evidence for determining the structuring of lexical information.
The paper by Jerry Hobbs extends, in an interesting and possibly controversial manner, the range of phenomena that are thought of as metonymy: He makes us reconsider the relation between syntax and semantics. For example, as surprising as his treatment of extraposition as an instance of metonymy may be, we are forced to acknowledge the role of interpretive strategies when grammar alone fails to make sense of certain structures.
Lexical semantics has not yet given a satisfactory answer to a crucial question that implicitly or explicitly has been asked by philosophers of language, computer scientists, linguists, and lexicographers, namely, “What is word meaning?” The goal of this part of the volume is to set the stage for subsequent discussion, presenting the issues that confront those investigating language and the mind through the study of the mental lexicon.
The reader should not expect a definite answer. We may gain an insight that word meaning can best be studied as a transparent structure, rather than a black box whose contents escape us. Alternatively, we may choose to take a stand in the debate presented in Chapters 4, 5, and 6, where opposed views are expressed.
There are two broad positions emerging in this part of the volume: One argues for an internal syntax of word meaning (James McGilvray; James Pustejovsky; Yorick Wilks), and the other views concepts as particulars (Jerry Fodor and Ernie Lepore).
McGilvray discusses how word meaning contributes to the creative aspect of language and reaches the conclusion that lexical semantics, as done within a research program such as the Generative Lexicon, is a “branch of syntax, broadly speaking.”
Jerry Fodor and Ernie Lepore criticize lexical semantics frameworks that aim at isolating the internal constitution of word meaning. They forcefully argue against the assumption that the content of concepts is constituted by inferential relations, a view that necessarily leads to holism.