When I was commissioned by Cambridge University Press to write an introduction to semantics, I was elated; I knew I was in for a great ride. This was true even though I was perfectly aware of an unavoidable fact: it's impossible to write an introduction to semantics that will satisfy everybody. Many great scholars will feel you are ignoring them (or, more probably, that you are just plain wrong in your choices). While it's evident that different traditions have very smart people in their camps who cannot be completely wrong, there's really no way around it: in a book like this, you have to choose sides. And then face the music.
So, as expected, some level of disagreement can be found in almost any topic in semantics we choose: whether the tools of logic are useful or not; whether meaning is embodied or symbolic; whether metaphor is useful or useless; whether semantics and pragmatics are to be distinguished or not; whether language influences thought significantly or not; and a very long etcetera. That's why I said that writing a book on semantics (especially an introduction, where there's not much space to give all possibilities their due) is probably a surefire way of creating enemies. Well, that's life.
I am in complete agreement with the saying that ‘education is not the filling of a vessel, but the lighting of a fire’. This is the approach I've tried to take here: even more important than the facts themselves is the feeling of excitement and enthusiasm that emanates from all the work that is being carried out in the field right now. As I see it, the story of semantics is not a mere recollection of facts from the past, but something that is being written in these very moments. I firmly believe these are topics that are central to our understanding of language, our minds and ourselves, and in this sense, they should be interesting for just about anyone (especially for anyone interested in how language and our minds work).
A great deal of effort in this book has gone into two goals: explaining each topic in a way accessible to anyone, while at the same time avoiding oversimplification. This is really tricky, because most issues in semantics are notoriously complicated, and each single point can be modified, qualified and nuanced ad nauseam.
In this chapter, we introduce the subject of semantics, stressing its essential role in linguistics and other cognitive disciplines. We will look at some common definitions and come up with a list of questions about meaning that we would be interested in answering. We also briefly review the place of semantic studies in linguistic theorizing over the last century and, in order to get a feel for the difficulties involved in semantic analysis, we will examine the meaning of a single word. The second part of the chapter is devoted to reviewing the ways in which meaning can be expressed, both non-linguistically (introducing the discipline of semiotics) and linguistically, examining the different types of meaning expressed by the different linguistic levels (phonology, morphology, lexicon and syntax). The chapter ends with some notes on the general organization of the book.
Some Preliminaries
Meaning controls memory and perception. Meaning is the goal of communication. Meaning underlies social activities and culture. To a great degree, what distinguishes human cultures are the meanings they give to natural phenomena, artifacts, and human relations (Glenberg and Robertson, 2000).
The importance of semantics for the study of language cannot be overstated. Understanding how we construct meanings from the words and expressions we hear can be said to be the core of linguistic studies, since it amounts to understanding how language performs its main task: conveying meaning. Semantics is thus essential for all aspects of language study: how language is acquired (be it a first language or a second one) and how it is structured; how language changes over time and how it varies across social contexts; how languages should be taught and how we (or machines) translate them; how language-related conditions such as the aphasias work; and so on. You could say that semantics lies at the very heart of the study of language. The practical applications of knowing, in an accurate and detailed way, how people associate their thoughts with linguistic objects, and how hearers use those objects to recover the intended meaning in a communicative exchange, are obviously enormous. Semantic search on the web has been called ‘the holy grail of computer-assisted research’ (McCloskey, 2013); the developers of natural-language query systems such as Apple's Siri or IBM's Watson would be delighted to have a complete story of how meaning really works.
In this chapter, we consider the topic of the different relations among the meaning of words. After reviewing the importance and centrality of meaning relations in accounts of word meaning, we first distinguish between two broad types of relations: semantic relations, which are held by words based on an overlap in their meaning, and associative relations, which are established by the co-occurrence of words in discourse. We also introduce a third type of relation: thematic relations, in which items that appear together in a scene become associated (as in hammer and nail). Before diving into semantic relations, we review the central but complex notion of polysemy (the different but related senses associated with a single word), and distinguish it from homonymy (in which the meanings of a word are not related). We also review the mechanisms by which a word can be connected to a number of related meanings in a principled way, introducing the notion of regular polysemy, which is contrasted with the more contextual meaning adaptations present in irregular polysemy. Next, we review synonymy and antonymy, distinguishing between canonical and non-canonical antonyms and other sub-types of antonymy. We also look at hyponymy and hyperonymy, introducing the distinction between basic-level, superordinate and subordinate categories. The chapter ends with a brief discussion of associative relations, examining how combining two words can alter their meaning, and reviewing the notion of collocation and some relatively new notions coming from corpus linguistics: semantic preference and semantic prosody.
The Structure of the Lexicon: a Network of Words
Words are not isolated things. We know this from many sources. When people are given a word and asked to respond with whatever word comes to mind (an exercise known as a ‘word association test’), the responses are not random at all: in fact, depending on the type of word, they can show a surprising degree of overlap. For example, in response to the word canary, a majority of people will answer bird (other common responses are yellow or sing); when given the word furniture, the most frequent answers are table and chair; adjectives like hot or wide typically evoke their opposites, cold and narrow, respectively.
In this chapter, we take a look at some of the methods that have been used in semantic theorizing, from the more traditional ones to those currently being applied. After a short introduction situating the topic, we exemplify one of the more traditional methodologies, Semantic Feature Analysis, a classic example of the use of introspection in semantics. We review some of its advantages and then point out some of its problems. Next, we cover statistical methods, focusing in particular on distributional semantics methods such as the Hyperspace Analogue to Language (HAL) or Latent Semantic Analysis (LSA). The next section deals with psycholinguistic methods, with tasks such as lexical decision, memory measures, reading times or eye-tracking, which records the movement of our eyes to derive information about the mental models we construct as we understand language. We then go on to neurological methods, such as Event-Related Potentials (ERPs) or functional Magnetic Resonance Imaging (fMRI), and the latest addition, Transcranial Magnetic Stimulation (TMS). Finally, we discuss the possibilities of computational modelling. The chapter closes with a short reflection on the limits and place of empirical data in theorizing.
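Distributional methods such as HAL and LSA share one core intuition: words that occur in similar contexts tend to have similar meanings. The following minimal sketch illustrates that intuition only; the toy corpus, window size and function names are invented for illustration, and real systems add weighting schemes and dimensionality reduction on top of raw counts.

```python
from collections import defaultdict
import math

def cooccurrence_vectors(corpus, window=2):
    """Count how often each word co-occurs with every other word
    within a +/- `window` span (the raw-count core of HAL-style models)."""
    vectors = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        tokens = sentence.lower().split()
        for i, word in enumerate(tokens):
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    vectors[word][tokens[j]] += 1
    return vectors

def cosine(v1, v2):
    """Cosine similarity between two sparse count vectors."""
    shared = set(v1) & set(v2)
    dot = sum(v1[w] * v2[w] for w in shared)
    norm1 = math.sqrt(sum(c * c for c in v1.values()))
    norm2 = math.sqrt(sum(c * c for c in v2.values()))
    return dot / (norm1 * norm2) if norm1 and norm2 else 0.0

corpus = [
    "the canary sings in the cage",
    "the sparrow sings in the tree",
    "the hammer hits the nail",
]
vecs = cooccurrence_vectors(corpus)
# Words used in similar contexts (canary/sparrow) end up with more
# similar vectors than words used in different contexts (canary/hammer).
print(cosine(vecs["canary"], vecs["sparrow"]) >
      cosine(vecs["canary"], vecs["hammer"]))  # → True
```

Even on this three-sentence corpus, the geometry does what the chapter describes: ‘canary’ and ‘sparrow’ share their contexts and so end up close together, while ‘hammer’ does not.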
Why Worry about Methods?
One of the issues seldom discussed in courses on semantics is the methodology that different scholars use to arrive at their analyses. The reason is probably that, until now, most of the methods have been introspective, that is, based on the intuitions of the analyst (what is sometimes known as ‘armchair linguistics’, as in Figure 2.1). Introspection seems the most obvious method for studying something that goes on within our own minds (and is thus very personal) and is not externally visible in any obvious way. Indeed, it is not apparent how we could study meaning directly: even looking at our brain with a scanner would not show us ‘meanings’ there, but only brain activity, neural connections and so forth. Unfortunately, introspective methods face many problems: the subjectivity of the analyst and the lack of mechanisms for verification, to name just two of the main ones. Thus, their usefulness as a method for increasing our knowledge about any topic has to be treated with the utmost caution.
In this chapter, we look at a different aspect of language understanding, going beyond the level of words to talk about the meaning of sentences. For this task, we will focus on the information supplied by one specific grammatical category: the verb, the main element in the organization of sentence meaning. Verbs are ‘predicates’: relational items that must be combined with other, free-standing items, the ‘arguments’, in order to complete their meaning. The combination of a verb and its arguments forms the backbone of the meaning of a sentence. Verbs specify how many arguments they need, the roles that those arguments play in the event or scene they describe, and also how those elements should be expressed linguistically. The semantic relations between an argument and a predicate can be classified into a number of different ‘semantic roles’, such as agent, patient or instrument. Semantic roles are linked with grammatical functions to construct the whole meaning of a sentence. The union of a predicate and its arguments forms one special type of entity, called a ‘proposition’, which some authors have described as a ‘complete unit of meaning’ and the most adequate level at which to capture the meaning of sentences. We review some of the assumptions and empirical evidence for the existence of propositions as representations of sentence meaning, as well as some criticisms aimed at propositional structures.
Then, we will examine some of the problems faced by semantic roles, such as the difficulty of agreeing on their number and exact coverage, and the two possibilities for their definition: the more abstract, syntactically based approach to semantic roles and the more specific, semantically based one. The final part of the chapter explores this second option: the possibility of having very specific semantic roles, which capture information at the level of specific events. We will review empirical evidence showing that thematic, event-based knowledge is activated incrementally in the understanding of sentences, and we will see how all this could come together by examining a concrete example, the commercial transaction frame. The chapter closes with a discussion of the challenges and future directions of this field of semantics.
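The idea that a verb specifies its arguments' roles and their grammatical expression can be made concrete with a toy data structure. This is only an illustrative sketch: the dictionary representation and the helper function are invented, though the roles themselves (buyer, seller, goods, money) are the standard elements of the commercial transaction frame.

```python
# A toy lexicon mapping verbs of the commercial transaction frame to
# (a) the frame roles they involve and (b) the grammatical slot in
# which each role is typically expressed for that verb.
COMMERCIAL_TRANSACTION = {"buyer", "seller", "goods", "money"}

VERB_FRAMES = {
    "buy":  {"subject": "buyer",  "object": "goods",
             "from-phrase": "seller", "for-phrase": "money"},
    "sell": {"subject": "seller", "object": "goods",
             "to-phrase": "buyer",  "for-phrase": "money"},
    "pay":  {"subject": "buyer",  "object": "money",
             "to-phrase": "seller", "for-phrase": "goods"},
}

def role_of(verb, grammatical_function):
    """Which frame role does this grammatical slot express for this verb?"""
    return VERB_FRAMES[verb].get(grammatical_function)

# The same frame role surfaces in different grammatical slots
# depending on the verb chosen:
print(role_of("buy", "subject"))   # → buyer
print(role_of("sell", "subject"))  # → seller
print(role_of("pay", "object"))    # → money
```

Notice how buy, sell and pay emerge as different grammatical ‘perspectives’ on one and the same underlying scene: each verb links the same four roles to different syntactic positions.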
In this chapter, we introduce the main mechanisms by which words can be said to convey meaning. We start with two of the more classic notions in semantics, reference and sense, examining different types of reference. Since words typically refer to ‘classes of things’, we devote some time to the notion of ‘category’. We will discuss what categorization is, how it has been treated since classical times (e.g., Aristotle) and how this view has been called into question by scholars like Wittgenstein. We then introduce one of the prevailing theories of categorization, namely the prototype theory of categorization established by Rosch and her colleagues. The chapter also presents for the first time a construct that will show up in several places throughout the book: semantic frames. We will show here how frames can be extremely useful in characterizing the meaning of words. The chapter closes with a discussion of connotation, an important part of meaning that is sometimes overlooked, reviewing the ways in which it can be measured and analysed.
The Meaning of Words
I wanted to utter a word; that word I cannot remember, and the bodiless thought will now return to the palace of shadows (‘The Swallow’, by the Russian poet Osip Mandelstam; cited in Aitchison, 2012).
When talking about how meaning is expressed with language, our gut feeling is that linguistic meaning resides mainly in words. Words are the key to a great part of the information we store in our heads. According to some estimates (e.g., Aitchison, 2012: 7), an educated adult speaker of English knows between 50,000 and 80,000 words (the numbers vary according to the individual, the method of measurement, etc.). Yet in spite of the great number of words we store, we can retrieve any one of them in no time at all: native speakers can recognize a word 200 milliseconds after its onset, well before it has been completely pronounced. It is no wonder that psychologists are trying to elucidate how words are organized in our minds (e.g., Aitchison, 2012; Altmann, 1997).
The highly efficient organization of the thousands of words we store is no doubt a very interesting issue (which we will examine in Chapter 5), but for semantics there are questions that come before that: how do words mean? How do we use words to convey meaning?
In this chapter, we take a look at the relationships between language and thought, examining the two competing proposals on offer today: the formal approach and the cognitive approach. After situating both of them historically in Section 3.1, we explain the main tenets of the formal viewpoint in Section 3.2, including the idea that the format of thought is language-like (the Language of Thought Hypothesis). Then, we review some criticisms levelled at formal approaches, especially the one known as ‘the symbol grounding problem’. The next section of the chapter is devoted to the cognitive or embodied approach; after introducing the approach in Section 3.3 and explaining the concept of embodied simulation (Section 3.3.1), we review the parameters that have been shown to be simulated in the literature (Section 3.3.2) and examine Glenberg's Indexical Hypothesis (Section 3.3.3), which articulates all these assumptions into a specific model of how we understand language. This section closes with some criticisms of embodied approaches, in particular the problem presented by abstract thought. The chapter concludes with a final reflection on the interactions between the two approaches (Section 3.4).
Language and Thought: the Formal vs. Embodied Split
In Chapter 1, we drew up a list of basic questions about meaning. Two of the most basic concerned the nature of meaning itself, on the one hand, and its relationship with thought, on the other. How do we think? Do we think directly in language? Do we think in images, or is there perhaps another mode of thought? What actually goes on in people's minds (or brains) when they understand some linguistic object? How should the ‘meaning’ that is created in the mind of the hearer be conceived of?
Unsurprisingly, there are different opinions on these matters. There is a classic (and on-going) debate in cognitive science concerning the format in which information is represented, stored and manipulated in our brains. We can distinguish two opposing views, which can be described as the formal semantics view (also known as the symbolic or amodal view) and the embodied semantics view (also known as the cognitive view).
In this chapter, we look at how children acquire the meaning of words. After explaining the difficulties children face when trying to learn the meaning of an unknown word, we review the ways in which the problem of word learning can be solved. We take into account the knowledge that children bring to the task (their knowledge and expectations about the structure of the world), their ability to infer the communicative intention of the speaker, and certain facilitative strategies that help children acquire word meanings more efficiently once language acquisition has started (especially the role of syntax in constraining the possible meanings of words). We will also look at the relationship between cognitive development and language acquisition, to see how they influence each other. In the second part of the chapter, we focus once more on the relationship between meaning and thought, trying to ascertain whether speaking a different language may have an impact on our cognitive processes. We shall see how this idea, long rejected outright, enjoys a different status today, even if the issue is not yet resolved.
Acquisition of Word Meaning
Two- and three-year-old children have poor motor control and bad manners; they are unreflective artists and inept dance partners. However, they're strikingly good at learning the meaning of words (Bloom, 2000: 1).
How do children acquire the meaning of words? This was one of the first questions about meaning that we included in our first chapter. The topic is important for many reasons: for example, a close examination of how children learn linguistic signs may help us achieve a more adequate idea of the true nature of language and meaning. One of the aspects that has attracted the most attention is which cognitive capacities children must possess in order to acquire language. A second, closely related question is whether these capacities stem from innate, language-related mechanisms that children bring to the task, or whether meaning can be acquired using mechanisms employed in other areas of cognition beyond language.
We examine how people interpret linguistic material in the wider realm of discourse comprehension. In order to recover the communicative intention of the speaker, we normally have to go beyond the words uttered and enrich them with inferences derived from the context at hand. This thorny issue becomes more tractable if we consider language as a collaborative activity: meaning is constructed from the speaker's linguistic cues and the hearer's understanding of the contextual situation. Both participants in the communicative setting derive expectations about the intentions of their interlocutor from the context and the goals of the social situation they are engaged in. In this chapter, the complex notion of context will be divided into: (1) information that comes from the physical place and time where the communication takes place; (2) the context provided by what has been said so far (called the co-text); and (3) context provided by the shared knowledge between speaker and hearer. For this last type of context, we will resort to the notion of ‘common ground’ (the knowledge that both speaker and hearer are aware that they share with each other) as well as to our knowledge of scenes and frames. We will see how knowing the way in which we are supposed to interact with the other person provides a great wealth of information that facilitates the recovery of inferences. We will also examine two classic notions in pragmatics, Grice's Conversational Maxims and Searle's Speech Acts, in connection with the general framework described in the chapter.
Discourse, Context and Use
What words mean is a matter of what people mean by them
(Grice, 1975: 340)
In this last chapter, as in all good stories, it is time to tie up all the loose ends, always a challenging task in the field of semantics. So far, we have been looking at how we construct meaning by examining the types of meaning evoked by the different linguistic levels, mainly the lexical one. We have spent some time looking at the meanings associated with words (Chapter 4), how words can refer to different meanings (i.e., polysemy; Chapter 5), and how word meanings are related to each other (Chapter 5); we have also seen how the meanings activated by verbs and other predicates organize the meaning of sentences (along with constructional meanings) in Chapter 8.
In this chapter, we go beyond the literal meaning of words and examine figurative meaning. The cases in which language cannot be understood literally are remarkably abundant, and figurative mechanisms such as hyperbole, understatement, irony or tautology, to name a few, are extremely common. We will focus on two figurative mechanisms that have attracted a great deal of attention in recent decades: metaphor and metonymy. The chapter introduces the main ideas behind Conceptual Metaphor Theory, which provides a tentative (though plausible) explanation of how embodiment extends to abstract domains. We will take a look at the different types of metaphor, their behaviour and their possible origins. We will also review empirical work that has examined their conceptual nature. In the second part of the chapter, we will look at the important notion of conceptual metonymy, which is often explained alongside metaphor and is just as essential in any explanation of how meaning is constructed and communicated.
Metaphor and metonymy have proved very useful as commonsense explanatory mechanisms for many central topics in semantics. For example, they are among the main mechanisms that extend the meanings of words in a principled way; this chapter will look at their role in the establishment of regular polysemy patterns. The chapter closes with a critical evaluation of the theory.
Introduction: Literal vs. Figurative Language
Whenever we invoke notions such as ‘figurative language’, ‘metaphor’, ‘metonymy’ and the like, the first thing that comes to mind is literary language. Typical examples of metaphor are phrases like your eyes are stars or your teeth are pearls: a type of language associated with poetry, song lyrics or literature in general, which does not correspond to the way people normally speak. On this view, there is a very clear separation between ‘normal’ (or literal) language and ‘figurative’ language. If somebody told you that your teeth are pearls, you would have to reject the initial, completely impossible literal interpretation and go on to construct another meaning, this time a non-literal, figurative one (probably something like ‘your teeth are white and shiny’).
For a long time, this was the way most people thought figurative language worked. However, in recent times this view has been challenged.
The measurement postulate (postulate 4) relates the quantum-theoretic predictions to experimental observations. Those predictions are found to be in conformity with experiment. The postulate, however, has been the subject of debate ever since its conception. The issues debated are: (i) whether the measurement postulate is consistent with the postulate specifying the time evolution (postulate 5), and (ii) whether it denies objective reality according to the criterion of Einstein, Podolsky and Rosen (EPR). These issues are highlighted in the form of paradoxes by the thought experiment (Gedankenexperiment in German) of Schrödinger and by that of EPR. See [40] for a collection of articles and papers on the problem of measurement; see in particular [42] and [43].
The Measurement Problem
We recall first the measurement postulate. Let a system $S$ (hereafter called the object) be subject to the measurement of an observable $\hat{A}$. Let the eigenvalues $a_k$ of $\hat{A}$ be non-degenerate, with $|a_k\rangle$ as the corresponding orthonormal eigenvectors. Let the object be in the state $|s\rangle$ before the measurement, where
$$|s\rangle = \sum_k c_k\,|a_k\rangle, \qquad c_k = \langle a_k | s \rangle .$$
According to the measurement postulate, the measurement of $\hat{A}$ would make the state of the object collapse to one of the eigenstates of $\hat{A}$, and the outcome of the measurement would be the eigenvalue corresponding to the eigenstate to which the state of the object has collapsed. Hence, if the outcome of the measurement is the eigenvalue $a_k$, then we know that the state of the object after the measurement is $|a_k\rangle$; written symbolically, $|s\rangle \to |a_k\rangle$.
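For completeness, the quantitative content of the postulate can be stated in the standard form found in quantum mechanics texts. Assuming the normalized pre-measurement state is expanded as $|s\rangle = \sum_k c_k |a_k\rangle$ in the orthonormal eigenbasis of the measured observable, the probability of each outcome is given by the Born rule:

```latex
% Born rule: probability of obtaining the eigenvalue a_k when the
% pre-measurement state is |s> = \sum_k c_k |a_k>.
P(a_k) = \left| \langle a_k | s \rangle \right|^2 = |c_k|^2 ,
\qquad
\sum_k P(a_k) = \sum_k |c_k|^2 = 1
\quad \text{(for a normalized state } |s\rangle\text{)}.
```

It is this probability assignment that connects the formalism to the observed statistics of repeated measurements.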