As recently as two decades ago there was little interest among neuroscientists, philosophers, psychologists, and cognitive scientists generally in the problem of consciousness. Reasons for the resistance to the problem varied from discipline to discipline. Philosophers had turned to the analysis of language, psychologists had become convinced that a scientific psychology must be a science of behavior, and cognitive scientists took their research program to be the discovery of the computer programs in the brain that, they thought, would explain cognition. It seemed especially puzzling that neuroscientists should be reluctant to deal with the problem of consciousness, because one of the chief functions of the brain is to cause and sustain conscious states. Studying the brain without studying consciousness would be like studying the stomach without studying digestion, or studying genetics without studying the inheritance of traits. When I first got interested in this problem seriously and tried to discuss it with brain scientists, I found that most of them were not interested in the question.
The reasons for this resistance were various but they mostly boiled down to two. First, many neuroscientists felt – and some still do – that consciousness is not a suitable subject for neuroscientific investigation. A legitimate brain science can study the microanatomy of the Purkinje cell, or attempt to discover new neurotransmitters, but consciousness seems too airy-fairy and touchy-feely to be a real scientific subject. Others did not exclude consciousness from scientific investigation, but they had a second reason: “We are not ready” to tackle the problem of consciousness.
Since the early work on speech acts by Austin, Grice, myself and others in the 1950s and '60s, it has been possible to distinguish two apparently inconsistent strands in speech act theory. One strand, most prominently associated with the name of Grice (1957, 1969), treats individual intentionality as the fundamental notion in the theory of speech acts. Meaning is created by individual acts whereby speakers attempt to produce effects on hearers by getting the hearers to recognize their attempt to produce those effects. Meaning is thus the product of individual acts of meaning. On a Gricean analysis, there is no suggestion that conventions, rules, or social practices are in any way essential for the performance of speech acts. A second tradition associated with the name of Austin (1962), as well as with my own early book Speech Acts (1969), emphasizes the role of social institutions in the performance of speech acts. On this view, social conventions, rules, and contexts of utterance play a crucial role in the determination of the speech act. Meaning, on this view, is not just a product of individual intentionality, but it is also the result of social practices.
Is there anything to be said for either of these approaches, and in particular, are they, or can they be made to be, consistent with each other? My own view is that if they are stated carefully they can be construed in a way which renders them consistent. I believe they are both trying to say something true.
Like most words, ‘consciousness’ does not admit of a definition in terms of genus and differentia or necessary and sufficient conditions. Nonetheless, it is important to say exactly what we are talking about because the phenomenon of consciousness that we are interested in needs to be distinguished from certain other phenomena such as attention, knowledge, and self-consciousness. By ‘consciousness’ I simply mean those subjective states of sentience or awareness that begin when one awakes in the morning from a dreamless sleep and continue throughout the day until one goes to sleep at night, or falls into a coma, or dies, or otherwise becomes, as one would say, ‘unconscious’.
Above all, consciousness is a biological phenomenon. We should think of consciousness as part of our ordinary biological history, along with digestion, growth, mitosis and meiosis. However, though consciousness is a biological phenomenon, it has some important features that other biological phenomena do not have. The most important of these is what I have called its ‘subjectivity’. There is a sense in which each person's consciousness is private to that person, a sense in which he is related to his pains, tickles, itches, thoughts and feelings in a way that is quite unlike the way that others are related to those pains, tickles, itches, thoughts and feelings. This phenomenon can be described in various ways. It is sometimes described as that feature of consciousness by way of which there is something that it's like or something that it feels like to be in a certain conscious state. If somebody asks me what it feels like to give a lecture in front of a large audience, I can answer that question.
This essay begins with an intuition, a notation, and a presupposition. The intuition is: Collective intentional behavior is a primitive phenomenon that cannot be analyzed as just the summation of individual intentional behavior; and collective intentions expressed in the form “we intend to do such-and-such” or “we are doing such-and-such” are also primitive phenomena and cannot be analyzed in terms of individual intentions expressed in the form “I intend to do such-and-such” or “I am doing such-and-such.” The notation is: S (p). The “S” stands for the type of psychological state; the “p” stands for the propositional content, the content that determines the conditions of satisfaction. Like all such notations, it isn't neutral; it embodies a theory. The presupposition is: All intentionality, whether collective or individual, requires a preintentional Background of mental capacities that are not themselves representational. In this case that implies that the functioning of the phenomena represented by the notation requires a set of phenomena that cannot be represented by that notation.
The questions this essay addresses are: Is the intuition right? (It is denied by most of the authors I have read on the subject.) And if it is right, can it be made to fit the notation? How, if at all, can we capture the structure of collective intentions within that notation? And what role does the Background play in enabling us to function in social collectives? These questions are not innocent. The larger question they form a part of is: How far can the theory of intentional action in Intentionality (Searle, 1983) be extended to become a general theory?
The essays collected in this volume were written over a period of two decades. They deal with a wide range of subjects and were intended for a variety of audiences. Despite the variety, there are certain unifying principles that underlie this collection; indeed, I have tried to make a selection that will exhibit a natural progression, as the topics move from consciousness to intentionality to society to language, and finally conclude with several debates about the issues that have preceded. In this introduction I want to try to state some of these unifying principles and offer a brief description (and note I say “description” rather than “summary” or “abstract”) of the essays.
There is a single overarching problem that has preoccupied me since I first began work in philosophy almost a half-century ago: How can we have a unified and theoretically satisfactory account of ourselves and of our relations to other people and to the natural world? How can we reconcile our common-sense conception of ourselves as conscious, free, mindful, speech-act performing, rational agents in a world that we believe consists entirely of brute, unconscious, mindless, meaningless, mute physical particles in fields of force? How, in short, can we make our conception of ourselves fully consistent and coherent with the account of the world that we have acquired from the natural sciences, especially physics, chemistry, and biology? The questions that have most preoccupied me – What is a speech act? What is consciousness? What is intentionality? What is society? What is rationality? – have all in one way or another been addressed to this larger problematic.
Traditionally speech act theory has a very restricted subject matter. The speech act scenario is enacted by its two great heroes, “S” and “H”; and it works as follows: S goes up to H and cuts loose with an acoustic blast; if all goes well, if all the appropriate conditions are satisfied, if S's noise is infused with intentionality, and if all kinds of rules come into play, then the speech act is successful and nondefective. After that, there is silence; nothing else happens. The speech act is concluded and S and H go their separate ways. Traditional speech act theory is thus largely confined to single speech acts. But, as we all know, in real life speech acts are often not like that at all. In real life, speech characteristically consists of longer sequences of speech acts, either on the part of one speaker, in a continuous discourse, or it consists, more interestingly, of sequences of exchange speech acts in a conversation, where alternately S becomes H, and H, S.
Now the question naturally arises: Could we get an account of conversations parallel to our account of speech acts? Could we, for example, get an account that gave us constitutive rules for conversations in a way that we have constitutive rules of speech acts? My answer to that question is going to be “No.” But we can say some things about conversations; we can get some sorts of interesting insights into the structure of conversations.
The neurosciences have now advanced to the point that we can address – and perhaps, in the long run, even solve – the problem of consciousness as a scientific problem like any other. However, there are a number of philosophical obstacles to this project. The aim of this article is to address and try to overcome some of those obstacles. Because the problem of giving an adequate account of consciousness is a modern descendant of the traditional ‘mind-body problem’, I will begin with a brief discussion of the traditional problem.
The mind-body problem can be divided into two problems; the first is easy to solve, the second is much more difficult. The first is this: what is the general character of the relations between consciousness and other mental phenomena on the one hand and the brain on the other? The solution to the easy problem can be given with two principles: first, consciousness and indeed all mental phenomena are caused by lower level neurobiological processes in the brain; and, second, consciousness and other mental phenomena are higher level features of the brain. I have expounded this solution to the mind-body problem in a number of writings, so I will not say more about it here (but see Searle 1984, 1992).
The second, and more difficult problem, is to explain in detail how consciousness actually works in the brain. Indeed, I believe that a solution to the second problem would be the most important scientific discovery of the present era.
Many species of animals have consciousness, intentionality, and thought processes. By “consciousness” I mean those subjective states of sentience and awareness that we have during our waking life (and at a lower level of intensity in our dreams); by “intentionality” I mean that feature of the mind by which it is directed at or about objects and states of affairs in the world; and by “thought processes” I mean those temporal sequences of intentional states that are systematically related to each other, where the relationship is constrained by some rational principles. Examples of conscious states are such things as feeling a pain or hearing a sound. Examples of intentional states are such things as wanting to eat food or believing that someone is approaching. Examples of thought processes are such things as figuring out how to get a banana that is out of reach or monitoring the behavior of prey that is on the move and trying to escape. Though these three phenomena – consciousness, intentionality, and thought processes – overlap, they are not identical. Some conscious states are intentional, some not. Some intentional states are conscious, many are not. For example, my current thought that it is unlikely to rain is conscious; my belief when I am asleep that Bill Clinton is president of the United States is unconscious. All thought processes, as I have defined them, are intentional; but not every intentional state occurs as part of a thought process. For example, a pang of undirected anxiety, though conscious, is not intentional. A sudden desire for a cold beer is both conscious and intentional.
Throughout most of its history analytic philosophy has exhibited a curious prejudice against the mental. Many, perhaps most, analytic philosophers have felt that there was something especially puzzling about mental processes, states, and events, and that we would be better off if they could be analyzed away or explained in terms of something else or somehow eliminated. One sees this attitude, for example, in the persistent use of pejorative adjectives, such as “mysterious” and “occult,” that analytic philosophers from Ryle to Rorty use to characterize mental phenomena naively construed.
I first became aware of the pervasiveness of this attitude when I tried to extend my analysis of speech acts to intentional states. No one doubts the existence of promises, statements, apologies, and commands, but when the analysis is extended to beliefs, fears, hopes, desires, and visual experiences, suddenly philosophers raise a host of “ontological” doubts. I think that thinking and other mental processes and events, like linguistic processes and events, are biologically based and are as real as digestion, conversation, lactation, or any other of the familiar biologically based processes. This seems to me so obviously true as to be hardly worth arguing, but I am assured that it is a minority opinion in contemporary philosophy.
During the positivist and verificationist phase of analytic philosophy the reason for the urge to eliminate the mental was not difficult to see: if the meaning of a statement is its method of verification and if the only method of verification of statements about the mental is in the observations of behavior, at least where “other minds” are concerned, then it would appear that some sort of behaviorism is an immediate logical consequence of verificationism.
For over a century now there has been a continuing debate about whether the forms of explanation appropriate to the social sciences are essentially the same as or radically different from those used in the natural sciences. On one side is the empiricist philosophical tradition, ranging at least from John Stuart Mill through the logical positivists. According to this view, the covering law model of explanation appropriate for the natural sciences is equally appropriate for subjects such as history, anthropology, linguistics, economics, and other social sciences. On the other side is the interpretivist or hermeneutic tradition which ranges at least from Dilthey in the nineteenth century through the twentieth-century followers of Wittgenstein. According to this tradition, there are special modes of explanation appropriate to human behavior. In the second tradition, for example, Dilthey claims that a special method which he calls Verstehen (literally, understanding) is essential to the social sciences. And more recently, Charles Taylor (1985) claimed that human beings are unique in that events are meaningful to them in a special way and that any mode of explanation adequate to accounting for human behavior must explain this meaning component.
An unstated but underlying feature of this debate is often the assumption that much larger issues are at stake. There is at least the suggestion that the issue is a version of the dispute between materialism, on one hand, and dualism and idealism, on the other.
What sorts of systematic explanations should we and can we seek in cognitive science for perception, language comprehension, rational action and other forms of cognition? In broad outline I think the answer is reasonably clear: We are looking for causal explanations, and our subject matter is certain functions of a biological organ, the human and animal brain.
As with any other natural science there are certain assumptions we have to make and certain conditions that our explanations have to meet. Specifically, we have to suppose that there exists a reality totally independent of our representations of it (in a healthier intellectual era it would not be necessary to say that), and we have to suppose that the elements of that reality that we cite in our explanations genuinely function causally.
Not all functions of the brain are relevant to cognition, so we have to be careful to restrict the range of brain functions we are discussing. Cognitive science is about the cognitive functioning of the brain and its relation to the rest of the organism and to the rest of the world in the way that nutrition science is about the digestive functioning of the digestive system and its relation to the rest of the organism and the rest of the world. Like other organs, and indeed like other physical systems, the brain has different levels of description and cognitive science is appropriately concerned with any level of description of the brain that is relevant to the causal explanation of cognition.
The notion of a performative is one that philosophers and linguists are so comfortable with that one gets the impression that somebody must have a satisfactory theory. But I have not seen such a theory and in this essay I want to address the question: how exactly do performatives work? I believe that answering that question is not just a fussy exercise in linguistic analysis but can give us insights into the nature of language and the relation between speech acts and actions generally. Some people who have written about performatives seem to think that it is just a semantic fact about certain verbs that they have performative occurrences, but the puzzle is: how could any verbs have such remarkable properties just as a matter of semantics? I can't fix the roof by saying, “I fix the roof” and I can't fry an egg by saying, “I fry an egg,” but I can promise to come and see you just by saying, “I promise to come and see you” and I can order you to leave the room just by saying, “I order you to leave the room.” Now why the one and not the other? And, to repeat, how exactly does it work? Perhaps the most widely accepted current view is the following: performative utterances are really just statements with truth values like any other statements, and Austin was wrong to contrast performative utterances with some other kind. The only special feature of the performative statement is that the speaker can perform some other speech act indirectly by making the statement.
This chapter reviews the evidence for asymmetrical involvement of different forebrain structures at different times during learning and memory consolidation in the chick, focusing primarily on imprinting and one-trial passive avoidance. In doing so it covers the lateralized behavioural, electrophysiological, biochemical and structural processes that occur during memory consolidation. It concludes with a discussion of the functional relevance of the observed lateralization to theories of memory formation and storage.
Despite their overall gross anatomical similarities, the two cerebral hemispheres of many of the avian species studied to date show a remarkable degree of structural and functional lateralization. The directionality of lateralization in, for instance, chicks, zebra finches and canaries is the same in almost all individuals. Indeed, some of the earliest demonstrations of functionally lateralized brains in non-human species were in avian species (Nottebohm, 1971; Rogers and Anson, 1979; Scharff and Nottebohm, 1991; Cynx, Williams and Nottebohm, 1992).
This chapter is primarily concerned with the evidence for functional lateralization in one of the most thoroughly investigated avian models of learning and memory, the domestic chick (Gallus gallus domesticus). As we will describe, not only do chicks show brain asymmetries in processing information, akin to the lateralized functions of the mammalian brain (Bisazza, Rogers and Vallortigara, 1998) but the processes of memory consolidation and storage engage structures in left and right hemispheres differentially.
Over the last two decades, it has been well established that there are a variety of forms of structural and functional lateralization in a number of avian species, including chicks (reviewed by Andrew, 1988, 1991; Rogers, 1995, 1996), pigeons (Güntürkün, 1997b), canaries (Nottebohm, 1977), zebra finches (Alonso, 1998) and marsh tits (Clayton and Krebs, 1994). Among these avian species, the chick, Gallus gallus domesticus, has been used as a model to study how hormones and early experience interact to influence the development of brain lateralization. In this chapter we review a number of studies that have examined the influence of light exposure prior to hatching and of steroid hormones on the development of asymmetry of the visual pathways and lateralization of visual behaviour.
Although the chick has been used extensively in studies of learning, memory formation, visual lateralization and visual neurone development, knowledge of the organization of the visual pathways in the chick is limited. Until recently, most knowledge of the organization of avian visual pathways came only from studies of the pigeon (reviewed by Güntürkün et al., 1993; Bischof and Watanabe, 1997), and it was generally assumed that the visual system of chicks had the same organization as that of the pigeon or one very similar to it. However, our recent studies have shown there are some clear differences between the chick and the pigeon in the organization of the central visual pathways (Deng and Rogers, 1998a, 1998b).
From the perspective of a new century, it seems difficult to comprehend that only 30 years ago no one believed that animals had lateralized brains and asymmetrical behaviours. Indeed, the conventional dogma was that humans were uniquely different from all other animals in having handedness, brain laterality and speech. The first two functions have now been well established in animals and the third is under serious challenge (Savage-Rumbaugh et al., 1993).
Before addressing these issues, it is necessary to make a distinction between brain lateralization for individual animals and brain lateralization at the population level (Denenberg, 1981). If the population is lateralized (e.g. handedness in humans), then it is obvious that the individuals within that population are lateralized. However, individual animals can be lateralized even when there is no population asymmetry. Thus, Collins (1977) found that 50% of mice used their right paws in reaching for food and 50% used their left. Each animal was given 50 trials and most were found to be strongly biased one way or the other, thereby showing strong asymmetry at the individual level, but no asymmetry at the population level.
The first definitive evidence for population behavioural asymmetry in non-humans was Nottebohm's paper in 1970 reporting that song production in the chaffinch was under the control of the left half of the syrinx (the vocal organ of birds).