In a seminal paper, Harry Frankfurt advanced the now famous “Frankfurt-type example” primarily to argue against the Principle of Alternative Possibilities (PAP). The principle, which purports to specify the sort of freedom or control required for moral responsibility, is formulated by Frankfurt as follows:
PAP: A person is morally responsible for what he has done only if he could have done otherwise (Frankfurt 1969: 829).
In addition to revealing the lessons that they do about responsibility, Frankfurt-type examples are especially useful in probing the control requirements for deontic anchors. I appeal to them to defend a thesis about deontic control that says that deontic anchors require freedom to do otherwise: No one can perform an action that is right, wrong, or obligatory unless it is an action that one could have avoided performing.
I am convinced that Frankfurt-type examples exert commanding pressure against PAP, though I shall not argue for this view in this book. If PAP is indeed called into question by Frankfurt-type cases, and if deontic morality requires “two-way” control, then moral responsibility and deontic morality are asymmetric with respect to the requirement of alternative possibilities: Whereas one can be morally responsible for an act that one could not have avoided performing, one cannot perform an act that has primary deontic properties unless one could have avoided performing it.
In this chapter, after introducing Frankfurt-type examples, I propose two different lines of reasoning for the view that deontic morality requires two-way control.
Very many of us are convinced that on numerous occasions in our lives we perform actions that are morally right, or wrong, or obligatory. But the seemingly innocuous view that our actions have such moral statuses may not be as secure as we initially believe. I want to generate a riddle about this view that, in many respects, bears striking resemblance to a much more widely known riddle. It will be helpful to start by saying something about this other riddle in order better to appreciate the new riddle.
The venerable old riddle is the riddle about freedom and responsibility. Though fascinating and deeply puzzling, its essentials are easy to grasp. Almost all of us believe that people have been and will be morally responsible for at least some of their behavior. But suppose causal determinism – roughly, the view that all the facts of the past, in conjunction with all the laws of nature, entail one unique future – is true. Then it seems that, at each instant, we would lack genuinely open alternatives; contrary to popular belief, there would be no time at which we could do other than what we in fact did at that time. Causal determinism threatens our very natural picture of the future as a garden of forking paths. Perhaps a more apt figurative representation of what our lives, including our futures, would be like if causal determinism is true is captured by the image of trains chugging along the predestined grooves of a nonbranching trunk line (see Feinberg 1980: 36–7).
The aim of this essay is to assess the significance of W. V. Quine's indeterminacy thesis. If Quine is right, the thesis has vast ramifications for the philosophy of language and mind; if he is wrong, we ought to be able to say exactly how and why.
Let us begin by stating the behaviorist assumptions from which Quine originally proceeds. For the sake of developing an empirical theory of meaning, he confines his analysis to correlations between external stimuli and dispositions to verbal behavior. In thus limiting the analysis, he does not claim to capture all the intuitions we have about the pretheoretical notion, but rather the “objective reality” that is left over if we strip away the confusions and incoherencies in the pretheoretical “meaning.” The point of the “behavioristic ersatz” is to give us a scientific, empirical account of the objective reality of meaning. On this view, the objective reality is simply a matter of being disposed to produce utterances in response to external stimuli. The stimuli are defined entirely in terms of patterns of stimulations of the nerve endings, and the responses entirely in terms of sounds and sound patterns that the speaker is disposed to emit. But we are not supposed to think that between the stimulus and the verbal response there are any mental entities. We are not supposed to think that there is any consciousness, intentionality, thoughts, or any internal “meanings” connecting the stimuli to the noises. There is just the pattern of stimulus and the pattern of learned response.
Intentionality is that feature of certain mental states and events that consists in their (in a special sense of these words) being directed at, being about, being of, or representing certain other entities and states of affairs. If, for example, Robert has the belief that Ronald Reagan is president, then his belief is an intentional state because in the appropriate sense his belief is directed at, or about, or of, or represents Ronald Reagan and the state of affairs that Ronald Reagan is president. In such a case Ronald Reagan is the intentional object of Robert's belief, and the existence of the state of affairs that Ronald Reagan is president is the condition of satisfaction of his belief. If there is not anything that a belief is about, then it does not have an intentional object; and if the state of affairs it represents does not obtain, it is not satisfied.
Ascriptions of intentionality are of differing kinds, and as these differences have been a source of confusion, I will begin by sorting out some of them. Consider the statements made in utterances of the following sentences:
Robert believes that Ronald Reagan is president.
Bill sees that it is snowing.
“Es regnet” means it's raining.
My car thermostat perceives changes in the engine temperature.
Each of these statements ascribes intentionality, but the status of the ascriptions is different.
As recently as two decades ago there was little interest among neuroscientists, philosophers, psychologists, and cognitive scientists generally in the problem of consciousness. Reasons for the resistance to the problem varied from discipline to discipline. Philosophers had turned to the analysis of language, psychologists had become convinced that a scientific psychology must be a science of behavior, and cognitive scientists took their research program to be the discovery of the computer programs in the brain that, they thought, would explain cognition. It seemed especially puzzling that neuroscientists should be reluctant to deal with the problem of consciousness, because one of the chief functions of the brain is to cause and sustain conscious states. Studying the brain without studying consciousness would be like studying the stomach without studying digestion, or studying genetics without studying the inheritance of traits. When I first got interested in this problem seriously and tried to discuss it with brain scientists, I found that most of them were not interested in the question.
The reasons for this resistance were various but they mostly boiled down to two. First, many neuroscientists felt – and some still do – that consciousness is not a suitable subject for neuroscientific investigation. A legitimate brain science can study the microanatomy of the Purkinje cell, or attempt to discover new neurotransmitters, but consciousness seems too airy-fairy and touchy-feely to be a real scientific subject. Others did not exclude consciousness from scientific investigation, but they had a second reason: “We are not ready” to tackle the problem of consciousness.
Since the early work on speech acts by Austin, Grice, myself and others in the 1950s and '60s, it has been possible to distinguish two apparently inconsistent strands in speech act theory. One strand, most prominently associated with the name of Grice (1957, 1969), treats individual intentionality as the fundamental notion in the theory of speech acts. Meaning is created by individual acts whereby speakers attempt to produce effects on hearers by getting the hearers to recognize their attempt to produce those effects. Meaning is thus the product of individual acts of meaning. On a Gricean analysis, there is no suggestion that conventions, rules, or social practices are in any way essential for the performance of speech acts. A second tradition associated with the name of Austin (1962), as well as with my own early book Speech Acts (1969), emphasizes the role of social institutions in the performance of speech acts. On this view, social conventions, rules, and contexts of utterance play a crucial role in the determination of the speech act. Meaning, on this view, is not just a product of individual intentionality, but it is also the result of social practices.
Is there anything to be said for either of these approaches, and in particular, are they, or can they be made to be, consistent with each other? My own view is that if they are stated carefully they can be construed in a way which renders them consistent. I believe they are both trying to say something true.
Like most words, ‘consciousness’ does not admit of a definition in terms of genus and differentia or necessary and sufficient conditions. Nonetheless, it is important to say exactly what we are talking about because the phenomenon of consciousness that we are interested in needs to be distinguished from certain other phenomena such as attention, knowledge, and self-consciousness. By ‘consciousness’ I simply mean those subjective states of sentience or awareness that begin when one awakes in the morning from a dreamless sleep and continue throughout the day until one goes to sleep at night, or falls into a coma, or dies, or otherwise becomes, as one would say, ‘unconscious’.
Above all, consciousness is a biological phenomenon. We should think of consciousness as part of our ordinary biological history, along with digestion, growth, mitosis and meiosis. However, though consciousness is a biological phenomenon, it has some important features that other biological phenomena do not have. The most important of these is what I have called its ‘subjectivity’. There is a sense in which each person's consciousness is private to that person, a sense in which he is related to his pains, tickles, itches, thoughts and feelings in a way that is quite unlike the way that others are related to those pains, tickles, itches, thoughts and feelings. This phenomenon can be described in various ways. It is sometimes described as that feature of consciousness by way of which there is something that it's like or something that it feels like to be in a certain conscious state. If somebody asks me what it feels like to give a lecture in front of a large audience, I can answer that question.
This essay begins with an intuition, a notation, and a presupposition. The intuition is: Collective intentional behavior is a primitive phenomenon that cannot be analyzed as just the summation of individual intentional behavior; and collective intentions expressed in the form “we intend to do such-and-such” or “we are doing such-and-such” are also primitive phenomena and cannot be analyzed in terms of individual intentions expressed in the form “I intend to do such-and-such” or “I am doing such-and-such.” The notation is: S (p). The “S” stands for the type of psychological state; the “p” stands for the propositional content, the content that determines the conditions of satisfaction. Like all such notations, it isn't neutral; it embodies a theory. The presupposition is: All intentionality, whether collective or individual, requires a preintentional Background of mental capacities that are not themselves representational. In this case that implies that the functioning of the phenomena represented by the notation requires a set of phenomena that cannot be represented by that notation.
The questions this essay addresses are: Is the intuition right? (It is denied by most of the authors I have read on the subject.) And if it is right, can it be made to fit the notation? How, if at all, can we capture the structure of collective intentions within that notation? And what role does the Background play in enabling us to function in social collectives? These questions are not innocent. The larger question they form a part of is: How far can the theory of intentional action in Intentionality (Searle, 1983) be extended to become a general theory?
The essays collected in this volume were written over a period of two decades. They deal with a wide range of subjects and were intended for a variety of audiences. Despite the variety, there are certain unifying principles that underlie this collection; indeed, I have tried to make a selection that will exhibit a natural progression, as the topics move from consciousness to intentionality to society to language, and finally conclude with several debates about the issues that have preceded. In this introduction I want to try to state some of these unifying principles and offer a brief description (and note I say “description” rather than “summary” or “abstract”) of the essays.
There is a single overarching problem that has preoccupied me since I first began work in philosophy almost a half-century ago: How can we have a unified and theoretically satisfactory account of ourselves and of our relations to other people and to the natural world? How can we reconcile our common-sense conception of ourselves as conscious, free, mindful, speech-act performing, rational agents in a world that we believe consists entirely of brute, unconscious, mindless, meaningless, mute physical particles in fields of force? How, in short, can we make our conception of ourselves fully consistent and coherent with the account of the world that we have acquired from the natural sciences, especially physics, chemistry, and biology? The questions that have most preoccupied me – What is a speech act? What is consciousness? What is intentionality? What is society? What is rationality? – have all in one way or another been addressed to this larger problematic.
Traditionally speech act theory has a very restricted subject matter. The speech act scenario is enacted by its two great heroes, “S” and “H”; and it works as follows: S goes up to H and cuts loose with an acoustic blast; if all goes well, if all the appropriate conditions are satisfied, if S's noise is infused with intentionality, and if all kinds of rules come into play, then the speech act is successful and nondefective. After that, there is silence; nothing else happens. The speech act is concluded and S and H go their separate ways. Traditional speech act theory is thus largely confined to single speech acts. But, as we all know, in real life speech acts are often not like that at all. In real life, speech characteristically consists of longer sequences of speech acts, either on the part of one speaker, in a continuous discourse, or it consists, more interestingly, of sequences of exchange speech acts in a conversation, where alternately S becomes H, and H, S.
Now the question naturally arises: Could we get an account of conversations parallel to our account of speech acts? Could we, for example, get an account that gave us constitutive rules for conversations in a way that we have constitutive rules of speech acts? My answer to that question is going to be “No.” But we can say some things about conversations; we can get some sorts of interesting insights into the structure of conversations.
The neurosciences have now advanced to the point that we can address – and perhaps, in the long run, even solve – the problem of consciousness as a scientific problem like any other. However there are a number of philosophical obstacles to this project. The aim of this article is to address and try to overcome some of those obstacles. Because the problem of giving an adequate account of consciousness is a modern descendant of the traditional ‘mind-body problem’, I will begin with a brief discussion of the traditional problem.
The mind-body problem can be divided into two problems; the first is easy to solve, the second is much more difficult. The first is this: what is the general character of the relations between consciousness and other mental phenomena on the one hand and the brain on the other? The solution to the easy problem can be given with two principles: first, consciousness and indeed all mental phenomena are caused by lower level neurobiological processes in the brain; and, second, consciousness and other mental phenomena are higher level features of the brain. I have expounded this solution to the mind-body problem in a number of writings, so I will not say more about it here (but see Searle 1984, 1992).
The second, and more difficult problem, is to explain in detail how consciousness actually works in the brain. Indeed, I believe that a solution to the second problem would be the most important scientific discovery of the present era.
Many species of animals have consciousness, intentionality, and thought processes. By “consciousness” I mean those subjective states of sentience and awareness that we have during our waking life (and at a lower level of intensity in our dreams); by “intentionality” I mean that feature of the mind by which it is directed at or about objects and states of affairs in the world; and by “thought processes” I mean those temporal sequences of intentional states that are systematically related to each other, where the relationship is constrained by some rational principles. Examples of conscious states are such things as feeling a pain or hearing a sound. Examples of intentional states are such things as wanting to eat food or believing that someone is approaching. Examples of thought processes are such things as figuring out how to get a banana that is out of reach or monitoring the behavior of prey that is on the move and trying to escape. Though these three phenomena – consciousness, intentionality, and thought processes – overlap, they are not identical. Some conscious states are intentional, some not. Some intentional states are conscious, many are not. For example, my current thought that it is unlikely to rain is conscious, my belief when I am asleep that Bill Clinton is president of the United States is unconscious. All thought processes, as I have defined them, are intentional; but not every intentional state occurs as part of a thought process. For example, a pang of undirected anxiety, though conscious, is not intentional. A sudden desire for a cold beer is both conscious and intentional.
Throughout most of its history analytic philosophy has exhibited a curious prejudice against the mental. Many, perhaps most, analytic philosophers have felt that there was something especially puzzling about mental processes, states, and events, and that we would be better off if they could be analyzed away or explained in terms of something else or somehow eliminated. One sees this attitude, for example, in the persistent use of pejorative adjectives, such as “mysterious” and “occult,” that analytic philosophers from Ryle to Rorty use to characterize mental phenomena naively construed.
I first became aware of the pervasiveness of this attitude when I tried to extend my analysis of speech acts to intentional states. No one doubts the existence of promises, statements, apologies, and commands, but when the analysis is extended to beliefs, fears, hopes, desires, and visual experiences, suddenly philosophers raise a host of “ontological” doubts. I think that thinking and other mental processes and events, like linguistic processes and events, are biologically based and are as real as digestion, conversation, lactation, or any other of the familiar biologically based processes. This seems to me so obviously true as to be hardly worth arguing, but I am assured that it is a minority opinion in contemporary philosophy.
During the positivist and verificationist phase of analytic philosophy the reason for the urge to eliminate the mental was not difficult to see: if the meaning of a statement is its method of verification and if the only method of verification of statements about the mental is in the observations of behavior, at least where “other minds” are concerned, then it would appear that some sort of behaviorism is an immediate logical consequence of verificationism.