For equivalence at the level of elementary logic there is at least a complete proof procedure, even if there is no complete disproof procedure; for mathematical equivalence in general, as a consequence of the Gödel theorem, there is not even a complete proof procedure. But the notion of equivalence that is philosophically important today is not the notion of logical or mathematical equivalence, but rather the notion of cognitive equivalence of whole theories, and, in particular, of theoretical systems which are, taken literally, incompatible. It is to this topic – the cognitive equivalence of theories and conceptual systems, especially systems which are incompatible when taken at face value – that the present article is devoted.
‘Equivalence’ as a philosophical notion
For one kind of traditional realist philosopher the sort of equivalence that we are discussing does not exist at all. This is the sort of realist who believes, as Lenin did in Materialism and Empirio-Criticism, that theories are simply ‘copies’ of the world. If realism is identified with the view that there is ‘one true theory of everything’ (and exactly one), then realism is just the denial that there is a plurality of ‘equivalent descriptions’ of the world (apart from the noncontroversial case of logical or mathematical equivalence). Today, however, few if any philosophers of a realist stamp would wish to be identified with that sort of realism.
Michael Dummett (1975) has argued that logic and metaphysics are intimately connected. While Dummett's arguments are based upon considerations from the philosophy of language, rather than upon the actual history of logic and metaphysics, I believe that the history of these subjects suggests that Dummett is right. G. E. L. Owen has pointed out† that the notion of a ‘property’ was neither an evident nor a simple one for either Aristotle or Plato. We can say of a man that he is a ‘white man’; but if we ask whether the man is white in the way that a white wall is white, we shall have to answer ‘no’. What ‘white’ is is not specified, Aristotle thought, until we say what sort of thing we are predicating it of, what we are taking as the standard, or whatever. But what if the term we use to answer this question has the same relativity?
(Modern logic teachers would probably tell their students: ‘in Logic we assume – or pretend – that all terms have somehow been made precise’. According to Owen, Aristotle – and even Plato – worried about this pretense. Is it a pretense that we have done something that we – or some conceivable cognitive extension of ourselves – could in principle do? Or a pretense that we have done a ‘we know not what it would be like’? And who is truly more sophisticated: the modern logic teacher, for whom this is no problem, or the founders of the subject?)
Even the schema ∼ (Px & ∼ Px) becomes problematic if this relativity cannot somehow be avoided when we wish.
I believe that the time has come to take another look at Quine's (1951) celebrated article ‘Two dogmas of empiricism’. The analysis of this article that is usually offered in philosophy seminars is very simple: Quine was attacking the analytic–synthetic distinction. His argument was simply that all attempts to define the distinction are circular. I think that this is much too simple a view of what was going on. (The continuing recognition of the great importance of ‘Two dogmas’ shows that at some level many readers must be aware that something deep and momentous for philosophy was going on.)
I shall argue that what was importantly going on in Quine's paper was also more subtle and more complicated than Quine and his defenders (and not just the critics) perceived. Some of Quine's arguments were directed against one notion of analyticity, some against another. Moreover, Quine's arguments were of unequal merit. One of the several notions of analyticity that Quine attacked in ‘Two dogmas of empiricism’ was close to one of Kant's accounts of analyticity (namely, that an analytic judgment is one whose negation reduces to a contradiction), or, rather, to a ‘linguistic’ version of Kant's account: a sentence is analytic if it can be obtained from a truth of logic by putting synonyms for synonyms. Let us call this the linguistic notion of analyticity. Against this notion, Quine's argument is little more than that Quine cannot think how to define ‘synonymy’.
The essays collected in this volume were written in a period of rethinking and reconsidering much of my philosophical position. A look at the introduction to the second volume of my Philosophical Papers, and at the essay titled ‘Language and reality’ in that volume, will reveal that in 1975 I thought that the errors and mistakes I detected in analytical philosophy were occasioned by ‘naive verificationism’ and ‘sophisticated verificationism’. I described myself as a ‘realist’ (without any qualifying adjective), and I chiefly emphasized the importance of reference in determining meaning in opposition to the idea, traditional among both realists and idealists, that it is meaning that determines reference. Reference itself I described as a matter of causal connections. The following quotation (from ‘Language and reality’) illustrates these themes at work:
As language develops, the causal and noncausal links between bits of language and aspects of the world become more complex and more various. To look for any one uniform link between word or thought and object of word or thought is to look for the occult; but to see our evolving and expanding notion of reference as just a proliferating family is to miss the essence of the relation between language and reality. The essence of the relation is that language and thought do asymptotically correspond to reality, to some extent at least. A theory of reference is a theory of the correspondence in question.
One of the most important themes in Goodman's (1978) Ways of Worldmaking (WoW) is that there is no privileged basis. Reducing sense data to physical objects or events is an admissible research program for Goodman; it is no more (and no less) reasonable than reducing physical objects to sense data. As research programs, there is nothing wrong with either physicalism or phenomenalism; as dogmatic monisms there is everything wrong with both of them.
This is decidedly not the fashionable opinion today. Physicalism and ‘realism’ are at the high tide of fashion; phenomenalism has sunk out of sight in a slough of philosophical disesteem and neglect. Goodman's assumption that physicalism and phenomenalism are analogous would be disputed by many philosophers.
It is this assumption that I wish to explain and defend before considering other aspects of WoW. Because it runs so counter to the fashion, it is of great importance to see that it is correct. At the same time, the analogy leads directly to the heart of Goodman's book, which is its defense of pluralism.
In WoW, Goodman points out that the phenomenal itself has many equally valid descriptions. In his view, this arises from two causes. First of all, perception is itself notoriously influenced by interpretations provided by habit, culture, and theory. (Goodman's long and close acquaintance with actual psychological research shines through many sections of WoW.) We see toothbrushes and vacuum tubes as toothbrushes and vacuum tubes, not as arrangements of color patches.
In a number of famous publications (the most famous being the celebrated article ‘Two dogmas of empiricism’) Quine has advanced the thesis that there is no such thing as an (absolutely) a priori truth. (Usually he speaks of ‘analyticity’ rather than apriority; but his discussion clearly includes both notions, and in his famous paper ‘Carnap and logical truth’ he has explicitly said that what he is rejecting is the idea that any statement is completely a priori. For a discussion of the different threads in Quine's arguments, see chapter 5). Apriority is identified by Quine with unrevisability. But there are at least two possible interpretations of unrevisability: (1) a behavioral interpretation, namely, an unrevisable statement is one we would never give up (as a sheer behavioral fact about us); and (2) an epistemic interpretation, namely, an unrevisable statement is one we would never be rational to give up (perhaps even a statement that it would never be rational to even think of giving up). On the first interpretation, the claim that we might revise even the laws of logic becomes merely the claim that certain phenomena might cause us to give up our belief in some of the laws of logic; there would be no claim being made that doing so would be rational. Rather the notion of rationality itself would have gone by the board.
I don't know if Quine actually intended to take so radical a position as this, but, in any case, I think that most of his followers understood him to be advocating a more moderate doctrine.
Two ideas that have become a part of our philosophical culture stand in a certain amount of conflict. One idea, which was revived by Moore and Russell after having been definitely sunk by Kant and Hegel (or so people thought), is metaphysical realism, and the other is that there are no such things as intrinsic or ‘essential’ properties. Let me begin by saying a word about each.
What the metaphysical realist holds is that we can think and talk about things as they are, independently of our minds, and that we can do this by virtue of a ‘correspondence’ relation between the terms in our language and some sorts of mind-independent entities. Moore and Russell held the strange view that sensibilia (sense data) are such mind-independent entities: a view so dotty, on the face of it, that few analytic philosophers like to be reminded that this is how analytic philosophy started. Today material objects are taken to be paradigm mind-independent entities, and the ‘correspondence’ is taken to be some sort of causal relation. For example, it is said that what makes it the case that I refer to chairs is that I have causally interacted with them, and that I would not utter the utterances containing the word ‘chair’ that I do if I did not have causal transactions ‘of the appropriate type’ with chairs.
The use by logicians and philosophers of the notions of possibility and necessity goes back to Aristotle. In the modern period, enormous use was made of the notion of a ‘possible world’ by Leibniz. Yet the epistemological and metaphysical foundations of these notions remain obscure.
Although empiricist philosophers tried to restrict necessity to linguistic necessity, or even to banish it from philosophy altogether, the notions have proved, like other perennial philosophical notions, to be extremely hardy. (Some philosophers would complain that they are hardy weeds.) As a result of the work described below, modal logic, possible worlds semantics (a theory due to Richard Montague which has connections with what we shall discuss here, although it falls beyond the purview of this article), the topic of ‘essences’, and the theory of counterfactual conditionals have all been pursued with vigor. Indeed, the concepts of necessity and possibility have enjoyed an unprecedented philosophical revival.
In this article I shall first discuss the strange subject of quantum logic, which well illustrates the case for abandoning the notion of necessity (in the sense of apriority) altogether, and then look at two representative examples of the work on the non-epistemic notion of necessity, metaphysical necessity, as it is grandly called. These examples are theories of Saul Kripke and David Lewis, respectively, and they have been the pace-setters for the revival of talk about possible worlds and metaphysically necessary truths.
The preceding chapter described the failure of contemporary attempts to ‘naturalize’ metaphysics; in the present chapter I shall examine attempts to naturalize the fundamental notions of the theory of knowledge, for example the notion of a belief's being justified or rationally acceptable.
While the two sorts of attempts are alike in that they both seek to reduce ‘intentional’ or mentalistic notions to materialistic ones, and thus are both manifestations of what Peter Strawson (1979) has described as a permanent tension in philosophy, in other ways they are quite different. The materialist metaphysician often uses such traditional metaphysical notions as causal power and nature quite uncritically. (I have even read papers in which one finds the locution ‘realist truth’, as if everyone understood this notion except a few fuzzy anti-realists.) The ‘physicalist’ generally doesn't seek to clarify these traditional metaphysical notions, but just to show that science is progressively verifying the true metaphysics. That is why it seems just to describe his enterprise as ‘natural metaphysics’, in strict analogy to the ‘natural theology’ of the eighteenth and nineteenth centuries. Those who raise the slogan ‘epistemology naturalized’, on the other hand, generally disparage the traditional enterprises of epistemology. In this respect, moreover, they do not differ from philosophers of a less reductionist kind; the criticisms they voice of traditional epistemology – that it was in the grip of a ‘quest for certainty’, that it was unrealistic in seeking a ‘foundation’ for knowledge as a whole, that the ‘foundation’ it claimed to provide was by no means indubitable in the way it claimed, that the whole ‘Cartesian enterprise’ was a mistake, etc. – are precisely the criticisms one hears from philosophers of all countries and types.
According to Bertrand Russell's view in Problems of Philosophy, we have two kinds of knowledge: knowledge by acquaintance and knowledge by description. Knowledge by acquaintance is limited to sense data (for Russell, sense data were themselves qualities, and were thus universals rather than particulars; but the details of his theory are not relevant here). Sense data can be directly apprehended and named according to Russell; only in the case of sense data can we be certain that a name refers and certain of what it refers to. These names – names of sense data of which we have knowledge by acquaintance – Russell called ‘logically proper names’.
Of other sorts of things we have knowledge by description. I can know that there was such a person as Julius Caesar, even though I do not have knowledge by acquaintance of Julius Caesar, because I can describe Julius Caesar as ‘the Roman general who was named “Julius”, who defeated Pompey, who crossed the Rubicon, etc.’ (Of course, these clauses should be reformulated so as to contain only logically proper names: a difficult problem for Russell.)
Russell's view can be restated as a view about the reference of terms as follows.
There are two sorts of terms: basic terms and defined terms.
The defined terms are synonymous with descriptions, i.e., expressions of the form ‘The one and only entity which …’. (Russell's celebrated ‘theory of descriptions’ showed how to translate descriptions into the notation of symbolic logic.)
Basic terms refer to things to which we have some sort of epistemic access.
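As an illustration (a standard modern gloss rather than a quotation from Russell), the theory of descriptions analyzes a sentence of the form ‘The F is G’ roughly as

\[
\exists x\,\bigl(Fx \;\&\; \forall y\,(Fy \rightarrow y = x) \;\&\; Gx\bigr)
\]

that is: something is F, at most one thing is F, and that thing is G. On this analysis the description ‘the one and only entity which is F’ disappears under translation: no single term in the formula purports to name that entity. That is why, on the restated view, defined terms can in principle be eliminated in favor of basic terms.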
Both Wittgenstein and Quine have had important insights in connection with the nature of mathematical and logical ‘necessity’, and both have written things that have transformed the discussion of this topic. But it is the burden of this paper to show that the views of both are unacceptable as they stand. I hope that a short and sharp statement of why both sorts of views will not do may help take us to a new stage in the discussion.
Part I: Why mathematical necessity is not explained by human nature, ‘forms of life’, etc.
Wittgensteinian views
Just what Wittgenstein's contention is, in connection with philosophers' opinions, theories, and arguments on the topic of ‘mathematical necessity’, has been a subject of considerable controversy. Clearly he thinks the whole discussion is nonsensical and confused; but why (in his view) it is nonsensical and confused, and whether he offers any explanation at all of why we think there is such a thing as mathematical necessity and of what the difference is between mathematical and empirical statements, is a subject on which there seems to be a great deal of disagreement among his interpreters.
I shall not attempt to do any textual exegesis here. I know what the (several) views of Wittgensteinians are, even if I do not know for sure which, if any, was Wittgenstein's; and what I shall try to show is that not even the most sophisticated of these ‘Wittgensteinian’ views is tenable.
The failure of non-reductive forms of materialism leads us to consider reductive theories. The most popular contemporary versions of reductive materialism, namely the central state theories of Smart and Armstrong, were developed in response to the inadequacies of behaviourism. They also retained important behaviouristic components. So it is natural to begin our discussion by plotting the logical paths which have led from behaviourism to central state materialism.
According to the most naive formulation of behaviourism, to be in a certain mental state is to be behaving in some supposedly appropriate way. Thus, to be in pain is to be pain-behaving. The inadequacies of such a naive approach were always apparent: overt behaviour is neither necessary nor sufficient for possession of the mental state. Some people can suffer pain and repress the normal behavioural response; others – for example actors – might exhibit the behaviour when they lack the pain. Behaviourists have tended to adopt either of two possible modifications to the theory to meet this problem. We shall see that both responses point towards central state materialism. The first modification, and the more popular one among philosophers, is to identify a mental state with the disposition towards behaviour, rather than with the behaviour itself. The first problem with this theory is to explain what one means by ‘disposition’. Ryle (1949) popularised this general approach to the philosophy of mind and he interpreted ‘disposition’ in a non-realist (sometimes called ‘phenomenalist’) sense. For Ryle, to say that s has a disposition to Ø is to say that he has Ø-ed, is Ø-ing, will Ø or would Ø in certain specified appropriate circumstances.
William James divided philosophers into two psychological types, the tender-minded and the tough-minded. One of the characteristics of the tough-minded is a disposition to believe in philosophical materialism, whilst the tender-minded prefer dualism or idealism. The epithets ‘tender-minded’ and ‘tough-minded’ are not meant as evaluations of the arguments used to defend the theories each type is inclined to adopt; rather they describe the temperaments which are prone to accept and defend those theories: they are a psychological not a rational evaluation. Despite their association with the tough-minded temperament, materialist theories have tended to get the worst of the argument about the nature of the mind, if we judge by influence on the history of philosophy. Plato and Aristotle, for example, have had a greater appeal for philosophers through the ages than have the epicureans or the atomists: Descartes's influence is much greater than Hobbes's. This has been so despite the influence on modern philosophy of the materialistically inclined physical sciences. James also classified the approach and doctrines normally labelled ‘empiricist’ as tough-minded, yet empiricists have tended not to be materialists. The modern philosopher most representative of traditional empiricism, Sir Alfred Ayer, can both declare that his only object of faith is science and express a reluctant pessimism about the possibility of a coherent materialist theory of the human mind.1 One may be tempted to cynicism and suspect that otherwise hard-nosed philosophers become tender-minded when dealing with their own souls.
The concept of reduction has always played an important part in the discussion of materialist theories. To some extent it is a ‘boo’ word, signifying that the materialist is adopting a counterintuitive position – one which involves claiming that mind is other than it seems. For this reason the philosophers we have been discussing so far deny that they are reductionists. They say that the mental is not reducible to the physical, although it supervenes upon it. The purpose of this chapter is to investigate the fashionable concept of supervenience and to see how, if at all, it differs from reduction.
It is convenient to start this investigation by picking up an unfinished line of argument from our discussion of interaction and epiphenomenalism (see ch. 1.3). One criterion of whether an event or state is epiphenomenal and does not interact with other things is whether its absence would have had causal consequences, ceteris paribus. We showed that mental events were epiphenomenal on Davidson's and Peacocke's theory without recourse to this criterion: we showed that a interacts with b to produce c only if a causally explains c, whereas on their theory the occurrence of mental states causally explains nothing. However, reversion to the former criterion is a good place to begin a discussion of supervenience. It serves this purpose because the question ‘What would have happened if mental states had been absent and other things had remained the same?’ will only have application if it makes sense to suggest that mental states might have been absent while physical states remained the same.
At the beginning of chapter 1 we referred to three types of materialist theory. The most dramatic-sounding of these, namely the theory that sensations and consciousness do not exist, has as yet barely been mentioned. In this brief chapter I shall discuss the so-called ‘disappearance’ or ‘elimination’ theory of mind. The purpose of that discussion is to show two things. The first is that the disappearance theory must, if it is to be coherent, involve the causal or functionalist theory: it does not represent an alternative approach. The second is that those elements in the theory that go beyond this, and which might be deemed its original parts, are misleading and generally false.
The motive for propounding the disappearance theory was to avoid analytical reductionism. Such reductionism will, in Richard Rorty's opinion, ‘inevitably get bogged down in controversy about the adequacy of proposed topic-neutral translations of statements about sensations’ (1965–6: 27). He therefore suggests an alternative form of the identity theory which supposedly does not involve giving any analysis of psychological discourse. According to the proposed theory, mental and physical are not related by strict Leibnitzian identity, and therefore there is no necessity of claiming that everything that can truly be said of one can truly be said of the other. Instead, he chooses as the model for the relation between mental and physical states the relation of entities in a scientific theory which is about to be abandoned to those entities which are postulated in the theory which is replacing it, and which perform the same explanatory role as the entities in the original theory.