If a theory entails that a certain statement is true and it is known that the statement is indeed true, this constitutes a reason to believe the theory. However, various other factors may influence the strength of such evidence. In this chapter I want to explore four superficially related intuitions concerning the degree of confirmation provided by theoretically entailed data: (1) the view that verification of relatively surprising consequences of a theory has especially great evidential value; (2) the idea that survival through severe experimental tests provides a theory with particularly strong support; (3) the conviction that the postulation of ad hoc hypotheses is generally disreputable; and (4) the claim that the successful prediction of subsequently verified events boosts the credibility of whatever theory is employed to a greater extent than the subsumption or accommodation of previously known results.
It is tempting to regard these intuitions as manifestations of a single underlying phenomenon. For, one might suppose, mere subsumption of known data is inferior to prediction since it is tainted with ad hocness; ad hoc hypotheses are undesirable because the facts to which they are tailored did not emerge from a genuine test of the new theory; and a genuine, relatively severe test of a theory is one which it is relatively likely to fail – one where the data which would constitute passing are relatively improbable and surprising. Thus, our ideas about surprising consequences, severe tests, ad hoc hypotheses, and prediction versus subsumption seem to be intimately related. I will argue, however, that this line of thought is almost wholly mistaken: (3) is explained independently of (1) and (2); and (4) is shown to be incorrect.
Let us start with the question: what makes something surprising? Presumably, it is sufficient that we strongly believed, until the moment of truth, that it would not happen. But violation of active expectations cannot be the whole story since we can be reasonably taken aback by things that we had no occurrent beliefs about – explosion of the moon, for example.
The traditional problem of induction derives from Hume's question: ‘What is the nature of that evidence which assures us of any real existence and matter of fact beyond the present testimony of our senses or the records of memory?’ His answer is that we expect the future to resemble the past and ‘expect similar effects from causes which are to appearance similar’. But he argues that this assumption cannot be deductively justified (since no contradiction arises from its denial), and cannot be established by reasoning appropriate to matters of fact (since such reasoning would require the very assumption to be justified, and would therefore be circular). And so he infers that ‘our conclusions concerning matters of fact are not founded on reasoning or any process of the understanding’. In short, Hume's view is that these beliefs are determined by the unjustifiable instinctive expectation that nature will be uniform.
As we shall see, there has been little improvement upon Hume's line of thought, although these days the discussion is more refined. In particular, it now seems clear that the problem of induction – to investigate the rational basis of inductive inference – cannot be resolved without considerable preliminary attention to (A), the nature of inductive inference; and to (B), what should be required in a justification of it. Thus, there are three distinct questions here, and three respects in which an alleged solution may be open to criticism. Firstly, it may involve, and exploit, an erroneous conception of our inductive practice. As the grue problem reveals, we do not generally accept the rule
All the many sampled As have been B
∴ Probably all As are B
Secondly, it could presuppose mistaken standards of justification. Obviously, we should not be required to show that inductive arguments are invariably truth preserving. And thirdly, it may fail, even within its own terms, to establish that what is taken to constitute our inductive reasoning does meet the supposed adequacy conditions.
My purpose in writing this essay was to exhibit a unified approach to philosophy of science, based on the concept of subjective probability. I hope to have contributed to the subject, first, by offering new treatments of several problems (for example the raven paradox, the nature of surprising data, and the supposed special evidential value of prediction over and above the accommodation of experimental results); and, second, by providing a more complete probabilistic account of scientific methods and assumptions than has been given before. In the interest of autonomy, I have included a chapter on probability which surveys the ideas that will be needed later, keeping technicalities to a minimum. Unfortunately, there was no room for a proper consideration of many alternative points of view; and a much longer treatise would be desirable in which the competition is adequately presented and criticised. Nevertheless, I do make something of a case for the probabilistic approach: it yields satisfying solutions to a wide range of problems in the philosophy of science.
This book is aimed at professional philosophers, but not exclusively; for I have covered a lot of ground, trying to explain everything from scratch, and so I hope that students will find here a useful introduction to the subject. Let no one be intimidated by the occasional intrusion of symbols. The formalism is intended to promote clarity; it is not difficult to master; and if worse comes to worst, those few sections may be skimmed without losing the general drift.
I would like to acknowledge the help and pleasure I have received from Hempel's Philosophy of Natural Science, from the work of Hacking and Kyburg on probability, and from Janina Hosiasson-Lindenbaum's brilliant early essay, ‘On Confirmation’. Also, I am very grateful to Frank Jackson, Thomas Kuhn and Dan Osherson, who read an early draft and contributed many helpful suggestions. And I would especially like to thank my friends, Ned Block and Josh Cohen, for their excellent advice and steady encouragement.
There is no shortage of objections in the philosophical literature to the general strategy and the particular concepts which have been employed here. Some of these complaints have been aired in previous chapters and, I hope, defused. In particular, I have nothing to add to the earlier arguments, in Chapter 2, for the existence of degrees of belief and their conformity to the probability calculus. Let me simply repeat what I think are the main considerations which help to allay criticism of those ideas. (1) A respectable concept need not be operationally definable. (2) The present notion of subjective probability was designed only to provide a perspicuous representation facilitating the exposure of confusion, and is not intended to serve the needs of psychology, the history of science or any other discipline.
However, there are several perspectives opposed to my approach which have not yet been explicitly addressed, and I would like in conclusion to respond directly to some of them. I will begin with Popper, who has been perhaps the most prominent and persistent opponent of Carnap-style probabilistic inductivism. Then I will discuss the bearing of Bayesianism upon the realism/instrumentalism controversy. Thirdly, I will examine an ingenious argument due to Putnam and intended to produce dissatisfaction with probabilistic confirmation theory. Finally, I will consider the criticisms of Bayesianism which motivate Glymour's ‘bootstrap’ conception of evidence.
According to Popper (1972) we ought never to believe that a general explanatory theory is true, or even probable; but we may often come to know that such a theory is false – when it conflicts with our data. Rational scientific inquiry proceeds by the formulation of bold (= contentful = easily falsifiable = intrinsically improbable) conjectures, by their subjection to rigorous experimental investigation, and by the invention of new hypotheses to resolve the problems, yet preserve the merits, of those which have been refuted. Progress consists in the growth in our knowledge of which theories are false, and in the increasing corroboration (survival through severe tests) of those which are as yet unfalsified, and which might therefore be true.
If your unconditional degree of belief that elephants can fly is lower than your conditional degree of belief relative to the supposition that they have wings, then, at least as far as you are concerned, the information that elephants have wings would count in favour of the view that they are capable of flight. Generalising, we might be inclined towards the following probabilistic account of evidence:
(1) E confirms (supports, is evidence for) H
iff P(H/E) > P(H)
(2) E disconfirms (is evidence against) H
iff P(H/E) < P(H)
(3) E is irrelevant to H
iff P(H/E) = P(H)
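The three clauses above can be illustrated with a toy joint distribution over H (‘elephants can fly’) and E (‘elephants have wings’). The particular numbers below are my own illustrative assumptions, chosen only so that clause (1) applies:

```python
# A toy joint distribution P(H, E) over the four truth-value combinations.
# All numbers are illustrative assumptions, not drawn from the text.
joint = {
    (True, True): 0.04,   # H and E
    (True, False): 0.01,  # H and not-E
    (False, True): 0.16,  # not-H and E
    (False, False): 0.79, # neither
}

# Marginal probabilities, obtained by summing over the other variable.
p_H = sum(p for (h, e), p in joint.items() if h)   # P(H)  = 0.05
p_E = sum(p for (h, e), p in joint.items() if e)   # P(E)  = 0.20

# Conditional probability by the ratio definition: P(H/E) = P(H & E) / P(E).
p_H_given_E = joint[(True, True)] / p_E            # P(H/E) = 0.20

# Since P(H/E) > P(H), clause (1) says that E confirms H.
print(p_H, p_E, p_H_given_E)
```

On these assumed numbers, learning that elephants have wings would raise the probability that they can fly from 0.05 to 0.2, so E counts as evidence for H under clause (1).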
But this would be open to a couple of substantial objections. First, it is too subjective. It suggests the conditions in which an individual, given his personal degrees of belief P, would say that E confirms H. But facts about evidence are not simply matters of taste. E may be evidence for H, whatever I may happen to think; and what we want is an account of the circumstances in which this objective fact obtains. Second, as Glymour (1980) has noted, its plausibility is confined to those contexts in which the truth of E is uncertain. When E is known, P(E) = 1 and P(H/E) = P(H). Thus, we would have to deny that established data could ever qualify as evidence; and this is obviously wrong.
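Glymour's objection turns on a simple arithmetical fact about the ratio definition of conditional probability, which a brief sketch can make vivid. The specific numbers here are illustrative assumptions of mine:

```python
# Glymour's point, sketched numerically, using the ratio definition
# P(H/E) = P(H & E) / P(E). All specific numbers are illustrative.

def conditional(p_h_and_e, p_e):
    """Conditional probability P(H/E) by the ratio definition."""
    return p_h_and_e / p_e

# While E is still uncertain, conditioning on it can raise P(H):
# with P(H) = 0.05, P(E) = 0.2 and P(H & E) = 0.04, P(H/E) comes to 0.2.
print(conditional(0.04, 0.2))

# Once E is known, P(E) = 1; and since P(not-E) = 0 forces
# P(H & E) = P(H), the ratio collapses to P(H) itself.
p_h = 0.05
print(conditional(p_h, 1.0))
```

So on definition (3), any E whose probability has reached 1 is automatically ‘irrelevant’ to every hypothesis, which is why established data could never qualify as evidence on the unrevised account.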
Nevertheless, this approach is on the right track, and both difficulties can be avoided by the introduction of suitable revisions. To see what is required it is necessary to acknowledge two things.
This book is about scientific knowledge, particularly the concept of evidence. Its purpose is to explore scientific methodology in light of the obvious yet frequently neglected fact that belief is not an all-or-nothing matter, but is susceptible to varying degrees of intensity. More specifically, my main object is to exploit this fact to treat certain well-known puzzles in the philosophy of science, such as the problem of induction and the paradox of confirmation, as well as questions about ad hoc postulates, the tenability of realism, statistical testing, the relative merits of prediction and accommodation, a special quality of varied data, and the evidential value of further information. My second aim is to display the extent to which diverse elements of scientific method may be unified and justified by means of the concept of subjective probability. These two projects are intimately related. The probabilistic terms in which our evidential ideas will be formulated should promote clarity and accuracy, dispel confusion, and thereby facilitate the primary task. I should stress that this main goal is not to propound or defend a theory of the scientific method, either normative or descriptive, but rather to solve various paradoxical problems. I cannot now adequately describe the conception of philosophy which promotes this distinction and motivates my approach; but I think that some appreciation of that metaphilosophical perspective is needed to understand properly what is being attempted here and to pre-empt certain objections. I hope the following sketch will be better than nothing.
In a way, philosophy contains science and art. For there are philosophical research programmes whose methodology is scientific and others in which aesthetic standards prevail. Investigations into the semantics of natural languages, systematisations of basic ethical judgements, conceptual analyses – attempts to formulate necessary and sufficient conditions for S knows p, x causes y, and w refers to z – these typify scientific philosophy. Their object is a justified account of certain phenomena – a simple theory designed to accommodate the relevant data provided, usually, by intuition.
The primitive theory
According to what I will call ‘the primitive theory’, the probability that a trial or experiment will have a certain outcome is equal to the proportion of possible results of the trial which generate that outcome. There are six possible results of throwing a die, and three of these yield an even number; so the probability of that outcome is 1/2.
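The primitive theory's recipe amounts to simple counting, and the die example from the text can be sketched as follows (the encoding is my own):

```python
# The primitive theory's recipe: the probability of an outcome is the
# proportion of possible results of the trial that generate it.
# The die example is from the text; the encoding is illustrative.

results = [1, 2, 3, 4, 5, 6]                 # possible results of one throw
even = [r for r in results if r % 2 == 0]    # results yielding the outcome

prob_even = len(even) / len(results)         # 3 out of 6
print(prob_even)                             # 0.5
```

Notice that the recipe makes no reference to how the die is weighted; it counts results and nothing else, which is precisely what the objection below exploits.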
A major fault with this definition is that it entails incorrect attributions of probability. The chances that a biased die will show an even number may not be 1/2. In order to pre-empt this objection, one natural strategy is to require that the possible results of the trial be equally likely. The probability, according to such a modified account, would be the proportion of equally likely results which generate the outcome. However, this saves the definition from incorrect consequences only on pain of circularity. The account is now inadequate as a definition of probability since it depends upon the notion of equally likely results. In order to apply the definition in the calculation of some probability, we must already grasp what it is for the alternative results of the trial to have the same probability.
A second deficiency of the modified primitive theory is that it applies only to a restricted class of those cases in which we attribute probabilities. For example, it does not encompass the claim that the probability of getting 1 or 2 with a certain biased die is 0.154. In many cases there may be no way to divide the possible results into a set of equally probable, mutually exclusive alternatives. For this reason, the primitive theory would also have difficulty in dealing with probability claims such as:
(a) The probability that Oswald shot Kennedy is 0.7.
(b) The probability, on the basis of evidence already collected by NASA, that there is life on Mars is very slight.
(c) The probability that a radium atom will decay in any given ten-second interval is 0.001.
As we have seen, the primitive theory can be rescued from circularity only by some further characterisation of equiprobability.
As its title suggests, the topic of this paper lies at the intersection of three central debates within philosophy.
One of them is the clash between truth-based approaches to empirical semantics (which form the mainstream) and a less popular use-theoretic point of view, variously known as ‘inferentialism’, ‘expressivism’ and ‘semantic deflationism’. The issue here, in a nutshell, is whether the word-world relation of reference and the derived property of truth are to be given central roles in characterising the nature of meaning and hence in explaining the import of understanding for verbal behaviour. Perhaps it should be acknowledged, rather, that the concepts of truth and reference are exhausted by trivial equivalence schemata that imply their unsuitability for causal-explanatory work (but enable them nonetheless to serve as useful expressive devices). And if so then, in so far as word meanings exert a causal influence on linguistic activity, their grasp will have to be constituted in some non-referential way – e.g., by basic propensities of word usage.
Second, we have the issue of naturalism. There is undeniably a vast, unified network of objects, properties and facts that bear spatial, temporal, causal and explanatory relations to one another – a network incorporating observable phenomena, the elementary particles, fields, strings, etc., of physics for which those phenomena provide evidence, and all the macroscopic objects and events built out of such elements. But is everything located within this network, as naturalism dictates?
The origins of this volume lie in a kind invitation from the Tilburg Center for Logic and Philosophy of Science (TiLPS), to deliver their inaugural René Descartes Lectures in May 2008. I was delighted to accept, and presented the lectures under the title ‘Three Themes in Contemporary Pragmatism’ (the themes in question being naturalism, representationalism and pluralism). The lecture series was held in conjunction with a research workshop on pragmatism and naturalism, providing me with a remarkable opportunity to discuss some of my recent work with the best kind of philosophical audience – broadly sympathetic to a considerable extent, yet challenging on many points. I am very grateful indeed to Professor Stephan Hartmann and his colleagues at TiLPS for their hospitality, and for doing me the honour of inviting me in the first place. I am also greatly indebted to the workshop speakers and participants, for their part in making it such a memorable and educational experience, from my point of view.
With the promise of such an excellent audience, I tried to use the lectures to do two things: first, to present what I felt to be the most interesting ideas in my recent work at that time, and, second, to try to think through some succeeding steps (very much work in progress, at that stage). Accordingly, I used the first lecture to present some material that was then recently in print, on the role and significance of representationalist presuppositions in conventional forms of philosophical naturalism.