Forcing epistemology is a trendy way of defeating the skeptics who since the days of old have cited prima facie error possibilities as some of the most devastating arguments against claims to knowledge. The idea of forcing is to delimit the set of possibilities over which the inquiring agent has to succeed: if the agent can succeed over the relevant possibility set, then the agent may still be said to have knowledge even if he commits many errors, even grave ones, in other, irrelevant possibilities.
Contemporary epistemological studies are roughly carried out in one of two ways: (1) in a mainstream or informal way, using largely conceptual analyses and concentrating on sometimes folksy, sometimes exorbitantly speculative examples or counterexamples; or (2) in a formal way, by applying a variety of tools and methods from logic, computability theory or probability theory to the theory of knowledge. The two traditions have unfortunately proceeded largely in isolation from one another.
Many contemporary mainstream and formal epistemologies pay homage to the forcing strategy. The aim of this book is to demonstrate systematically that the two traditions have much in common, both epistemologically and methodologically. If they could be brought closer together, not only might they significantly benefit from one another; the way could also be paved for a new unifying program in ‘plethoric’ epistemology.
Epistemology seems to enjoy an unexpectedly sexy reputation these days. A few years ago William Safire wrote a popular novel called The Sleeper Spy. It depicts a distinctly post-cold-war world in which it is no longer easy to tell the good guys – or, rather, the good spies – from the bad ones. To emphasize this sea change, Safire tells us that his Russian protagonist has not been trained in the military or the police, as he would have been during the old days, but as an epistemologist.
One often hears that philosophy largely concerns conceptual analysis. Conceptual analysis is enjoying a revival these days after having been put to sleep for a number of years, partly due to the stream of naturalism that has flooded the philosophical landscape for the past 50 years or so.
In contemporary mainstream epistemology, the goal of these new conceptual exercises is to spell out and elucidate some of the epistemologically significant notions, like knowledge, justification and rationality, that ordinary folk use on a daily basis. An integral part of the elucidation process is to stretch the usage of these concepts to their limits in order to reveal their limitations and, in turn, what these limitations reveal about the nature of human cognition. Seen from this perspective, conceptual analysis is focused on clarifying how words are used in everyday epistemic contexts.
The actual ‘stretching’ is performed by applying the method of ‘consulting intuitions about possible cases,’ for which Jackson (1994) has recently made a case. Jackson takes conceptual analysis to be an indispensable part of intellectual activity in general.
The concept of knowledge is elusive – at least when epistemology starts scrutinizing the concept too closely. According to Lewis's contextual epistemology, all there is to knowledge attribution in a given context is a set of rules for eliminating the relevant possibilities of error while succeeding over the remaining possibilities and properly ignoring the extravagant possibilities of error. Considering demons and brains in vats as relevant possibilities of error is often what makes the concept of knowledge evaporate into thin air.
FORCING S knows that P iff S's evidence eliminates every possibility in which not-P – Psst! – except for those possibilities that we are properly ignoring.
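Lewis's rule admits a schematic rendering (the notation is ours, not Lewis's): writing $W$ for the space of possibilities, $I_c$ for the possibilities properly ignored in context $c$, and $E_S$ for S's evidence,

\[
K_S P \text{ holds in } c \;\iff\; \forall w \in W \setminus I_c : \big( w \models \neg P \;\Rightarrow\; E_S \text{ eliminates } w \big).
\]

The forcing is visible in the restriction to $W \setminus I_c$: the agent's evidence need only succeed over the possibilities that are not properly ignored.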
Contextualistic epistemology starts much closer to home. Agents in their local epistemic environments have knowledge – and plenty of it in a variety of (conversational) contexts. Knowledge is not only possible, as counterfactual epistemology demonstrates; it is a real and fundamental human condition.
The general contextualistic template for a theory of knowledge is crisply summarized in DeRose's (1995) description of the attribution of knowledge. The description also embodies many of the epistemological themes central to the contextualistic forcing strategy:
Suppose a speaker A says, ‘S knows that P’, of a subject S's true belief that P. According to contextualist theories of knowledge attributions, how strong an epistemic position S must be in with respect to P for A's assertion to be true can vary according to features of A's conversational context. (p. 4)
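DeRose's description can be glossed schematically (the gloss is ours, not DeRose's): letting $\mathrm{pos}(S,P)$ denote the strength of S's epistemic position with respect to $P$ and $\theta_c$ the standard set by the attributor's conversational context $c$,

\[
\text{`}S \text{ knows that } P\text{'} \text{ is true in } c \;\iff\; P \text{ is true, } S \text{ believes that } P, \text{ and } \mathrm{pos}(S,P) \geq \theta_c.
\]

Skeptical maneuvers then amount to driving $\theta_c$ up; ordinary conversation keeps it comfortably low.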
The incentive to take skeptical arguments against knowledge claims seriously is based on an exploitation of the way in which otherwise operational epistemic concepts, notably knowledge, can be gravely disturbed by sudden changes of the linguistic context in which they figure.
The epistemo-methodological prerequisites for comparing mainstream and formal epistemologies concentrate on the following items: the modality of knowledge, infallibility, forcing and the reply to skepticism; the interaction between epistemology and methodology; the strength and validity of knowledge; reliability; and the distinction between a first-person perspective and a third-person perspective on inquiry.
If knowledge can create problems, it is not through ignorance that we can solve them.
Modal Knowledge, Infallibility and Forcing
Agents inquire to replace ignorance with knowledge. Knowledge is a kind of epistemic commitment or attitude held toward propositions or hypotheses describing some aspect of the world under consideration. Agents may in general hold a host of different propositional attitudes, such as belief, hope, wish, desire, etc. But there is a special property that knowledge enjoys over and above the other commitments. As Plato pointed out, a distinct property of knowledge is truth. Whatever is known must be true; otherwise it is not knowledge, even though it very well may qualify as belief or some other propositional attitude.
Contemporary notions of knowledge are often modal in nature. Knowledge is defined with respect to other possible states of affairs besides the actual state of affairs (Fig. 2.1). The possibility of knowledge seems ruled out when it is possible that we err. Introducing other possible states of affairs is an attempt to preclude exactly these error possibilities. Knowledge must be infallible by definition. As Lewis (1996) puts it, “To speak of fallible knowledge, of knowledge despite uneliminated possibilities of error, just sounds like a contradiction” (p. 367).
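The modal conception, and the forcing strategy applied to it, can be put schematically (the notation is ours): where $R(w)$ is the set of possibilities deemed relevant at the actual world $w$,

\[
K P \text{ holds at } w \;\iff\; \forall w' \in R(w) : w' \models P.
\]

Unrestricted infallibilism takes $R(w)$ to comprise all possible worlds; forcing delimits $R(w)$ so that uneliminated error possibilities outside it no longer defeat the knowledge claim.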
Apart from recent trends in logical epistemology, the epistemologies discussed in the preceding chapter largely neglect the connection between successful learning and knowledge. Computational epistemology is an approach embodying knowledge acquisition studies. It utilizes logical and computational techniques to investigate when guaranteed convergence to the truth about epistemic problems is feasible. Every epistemic problem determines a set of possible worlds over which the inquiring agent is to succeed, witnessing a forcing relation.
FORCING ‘Logical reliability theory’ is a more accurate term, since the basic idea is to find methods that succeed in every possible world in a given range.
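One standard way to make ‘success in every possible world in a given range’ precise, in the style of formal learning theory (the notation is ours), runs as follows. Let $\mathcal{K}$ be the set of relevant worlds determined by the epistemic problem, let each world $w \in \mathcal{K}$ present an evidence stream $\varepsilon_w$, and let a method $\delta$ map finite initial segments of evidence to conjectures. Then $\delta$ solves the problem reliably iff

\[
\forall w \in \mathcal{K} \;\exists n \;\forall m \geq n : \delta(\varepsilon_w \upharpoonright m) = h_w,
\]

where $\varepsilon_w \upharpoonright m$ is the first $m$ entries of the stream and $h_w$ the hypothesis true in $w$: in every relevant world, the method stabilizes to the correct hypothesis after finitely many stages. The forcing relation is the restriction to $\mathcal{K}$; nothing is required of $\delta$ in worlds outside the given range.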
Computational epistemology is not a traditional epistemological paradigm by any means – neither from the mainstream nor the formal perspectives treated so far. It does not start off with global conceptual analyses of significant epistemological notions like knowledge, justification and infallibility. It does not follow logical epistemology in locally focusing on axiomatics, validity and strength of epistemic operators. Computational epistemology is not obligated to hold a particular view, or formulate its ‘characteristic’ definition, of what knowledge is. Given its foundation in computability theory and mathematical logic, computational epistemology is not actually about knowledge but about learning – and learning, of course, is knowledge acquisition.
It is a curiosity of the philosophical temperament, this passion for radical solutions. Do you feel a little twinge in your epistemology? Absolute skepticism is the thing to try … Apparently the rule is this: if aspirin doesn't work, try cutting off your head.
Humans are in pursuit of knowledge. It plays a significant role in deliberation, decision and action in all walks of everyday and scientific life. The systematic and detailed study of knowledge, its criteria of acquisition and its limits and modes of justification is known as epistemology.
Despite the admirable epistemic aim of acquiring knowledge, humans are cognitively accident-prone and make mistakes perceptually, inferentially, experimentally, theoretically or otherwise. Epistemology is the study of the possibility of knowledge and how prone we are to making mistakes. Error is the starting point of skepticism. Skepticism asks how knowledge is possible given the possibility of error. Skeptics have for centuries cited prima facie possibilities of error as the most substantial arguments against knowledge claims. From this perspective, epistemology may be viewed as a reply to skepticism and skeptical challenges. Skepticism is the bane of epistemology, but apparently also a blessing, according to Santayana (1955): “Skepticism is the chastity of the intellect, and it is shameful to surrender it too soon or to the first comer” (p. 50).
Skepticism is a tough challenge and requires strong countermeasures. In set theory, a powerful combinatorial technique for proving statements consistent with the axioms of set theory was invented by P. Cohen in the 1960s: the method of forcing, from which the epistemological forcing strategy borrows its name.
In counterfactual epistemology, knowledge is characterized by tracking the truth, that is, avoiding error and gaining truth in all worlds sufficiently close to the actual world, given the standard semantic interpretation of counterfactual conditionals. This conception of knowledge imposes a categorical notion of reliability able to solve the Gettier paradoxes and meet other severe skeptical challenges.
FORCING Knowledge is a real factual relation, subjunctively specifiable, whose structure admits our standing in this relation, tracking, to p without standing in it to some q which we know p to entail.
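Spelled out, Nozick's (1981) tracking analysis has four conditions. Writing ‘$\Box\!\rightarrow$’ for the subjunctive conditional and $B_S p$ for ‘S believes that p’, S knows that p iff

\[
\begin{aligned}
&(1)\; p \text{ is true}, \\
&(2)\; B_S p, \\
&(3)\; \neg p \;\Box\!\rightarrow\; \neg B_S p, \\
&(4)\; p \;\Box\!\rightarrow\; B_S p.
\end{aligned}
\]

Conditions (3) and (4) are evaluated only in the worlds sufficiently close to the actual world, and this is where the forcing takes place: error possibilities in remote worlds are simply not consulted.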
Epistemology begins by facing the beastly skepticism that arises from the possibility of an evil demon. Any talk about knowledge possession, acquisition, let alone maintenance, is absurd before skepticism's claim about the impossibility of knowledge is defeated. To get epistemology off the ground, it must be demonstrated that knowledge is in fact possible:
Our task here is to explain how knowledge is possible, given what the skeptic says that we do accept (for example, that it is logically possible that we are dreaming or are floating in a tank). (Nozick 1981, 355)
This is the starting point for the counterfactual epistemology developed by Dretske (1970) and later refined by Nozick (1981).
The often-cited premise supporting the skeptical conclusion that agents do not know much of anything is this: if an agent cannot be guaranteed to know the denials of skeptical hypotheses, then knowledge regarding other issues cannot be ascribed to the agent. The traditional understanding of infallibilism (see Chapter 2), which counts every possible world as relevant, supports this pessimistic premise.
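The premise trades on the principle of epistemic closure. Schematically (the notation is ours), with $\mathit{SK}$ a skeptical hypothesis and $P$ an ordinary proposition the agent takes herself to know:

\[
\begin{aligned}
&(1)\; \neg K \neg \mathit{SK} && \text{(the denial of the skeptical hypothesis is not known)} \\
&(2)\; K(P \rightarrow \neg \mathit{SK}) \\
&(3)\; \big(K P \wedge K(P \rightarrow \neg \mathit{SK})\big) \rightarrow K \neg \mathit{SK} && \text{(closure)} \\
&\therefore\; \neg K P
\end{aligned}
\]

Counterfactual epistemology blocks the argument by denying closure – as the Nozick passage above indicates, one may track p without tracking some q that p is known to entail.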
Logical epistemology, also known as epistemic logic, proceeds axiomatically. ‘Ξ knows that A’ is formalized as a modal operator in a formal language that is interpreted using the standard apparatus of modal logic. This formal epistemological approach also pays homage to the forcing heuristics by limiting the scope of the knowledge operator through algebraic constraints imposed on the accessibility relation between possible worlds.
FORCING ‘What the concept of knowledge involves in a purely logical perspective is thus a dichotomy of the space of all possible scenarios into those that are compatible with what I know and those that are incompatible with my knowledge. This observation is all we need for most of epistemic logic.’
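In the standard Kripke semantics, the knowledge operator is evaluated exactly as such a dichotomy: $K_\Xi A$ holds at a world $w$ iff $A$ holds at every world $w'$ accessible from $w$,

\[
M, w \models K_\Xi A \;\iff\; \forall w' \,(w R_\Xi w' \Rightarrow M, w' \models A).
\]

The algebraic constraints on the accessibility relation $R_\Xi$ then correspond to epistemic axioms: reflexivity validates T ($K_\Xi A \rightarrow A$), transitivity validates 4 ($K_\Xi A \rightarrow K_\Xi K_\Xi A$), and euclideanness validates 5 ($\neg K_\Xi A \rightarrow K_\Xi \neg K_\Xi A$).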
Logical epistemology dates back to Von Wright (1951) and especially to the work of Hintikka (1962) in the early 1960s. Epistemic logics have since grown into powerful enterprises enjoying many important applications. The general epistemological significance of the logics of knowledge has to some extent been neglected by mainstreamers and formalists alike. The field is in a rather awkward position today. On the one hand, it is a discipline of importance for theoretical computer scientists, linguists and game theorists, for example, but they do not necessarily have epistemological ambitions in their use of epistemic logic. On the other hand, it is a discipline devoted to the logic of knowledge and belief but remains alien to epistemologists and philosophers interested in the theory of knowledge.
Recent results and approaches have fortunately brought the logics of knowledge quite close to the theories of knowledge.
Mainstream epistemology seeks necessary and sufficient conditions for the possession of knowledge. The focus is on folksy examples and counterexamples, with reasons undercutting reasons that undercut reasons. According to epistemic reliabilism, reasons may be sustained, truth gained and error avoided if beliefs are reliably formed, sometimes in the actual world, sometimes in other worlds too. But the stochastic notion of reliability unfortunately backfires, reinviting a variety of skeptical challenges.
FORCING On the present rendering, it looks as if the folk notion of justification is keyed to dispositions to produce a high ratio of true beliefs in the actual world, not in ‘normal’ worlds.
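The stochastic notion of reliability at issue can be glossed schematically (this is our rendering, not Goldman's own formulation): a belief-forming process $M$ has reliability

\[
\mathrm{rel}(M) = \frac{\#\{\text{true beliefs produced by } M\}}{\#\{\text{beliefs produced by } M\}},
\]

and a belief is justified iff it is produced by a process with $\mathrm{rel}(M) \geq \theta$ for some suitably high threshold $\theta$. The dispute recorded in the box concerns which worlds the ratio is computed in – the actual world or ‘normal’ worlds – and it is this choice of possibility set that reinvites the skeptical challenges.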
Mainstream epistemologies emphasizing reliability date back at least to the 1930s, to F. P. Ramsey's (1931) note on the causal chaining of knowledge. The nomic sufficiency account developed by Ramsey and later picked up and modified by Armstrong in the 1970s is roughly as follows: If a connection can be detected to the effect that the method responsible for producing a belief is causally chained to the truth due to the laws of nature, then this suffices for nomologically stable knowledge and keeps Gettierization from surfacing. Causality through laws of nature gives reliability (Armstrong 1973).
Armstrong draws an illuminating analogy between a thermometer reliably indicating the temperature and a belief reliably indicating the truth. A working thermometer is one that gives accurate readings across a range of temperatures, and this is no coincidence: the thermometer is successful because laws of nature connect its readings to the actual temperature.