Apart from recent trends in logical epistemology, the epistemologies discussed in the preceding chapter largely neglect the connection between successful learning and knowledge. Computational epistemology is an approach centered on the study of knowledge acquisition. It uses logical and computational techniques to investigate when guaranteed convergence to the truth about an epistemic problem is feasible. Every epistemic problem determines a set of possible worlds over which the inquiring agent must succeed, thereby witnessing a forcing relation.
FORCING ‘Logical reliability theory’ is a more accurate term, since the basic idea is to find methods that succeed in every possible world in a given range.
Kelly et al. (1997)
Computational epistemology is not a traditional epistemological paradigm by any means – neither from the mainstream nor formal perspectives treated so far. It does not start off with global conceptual analyses of significant epistemological notions like knowledge, justification and infallibility. It does not follow logical epistemology in locally focusing on axiomatics, validity and strength of epistemic operators. Computational epistemology is not obligated to hold a particular view, or formulate its ‘characteristic’ definition, of what knowledge is. Given its foundation in computability theory and mathematical logic, computational epistemology is not actually about knowledge but about learning – but learning of course is knowledge acquisition.
Logical epistemology, also known as epistemic logic, proceeds axiomatically. ‘Ξ knows that A’ is formalized as a modal operator in a formal language that is interpreted using the standard apparatus of modal logic. This formal epistemological approach also pays homage to the forcing heuristics by limiting the scope of the knowledge operator through algebraic constraints imposed on the accessibility relation between possible worlds.
FORCING ‘What the concept of knowledge involves in a purely logical perspective is thus a dichotomy of the space of all possible scenarios into those that are compatible with what I know and those that are incompatible with my knowledge. This observation is all we need for most of epistemic logic.’
Jaakko Hintikka (2003b)
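The modal treatment just described can be made explicit. The following is a sketch of the standard Kripke truth condition for the knowledge operator, not a formalization taken from this chapter:

```latex
% Standard truth condition for the knowledge operator K_\Xi
% in a model M with accessibility relation R_\Xi between worlds:
M, w \models K_{\Xi} A
  \quad\Longleftrightarrow\quad
\text{for all } w' \text{ such that } w \, R_{\Xi} \, w' :\quad M, w' \models A
```

The constraints on the accessibility relation mentioned above then correspond to epistemic axioms: reflexivity of $R_{\Xi}$ validates the truth axiom $K_{\Xi} A \rightarrow A$, and transitivity validates positive introspection $K_{\Xi} A \rightarrow K_{\Xi} K_{\Xi} A$.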
Logical epistemology dates back to Von Wright (1951) and especially to the work of Hintikka (1962) in the early 1960s. Epistemic logics have since grown into powerful enterprises enjoying many important applications. The general epistemological significance of the logics of knowledge has to some extent been neglected by mainstreamers and formalists alike. The field is in a rather awkward position today. On the one hand, it is a discipline of importance for theoretical computer scientists, linguists and game theorists, for example, but they do not necessarily have epistemological ambitions in their use of epistemic logic. On the other hand, it is a discipline devoted to the logic of knowledge and belief but is alien to epistemologists and philosophers interested in the theory of knowledge.
Recent results and approaches have fortunately brought the logics of knowledge quite close to the theories of knowledge.
The concept of knowledge is elusive – at least when epistemology starts scrutinizing the concept too much. According to Lewis's contextual epistemology, all there is to knowledge attribution in a given context is a set of rules for eliminating the relevant possibilities of error while succeeding over the remaining possibilities and properly ignoring the extravagant possibilities of error. Considering demons and brains as relevant possibilities of error is often what makes the concept of knowledge evaporate into thin air.
FORCING S knows that P iff S's evidence eliminates every possibility in which not-P – Psst! – except for those possibilities that we are properly ignoring.
David Lewis (1996)
Contextualistic epistemology starts much closer to home. Agents in their local epistemic environments have knowledge – and plenty of it in a variety of (conversational) contexts. Knowledge is not only possible, as counterfactual epistemology demonstrates, it is a real and fundamental human condition.
The general contextualistic template for a theory of knowledge is crisply summarized in DeRose's (1995) description of the attribution of knowledge. The description also embodies many of the epistemological themes central to the contextualistic forcing strategy:
Suppose a speaker A says, ‘S knows that P’, of a subject S's true belief that P. According to contextualist theories of knowledge attributions, how strong an epistemic position S must be in with respect to P for A's assertion to be true can vary according to features of A's conversational context. (p. 4)
The incentive to take skeptical arguments to knowledge claims seriously is based on an exploitation of the way in which otherwise operational epistemic concepts, notably knowledge, can be gravely disturbed by sudden changes of the linguistic context in which they figure.
In counterfactual epistemology, knowledge is characterized by tracking the truth, that is, avoiding error and gaining truth in all worlds sufficiently close to the actual world given the standard semantic interpretation of counterfactual conditionals. This conception of knowledge imposes a categorical conception of reliability able to solve the Gettier paradoxes and other severe skeptical challenges.
FORCING Knowledge is a real factual relation, subjunctively specifiable, whose structure admits our standing in this relation, tracking, to p without standing in it to some q which we know p to entail.
Robert Nozick (1981)
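Nozick's tracking analysis admits a compact schematic statement. The following is the standard four-condition formulation (with the subjunctive conditional written $\Box\!\!\rightarrow$), given here as a sketch rather than a quotation from the text:

```latex
% Nozick's tracking conditions: S knows that p iff
\begin{align*}
(1)\;& p \text{ is true} \\
(2)\;& S \text{ believes that } p \\
(3)\;& \neg p \;\Box\!\!\rightarrow\; \neg (S \text{ believes that } p) \\
(4)\;& p \;\Box\!\!\rightarrow\; S \text{ believes that } p
\end{align*}
```

Conditions (3) and (4) are the tracking clauses: on the standard semantics for subjunctives they require avoiding error and gaining truth in all worlds sufficiently close to the actual world, exactly the categorical conception of reliability described above.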
Epistemology begins by facing the beastly skepticism that arises from the possibility of an evil demon. Any talk about knowledge possession, acquisition, let alone maintenance, is absurd before skepticism's claim about the impossibility of knowledge is defeated. To get epistemology off the ground, it must be demonstrated that knowledge is in fact possible:
Our task here is to explain how knowledge is possible, given what the skeptic says that we do accept (for example, that it is logically possible that we are dreaming or are floating in a tank). (Nozick 1981, 355)
This is the starting point for the counterfactual epistemology developed by Dretske (1970) and later refined by Nozick (1981).
The often cited premise supporting the skeptical conclusion that agents do not know much of anything is this: If an agent cannot be guaranteed the ability to know the denials of skeptical hypotheses, then knowledge regarding other issues cannot be ascribed to the agent. The traditional understanding of infallibilism (see Chapter 2), which counts every possible world as relevant, supports this pessimistic premise.
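The pessimistic premise can be displayed schematically. Writing $SK$ for a skeptical hypothesis (such as the evil demon) and $O$ for an ordinary claim that entails $\neg SK$, the skeptical argument is a modus tollens on an epistemic closure principle. This reconstruction is standard in the literature but is an addition here, not the chapter's own notation:

```latex
\begin{align*}
&(1)\; \neg K \neg SK
   && \text{the agent cannot rule out the skeptical hypothesis} \\
&(2)\; K O \rightarrow K \neg SK
   && \text{closure: } O \text{ entails } \neg SK \text{, and knowledge is closed under known entailment} \\
&\therefore\; \neg K O
   && \text{by modus tollens}
\end{align*}
```

Traditional infallibilism underwrites premise (1): if every possible world counts as relevant, no evidence eliminates the skeptical worlds.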
Forcing epistemology is a trendy way of defeating the skeptics who since the days of old have cited prima facie error possibilities as some of the most devastating arguments against claims to knowledge. The idea of forcing is to delimit the set of possibilities over which the inquiring agent has to succeed: If the agent can succeed over the relevant possibility set, then the agent may still be said to have knowledge even if he commits many errors, even grave ones, in other but irrelevant possibilities.
Contemporary epistemological studies are roughly carried out either: (1) in a mainstream or informal way, using largely conceptual analyses and concentrating on sometimes folksy and sometimes exorbitantly speculative examples or counterexamples, or (2) in a formal way, by applying a variety of tools and methods from logic, computability theory or probability theory to the theory of knowledge. The two traditions have unfortunately proceeded largely in isolation from one another.
Many contemporary mainstream and formal epistemologies pay homage to the forcing strategy. The aim of this book is to demonstrate systematically that the two traditions have much in common, both epistemologically and methodologically. If they could be brought closer together, not only might they significantly benefit from one another, the way could be paved for a new unifying program in ‘plethoric’ epistemology.
Mainstream epistemology seeks necessary and sufficient conditions for the possession of knowledge. The focus is on folksy examples and counterexamples, with reasons undercutting reasons that undercut reasons. According to epistemic reliabilism, reasons may be sustained, truth gained and error avoided if beliefs are reliably formed, sometimes in the actual world, sometimes in other worlds too. But the stochastic notion of reliability unfortunately backfires, reinviting a variety of skeptical challenges.
FORCING On the present rendering, it looks as if the folk notion of justification is keyed to dispositions to produce a high ratio of true beliefs in the actual world, not in ‘normal’ worlds.
Alvin Goldman (1992)
Mainstream epistemologies emphasizing reliability date back at least to the 1930s, to F. P. Ramsey's (1931) note on the causal chaining of knowledge. The nomic sufficiency account developed by Ramsey and later picked up and modified by Armstrong in the 1970s is roughly as follows: If a connection can be detected to the effect that the method responsible for producing a belief is causally chained to the truth due to the laws of nature, then this suffices for nomologically stable knowledge and keeps Gettierization from surfacing. Causality through laws of nature gives reliability (Armstrong 1973).
Armstrong draws an illuminating analogy between a thermometer reliably indicating the temperature and a belief reliably indicating the truth. Now a working thermometer is one that gives accurate readings in a range of temperatures. This is not a coincidence. A thermometer is successful because there are laws of nature that connect the readings to the very temperature itself.
The epistemo-methodological prerequisites for comparing mainstream and formal epistemologies concentrate on the following items: the modality of knowledge, infallibility, forcing and the reply to skepticism; the interaction between epistemology and methodology; the strength and validity of knowledge; reliability; and the distinction between a first-person perspective and a third-person perspective on inquiry.
If knowledge can create problems, it is not through ignorance we can solve them.
Isaac Asimov
Modal Knowledge, Infallibility and Forcing
Agents inquire to replace ignorance with knowledge. Knowledge is a kind of epistemic commitment or attitude held toward propositions or hypotheses describing some aspect of the world under consideration. Agents may in general hold a host of different propositional attitudes, such as belief, hope, wish, desire, etc. But there is a special property that knowledge enjoys over and above the other commitments. As Plato pointed out, a distinct property of knowledge is truth. Whatever is known must be true; otherwise it is not knowledge, even though it very well may qualify as belief or some other propositional attitude.
Contemporary notions of knowledge are often modal in nature. Knowledge is defined with respect to other possible states of affairs besides the actual state of affairs (Fig. 2.1). The possibility of knowledge seems ruled out when it is possible that we err. Introducing other possible states of affairs is an attempt to preclude exactly these error possibilities. Knowledge must be infallible by definition. As Lewis (1996) puts it, “To speak of fallible knowledge, of knowledge despite uneliminated possibilities of error, just sounds like a contradiction” (p. 367).
It is a curiosity of the philosophical temperament, this passion for radical solutions. Do you feel a little twinge in your epistemology? Absolute skepticism is the thing to try … Apparently the rule is this: if aspirin doesn't work, try cutting off your head.
Jerry Fodor (1985)
Humans are in pursuit of knowledge. It plays a significant role in deliberation, decision and action in all walks of everyday and scientific life. The systematic and detailed study of knowledge, its criteria of acquisition and its limits and modes of justification is known as epistemology.
Despite the admirable epistemic aim of acquiring knowledge, humans are cognitively accident-prone and make mistakes perceptually, inferentially, experimentally, theoretically or otherwise. Epistemology is the study of the possibility of knowledge and how prone we are to making mistakes. Error is the starting point of skepticism. Skepticism asks how knowledge is possible given the possibility of error. Skeptics have for centuries cited prima facie possibilities of error as the most substantial arguments against knowledge claims. From this perspective, epistemology may be viewed as a reply to skepticism and skeptical challenges. Skepticism is the bane of epistemology, but apparently also a blessing, according to Santayana (1955): “Skepticism is the chastity of the intellect, and it is shameful to surrender it too soon or to the first comer” (p. 50).
Skepticism is a tough challenge and requires strong countermeasures. In set theory, a powerful combinatorial technique, known as forcing, for proving statements consistent with the axioms of set theory was invented by P. Cohen in the 1960s.
This chapter introduces fundamental concepts needed to identify, analyze, and evaluate arguments. It is vital to be able to recognize deductive arguments and to contrast them with two other types of arguments. One is the inductive type of argument, based on probability. The other is the presumptive type of argument, based on plausibility. It is necessary to begin with deductive arguments, as these are the kind that have been most intensively studied in logic and about which most is known. From there, the chapter goes on to examine the distinction between an explanation and an argument. Mainly in this book we are concerned with arguments. But there is a common tendency to confuse arguments and explanations, and the problem of distinguishing between the two has to be dealt with if we are to avoid the error of treating something as an argument when it is not. The chapter begins with the notion of inconsistency and its role in argumentation. This notion is fundamental to defining and recognizing deductive arguments as a distinctive type.
There can also be much confusion in mixing up the three kinds of arguments, and the clues in a dialogue on which type was meant to be put forward can be subtle. Even so, one can begin to get a good fundamental grasp of how to recognize each type of argument by learning about its success criteria. Each type of argument has a distinctive structure.
The argumentation structures analyzed in this chapter are highly typical of the kinds of reasoning commonly used in everyday deliberations. These structures of inference are also used in all aspects of technology, especially in fields such as engineering and medicine, where the objectives are essentially practical in nature, even though the reasoning is based on scientific knowledge. But their root use, and their most familiar appearance to us in daily life, is in the reasoning we commonly use to decide on which course of action to take, especially where personal choices on how to conduct one's daily life are made and acted on in real situations. As arguments, practical inferences are typically used in the type of dialogue called deliberation in chapter 6. Deliberation is characterized by the need to arrive at a decision on what to do in a set of circumstances that is not completely known to an agent and that is liable to change in ways that are impossible to predict with certainty. Thus, practical reasoning tends to use argumentation schemes that are neither deductive nor inductive in nature.
We begin with very simple cases of practical inferences, and then, by the end of the chapter, consider so-called real world situations where additional factors need to be taken into account. Such practical inferences are highly familiar and are widely used by everyone.
Critical argumentation is a practical skill that needs to be taught, from the very beginning, through the use of real or realistic examples of arguments of the kind that the user encounters in everyday life. In this introductory textbook of critical argumentation an example-based method of teaching is therefore used. All points covered are introduced and illustrated through the use of examples representing arguments, or problems of various kinds that arise in argumentation, of a kind that will be quite familiar to readers from their own personal experiences. Exercises appended to each section of the book are designed to give practice in putting these skills to work.
As well as being a skill, critical argumentation is an attitude. It is an attitude that is useful in working your way through a problem or making a thoughtful decision. But it is most useful when you are confronted by an argument and you need to arrive at some reasoned evaluation of it on a balance of considerations in a situation where there are arguments on both sides of an issue. A purpose of this book therefore is to sharpen this critical attitude, which we all already have to some degree, to focus and heighten it in a constructive way, by providing an introduction to its basic methods. The methods presented are based on the latest state-of-the-art techniques developed in argumentation theory and informal logic.
The three goals of critical argumentation are to identify, analyze, and evaluate arguments. The term “argument” is used in a special sense, referring to the giving of reasons to support or criticize a claim that is questionable, or open to doubt. To say something is a successful argument in this sense means that it gives a good reason, or several reasons, to support or criticize a claim. But why should one ever have to give a reason to support a claim? One might have to because the claim is open to doubt. This observation implies that there are always two sides to an argument, and thus that an argument takes the form of a dialogue. On the one side, the argument is put forward as a reason in support of a claim. On the other side, that claim is seen as open to doubt, and the reason for giving the reason is to remove that doubt. In other words, the offering of an argument presupposes a dialogue between two sides. The notion of an argument is best elucidated in terms of its purpose when used in a dialogue. At risk of repetition, the following general statement about arguments is worth remembering throughout chapter 1 and the rest of this book. The basic purpose of offering an argument is to give a reason (or more than one) to support a claim that is subject to doubt, and thereby remove that doubt.
This chapter is concerned with the task of taking an argument as given in a particular case as a text of discourse and identifying the argumentation as a set of premises presented as reasons to accept a conclusion. Identifying the structure of such a chain of argumentation by means of an argument diagram can be extremely useful prior to criticizing the argument by finding gaps or problems in it and evaluating the argumentation as weak or strong. In this chapter, we do not tackle the problem of how to evaluate argumentation found in a text of discourse. We only confront the prior problems of how to identify and analyze the argument. Of course, some arguments are easier to identify and analyze than others. In an abstract philosophical text, in a complex text of discourse containing technical scientific argumentation, or in a legal case where there is a mass of evidence on some highly contested issue, it may be extremely difficult to analyze the argumentation in any very clear and simple way by using a single diagram that is not filled with complexities. The problem with tackling real cases of arguments in a natural language text of discourse is that there can be gaps, ambiguities and uncertainties about what was really meant. Here we consider only some relatively simple cases that are fairly easy to diagram. Only toward the end of the chapter do we address some of the problems posed by harder cases.