It would be very helpful for users of the schemes to have a more refined system of classification, so that a user could locate a scheme applicable to her needs in a given case by searching under other, more general schemes to which the particular scheme being sought is known to belong. It is already fairly evident from the compendium of schemes that some schemes fit under others as subspecies of them. For example, one of the most common schemes is argument from consequences. It is closely related to practical reasoning. Other schemes, like those for the slippery slope argument, often fit under the category of argument from consequences. However, such classifications are not as straightforward as they initially seem. For example, some slippery slope arguments fit under the category of arguments from precedent, and therefore may not fit the scheme of argument from consequences, at least in any straightforward way. Another very common scheme under which many others fit as subspecies is the scheme for argument from commitment. Here we have a cluster of schemes that are closely related to each other, but in complex ways. Schemes that are very general, like those for argument from consequences and argument from ignorance, are related to many other, more specific schemes that fall under them. This chapter sets us on the road toward the research project of taking such clusters of schemes and investigating how they fit together with their neighboring schemes.
In this chapter, we outline a number of schemes that do not fall into any single classification but are useful to know about before considering the schemes discussed in the next chapters. We begin with a group of schemes based on assumptions about knowledge and go on to consider a group related to actions, goals, and means. These latter include schemes representing practical reasoning and arguing from the consequences of an action that is being considered. Finally, we consider some other schemes that are not easily classified, including arguments from composition and division, fear appeal arguments, appeals to pity, and argument from alternatives and opposites. Some work has been done on some of these schemes, while others remain largely unstudied.
ARGUMENTS FROM KNOWLEDGE
In arguments from expert opinion, position to know, and witness testimony, the acceptability of the conclusion is drawn from a knowledge base that the source is assumed to have access to. The relation between the latter and the plausibility of the conclusion is based on a kind of default reasoning: it is assumed that the proposition in question is contained in a certain subject domain that is known by the source. For this reason, it is an assumption in this kind of argumentation that many or most of the true propositions of the domain are known. The warrant of this kind of argumentation is therefore very similar to that of the argument from ignorance.
Argumentation theory has laid foundations for and has had influence upon a wide variety of computational systems (Reed and Norman, 2003). This chapter explores four distinct areas, reviewing the ways in which argumentation schemes have been put to work: in natural language generation, in interagent communication, in automated reasoning, and in various specific computational applications. To start, however, we look at the tools that are being used to support the development of these applications and that allow the creation, analysis, and manipulation of the raw computational resources that involve argumentation schemes.
SCHEMES IN ARAUCARIA
Following work examining the diagramming of natural argument – an important topic from the practical, pedagogic point of view (van Gelder and Rizzo, 2001), but also a driver of theoretical development in informal logic (Walton and Reed, 2004) – Reed and Rowe (2004) developed Araucaria, a system for aiding human analysts and students in marking up argument. Araucaria adopts the “standard treatment” (Freeman, 1991) for argument analysis, based on identification of propositions (as vertices in a diagram) and the relationships of support and attack holding between them (edges in a diagram). It is thus similar to a range of argument visualization tools (see Kirschner et al., 2003, for an overview), and familiar from AI techniques such as Pollock's (1995) inference graphs and even Bayesian nets and qualitative probabilistic networks.
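The "standard treatment" diagram described above can be sketched as a small data structure: propositions as vertices, with labelled support/attack edges between them. The class and method names below are illustrative, not Araucaria's actual data model.

```python
# Minimal sketch of a standard-treatment argument diagram:
# propositions are vertices, support/attack relations are labelled edges.

class ArgumentDiagram:
    def __init__(self):
        self.propositions = {}   # id -> proposition text
        self.edges = []          # (source_id, target_id, relation)

    def add_proposition(self, pid, text):
        self.propositions[pid] = text

    def add_edge(self, source, target, relation):
        assert relation in ("support", "attack")
        self.edges.append((source, target, relation))

    def supporters_of(self, target):
        return [s for s, t, r in self.edges if t == target and r == "support"]

diagram = ArgumentDiagram()
diagram.add_proposition("c", "We should ban the substance.")
diagram.add_proposition("p1", "Experts say the substance is harmful.")
diagram.add_proposition("p2", "The experts are biased.")
diagram.add_edge("p1", "c", "support")   # p1 supports the conclusion
diagram.add_edge("p2", "p1", "attack")   # p2 attacks the supporting premise
print(diagram.supporters_of("c"))        # ['p1']
```

The point of the structure is that chains of reasoning fall out of it naturally: an attack on a supporting premise (here, p2 on p1) is itself a vertex-and-edge pattern, which is what makes box-and-arrow tools of this kind composable.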
This chapter introduces the reader to argumentation schemes and explains, through the use of some examples, why they are important. Another aim of the chapter is to briefly review the literature on argumentation schemes, including the key works by Hastings, Walton, and Kienpointner, and to set it in a broader context, bringing out some characteristics of defeasible reasoning and argument evaluation that are fundamental to the study of schemes. Another is to introduce the beginning reader to some basic tools, like argument diagramming, that utilize schemes and need to be integrated with them. In this chapter we will introduce the reader to an automated system of argument diagramming called Araucaria. This technique is a box-and-arrow representation of the premises and conclusions of an argument, showing how one argument can be chained together with others to form a sequence of reasoning. This tool will be used in subsequent chapters, and so we need to introduce the reader to it now. One of our goals in the book is to show how argumentation schemes are in the process of being modeled by argument technology in the field of artificial intelligence (AI). However, we will reserve our fullest account of these developments for the last chapter of the book, even though, from time to time, we will mention aspects of them that impinge on our fundamental understanding of argumentation schemes as forms of reasoning.
Topics (topoi), in a long tradition stemming from Aristotle's rhetoric and early writings on argumentation and logic, are the places where arguments can be found to make a case, and the warrants that can back a logical inference leading from premises to a conclusion. Argumentation schemes are tools of modern argumentation theory that have been developed to fulfill the latter function, but they may be useful in fulfilling the former one as well. In this chapter we will outline the varied developments of the topoi in both the logical and rhetorical traditions, starting with Aristotle, the first to describe them. We will examine some leading accounts of them given in the Middle Ages, when they were studied in relation to logical consequences.
Aristotle's Topics contains accounts of many commonly used types of arguments he calls topics (topoi, or places). There are some 300–400 of these topics, depending on how you count them, according to Kienpointner (1997, p. 227). Many topics can also be found in Aristotle's Rhetoric. What these topics supposedly represent has been subject to many different interpretations over the centuries. Many have interpreted the topic as a device to help an arguer search for a useful argument she can use, for example, in a debate or in a court of law. Others have taken the topic to have a guaranteeing or warranting function that enables rational inferences to be drawn from a set of premises to a conclusion.
Argument from analogy is one of the fundamental forms of argumentation on which many other forms of argument – argument from precedent in law, for example – are based (Gordon, 1995; Weinreb, 2005). Our system of Anglo-American law is based on ratio decidendi, the principle that if a case has been decided by a court in a certain way, then a new case that is similar to it should be decided in the same way. Indeed, argument from analogy is the foundation of all case-based reasoning (CBR) in which the argumentation turns on a comparison of one case to another (Ashley and Rissland, 2003). CBR not only compares one case to another as similar, but also compares cases as more similar to others with respect to a given case, depending on the description of the problem posed in the given case. Thus argument from analogy is an extremely important and fundamental species of argumentation. So much has been written on it, in so many fields, including philosophy, cognitive science, artificial intelligence, linguistics, psychology, law, and computing, that we can barely scratch the surface here. Our more limited aim is to develop tools that can be used to identify the precise form of arguments from analogy, allowing us to better understand its close relationships with other important schemes, especially those representing argument from verbal classification and argument from precedent.
The schemes studied in this chapter typically represent arguments of a kind that are weak and subject to default. They work best as uses of plausible reasoning in situations of uncertainty and lack of knowledge. However, they are so fragile and prone to error that they have traditionally been treated in logic as fallacies under the headings argumentum ad populum and argumentum ad hominem. Even so, despite their inherent fragility and proneness to exploit prejudice, they can be reasonable arguments in many instances, and sometimes they may be the best kind of evidence we have to make a rational decision. First we address the kind of argument Aristotle called endoxic, meaning that it is based on an opinion accepted by the majority and/or by the wise (the experts). In logic, this form of argument is called appeal to popular opinion, but it might be less negative to label it argument from informed public opinion. Then we address ethotic argumentation, recognized by Aristotle as especially powerful. Ethos is the moral character of the speaker. It can be used to support an argument, but in the argumentum ad hominem, or personal attack on an arguer's character, it is used to discredit his argument. Many argumentation schemes fitting this general type have now been identified and studied, and much of the chapter is taken up with reviewing and discussing these schemes.
The connection between these two apparently different classes of argumentation schemes can be found in the analysis of the meaning of “plausibility.”
The goal of this chapter is to show how to formalize the schemes by expressing each scheme as a formal inference structure in a way comparable to forms of inference we all are already familiar with in deductive logic, and to some extent in inductive reasoning. Although deductive and inductive forms of argument can be included as argumentation schemes, the most difficult part of this project is the formalization of the defeasible schemes. The defeasible schemes listed in the compendium represent the most common forms of reasoning not only in everyday discourse, but also in special contexts of use like legal and scientific reasoning. The defeasible schemes presented in the compendium look to have a rough formal structure, but there is a wide variety of them, utilizing many different kinds of variables and constants. Before the schemes can be formalized, further clarifications need to be made (Verheij, 2003).
THE DEFEASIBLE MODUS PONENS FORM OF SCHEMES
In order to be useful in logic, artificial intelligence, and related scientific fields, schemes must be formalized, meaning that they have to be codified in some precise way so that the coder (whether machine or human) can recognize a particular argument as fitting a scheme and then use it to derive conclusions from the given set of premises based on that identification. Once an argument is recognized as fitting a scheme, an argument markup, utilizing an argument diagram, can reconstruct the argument in a given case using the scheme as a template or pattern on which to frame the reconstruction.
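One way a coder might codify a scheme as a precise template is sketched below, using argument from expert opinion in its defeasible modus ponens form: a set of premises with variables, a defeasible conclusion, and the scheme's critical questions. The field names and wording of the critical questions here are illustrative assumptions, not a fixed standard.

```python
# Hedged sketch: a scheme as a template of premises over variables,
# which can be instantiated to reconstruct a concrete argument.

EXPERT_OPINION = {
    "variables": ["E", "D", "A"],
    "premises": [
        "{E} is an expert in domain {D}",
        "{E} asserts that {A}",
        "{A} is within domain {D}",
    ],
    "conclusion": "{A} (defeasibly)",
    "critical_questions": [
        "Is {E} credible as an expert?",
        "Is {E} biased?",
        "Is {A} consistent with what other experts assert?",
    ],
}

def instantiate(scheme, bindings):
    """Fill in the scheme's variables to yield a concrete argument."""
    fill = lambda s: s.format(**bindings)
    return {
        "premises": [fill(p) for p in scheme["premises"]],
        "conclusion": fill(scheme["conclusion"]),
        "critical_questions": [fill(q) for q in scheme["critical_questions"]],
    }

arg = instantiate(EXPERT_OPINION, {"E": "Dr. Jones", "D": "toxicology",
                                   "A": "the compound is carcinogenic"})
print(arg["conclusion"])   # the compound is carcinogenic (defeasibly)
```

The template plays exactly the double role the text describes: it lets a matcher recognize an argument as fitting the scheme, and it serves as the pattern on which a diagram-based reconstruction of the argument can be framed.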
One of the most valuable uses of schemes is to enable an argument analyst to fill in implicit assumptions needed to make sense of a given argument she is trying to analyze. Arguments that have missing (unstated) premises or conclusions are traditionally called enthymemes in logic. One problem with enthymemes is that reasonable people can have differences of opinion on what the implicit assumptions are supposed to be. Filling in the missing parts of an enthymeme may depend on interpreting the natural language text in which the argument was put forward, to try to fairly judge what the speaker meant to say. The danger of attributing such missing assumptions to an arguer is that of unwittingly committing the straw man fallacy. This fallacy is committed when an arguer misrepresents her opponent's position to make it look more extreme or unreasonable than it really is, in order to attack it more easily. In some cases, more than one interpretation of a given argument is possible. Thus the problem is to find out what kind of evidence is needed to support or question the claim that some proposition really can be inserted into an apparently incomplete argument presented in a text of discourse, without unfairly distorting what the speaker meant to say. It will be shown in this chapter, by studying key examples, how argumentation schemes constitute an important part of this evidence.
The goal of this chapter generally is to explore the role of argumentation schemes in enthymeme reconstruction.
The theory of argumentation is a rich interdisciplinary area of research spanning philosophy, communication studies, linguistics, computer science, and psychology. In the past few years, formal models of argumentation have been steadily gaining importance in artificial intelligence, where they have found a wide range of applications in specifying semantics for logic programs, generating natural language text, supporting legal reasoning, and facilitating multi-agent dialogue and negotiation on the Internet. The most useful and widely used tool so far developed in argumentation theory is the set of argumentation schemes. Argumentation schemes are forms of argument (structures of inference) that represent structures of common types of arguments used in everyday discourse, as well as in special contexts like those of legal argumentation and scientific argumentation. They include the deductive and inductive forms of argument that we are already so familiar with in logic. However, they also represent forms of argument that are neither deductive nor inductive, but that fall into a third category, sometimes called defeasible, presumptive, or abductive. Such an argument may not be very strong by itself, but may be strong enough to provide evidence to warrant rational acceptance of its conclusion, given that its premises are acceptable (Toulmin, 1958). Such an argument can rightly carry weight, or be a plausible basis for acceptance, on a balance of considerations in an investigation or discussion that is moving forward, as new evidence is being collected.
In this chapter, a new model of causation is formulated that views causal argumentation as defeasible. The new model structures many of the most common cases of causal argumentation as dialectical, meaning that the case is viewed in the context of an investigation or discussion in which two parties take part in a collaborative process of rational argumentation. The model is shown to apply very well, particularly during the initial stages of an investigation, where information is incomplete but preliminary hypotheses are formed. But it will also be argued that causal arguments need to be evaluated differently in different contexts. In scientific argumentation, there is an investigative process in which tentative hypotheses are formulated about a cause at an early stage, and then tested and refined at later stages. In legal argumentation, the method of evaluation typically is a trial or some other form of dispute resolution in which a causal claim made by one side is opposed to one made by the other side. For example, it could be a case in tort law concerning whether a toxic substance caused cancer in a population. In a criminal case, it could be a trial concerning the cause of an accident. In such cases, there are differing opposed views on what the cause of something is, and it is to such cases that the defeasible model applies best.
In this chapter we develop a pragmatic theory of refutation in which a refutation is defined as a sequence of dialogue moves in which an argument is used by one party to attack and defeat an opposed argument put forward previously by the other party. A fundamental distinction is drawn between refutation and rebuttal. A rebuttal is an argument that is opposed to another argument. It stands against the argument it is opposed to. But it does not necessarily refute that argument. Refutation is something more powerful. A refutation knocks down the original argument. It not only is opposed to the original argument, but also has enough strength itself as an argument that it overpowers the original argument and knocks it down (defeats it). This distinction is not a very firm or precise one in everyday conversational usage. An attempted refutation is, after all, a refutation in conversational English, even if it is not strong enough to knock down the argument it is opposed to. But still, as we hope to show, there is something to this distinction. Refutation is stronger than rebuttal. A refutation is something like a strong rebuttal, or a rebuttal that has active force in successfully attacking the argument it is aimed at.
A parallel distinction that has been very influential in recent work in artificial intelligence can be drawn between attacking and defeating an argument (Dung, 1995; Prakken, 1997).
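In Dung's (1995) abstract setting, an argumentation framework is nothing more than a set of arguments and an attack relation between them; whether an attack succeeds as a defeat is settled by a semantics. The sketch below computes the grounded extension, the most sceptical set of acceptable arguments, by repeatedly accepting every argument all of whose attackers have already been defeated. The function and variable names are illustrative.

```python
# Grounded-extension computation for a Dung-style framework.
# attacks: set of (attacker, target) pairs.

def grounded_extension(arguments, attacks):
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in accepted or a in defeated:
                continue
            attackers = {x for (x, y) in attacks if y == a}
            if attackers <= defeated:   # every attacker is already out
                accepted.add(a)
                # everything an accepted argument attacks is defeated
                defeated |= {y for (x, y) in attacks if x == a}
                changed = True
    return accepted

# b attacks a, and c attacks b: c is unattacked, so c is in;
# b is defeated by c; a is reinstated because its only attacker is out.
print(sorted(grounded_extension({"a", "b", "c"}, {("b", "a"), ("c", "b")})))
# ['a', 'c']
```

Note how the example mirrors the rebuttal/refutation distinction: b merely attacks a, but only c's unattacked status gives its attack on b the force of a defeat, which in turn reinstates a.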
We introduce an epistemic theory of truth according to which the same rational degree of belief is assigned to Tr(⌜α⌝) and α. It is shown that if epistemic probability measures are only demanded to be finitely additive (but not necessarily σ-additive), then such a theory is consistent even for object languages that contain their own truth predicate. As the proof of this result indicates, the theory can also be interpreted as deriving from a quantitative version of the Revision Theory of Truth.
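The theory's central constraint can be written out as follows (a sketch in standard notation, where P stands for the epistemic probability measure and ⌜α⌝ for the code of the sentence α; the notation is an assumption, not a quotation from the paper):

```latex
% Truth ascriptions and their objects receive the same rational degree of belief:
\forall \alpha \quad P\bigl(Tr(\ulcorner \alpha \urcorner)\bigr) = P(\alpha)

% P is required to be only finitely additive, not \sigma-additive:
P(\varphi \lor \psi) = P(\varphi) + P(\psi)
\quad \text{whenever } \varphi \text{ and } \psi \text{ are logically incompatible}
```

Dropping σ-additivity is what blocks the usual route to inconsistency when the object language contains its own truth predicate.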
Just as set theory can be divorced from Ernst Zermelo's original axiomatization of it, counterpart theory can be divorced from the eight postulates that were originally stipulated by David Lewis (1968, p. 114) to constitute it. These were postulates governing some of the properties and relations holding among possible worlds and their inhabitants. In particular, counterpart theory can be divorced from Lewis's postulate P2, the stipulation that individuals are ‘world bound’—that none exists in more than one possible world.
We incorporate the idea of coercive subtyping, a theory of abbreviation for dependent type theories, into the polymorphic type system in functional programming languages. The traditional type system with let-polymorphism is extended with argument coercions and function coercions, and a corresponding type inference algorithm is presented and proved to be sound and complete.
Stephen Read (2002, 2006) has recently discussed Bradwardine's theory of truth and defended it as an appropriate way to treat paradoxes such as the liar. In this paper, I discuss Read's formalisation of Bradwardine's theory of truth and provide a class of models for this theory. The models facilitate comparison of Bradwardine's theory with contemporary theories of truth.
Isomorphisms of types are computational witnesses of logical equivalence with additional properties. The types/formulas A and B are isomorphic if there are functions (in a certain formalism) f : A → B and g : B → A such that g ∘ f and f ∘ g are equal in a certain sense to the identity on A and B, respectively. Typical such formalisms are extensions of simply typed λ-calculus, with βη-convertibility as equality relation. Another view of a pair of functions f : A → B and g : B → A (besides establishing the logical equivalence of A and B) is that f is invertible with left-inverse g, and it is then natural to relax the above symmetric condition to just g ∘ f being equal to the identity on A. In this situation, A is called a retract of B, which is thus a natural generalisation of the notion of an isomorphism, while both these notions are refinements of the concept of logical equivalence in operational terms, that is, in terms of computable functions.
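A concrete instance of the retract relation can be given with ordinary computable functions (a sketch outside any typed λ-calculus, so equality here is plain extensional equality rather than βη-convertibility): take A = bool and B = int.

```python
# bool is a retract of int: f embeds, g is its left-inverse.

def f(a: bool) -> int:   # f : A -> B
    return 1 if a else 0

def g(b: int) -> bool:   # g : B -> A, left-inverse of f
    return b != 0

# g . f is the identity on A, so A is a retract of B:
assert all(g(f(a)) == a for a in (True, False))

# ...but f . g is not the identity on B (e.g. at 7),
# so f and g do not witness an isomorphism:
assert f(g(7)) != 7
```

The example makes the asymmetry of the definition visible: every element of A survives the round trip through B, while B collapses onto the two-element image of f.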
In standard modal logics, the worlds are two-valued in the following sense: there are two values (true and false) that a sentence may take at a world. Technically, however, there is no reason why this has to be the case. The worlds could be many-valued. This paper presents one simple approach to a major family of many-valued modal logics, together with an illustration of why this family is philosophically interesting.
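A minimal sketch of the idea: keep an ordinary Kripke frame, but let the valuation assign each sentence a value from {0, 0.5, 1} at a world rather than true/false. The clause below for the box (minimum over accessible worlds) is one natural illustrative choice, not necessarily the specific definition used in the paper.

```python
# Three-valued Kripke model: worlds, an accessibility relation,
# and a valuation into {0, 0.5, 1} instead of {True, False}.

worlds = {"w1", "w2", "w3"}
access = {"w1": {"w2", "w3"}, "w2": {"w3"}, "w3": set()}
value = {("p", "w1"): 1, ("p", "w2"): 0.5, ("p", "w3"): 1}

def box(prop, w):
    """Value of 'box prop' at w: minimum of prop over accessible worlds."""
    successors = access[w]
    if not successors:   # vacuous necessity at a dead end
        return 1
    return min(value[(prop, u)] for u in successors)

print(box("p", "w1"))   # min(0.5, 1) = 0.5
print(box("p", "w3"))   # 1 (no accessible worlds)
```

In the two-valued special case (values restricted to {0, 1}) the minimum clause collapses to the standard "true at all accessible worlds" condition, which is what makes this a conservative generalisation.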