This chapter is about how to analyze kinds of arguments traditionally called enthymemes: arguments that require, for proper analysis and evaluation, the identification of a missing premise or, in some instances, a missing conclusion. A small section on enthymemes has traditionally been included in logic textbooks from the time of Aristotle. In this chapter it is shown how methods of argumentation study, including software tools recently developed in computing, have enabled new ways of analyzing such arguments. It is shown how applying these methods to four key examples reveals that the traditional doctrine of the enthymeme needs to be radically reconfigured in order to provide a more useful approach to the analysis of incomplete arguments.
There is an extensive literature on incomplete arguments, and this chapter begins with a survey of enough of this literature to make it possible to understand the investigation that follows and to show why it is needed. The second section of the chapter gives a brief historical outline of the literature on enthymemes, beginning with Aristotle’s account of it, including coverage of a significant historical controversy about what Aristotle meant by this term. One of the problems with the task of analyzing incomplete arguments is to get some general grasp of what it is one is trying to do, because the solution to this task can be applied not only to logic but to many other fields that contain argumentation, such as science and law. Therefore, it is important to formulate at the beginning what the purpose of the investigation is supposed to be. A brief account of this is contained in Section 3. The next four sections contain extensive analyses of four examples of incomplete arguments. The first example is meant to be very simple, but the next three examples show some highly significant factors found in carrying out the task of argument analysis needed to identify the missing parts of an argument. These findings are taken into account in Section 9 of the chapter, where both the nature of the task and the proper terminology needed to assist it are reformulated and clarified.
The purpose of this chapter is to advance fallacy theory beyond its current state of development by linking it to the notion of defeasible reasoning. Defeasible reasoning has turned out to be very important for computing, especially in view of the attention paid to modeling argumentation (Bench-Capon and Dunne, 2007) and the use of argumentation schemes (Verheij, 2003) and dialogues (Prakken, 2000; 2006) to study problems of nonmonotonic reasoning. The advent of argumentation frameworks (Dung, 1995) can be shown to provide an elegant way of subsuming much previous work on defeasible reasoning (Bondarenko, Dung, Kowalski and Toni, 1997). Although much has been written on individual fallacies, there is comparatively little on the general theory of fallacy, except for the pragma-dialectical theory (van Eemeren and Grootendorst, 1992) and the pragmatic theory (Walton, 1995). Defeasible reasoning is incontestably important for helping us to better grasp the notion of fallacy and rethink it as a concept useful for modern logic, but so far the link between the two notions has not been studied.
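To make the reference to Dung-style argumentation frameworks concrete, the following is a minimal sketch of the abstract framework idea: arguments are atomic nodes, "attacks" is a binary relation between them, and the grounded (most skeptical) extension is computed by iterating the characteristic function from the empty set. The argument names are purely illustrative.

```python
# Minimal Dung-style abstract argumentation framework (Dung, 1995).
# The grounded extension is the least fixpoint of
# F(S) = {a : every attacker of a is itself attacked by some member of S}.

def grounded_extension(arguments, attacks):
    """Compute the grounded extension of (arguments, attacks)."""
    def defended(a, s):
        attackers = {x for (x, y) in attacks if y == a}
        return all(any((d, x) in attacks for d in s) for x in attackers)

    s = set()
    while True:
        nxt = {a for a in arguments if defended(a, s)}
        if nxt == s:
            return s
        s = nxt

# Example: a attacks b, b attacks c. Argument a is unattacked, so it is
# accepted; b is defeated by a; c is defended, since its only attacker b
# is attacked by a.
args = {"a", "b", "c"}
atts = {("a", "b"), ("b", "c")}
print(sorted(grounded_extension(args, atts)))  # ['a', 'c']
```

This illustrates the sense in which the framework "subsumes" defeasible reasoning: acceptability is determined purely by the attack relation, independently of the internal structure of the arguments.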
Many of the most common forms of argument associated with major fallacies, such as argument from expert opinion, ad hominem argument, argument from analogy and argument from correlation to cause, have now been analyzed using the device of argumentation schemes (Walton, Reed and Macagno, 2008). Recent research in computing has embraced the use of argumentation schemes and linked them with key logical notions such as burden of proof that are also related to the study of fallacies (Gordon, Prakken and Walton, 2007). Argumentation schemes have been put forward as a helpful way of characterizing structures of human reasoning, such as argument from expert opinion, that have proved troublesome to view deductively. Attempting to model the reasonable examples deductively, by viewing the major premise as a conditional not subject to exceptions (e.g., if X says Y, then Y is true), does not work well, as this type of argument is typically defeasible.
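The defeasible treatment of the expert opinion scheme can be sketched as follows: the conclusion is accepted tentatively, but acceptance is withdrawn when a critical question is raised as an exception. The particular field names and critical questions below are illustrative simplifications, not the full scheme of Walton, Reed and Macagno (2008).

```python
# A rough sketch of argument from expert opinion treated defeasibly:
# the major premise is NOT a strict conditional "if E says Y, then Y",
# but a default that yields when a critical question defeats it.

def expert_opinion(expert, field, claim, exceptions):
    """Return (accepted, reason) for 'expert in field asserts claim'."""
    critical_questions = {
        "not_credible": f"Is {expert} a genuine expert in {field}?",
        "biased": f"Is {expert} a reliable, unbiased source?",
        "inconsistent": "Do other qualified experts disagree?",
    }
    # Any raised critical question acts as an exception that defeats
    # the default inference.
    for cq in exceptions:
        if cq in critical_questions:
            return False, critical_questions[cq]
    return True, f"{expert} asserts '{claim}'; tentatively accept it."

ok, _ = expert_opinion("Dr. Lee", "toxicology", "the sample is benign", [])
print(ok)  # True: no exception raised, conclusion tentatively holds
ok, why = expert_opinion("Dr. Lee", "toxicology", "the sample is benign",
                         ["biased"])
print(ok, "-", why)  # False: the bias question defeats the argument
```

The contrast with the strict conditional is visible in the second call: the same premises no longer license the conclusion once an exception applies, which a deductive reading cannot model.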
In this chapter, the Carneades Argumentation System is used to model an example of the progress of scientific inquiry starting from a discovery phase. This new procedural bounded rationality model of scientific inquiry is used to show how a hypothesis can justifiably be accepted based on a process of marshaling and testing of evidence pro and contra, once it has been supported strongly enough by this evidential procedure to meet a standard of proof appropriate for the inquiry. Both the discovery of a hypothesis and its later proof are seen as parts of an orderly rule-governed procedure, modeled by a formal dialectical structure in which evidence is collected, tested and measured against standards of proof, then used to draw a justified conclusion. This context of argumentation was called the inquiry dialogue in Chapter 1.
The model supports an approach to scientific inquiry that could be classified as pragmatic, in that it varies with the standards of proof appropriate for kinds of inquiry in a field of knowledge and with the criteria for what is to be counted as evidence. It is based on the theories of inquiry of Peirce (1931; 1984) and Popper (1963; 1972). According to the Carneades model of inquiry (Gordon, Prakken and Walton, 2007; Gordon, 2010), a group of interacting agents is collecting evidence as part of a search for the truth of a matter that they are collaboratively investigating. As they go along during the search process, they verify or falsify hypotheses by testing them using the data they have collected so far, at the same time as they are engaged in the process of collecting new data. As the search for knowledge continues, some hypotheses become better supported by the evidence, but at the same time, some of the hypotheses previously accepted have to be given up, because they are falsified by the new data that are streaming in.
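The core idea that acceptance depends on the proof standard in force can be illustrated with a toy function. The weights, thresholds and standard names below are invented for the sketch; the actual Carneades system (Gordon, Prakken and Walton, 2007) defines proof standards far more richly.

```python
# Toy illustration: a hypothesis is accepted only when the pro/con
# evidence meets the proof standard appropriate to the inquiry.
# Argument weights lie in [0, 1]; thresholds here are illustrative.

def meets_standard(pro, con, standard):
    """pro/con: lists of weights of arguments for and against."""
    best_pro = max(pro, default=0.0)
    best_con = max(con, default=0.0)
    if standard == "preponderance":
        # The strongest pro argument need only beat the strongest con.
        return best_pro > best_con
    if standard == "clear_and_convincing":
        # Must beat the con side by a margin and be strong in itself.
        return best_pro > best_con + 0.2 and best_pro > 0.5
    if standard == "beyond_reasonable_doubt":
        # Additionally, no con argument of any real strength may remain.
        return (best_pro > best_con + 0.2 and best_pro > 0.5
                and best_con < 0.1)
    raise ValueError(f"unknown standard: {standard}")

evidence_pro, evidence_con = [0.6], [0.4]
print(meets_standard(evidence_pro, evidence_con, "preponderance"))         # True
print(meets_standard(evidence_pro, evidence_con, "clear_and_convincing"))  # False
```

The same body of evidence can thus suffice for one inquiry and fail for another, which is the pragmatic point the model is meant to capture.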
As shown in Chapter 1 Section 2, there are two apparently opposed models of rational thinking and acting in the literature on cognitive science. The belief-desire-intention (BDI) model is based on the concept of an agent that carries out practical reasoning premised on goals that represent the agent’s intentions and incoming perceptions that update the agent’s set of beliefs. The commitment model is based on agents interacting with each other in a dialogue in which each contributes speech acts. Commitment in dialogue is a public notion because evidence of commitment is available in the commitment set (database) of propositions that an agent has gone on record as accepting (Hamblin, 1970; 1971). Explicit commitments are statements externally accepted by an agent and recorded in an external memory that is transparent to all parties. Implicit commitments, of the kind that need to be postulated to reconstruct arguments as enthymemes, can be inferred from the explicit ones using argumentation schemes and dialogue rules, as indicated in Chapter 3. However, beliefs, desires and intentions are private psychological notions internal to an agent, and so there is a logical problem of how we are to infer what they are. One agent cannot directly inspect the contents of another agent’s mind as a basis for making this judgment. This inability poses an evidential problem for the BDI model.
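The public character of commitment, as opposed to private belief, can be sketched with a small data structure: explicit commitments sit in an externally visible store, and implicit commitments are derived from them by rules. The simple premises-to-conclusion rules below are stand-ins for the argumentation schemes and dialogue rules mentioned in the text.

```python
# A minimal sketch of a Hamblin-style commitment store: explicit
# commitments are publicly recorded, and implicit commitments are
# inferred from them by closing under simple rules.

class CommitmentStore:
    def __init__(self):
        self.explicit = set()

    def assert_statement(self, p):
        """Going on record with p adds it to the public store."""
        self.explicit.add(p)

    def retract(self, p):
        self.explicit.discard(p)

    def implicit(self, rules):
        """rules: iterable of (frozenset_of_premises, conclusion) pairs."""
        inferred = set(self.explicit)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= inferred and conclusion not in inferred:
                    inferred.add(conclusion)
                    changed = True
        return inferred - self.explicit

store = CommitmentStore()
store.assert_statement("Socrates is a man")
rules = [(frozenset({"Socrates is a man"}), "Socrates is mortal")]
print(store.implicit(rules))  # {'Socrates is mortal'}
```

Nothing here requires inspecting another agent's mind: every judgment is grounded in the externally recorded assertions plus publicly stated rules, which is exactly the evidential advantage the commitment model claims over the BDI model.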
The problem of other minds has long been a central difficulty in philosophy, not only in philosophy of law, but also in ethics and philosophy of mind. We can observe a person’s external actions, and thereby have empirical evidence to confirm or refute the claim that this person carried out a particular action. But since we cannot directly observe a person’s motive, intention or desire, how can we confirm or refute any claim that a person acted on the basis of such an internal state of mind? The problem is particularly acute in legal reasoning about evidence in criminal law, because so much of it is built on assumptions about mens rea, the guilty mind. This chapter surveys recent developments in argumentation-based artificial intelligence and law to address the problem by studying the logical structure of reasoning about motives in law.
Argumentation, which can be abstractly defined as the interaction of different arguments for and against some conclusion, is an important skill to learn for everyday life, law, science, politics and business. It is a rich, interdisciplinary area of research straddling philosophy, communication studies, linguistics, psychology and artificial intelligence that has developed context-sensitive practical methods to help a user identify, analyze and evaluate arguments.
Recently, the field of computing has embraced argumentation as a paradigm for research in artificial intelligence and multi-agent systems. Artificial intelligence in particular has seen a prolific growth in uses of argumentation. Argumentation has proved helpful to computing because it has provided concepts and methods used to build software tools for designing, implementing and analyzing sophisticated forms of reasoning and interaction among rational agents (Reed and Grasso, 2007). Recent successes include argumentation-based models of evidential relations and legal processes of examination and evaluation of evidence. Argument mapping has proved to be a useful tool for designing better products and services and for improving the quality of communication in social media by making deliberation dialogues more efficient. There now exist formal systems of argumentation to model many aspects of reasoning and argument that were formerly studied only by less structured methods of informal logic.
Straw man is a modern addition to the list of informal fallacies treated in logic textbooks, where it is said to be the fallacy of misrepresenting an opponent’s real position as a weaker one that can more easily be attacked (Johnson and Blair, 1983; Freeman, 1988; Govier, 1992; Walton, 1996c). However, much earlier, Aristotle remarked (Topics 159b30–35) on the danger in argumentation of misrepresenting another arguer’s position, and no doubt philosophers have often made similar comments in the past. Thus although straw man represents the extreme kind of case of misrepresenting another arguer’s position, it would be useful more generally to have some tool that could assist in making objective determinations of when an expressed position has been wrongly represented in a given case. For it is a rule of rational argumentation that before you criticize or refute another party’s view in a critical discussion, you should be sure that the view you are attacking actually represents the other party’s position. This chapter develops a formal dialogue system that can utilize three different kinds of commitment query inference engines that are designed to help in analyzing and evaluating cases where the straw man fallacy is alleged to have been committed.
Such an engine is meant to be used to search through an arguer’s commitment set in a case of argumentation that can be structured in a formal dialogue format, so that it can be fairly judged whether a queried statement is in an arguer’s commitment store or not. Only if it is not in the arguer’s commitment store can an attacker of the alleged position of the second party be fairly judged to have committed the straw man fallacy, based on the evidence in the case at issue. The project undertaken is not to implement any of these inference engines computationally, but only to construct formal models specifying generally how they should work.
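The intended behavior of such a query engine can be sketched in a few lines: given the statement under attack and the opponent's recorded commitments, report whether the statement is an explicit commitment, an implicit one, or not a commitment at all, and judge the straw man charge accordingly. This sketch checks only explicit membership and one level of rule-based inference; the three engines specified in the chapter are more elaborate.

```python
# Simplified sketch of a commitment query inference engine for
# evaluating an alleged straw man: an attack misrepresents the
# opponent's position only if the attacked statement is neither an
# explicit nor an inferable commitment.

def query_commitment(statement, explicit, rules):
    """rules: (frozenset_of_premises, conclusion) pairs for inference."""
    if statement in explicit:
        return "explicit commitment"
    for premises, conclusion in rules:
        if conclusion == statement and premises <= explicit:
            return "implicit commitment"
    return "not committed"

def straw_man_alleged(attacked_statement, explicit, rules):
    """True when the attack targets a position the arguer never held."""
    return query_commitment(attacked_statement, explicit, rules) == "not committed"

explicit = {"taxes should be lower"}
rules = [(frozenset({"taxes should be lower"}),
          "spending should be reviewed")]
print(straw_man_alleged("all taxation should be abolished", explicit, rules))  # True
print(straw_man_alleged("spending should be reviewed", explicit, rules))       # False
```

The point of formalizing the query this way is that the straw man judgment becomes an objective matter of what is, or can be inferred from, the record, rather than a matter of intuition about the opponent's views.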
This chapter is about the logical structure of argument from analogy and its relationship to legal arguments from classification and precedent. Its main purpose is to provide guidance for researchers in artificial intelligence and law on which argumentation scheme for argument from analogy to use, among the leading candidates that are currently available. Arguments from precedent cases to a case at issue are based on underlying arguments from analogy of a kind extremely common both in everyday conversational argumentation and in legal reasoning. There is a very large literature on argument from analogy in argumentation (Guarini, Butchart, Simard Smith et al., 2009), and the topic is fundamentally important for law because of the centrality of arguments from precedent and analogy in Anglo-American law. It is not hard to appreciate this connection, given that according to the rule of stare decisis, the precedent decision of a higher or equal court is binding on a similar current case (Ashley, 1988, 206).
In this chapter, cases are used to argue that arguments from precedent are based on arguments from analogy in legal reasoning and that arguments from analogy are based on a similarity between the two cases held to be analogous. As shown in the chapter, this claim is controversial, because there are different views about how the argumentation scheme for argument from analogy should be formulated (Macagno and Walton, 2009). According to the version of the scheme for argument from analogy argued to be the basic one in this chapter, one of the premises has a requirement holding that there is a similarity between the two cases in point. In this chapter I show how to analyze this notion of similarity using the story-based approach of Bex (2011) and the formal dialogue model for investigating stories of Bex and Prakken (2010). It is shown how an abstract structure called a story scheme can be employed in a way that makes it useful to identify, analyze and evaluate arguments from analogy, and show their function in case-based reasoning where precedents are involved.
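One deliberately simple way to make the similarity premise concrete is to represent each case by a set of features and measure their overlap. This feature-overlap measure is a stand-in for illustration only; the chapter itself analyzes similarity through story schemes (Bex, 2011), not through feature counting, and the case features and threshold below are invented.

```python
# Illustrative-only operationalization of the similarity premise in
# argument from analogy: Jaccard overlap between feature sets of a
# precedent case and the case at issue.

def jaccard(case_a, case_b):
    """Similarity as shared features over all features, in [0, 1]."""
    return len(case_a & case_b) / len(case_a | case_b)

def argument_from_analogy(precedent, current, outcome, threshold=0.5):
    """Tentatively transfer the precedent's outcome if similar enough."""
    sim = jaccard(precedent, current)
    if sim >= threshold:
        return f"similarity {sim:.2f}: tentatively conclude '{outcome}'"
    return f"similarity {sim:.2f}: analogy too weak to support '{outcome}'"

precedent = {"commercial use", "verbatim copying", "market harm"}
current = {"commercial use", "verbatim copying", "parody"}
print(argument_from_analogy(precedent, current, "infringement"))
```

Even this crude version exhibits the scheme's defeasibility: the conclusion is only tentatively transferred, and a relevant dissimilarity (a feature outside the overlap) is precisely what an opposing party would cite to defeat the analogy.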
The aim of this chapter is to clarify a group of related terms, including ‘argument attack’, ‘rebuttal’, ‘refutation’, ‘challenge’, ‘critical question’, ‘defeater’, ‘undercutting defeater’, ‘rebutting defeater’, ‘exception’ and ‘objection’, which are commonly used in the literature on argumentation. The term ‘rebuttal’ is often associated with the work of Toulmin (1958), while the terms ‘undercutting defeater’ and ‘rebutting defeater’ are associated with the work of Pollock (1995) and are commonly used in the artificial intelligence literature. The notions of argument attack and argument defeat are associated with a formal model of argumentation that is prominent in artificial intelligence called the abstract argumentation framework. As shown in the chapter, these terms are, in their present state of usage, not precise or consistent enough for us to helpfully differentiate their meanings in framing useful advice on how to attack and refute arguments. An additional difficulty is that argument diagramming tools are of limited use if they cannot represent the critical questions matching an argumentation scheme. A way of overcoming both difficulties by using the Carneades Argumentation System is presented in this chapter.
The aims of this chapter are to survey the resources available for the project of building an exact method that will be helpful for the purpose of identifying arguments in natural language discourse, and to formulate some specific problems that need to be overcome along the way to building the method. It is argued that such a method would be useful as a tool to help students of informal logic identify arguments of the kind they encounter in natural language texts, for example, in newspapers, magazines or on the Internet. The method proposed is based on the use of argumentation schemes representing common types of defeasible arguments (Walton, 1996b; Walton, Reed and Macagno, 2008). The idea is that each scheme is associated with a set of identifiers (key words and markers locating premises and conclusions), and when the right grouping of identifiers is located at some place in a text, the argument mining method locates it as an instance of an argument of some particular, identifiable type (from a list of schemes).
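The identifier-based idea just described can be illustrated crudely: each scheme is paired with indicator words, and a sentence matching enough of them is flagged as a candidate instance of that scheme. The indicator lists below are invented for the sketch and are far smaller and noisier than anything a working system would need.

```python
# Crude sketch of scheme-based argument mining: indicator words
# (identifiers) locate candidate instances of argument types in text.

SCHEME_IDENTIFIERS = {
    "expert_opinion": {"expert", "according to", "specialist"},
    "analogy": {"similar to", "like the case", "analogous"},
    "correlation_to_cause": {"correlated", "linked to", "associated with"},
}

def mine_arguments(text, min_hits=1):
    """Return candidate (scheme, sentence) pairs found in the text."""
    candidates = []
    for sentence in text.lower().split("."):
        for scheme, identifiers in SCHEME_IDENTIFIERS.items():
            hits = sum(1 for ident in identifiers if ident in sentence)
            if hits >= min_hits:
                candidates.append((scheme, sentence.strip()))
    return candidates

sample = ("According to one expert, the drug is safe. "
          "This case is analogous to the 2003 ruling.")
for scheme, sentence in mine_arguments(sample):
    print(scheme, "->", sentence)
```

A matcher this naive will of course over- and under-generate, which is exactly why the semi-automated workflow described below, with human users correcting the tool's tentative identifications, is the more realistic goal.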
The project is related to the development of argumentation systems in artificial intelligence. One of these technical initiatives, outlined in Section 7, is the project of building an automated tool for argument mining. The idea is that this tool could go onto the Internet and collect arguments of specifically designated types, for example, argument from expert opinion. These technical initiatives are connected to the aim of finding an exact method for argument identification in informal logic, because the most powerful method would likely combine the two: human users would apply the automated tool to identify arguments in a text on a tentative basis, and then correct the errors the tool makes. It is not hard to see how even a semi-automated procedure of this kind could be extremely helpful for teaching courses in informal logic.
Bagging and boosting are two of the best-known ensemble learning methods, owing to their theoretical performance guarantees and strong experimental results. Since bagging and boosting form an effective and open framework, several researchers have proposed variants of them, some of which have turned out to have lower classification error than the original versions. This paper summarizes these variants and categorizes them into groups. We hope that the references cited cover the major theoretical issues, provide access to the main branches of the literature dealing with such methods, and guide the researcher toward interesting research directions.
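The core bagging procedure can be sketched in a few lines: train each base learner on a bootstrap resample of the data and combine their predictions by majority vote. A trivial one-feature threshold "stump" stands in for a real base learner so the example stays self-contained; it is a sketch of the bootstrap-aggregating idea, not of any particular published variant.

```python
# Bare-bones bagging (bootstrap aggregating) with decision stumps.
import random
from collections import Counter

def train_stump(data):
    """Pick the threshold on x minimizing training error for (x, label)."""
    best = None
    for threshold, _ in data:
        err = sum((x > threshold) != label for x, label in data)
        if best is None or err < best[1]:
            best = (threshold, err)
    return best[0]

def bag(data, n_models=25, seed=0):
    """Train n_models stumps, each on a bootstrap resample of data."""
    rng = random.Random(seed)
    models = []
    for _ in range(n_models):
        sample = [rng.choice(data) for _ in data]  # resample with replacement
        models.append(train_stump(sample))
    return models

def predict(models, x):
    """Aggregate the stumps' votes; majority wins."""
    votes = Counter(x > t for t in models)
    return votes.most_common(1)[0][0]

# Labels are True exactly when x > 5; the ensemble should recover that rule.
data = [(x, x > 5) for x in range(11)]
models = bag(data)
print(predict(models, 8), predict(models, 2))  # True False
```

Boosting differs in that the resampling (or reweighting) is not uniform: each successive learner concentrates on the examples its predecessors misclassified, and the vote is weighted rather than simple majority.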