The importance of the role that conditionals play in both everyday and scientific discourse and reasoning is hard to overestimate. Perhaps it is no surprise, then, that for quite some time conditionals have been a central area of investigation not only in philosophy but also in linguistics and psychology, and to some extent in computer science. What is surprising, however, is that despite the considerable expenditure of time and effort by many researchers from those fields, there is still little one can say about conditionals that is not highly controversial. Even with regard to the most fundamental questions concerning conditionals, there is very little unanimity to be found. Anyone who has ever proposed a semantics of conditionals can consider themselves fortunate if the proposal has won the approval of at least one colleague.
This book focusses on the distinctively epistemological questions that conditionals raise, such as questions concerning their acceptability conditions and probabilities. There is hardly more consensus on these epistemological questions than there is on the semantics of conditionals. And insofar as there is consensus, it is based on questionable assumptions. In this book, I aim to develop at least an outline of the epistemology of conditionals. I do so by relying on the combined use of formal and empirical methods.
The use of formal and empirical methods in philosophy is viewed much more favorably now than it was when I was a graduate student twenty years ago. Back then, formal methods were associated with logical empiricism, which was generally regarded as a deeply misguided research program, and empirical methods were considered perfectly fine, just not as having a place in philosophy. Even so, in the course of working on this book I have met with considerable skepticism from friends and colleagues (some to be named in the following paragraphs) about the methodology of experimental psychology, which in essence comprises the empirical methods I have relied on. The main complaint was invariably that this methodology fails to produce robust results: “Ask people a question in one way, and they'll give you one answer; ask the same question in a slightly different way, and you'll get a different answer.”
Probably the only real antidote to such worries is to engage in empirical research with experimental psychologists, who tend to operate in a much more careful and sophisticated manner than my skeptical friends and colleagues seem to suppose.
This book provides an argumentation model for means-end reasoning, a distinctive type of reasoning used for problem-solving and decision-making. Means-end reasoning is modelled as goal-directed argumentation from an agent's goals and known circumstances, and from an action selected as a means, to a decision to carry out the action. Goal-based Reasoning for Argumentation provides an argumentation model of this kind of reasoning, showing how it is employed in settings of intelligent deliberation where agents try to collectively arrive at a conclusion on what they should do to move forward in a set of circumstances. The book explains how this argumentation model can help build more realistic computational systems of deliberation and decision-making, and shows how such systems can be applied to solve problems posed by goal-based reasoning in numerous fields, from social psychology and sociology to law, political science, anthropology, cognitive science, artificial intelligence, multi-agent systems, and robotics.
Conditionals are sentences of the form 'If A, then B', and they play a central role in scientific, logical, and everyday reasoning. They have been in the philosophical limelight for centuries, and more recently they have been receiving attention from psychologists, linguists, and computer scientists. In spite of this, many key questions concerning conditionals remain unanswered. While most of the work on conditionals has addressed semantical questions – questions about the truth conditions of conditionals – this book focuses on the main epistemological questions that conditionals give rise to, such as: What are the probabilities of conditionals? When is a conditional acceptable or assertable? What do we learn when we receive new conditional information? In answering these questions, this book combines the formal tools of logic and probability theory with the experimental approach of cognitive psychology. It will be of interest to students and researchers in logic, epistemology, and the psychology of reasoning.
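One widely discussed answer to the first of these questions, often called 'the Equation' (and, in its acceptability-oriented form, Adams' Thesis), identifies the probability of a simple conditional with the corresponding conditional probability:

    P(if A, then B) = P(B | A),  provided P(A) > 0.

This is a compact statement of a prominent proposal in the literature rather than a settled result; Lewis's triviality theorems are standardly taken to put pressure on it, which illustrates how contested even the basic epistemology of conditionals remains.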
The conclusion of Chapter 1 was that even though practical reasoning can sometimes be a solitary form of reasoning – for example, where an agent needs to figure out how to fix a flat tire on his car – there are many other cases where a rational agent needs to communicate with other agents in order to move forward with intelligent practical reasoning. As noted in the conclusion of Chapter 1, in such cases an intelligent agent may need to deliberate with other agents on how to achieve shared goals. It may also need to collect information from them, or even to negotiate with them on how to divide up responsibility for the actions needed for them to reason together to achieve their common goal. In the subsequent chapters we will examine some complex cases of group deliberations by agents collaboratively using practical reasoning. In Chapter 2 we will start with some simpler cases in which one agent is interacting with another and is using practical reasoning to try to persuade the other to carry out a designated action.
In this chapter, several illustrative examples of direct-to-consumer health product advertisements that exemplify practical reasoning are analyzed using argumentation methods, and some conclusions about the analyses are drawn. The ads studied include commercials for pharmaceuticals, as well as ads for other health products, including health foods, and for treatments and devices that purport to have health benefits. It is shown that they use arguments of a kind that fit the argumentation scheme for practical reasoning. The findings are interesting for many reasons. They can be applied both to the crafting of the ads and to the enhancement of critical thinking skills for intelligent consumers. They are also applicable to recent efforts to use computational techniques in consumer health informatics to inform consumers and improve health care. Most interestingly, they suggest that the central persuasion structure of the argumentation in health care ads is that of practical reasoning. The use of practical reasoning of this sort as argumentation is commonly called the problem-solution strategy in advertising.
This chapter discusses the theoretical questions that are raised as one navigates across the storm-tossed seas of philosophical controversies about rationality. The analysis of the structure of practical reasoning advanced in the previous chapters has raised the question of whether it can be used to frame a new philosophical definition of rationality, but it also poses some problems. One of them is how to define the other kind of rationality that is apparently left over once practical reasoning has been shown to represent some notion of practical rationality. Some (pragmatists) think that nothing is left over, and that all rationality is practical rationality. Some call what is left over theoretical rationality, while others call it epistemic rationality.
The four examples of the use of practical reasoning in deliberation studied in Chapter 6 showed that intelligent deliberation needs to be based on knowledge of the circumstances of the case that is continually streaming into an agent (or a group of agents) from an open knowledge base. This view of how practical reasoning is used requires a different approach to the notions of epistemic and practical rationality. The two notions need to be defined separately as distinctive concepts, but they also need to be seen as concepts that are combined in practical reasoning. Practical reasoning needs to be based on an agent's goals and values, but also on its knowledge of the circumstances. However, because the circumstances are continually changing, both human and machine agents, even when working together, are fallible. This means they are continually subject to bias, misperceptions, and errors of judgment. As we saw when defining the list of characteristics of a rational agent, one of the most important is the capability for error correction based on feedback as new information streams into the agent's knowledge base.
The central feature of knowledge in this view is that it has to be based on evidence, rather than requiring the truth of its findings.
In Chapter 2 we examined some relatively simple (but real) cases in which argument diagramming tools, along with argumentation schemes and critical questions, were used to identify and analyze practical reasoning. Now we need to go more deeply into the subject to address the question of how the kind of practical reasoning we have identified can be formally modeled. There are some existing formal models in artificial intelligence that can model this kind of practical reasoning, in addition to the value-based system introduced in Chapter 1. Some of these formal models have been implemented in computational systems that contain argument diagramming tools. In Chapter 3, the reader will be introduced to these formal models and computational tools, in order to show how they can be used to model practical reasoning.
The way Araucaria was applied to the various examples of arguments in health ads in Chapter 2 was very instructive because it showed how the scheme for practical reasoning, along with other arguments – some of them also instances of known schemes – can be fitted into a structure that makes the sequence of reasoning in a case easy to interpret and analyze. A nice feature of Araucaria is that once a scheme is identified as fitting a particular segment of the chain of argumentation represented in the argument diagram, a menu can be displayed showing the critical questions matching the scheme. This particular device turned out to be extremely helpful because, as indicated in Chapter 1, the main tool needed for evaluating instances of argumentation containing practical reasoning is the list of critical questions matching a scheme.
Argument diagramming, even along with argumentation schemes, is not sufficient to yield a model of the structure of practical reasoning as a species of rational argumentation. For that purpose, we need a precise formal model that goes more deeply into the structure of practical reasoning. In this chapter, it will be shown how some existing formal argumentation systems can be applied to practical reasoning. It will also prove helpful if the formal model is implemented in a working computational system that has additional features useful for analyzing practical reasoning, including an argument diagramming interface.
The previous chapters have shown how practical reasoning is used in arguments, but it will turn out to be vitally important to understand how it is used in explanations. It will even be shown at the end of this chapter how the same example of practical reasoning in a discourse can combine explanations with arguments. Hence, there arises the problem of building a model of explanation to reveal precisely how practical reasoning is used in explanations. The key to solving it, as will be shown in this chapter, is to broaden the study of practical reasoning to take into account not only its structure as a chain of reasoning, but also how that same kind of reasoning can be used in different ways in different communicative settings. An argument will be shown to be a response to a particular kind of question, while an explanation will be seen as a response to another kind of question.
Recent work in artificial intelligence has taken the approach that an explanation is best seen as a transfer of understanding from one party to another in a dialogue where one party is a questioner who asks why or how something works and the other party attempts to fulfill this request (Cawsey, 1992; Moore, 1995; Moulin et al., 2002). Recent literature in philosophy of science seems to be gradually moving toward this approach, but there is an open question of how it can be represented using a formal structure (Trout, 2002). Since explanations and arguments are sometimes hard to distinguish, the first step is to provide some way of representing the distinction between them in their formal structure. In this chapter, the Why2 Dialogue System is presented as a formal model showing how the difference between argument and explanation resides in the pre- and post-conditions for the speech act of requesting an argument and the speech act of requesting an explanation. It is an extension of earlier dialogue systems (Walton, 2004, 2007a, 2011).
Practical reasoning of the kind described by philosophers since Aristotle (384–322 BC) is identified as goal-based reasoning that works by finding a sequence of actions that leads toward or reaches an agent's goal. Practical reasoning, as described in this book, is used by an agent to select an action from a set of available alternative actions the agent sees as open in its given circumstances. A practical reasoning agent can be a human or an artificial agent – for example, software, a robot, or an animal. Once the action is selected as the best or most practical means of achieving the goal in the given situation, the agent draws a conclusion that it should go ahead and carry out this action. Such an inference is fallible, as long as the agent's knowledge base is open to new information. It is an important aspect of goal-based practical reasoning that if an agent learns that its circumstances or its goals have changed and a different action might now become the best one available, it can (and perhaps should) “change its mind.”
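In its simplest instrumental form, the argumentation scheme for practical reasoning just described can be stated roughly as follows (a paraphrase of Walton's basic scheme, not a quotation from the text):

    Major premise: I (an agent) have a goal G.
    Minor premise: Carrying out action A is a means to realize G.
    Conclusion: Therefore, I ought (practically speaking) to carry out A.

The scheme is then evaluated through its matching critical questions – for example, whether other means of realizing G are available, whether A is practically possible in the circumstances, and whether carrying out A would have consequences that conflict with the agent's other goals or values.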
In computer science, practical reasoning is more likely to be known as means-end reasoning (where an end is taken to mean a goal), goal-based reasoning, or goal-directed reasoning (Russell and Norvig, 1995, 259). Practical reasoning is fundamental to artificial intelligence (Reed and Norman, 2003), where it is called means-end analysis (Simon, 1981). In goal-based problem-solving, a solution is sought by searching the available means for a sequence of actions that leads to the goal. An intelligent goal-seeking agent needs to receive information about its external circumstances by means of sensors and store it in its memory. There are differences of opinion about how practical goal-based reasoning should be modeled. One issue is whether it should be seen as a merely instrumental form of reasoning, or whether it should also be based on values. Many automated systems of practical reasoning for multi-agent deliberation (Gordon and Richter, 2002; Atkinson et al., 2004a, 2004b; Rahwan and Amgoud, 2006) take values into account.
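As a minimal sketch of such goal-based search – my own illustration, with hypothetical names, not a reconstruction of any system cited above – a breadth-first search over states and actions returns a sequence of actions leading from a start state to a goal state:

    from collections import deque

    def plan(start, goal, actions):
        # Breadth-first means-end search. `actions` maps a state to a
        # list of (action_name, next_state) pairs; all names here are
        # hypothetical and purely illustrative.
        frontier = deque([(start, [])])  # (state, actions taken so far)
        visited = {start}
        while frontier:
            state, path = frontier.popleft()
            if state == goal:
                return path  # a sequence of actions that reaches the goal
            for action, nxt in actions.get(state, []):
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, path + [action]))
        return None  # no available sequence of actions reaches the goal

    # A toy deliberation: getting from home to work.
    actions = {
        "at home": [("walk to the bus stop", "at bus stop"), ("get in the car", "in car")],
        "at bus stop": [("take the bus", "at work")],
        "in car": [("drive", "at work")],
    }
    print(plan("at home", "at work", actions))  # ['walk to the bus stop', 'take the bus']

Real means-end reasoners add far more – values, changing circumstances, an open knowledge base – but the skeleton of finding a path of actions to a goal is the same.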
The most basic problem that led to the other problems studied in the book was posed in Chapter 1: if you try to model a given instance of practical reasoning as a sequence of argumentation using only an argument map, you are led to a state space explosion. Throughout the subsequent chapters we have moved toward a solution to this problem by embedding practical reasoning in an overarching procedural framework in which any given sequence of practical reasoning should be viewed as part of a deliberation dialogue having an opening stage and a closing stage. This problem led to Chapter 6, where criteria for the proper closure of a deliberation dialogue were proposed. As shown in Chapter 6, practical reasoning is most characteristically used in deliberation dialogue – goal-directed dialogue in which a choice for action needs to be made or a problem needs to be solved. It was also shown in Chapter 6 that deliberation dialogue is often mixed in with information-seeking dialogue as new evidence of the circumstances comes in. Also, as early as Chapter 2, it was shown that practical reasoning is used in persuasion dialogue – for example, in ads for medical products.
Atkinson et al. (2013) showed that there are also many shifts in a deliberation dialogue to persuasion dialogue intervals. Typically, for example, a proposal that has been put forward as part of a deliberation dialogue is attacked by a critic who shifts to a persuasion dialogue in order to attack the arguments that were used to support the proposal that was made in the deliberation dialogue. It is important to see that there is nothing inherently illegitimate about such shifts.
However, a general problem arises from the variability of different communicative multi-agent settings in which practical reasoning is used. As seen in the examples from Chapter 6, deliberation dialogue is the most important and central setting in which practical reasoning is used, and the true colors of practical reasoning as a form of argumentation really begin to emerge once we embed it into this setting. Nevertheless, we also need to confront the underlying problem that in the argumentation in natural language examples where practical reasoning is used, there so often appear to be dialectical shifts from deliberation dialogue to persuasion dialogue.
In Chapter 2 it was shown that there are different frameworks of communication in which arguments can be put forward and critically questioned, including persuasion dialogue, information-seeking dialogue, and deliberation dialogue. This chapter will focus almost exclusively on deliberation dialogue, but it will also deal with related issues where there is a shift between deliberation dialogue and one of these other types of dialogue. It will be shown how practical reasoning is woven through every aspect of deliberation dialogue, and how deliberation dialogue represents the necessary framework for analyzing and evaluating typical instances of practical reasoning in the natural language cases of argumentation that we are all familiar with. This chapter will also show how formal models of deliberation dialogue built as artificial intelligence tools for multi-agent systems turn out to be extremely useful for solving the closure problem of practical reasoning in multi-agent settings.
The chapter begins by using four examples to show how practical reasoning is embedded in everyday deliberations of a kind all of us are familiar with. The first, in Section 1, is a case of a man trying to solve a problem with his printer by searching Google for advice and then using a trial-and-error procedure to try to fix it. The second, in Section 2, is an example of a couple trying to arrive at a decision on which home to buy, having narrowed the choices down to three candidates: a condominium, a two-story house, and a bungalow. The third, in Section 3, is a case of a policy debate, showing how CAS employs practical reasoning in this setting. The fourth, in Section 4, is a town hall meeting on a decision about whether or not to bring in no-fault insurance. Section 5 explains the essentials of the leading model of deliberation dialogue currently used in artificial intelligence – the McBurney, Hitchcock, and Parsons (MHP) model.
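For orientation, the MHP model (set out in McBurney, Hitchcock, and Parsons' 'eightfold way' of deliberation dialogue) organizes a deliberation into eight stages; the list below follows the published model as standardly summarized, with the middle stages allowed to recur before closure:

    Open – Inform – Propose – Consider – Revise – Recommend – Confirm – Close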
Ascription of an intention to an agent is especially important in law. In criminal law, the intent to commit a criminal act, called mens rea ('guilty mind'), is the key element needed to prosecute a defendant for a crime. For example, in order to prove that a defendant has committed the crime of theft of an object, it needs to be established that the defendant had the intention never to return the object to its owner. Studying examples of how intention is proved in law is an important resource for giving us clues on how reasoning to an intention should be carried out. Intention is also fundamentally important in ethical reasoning, where there are problems about how the end can justify the means.
This chapter introduces the notion of inference to the best explanation, often called abductive reasoning, and presents recent research on evidential reasoning that uses the concept of a so-called script or story as a central component. The introduction of these two argumentation tools shows how they are helpful in moving toward a solution to the longstanding problem of analyzing how practical reasoning from circumstantial evidence can be used to support or undermine a hypothesis that an agent has a particular intention. Legal examples are used to show that even though ascribing an intention to an agent is an evaluation procedure that combines argumentation and explanation, it can be rationally carried out by using a practical reasoning model that accounts for the weighing of factual evidence on both sides of a disputed case.
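In schematic form, inference to the best explanation is standardly rendered along the following lines in the abduction literature (this is the generic pattern, not a quotation from the chapter):

    Premise: F is a collection of facts (findings).
    Premise: Hypothesis H, if true, would explain F.
    Premise: No competing hypothesis explains F as well as H does.
    Conclusion: Therefore, H is plausibly true.

The conclusion is defeasible: a new finding, or a better competing explanation, can overturn it – which is exactly the feature that matters when intentions are ascribed on circumstantial evidence.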
The examples studied in this chapter will involve cases where practical reasoning is used as the glue that combines argumentation with explanation. Section 1 considers a simple example of a message on the Internet advising how to mount a flagpole bracket on a house. The example tells the reader how to take the required steps to attach a bracket to the house in order to mount a flagpole so that the reader can show his patriotism by displaying a flag. The example text is clearly an instance of practical reasoning: the author of the message presumes that the reader has a goal, and tells the reader how to fulfill that goal by carrying out a sequence of actions.
From the end of antiquity to the middle of the nineteenth century it was generally believed that Aristotle had said all there was to say concerning the rules of logic and inference. One of the ablest British mathematicians of his age, Augustus De Morgan (1806–71), played an important role in overturning that assumption with the publication of this book in 1847. He attempts to do several things, with what we now see as varying degrees of success. The first is to treat logic as a branch of mathematics, more specifically as algebra. Here his contributions include his laws of complementation and the notion of a universe set. De Morgan also tries to tie together formal and probabilistic inference. Although he is never less than acute, the major advances in probability and statistics at the beginning of the twentieth century make this part of the book rather less prophetic.
Recent work in argumentation theory has emphasized the nature of arguers and arguments along with various theoretical perspectives. Less attention has been given to the third feature of any argumentative situation – the audience. This book fills that gap by studying audience reception to argumentation and the problems that come to light as a result of this shift in focus. Christopher W. Tindale advances the tacit theories of several earlier thinkers by addressing the central problems connected with audience considerations in argumentation, problems that earlier philosophical theories overlook or inadequately accommodate. The main tools employed in exploring the central issues are drawn from contemporary philosophical research on meaning, testimony, emotion and agency. These are then combined with some of the major insights of recent rhetorical work in argumentation to advance our understanding of audiences and suggest avenues for further research.