The conclusion of Chapter 1 was that even though practical reasoning can sometimes be a solitary form of reasoning – for example, where an agent needs to figure out how to fix a flat tire on his car – there are many other cases where a rational agent needs to communicate with other agents in order to move forward with intelligent practical reasoning. As noted in the conclusion of Chapter 1, in such cases an intelligent agent may need to deliberate with other agents on how to achieve shared goals. It may also need to collect information from them, or even to negotiate with them on how to divide up responsibility for the actions needed for them to reason together to achieve their common goal. In the subsequent chapters we will examine some complex cases of group deliberations by agents collaboratively using practical reasoning. In Chapter 2 we will start with some simpler cases in which one agent is interacting with another and is using practical reasoning to try to persuade the other to carry out a designated action.
In this chapter, several illustrative examples of direct-to-consumer health product advertisements that exemplify practical reasoning are analyzed using argumentation methods, and some conclusions about the analyses are drawn. The ads studied include commercials for pharmaceuticals, as well as ads for other health products, including health foods and treatments and devices that purport to have health benefits. It is shown that they use arguments of a kind that fit the argumentation scheme for practical reasoning. The findings are interesting for many reasons. They can be applied both to the crafting of the ads and to the enhancement of critical thinking skills for intelligent consumers. They are also applicable to recent efforts to use computational techniques in consumer health informatics to inform consumers and improve health care. Most interestingly, they suggest that the central persuasion structure of the argumentation in health care ads is that of practical reasoning. Use of practical reasoning of this sort as argumentation is commonly called the problem-solution strategy in advertising.
In Chapter 2 we examined some relatively simple (but real) cases in which argument diagramming tools, along with argumentation schemes and critical questions, were used to identify and analyze practical reasoning. Now we need to go more deeply into the subject to address the question of how the kind of practical reasoning we have identified can be formally modeled. There are some existing formal models in artificial intelligence that can model this kind of practical reasoning, in addition to the value-based system introduced in Chapter 1. Some of these formal models have been implemented in computational systems that contain argument diagramming tools. In Chapter 3, the reader will be introduced to these formal models and computational tools, in order to show how they can be used to model practical reasoning.
The way Araucaria is applied to the various examples of arguments in health ads in Chapter 2 was very instructive because it showed how the scheme for practical reasoning, along with other arguments – some of them also based on known schemes – can be structured in a way that makes the sequence of reasoning in a case easy to interpret and analyze. A nice feature of Araucaria was that once the scheme is identified as fitting a particular segment of the chain of argumentation represented in the argument diagram, the menu can be displayed showing the critical questions matching the scheme. This particular device turned out to be extremely helpful because, as indicated in Chapter 1, the main tool needed for evaluating instances of argumentation containing practical reasoning is the list of critical questions matching a scheme.
Argument diagramming, even along with argumentation schemes, is not sufficient to yield a model of the structure of practical reasoning as a species of rational argumentation. For that purpose, we need a precise formal model that goes more deeply into the structure of practical reasoning. In this chapter, it will be shown how some existing formal argumentation systems can be applied to practical reasoning. It will also prove helpful if the formal model is implemented in a working computational system that has additional features useful for analyzing practical reasoning, including an argument diagramming interface.
This chapter discusses the theoretical questions that are raised as one navigates across the storm-tossed seas of philosophical controversies about rationality. The analysis of the structure of practical reasoning advanced in the previous chapters has raised the question of whether it can be used to frame a new philosophical definition of rationality, but it also poses some problems. One of them is how to define the other kind of rationality that is apparently left over once practical reasoning has been shown to represent some notion of practical rationality. Some (pragmatists) think that nothing is left over, and that all rationality is practical rationality. Some call what is left over theoretical rationality, while others call it epistemic rationality.
The four examples of the use of practical reasoning in deliberation studied in Chapter 6 showed that intelligent deliberation needs to be based on knowledge of the circumstances of the case that is continually streaming into an agent (or group of them) from an open knowledge base. This view of how practical reasoning is used requires a different approach to the notions of epistemic and practical rationality. The two notions need to be defined separately as distinctive concepts, but they also need to be seen as concepts that are combined in practical reasoning. Practical reasoning needs to be based on an agent's goals and values, but also on its knowledge of the circumstances. However, because the circumstances are continually changing, both human and machine agents, even when working together, are fallible. This means they are continually subject to bias, misperceptions, and errors of judgment. As we saw when defining the list of characteristics of a rational agent, one of the most important characteristics is the capability for error correction based on feedback as new information streams into the agent's knowledge base.
The central feature of knowledge in this view is that it has to be based on evidence, rather than requiring the truth of its findings.
Practical reasoning of the kind described by philosophers since Aristotle (384–322 BC) is identified as goal-based reasoning that works by finding a sequence of actions that leads toward or reaches an agent's goal. Practical reasoning, as described in this book, is used by an agent to select an action from a set of available alternative actions the agent sees as open in its given circumstances. A practical reasoning agent can be a human or an artificial agent – for example, software, a robot, or an animal. Once the action is selected as the best or most practical means of achieving the goal in the given situation, the agent draws a conclusion that it should go ahead and carry out this action. Such an inference is fallible, as long as the agent's knowledge base is open to new information. It is an important aspect of goal-based practical reasoning that if an agent learns that its circumstances or its goals have changed and a different action might now become the best one available, it can (and perhaps should) “change its mind.”
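The defeasible inference described above – an agent concludes it should carry out an action, but stands ready to retract that conclusion – can be sketched in code. This is an illustrative toy, not part of any formal system discussed in the book; the function name and the paraphrased critical questions are hypothetical, loosely modeled on the kinds of questions (alternative means, possibility, side effects) discussed in connection with the scheme.

```python
# Hypothetical sketch: practical reasoning as a defeasible inference that is
# withheld while any critical question matching the scheme remains unanswered.

CRITICAL_QUESTIONS = [
    "Are there alternative actions that would also achieve the goal?",
    "Is the action practically possible in the given circumstances?",
    "Does carrying out the action conflict with the agent's other goals?",
]

def practical_inference(goal, action, answered):
    """Return the practical conclusion only if every critical question has
    been satisfactorily answered; otherwise report what is still open."""
    unanswered = [q for q in CRITICAL_QUESTIONS if q not in answered]
    if unanswered:
        return None, unanswered          # inference defeated (for now)
    return f"Carry out '{action}' to achieve '{goal}'.", []
```

Because the conclusion depends on the current set of answers, new information (a changed circumstance, a newly raised question) simply shrinks or grows `answered`, and the agent "changes its mind" by re-running the inference.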
In computer science, practical reasoning is more likely to be known as means-end reasoning (where an end is taken to mean a goal), goal-based reasoning, or goal-directed reasoning (Russell and Norvig, 1995, 259). Practical reasoning is fundamental to artificial intelligence (Reed and Norman, 2003), where it is called means-end analysis (Simon, 1981). In goal-based problem-solving, a search for a solution to a problem is carried out by finding a sequence of actions from available means of solving a problem. An intelligent goal-seeking agent needs to receive information about its external circumstances by means of sensors, and store it in its memory. There are differences of opinion about how practical goal-based reasoning should be modeled. One issue is whether it should be seen as merely an instrumental form of reasoning, or whether it should be also based on values. Many automated systems of practical reasoning for multi-agent deliberation (Gordon and Richter, 2002; Atkinson et al., 2004a, 2004b; Rahwan and Amgoud, 2006) take values into account.
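The goal-based search just described – finding a sequence of actions from available means that reaches a goal state – can be sketched as a breadth-first search over states. This is a minimal illustration under assumed, hypothetical names (`plan`, the toy flat-tire actions); real means-end analysis systems add heuristics and subgoal decomposition not shown here.

```python
from collections import deque

def plan(initial, goal_test, actions):
    """Breadth-first search for a sequence of actions reaching a goal.

    `actions` maps an action name to a function taking a state and
    returning the successor state, or None if the action does not
    apply in that state. Returns a list of action names, or None."""
    frontier = deque([(initial, [])])
    visited = {initial}
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):
            return path
        for name, apply_action in actions.items():
            successor = apply_action(state)
            if successor is not None and successor not in visited:
                visited.add(successor)
                frontier.append((successor, path + [name]))
    return None

# Toy flat-tire domain: states are tuples of facts established so far.
actions = {
    "get_jack":  lambda s: s + ("jack",) if "jack" not in s else None,
    "jack_up":   lambda s: s + ("raised",) if "jack" in s and "raised" not in s else None,
    "swap_tire": lambda s: s + ("fixed",) if "raised" in s and "fixed" not in s else None,
}
solution = plan((), lambda s: "fixed" in s, actions)
# solution is the action sequence get_jack, jack_up, swap_tire
```

The returned sequence of actions is exactly the "means" that practical reasoning concludes the agent should carry out; if the circumstances change (a different initial state, a different set of available actions), re-running the search yields a revised plan.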
The most basic problem that led to the other problems studied in the book was posed in Chapter 1. If you try to model the given instance of practical reasoning as a sequence of argumentation only by an argument map, you are led to a state space explosion. Throughout the subsequent chapters we have moved toward a solution to this problem by embedding practical reasoning in an overarching procedural framework in which any given sequence of practical reasoning should be viewed as a part of a deliberation dialogue having an opening stage and a closing stage. This problem led to Chapter 6, where criteria for the proper closure of a deliberation dialogue were proposed. As shown in Chapter 6, practical reasoning is most characteristically used in deliberation dialogue – goal-directed dialogue in which a choice for action needs to be made or a problem needs to be solved. It was also shown in Chapter 6 that deliberation dialogue is often mixed in with information-seeking dialogue as new evidence of the circumstances comes in. Also, as early as Chapter 2, it was shown that practical reasoning is used in persuasion dialogue – for example, in ads for medical products.
Atkinson et al. (2013) showed that there are also many shifts in a deliberation dialogue to persuasion dialogue intervals. Typically, for example, a proposal that has been put forward as part of a deliberation dialogue is attacked by a critic who shifts to a persuasion dialogue in order to attack the arguments that were used to support the proposal that was made in the deliberation dialogue. It is important to see that there is nothing inherently illegitimate about such shifts.
However, a general problem arises from the variability of different communicative multi-agent settings in which practical reasoning is used. As seen in the examples from Chapter 6, deliberation dialogue is the most important and central setting in which practical reasoning is used, and the true colors of practical reasoning as a form of argumentation really begin to emerge once we embed it into this setting. Nevertheless, we also need to confront the underlying problem that in the argumentation in natural language examples where practical reasoning is used, there so often appear to be dialectical shifts from deliberation dialogue to persuasion dialogue.
Ascription of an intention to an agent is especially important in law. In criminal law the intent to commit a criminal act, called mens rea, refers to the guilty mind, the key element needed to prosecute a defendant for a crime. For example, in order to prove that a defendant has committed the crime of theft of an object, it needs to be established that the defendant had the intention never to return the object to its owner. Studying examples of how intention is proved in law is an important resource for giving us clues on how reasoning to an intention should be carried out. Intention is also fundamentally important in ethical reasoning where there are problems about how the end can justify the means.
This chapter introduces the notion of inference to the best explanation, often called abductive reasoning, and presents recent research on evidential reasoning that uses the concept of a so-called script or story as a central component. The introduction of these two argumentation tools shows how they are helpful in moving forward toward a solution to the longstanding problem of analyzing how practical reasoning from circumstantial evidence can be used to support or undermine a hypothesis that an agent has a particular intention. Legal examples are used to show that even though ascribing an intention to an agent is an evaluation procedure that combines argumentation and explanation, it can be rationally carried out by using a practical reasoning model that accounts for the weighing of factual evidence on both sides of a disputed case.
The examples studied in this chapter will involve cases where practical reasoning is used as the glue that combines argumentation with explanation. Section 1 considers a simple example of a message on the Internet advising how to mount a flagpole bracket to a house. The example tells the reader how to take the required steps to attach a bracket to the house in order to mount a flagpole so that the reader can show his patriotism by displaying a flag on his house. The example text is clearly an instance of practical reasoning. The author of the message presumes that the reader has a goal, and he tells the reader how to fulfill that goal by carrying out a sequence of actions.
In Chapter 2 it was shown how there are different frameworks of communication in which arguments can be put forward and critically questioned, including persuasion dialogue, information-seeking dialogue, and deliberation dialogue. This chapter will focus almost exclusively on deliberation dialogue, but will also deal with related issues where there is a shift to or from one of these other types of dialogue to deliberation dialogue. It will be shown how practical reasoning is woven through every aspect of deliberation dialogue, and how deliberation dialogue represents the necessary framework for analyzing and evaluating typical instances of practical reasoning in natural language cases of argumentation that we are all familiar with. This chapter will also show how formal models of deliberation dialogue built as artificial intelligence tools for multi-agent systems turn out to be extremely useful for solving the closure problem of practical reasoning in multi-agent settings.
The chapter begins by using four examples to show how practical reasoning is embedded in everyday deliberations of a kind all of us are familiar with. The first one, in Section 1, is a case of a man trying to solve a problem with his printer by looking on Google to get advice and then using a trial-and-error procedure to try to fix the problem. The second one, in Section 2, is an example of a couple trying to arrive at a decision on which home to buy, having narrowed the choices down to three candidates: a condominium, a two-story house, and a bungalow. The third one, in Section 3, is a case of a policy debate, showing how CAS employs practical reasoning in this setting. The fourth, in Section 4, is a town hall meeting on a decision of whether or not to bring in no-fault insurance. Section 5 explains the essentials of the leading model of deliberation dialogue used in artificial intelligence at this point – the McBurney, Hitchcock, and Parsons (MHP) model.
The previous chapters have shown how practical reasoning is used in arguments, but it will turn out to be vitally important to understand how it is used in explanations. It will even be shown at the end of this chapter how the same example of practical reasoning in a discourse can combine explanations with arguments. Hence, there arises the problem of building a model of explanation to reveal precisely how practical reasoning is used in explanations. The key to solving it, as will be shown in this chapter, is to broaden the study of practical reasoning to take into account not only its structure as a chain of reasoning, but also how that same kind of reasoning can be used in different ways in different communicative settings. An argument will be shown to be a response to a particular kind of question, while an explanation will be seen as a response to another kind of question.
Recent work in artificial intelligence has taken the approach that an explanation is best seen as a transfer of understanding from one party to another in a dialogue where one party is a questioner who asks why or how something works and the other party attempts to fulfill this request (Cawsey, 1992; Moore, 1995; Moulin et al., 2002). Recent literature in philosophy of science seems to be gradually moving toward this approach, but there is an open question of how it can be represented using a formal structure (Trout, 2002). Since explanations and arguments are sometimes hard to distinguish, the first step is to provide some way of representing the distinction between them in their formal structure. In this chapter, the Why2 Dialogue System is presented as a formal model showing how the difference between argument and explanation resides in the pre- and post-conditions for the speech act of requesting an argument and the speech act of requesting an explanation. It is an extension of earlier dialogue systems (Walton, 2004, 2007a, 2011).
Most of the literature on burden of proof in argumentation studies and AI has concentrated so far on the persuasion type of dialogue. This concentration is natural enough, because the bulk of this literature has concentrated on burden of proof in legal argumentation. The most significant exception is deliberation dialogue, where some recent work has begun to tentatively investigate burden of proof in that setting. The problem now posed is whether burden of proof operates in deliberation dialogue in the same way that it operates in persuasion dialogue, or whether there are essential differences in this regard between the two types of dialogue.
This chapter analyzes four examples of deliberation dialogue where burden of proof poses a problem. Based on analysis of the argumentation in these examples, a working hypothesis is put forward. It is that burden of proof only becomes relevant when deliberation dialogue shifts, at the beginning of the argumentation stage, to a persuasion dialogue. The hypothesis is that the shift can be classified as embedding one type of dialogue into another, meaning that the goal of the first type of dialogue continues to be supported once the transition to the second type of dialogue has been made (Walton and Krabbe, 1995, 102). In other instances, it is well known that a shift can be illicit, where the advent of the second dialogue interferes with the fulfillment of the goal of the first one. It has also been shown that such shifts can be associated with fallacies, as well as other logical and communicative problems (Walton, 2007, chapter 6).