The previous chapters have shown how practical reasoning is used in arguments, but it will turn out to be vitally important to understand how it is used in explanations. It will even be shown at the end of this chapter how the same example of practical reasoning in a discourse can combine explanations with arguments. Hence, there arises the problem of building a model of explanation to reveal precisely how practical reasoning is used in explanations. The key to solving it, as will be shown in this chapter, is to broaden the study of practical reasoning to take into account not only its structure as a chain of reasoning, but also how that same kind of reasoning can be used in different ways in different communicative settings. An argument will be shown to be a response to a particular kind of question, while an explanation will be seen as a response to another kind of question.
Recent work in artificial intelligence has taken the approach that an explanation is best seen as a transfer of understanding from one party to another in a dialogue where one party is a questioner who asks why or how something works and the other party attempts to fulfill this request (Cawsey, 1992; Moore, 1995; Moulin et al., 2002). Recent literature in philosophy of science seems to be gradually moving toward this approach, but there is an open question of how it can be represented using a formal structure (Trout, 2002). Since explanations and arguments are sometimes hard to distinguish, the first step is to provide some way of representing the distinction between them in their formal structure. In this chapter, the Why2 Dialogue System is presented as a formal model showing how the difference between argument and explanation resides in the pre- and post-conditions for the speech act of requesting an argument and the speech act of requesting an explanation. It is an extension of earlier dialogue systems (Walton, 2004, 2007a, 2011).
Practical reasoning of the kind described by philosophers since Aristotle (384–322 BC) is identified as goal-based reasoning that works by finding a sequence of actions that leads toward or reaches an agent's goal. Practical reasoning, as described in this book, is used by an agent to select an action from a set of available alternative actions the agent sees as open in its given circumstances. A practical reasoning agent can be a human or an artificial agent – for example, software, a robot, or an animal. Once the action is selected as the best or most practical means of achieving the goal in the given situation, the agent draws a conclusion that it should go ahead and carry out this action. Such an inference is fallible, as long as the agent's knowledge base is open to new information. It is an important aspect of goal-based practical reasoning that if an agent learns that its circumstances or its goals have changed and a different action might now become the best one available, it can (and perhaps should) “change its mind.”
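The action-selection step described above can be sketched in a few lines of code. Everything here (the candidate actions, the scoring function, the numbers) is an illustrative assumption of ours, not part of the chapter:

```python
# Minimal sketch of defeasible goal-based action selection (illustrative only).
def select_action(actions, goal_fit):
    """Return the action the agent currently sees as the best means to its goal."""
    return max(actions, key=goal_fit)

# Hypothetical circumstances: the bus looks like the best way to get to work.
fit = {"walk": 0.2, "bus": 0.8, "taxi": 0.5}
assert select_action(fit, fit.get) == "bus"

# New information arrives (say, a bus strike); the inference is defeasible,
# so re-running the selection lets the agent "change its mind".
fit["bus"] = 0.0
assert select_action(fit, fit.get) == "taxi"
```

The point of the sketch is the second half: because the conclusion depends on the current state of the knowledge base, it is retracted and recomputed when the circumstances change.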
In computer science, practical reasoning is more likely to be known as means-end reasoning (where an end is taken to mean a goal), goal-based reasoning, or goal-directed reasoning (Russell and Norvig, 1995, 259). Practical reasoning is fundamental to artificial intelligence (Reed and Norman, 2003), where it is called means-end analysis (Simon, 1981). In goal-based problem-solving, a solution is sought by finding, among the available means, a sequence of actions that solves the problem. An intelligent goal-seeking agent needs to receive information about its external circumstances by means of sensors, and store it in its memory. There are differences of opinion about how practical goal-based reasoning should be modeled. One issue is whether it should be seen as merely an instrumental form of reasoning, or whether it should also be based on values. Many automated systems of practical reasoning for multi-agent deliberation (Gordon and Richter, 2002; Atkinson et al., 2004a, 2004b; Rahwan and Amgoud, 2006) take values into account.
The most basic problem that led to the other problems studied in the book was posed in Chapter 1. If you try to model the given instance of practical reasoning as a sequence of argumentation using only an argument map, you are led to a state-space explosion. Throughout the subsequent chapters we have moved toward a solution to this problem by embedding practical reasoning in an overarching procedural framework in which any given sequence of practical reasoning should be viewed as part of a deliberation dialogue having an opening stage and a closing stage. This problem led to Chapter 6, where criteria for the proper closure of a deliberation dialogue were proposed. As shown in Chapter 6, practical reasoning is most characteristically used in deliberation dialogue – goal-directed dialogue in which a choice for action needs to be made or a problem needs to be solved. It was also shown in Chapter 6 that deliberation dialogue is often mixed in with information-seeking dialogue as new evidence of the circumstances comes in. Also, as early as Chapter 2, it was shown that practical reasoning is used in persuasion dialogue – for example, in ads for medical products.
Atkinson et al. (2013) showed that there are also many shifts in a deliberation dialogue to persuasion dialogue intervals. Typically, for example, a proposal that has been put forward as part of a deliberation dialogue is attacked by a critic who shifts to a persuasion dialogue in order to attack the arguments that were used to support the proposal that was made in the deliberation dialogue. It is important to see that there is nothing inherently illegitimate about such shifts.
However, a general problem arises from the variability of different communicative multi-agent settings in which practical reasoning is used. As seen in the examples from Chapter 6, deliberation dialogue is the most important and central setting in which practical reasoning is used, and the true colors of practical reasoning as a form of argumentation really begin to emerge once we embed it into this setting. Nevertheless, we also need to confront the underlying problem that in the argumentation in natural language examples where practical reasoning is used, there so often appear to be dialectical shifts from deliberation dialogue to persuasion dialogue.
In Chapter 2 it was shown how there are different frameworks of communication in which arguments can be put forward and critically questioned, including persuasion dialogue, information-seeking dialogue, and deliberation dialogue. This chapter will focus almost exclusively on deliberation dialogue, but will also deal with related issues where there is a shift to or from one of these other types of dialogue to deliberation dialogue. It will be shown how practical reasoning is woven through every aspect of deliberation dialogue, and how deliberation dialogue represents the necessary framework for analyzing and evaluating typical instances of practical reasoning in natural language cases of argumentation that we are all familiar with. This chapter will also show how formal models of deliberation dialogue built as artificial intelligence tools for multi-agent systems turn out to be extremely useful for solving the closure problem of practical reasoning in multi-agent settings.
The chapter begins by using four examples to show how practical reasoning is embedded in everyday deliberations of a kind all of us are familiar with. The first one, in Section 1, is a case of a man trying to solve a problem with his printer by looking on Google for advice and then using a trial-and-error procedure to try to fix it. The second one, in Section 2, is an example of a couple trying to arrive at a decision on which home to buy, having narrowed the choices down to three candidates: a condominium, a two-story house, and a bungalow. The third one, in Section 3, is a case of a policy debate, showing how CAS employs practical reasoning in this setting. The fourth, in Section 4, is a town hall meeting on a decision about whether or not to bring in no-fault insurance. Section 5 explains the essentials of the leading model of deliberation dialogue currently used in artificial intelligence – the McBurney, Hitchcock, and Parsons (MHP) model.
Ascription of an intention to an agent is especially important in law. In criminal law the intent to commit a criminal act, called mens rea ('guilty mind'), is the key element needed to prosecute a defendant for a crime. For example, in order to prove that a defendant has committed the crime of theft of an object, it needs to be established that the defendant intended never to return the object to its owner. Studying examples of how intention is proved in law is an important resource for giving us clues about how reasoning to an intention should be carried out. Intention is also fundamentally important in ethical reasoning, where there are problems about how the end can justify the means.
This chapter introduces the notion of inference to the best explanation, often called abductive reasoning, and presents recent research on evidential reasoning that uses the concept of a so-called script or story as a central component. The introduction of these two argumentation tools shows how they help move toward a solution to the longstanding problem of analyzing how practical reasoning from circumstantial evidence can be used to support or undermine the hypothesis that an agent has a particular intention. Legal examples are used to show that even though ascribing an intention to an agent is an evaluation procedure that combines argumentation and explanation, it can be rationally carried out by using a practical reasoning model that accounts for the weighing of factual evidence on both sides of a disputed case.
The examples studied in this chapter will involve cases where practical reasoning is used as the glue that combines argumentation with explanation. Section 1 considers a simple example of a message on the Internet advising how to mount a flagpole bracket to a house. The example tells the reader how to take the required steps to attach the bracket and mount a flagpole, so that the reader can show his patriotism by displaying a flag on his house. The example text is clearly an instance of practical reasoning. The author of the message presumes that the reader has a goal, and tells the reader how to fulfill that goal by carrying out a sequence of actions.
Logic Forms (LF) are simple, first-order logic knowledge representations of natural language sentences. Each noun, verb, adjective, adverb, pronoun, preposition, and conjunction generates a predicate. LF systems usually identify the syntactic function by means of syntactic rules, but this approach is difficult to apply to languages with highly flexible and ambiguous syntax, such as Spanish. In this study, we present a mixed method for the derivation of the LF of sentences in Spanish that combines hard-coded rules and a classifier inspired by semantic role labeling. Thus, the main novelty of our proposal is the way the classifier is applied to generate the predicates of the verbs, while rules are used to translate the rest of the predicates, which are more straightforward and unambiguous than the verbal ones. The proposed mixed system uses a supervised classifier to integrate syntactic and semantic information in order to help overcome the inherent ambiguity of Spanish syntax. This task is accomplished in a way similar to the semantic role labeling task. We use properties extracted from the AnCora-ES corpus to train the classifier. A rule-based system is used to obtain the LF of the rest of the phrase. The rules are obtained by exploring the syntactic tree of the phrase and encoding the syntactic production rules. The LF algorithm has been evaluated using shallow parsing on some straightforward Spanish phrases. The verb argument labeling task achieves 84% precision, and the proposed mixed LF method surpasses a rules-only system by 11%.
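As a toy illustration of the Logic Form notation (ours, not the authors' system), each content word of a hand-analyzed subject–verb–object sentence contributes one predicate, with shared variables linking the verb to its arguments:

```python
def toy_logic_form(subject, verb, obj):
    # Nouns get an entity variable; the verb gets an event variable plus the
    # variables of its subject and object. Deciding which variable fills which
    # verb slot is the step the paper's classifier handles for real,
    # ambiguous Spanish syntax.
    return [f"{subject}:n(x1)", f"{verb}:v(e1, x1, x2)", f"{obj}:n(x2)"]

print(toy_logic_form("hombre", "conducir", "coche"))
# ['hombre:n(x1)', 'conducir:v(e1, x1, x2)', 'coche:n(x2)']
```

The rule-based half of the mixed system covers the non-verbal predicates, which (as the abstract notes) are far less ambiguous than the verbal ones.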
We investigate the problem of improving performance in distributional word similarity systems trained on sparse data, focusing on a family of similarity functions we call Dice-family functions (Dice 1945, Ecology 26(3): 297–302), including the similarity functions introduced in Lin (1998, Proceedings of the 15th International Conference on Machine Learning, 296–304) and Curran (2004, PhD thesis, School of Informatics, University of Edinburgh), as well as a generalized version of the Dice coefficient used in data mining applications (Strehl 2000, 55). We propose a generalization of the Dice-family functions which uses a weight parameter α to make the similarity functions asymmetric. We show that this generalized family of functions (α systems) all belong to the class of asymmetric models first proposed in Tversky (1977, Psychological Review 84: 327–352), and in a multi-task evaluation of ten word similarity systems, we show that α systems have the best performance across word ranks. In particular, we show that α-parameterization substantially improves the correlations of all Dice-family functions with human judgements on three word sets, including the Miller–Charles/Rubenstein–Goodenough word set (Miller and Charles 1991, Language and Cognitive Processes 6(1): 1–28; Rubenstein and Goodenough 1965, Communications of the ACM 8: 627–633).
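For intuition, here is a set-based sketch of the α idea. The paper's functions operate over weighted feature vectors; this toy version only shows how a Tversky-style weight parameter makes the measure asymmetric, with α = 0.5 recovering the symmetric Dice coefficient:

```python
def dice(a, b):
    """Classic Dice coefficient on feature sets: 2|A∩B| / (|A| + |B|)."""
    return 2 * len(a & b) / (len(a) + len(b))

def tversky(a, b, alpha=0.5):
    """Asymmetric generalization: |A∩B| / (|A∩B| + α|A−B| + (1−α)|B−A|)."""
    inter = len(a & b)
    return inter / (inter + alpha * len(a - b) + (1 - alpha) * len(b - a))

a, b = {"cat", "dog", "fish", "bird"}, {"fish", "bird"}
assert abs(tversky(a, b, 0.5) - dice(a, b)) < 1e-9  # alpha = 0.5 gives Dice
assert tversky(a, b, alpha=1.0) == 0.5  # fully penalize features only in a
assert tversky(a, b, alpha=0.0) == 1.0  # ignore features only in a
```

Since |A| + |B| = 2|A∩B| + |A−B| + |B−A|, substituting α = 0.5 makes the two definitions agree term by term, which is the sense in which the α systems generalize the Dice family.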
‘Deep-syntactic’ dependency structures that capture the argumentative, attributive and coordinative relations between full words of a sentence have great potential for a number of NLP applications. The abstraction degree of these structures is in between the output of a syntactic dependency parser (connected trees defined over all words of a sentence and language-specific grammatical functions) and the output of a semantic parser (forests of trees defined over individual lexemes or phrasal chunks and abstract semantic role labels which capture the frame structures of predicative elements and drop all attributive and coordinative dependencies). We propose a parser that provides deep-syntactic structures. The parser has been tested on Spanish, English and Chinese.
This article presents silhouette–attraction (Sil–Att), a simple and effective method for text clustering, which is based on two main concepts: the silhouette coefficient and the idea of attraction. The combination of both principles allows us to obtain a general technique that can be used either as a boosting method, which improves results of other clustering algorithms, or as an independent clustering algorithm. The experimental work shows that Sil–Att is able to obtain high-quality results on text corpora with very different characteristics. Furthermore, its stable performance on all the considered corpora is indicative that it is a very robust method. This is a very interesting positive aspect of Sil–Att with respect to the other algorithms used in the experiments, whose performances heavily depend on specific characteristics of the corpora being considered.
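The silhouette coefficient that Sil–Att builds on has a simple closed form, sketched here on a one-dimensional toy example (our illustration, not the authors' Sil–Att code):

```python
def silhouette(point, own, others, dist):
    """s = (b - a) / max(a, b), where a is the mean distance from the point to
    its own cluster (the point itself excluded from `own`) and b is the mean
    distance to the nearest other cluster; s near 1 means a good fit."""
    a = sum(dist(point, p) for p in own) / len(own)
    b = min(sum(dist(point, p) for p in c) / len(c) for c in others)
    return (b - a) / max(a, b)

d = lambda p, q: abs(p - q)
# A point whose cluster-mates are nearby and whose nearest other cluster is
# far away scores close to 1 (well clustered).
assert silhouette(1.0, own=[0.0, 2.0], others=[[10.0, 11.0]], dist=d) > 0.8
```

In the article's method this per-point score is combined with an attraction notion to reassign points; the sketch only shows the quality signal being optimized.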
With this comprehensive guide you will learn how to apply Bayesian machine learning techniques systematically to solve various problems in speech and language processing. A range of statistical models is detailed, from hidden Markov models to Gaussian mixture models, n-gram models and latent topic models, along with applications including automatic speech recognition, speaker verification, and information retrieval. Approximate Bayesian inferences based on MAP, Evidence, Asymptotic, VB, and MCMC approximations are provided as well as full derivations of calculations, useful notations, formulas, and rules. The authors address the difficulties of straightforward applications and provide detailed examples and case studies to demonstrate how you can successfully use practical Bayesian inference methods to improve the performance of information systems. This is an invaluable resource for students, researchers, and industry practitioners working in machine learning, signal processing, and speech and language processing.
We propose a language-independent word normalisation method and exemplify it on modernising historical Slovene words. Our method relies on character-level statistical machine translation (CSMT) and uses only shallow knowledge. We present relevant data on historical Slovene, consisting of two (partially) manually annotated corpora and the lexicons derived from these corpora, containing historical word–modern word pairs. The two lexicons are disjoint, with one serving as the training set containing 40,000 entries, and the other as a test set with 20,000 entries. The data spans the years 1750–1900, and the lexicons are split into fifty-year slices, with all the experiments carried out separately on the three time periods. We perform two sets of experiments. In the first one – a supervised setting – we build a CSMT system using the lexicon of word pairs as training data. In the second one – an unsupervised setting – we simulate a scenario in which word pairs are not available. We propose a two-step method where we first extract a noisy list of word pairs by matching historical words with cognate modern words, and then train a CSMT system on these pairs. In both sets of experiments, we also optionally make use of a lexicon of modern words to filter the modernisation hypotheses. While we show that both methods produce significantly better results than the baselines, both their accuracy and which method works best correlate strongly with the age of the texts, meaning that the choice of the best method will depend on the properties of the historical language which is to be modernised. As an extrinsic evaluation, we also compare the quality of part-of-speech tagging and lemmatisation directly on historical text and on its modernised words. We show that, depending on the age of the text, annotation on modernised words also produces significantly better results than annotation on the original text.
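The unsupervised pair-extraction step can be sketched with plain character similarity standing in for cognate matching. This is our simplification, and the words below are invented stand-ins for historical/modern Slovene forms, not data from the paper:

```python
import difflib

def extract_pairs(historical, modern_lexicon, threshold=0.8):
    """Noisily pair each historical word with its most similar modern word,
    keeping only confident matches; a CSMT system would then be trained on
    the resulting (historical, modern) pairs."""
    pairs = []
    for h in historical:
        ratio = lambda m: difflib.SequenceMatcher(None, h, m).ratio()
        best = max(modern_lexicon, key=ratio)
        if ratio(best) >= threshold:
            pairs.append((h, best))
    return pairs

# "sakon" differs from "zakon" by one character, so it clears the threshold,
# while no confident match would be emitted for an unrelated word.
print(extract_pairs(["sakon"], ["zakon", "miza"]))
# [('sakon', 'zakon')]
```

Because the list is noisy, the paper filters the final modernisation hypotheses against a modern-word lexicon; the threshold here plays the analogous precision-over-recall role.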
With NLP services now widely available via cloud APIs, tasks like named entity recognition and sentiment analysis are virtually commodities. We look at what's on offer, and make some suggestions for how to get rich.
Ontologising is the task of associating terms, in text, with an ontological representation of their meaning, in an ontology. In this article, we revisit algorithms that have previously been used to ontologise the arguments of semantic relations in a relationless thesaurus, resulting in a wordnet. For increased flexibility, the algorithms do not use the extraction context when selecting the most adequate synsets for each term argument. Instead, they exploit a term-based lexical network, which can be established from knowledge extracted automatically or obtained from the resource to which the relations are being ontologised. Building on the latter idea, we carried out several experiments, which show that the algorithms can be used both for creating wordnets and for enriching them. Besides describing the algorithms in some detail, we report and discuss these experiments, which target both English and Portuguese, and their results.