Argumentation, which can be abstractly defined as the interaction of different arguments for and against some conclusion, is an important skill to learn for everyday life, law, science, politics and business. The best way to learn it is to try it out on real instances of arguments found in everyday conversational exchanges and legal argumentation. The introductory chapter of this book gives a clear general idea of what the methods of argumentation are and how they work as tools that can be used to analyze arguments. Each subsequent chapter then applies these methods to a leading problem of argumentation. Today the field of computing has embraced argumentation as a paradigm for research in artificial intelligence and multi-agent systems. Another purpose of this book is to present and refine tools and techniques from computing as components of the methods that can be handily used by scholars in other fields.
We have developed a heuristic method for unsupervised parsing of unrestricted text. Our method relies on detecting certain patterns of part-of-speech tag sequences of words in sentences. This detection is based on statistical data obtained from the corpus and allows us to classify part-of-speech tags into classes that play specific roles in the parse trees. These classes are then used to construct the parse trees of new sentences via a set of deterministic rules. Aiming to assess the viability of the method on different languages, we have tested it on English, Spanish, Italian, Hebrew, German, and Chinese. We have obtained a significant improvement over other unsupervised approaches for some languages, including English, and provided, as far as we know, the first results of this kind for others.
Events play an important role in natural language processing and information retrieval due to numerous event-oriented texts and information needs. Many natural language processing and information retrieval applications could benefit from a structured event-oriented document representation. In this paper, we propose event graphs as a novel way of structuring event-based information from text. Nodes in event graphs represent the individual mentions of events, whereas edges represent the temporal and coreference relations between mentions. Contrary to previous natural language processing research, which has mainly focused on individual event extraction tasks, we describe a complete end-to-end system for event graph extraction from text. Our system is a three-stage pipeline that performs anchor extraction, argument extraction, and relation extraction (temporal relation extraction and event coreference resolution), each at a performance level comparable with the state of the art. We present EvExtra, a large newspaper corpus annotated with event mentions and event graphs, on which we train and evaluate our models. To measure the overall quality of the constructed event graphs, we propose two metrics based on the tensor product between automatically and manually constructed graphs. Finally, we evaluate the overall quality of event graphs with the proposed evaluation metrics and perform a headroom analysis of the system.
Answer Set Prolog is a comparatively new knowledge representation (KR) language with roots in older nonmonotonic logics and the logic programming language Prolog. Early proponents of the logical approach to artificial intelligence believed that the classical logical formalism called first-order logic would serve as the basis for the application of the axiomatic method to the development of intelligent agents. In this chapter we briefly describe some important developments that forced them to question this belief and to work instead on the development of nonclassical knowledge representation languages including ASP. To make the chapter easier for people not familiar with mathematical logic, we give a very short introduction to one of its basic logical tools — first-order logic.
First-Order Logic (FOL)
First-order logic is a formal logical system that consists of a formal language, an entailment or consequence relation for this language, and a collection of inference rules that can be used to obtain these consequences. The language of FOL is parametrized with respect to a signature Σ. The notions of term and atom over Σ are the same as those defined in Section 2.1. The statements of FOL (called FOL formulas) are built from atoms using Boolean logical connectives and the quantifiers ∀ (for all) and ∃ (there exists). Atoms are formulas. If A and B are formulas and X is a variable, then (A ∧ B), (A ∨ B), (A ⊃ B), ¬A, ∀X A, and ∃X A are formulas.
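For example, for a hypothetical signature containing object constants john and mary, a unary predicate symbol person, and a binary predicate symbol parent_of (our own illustrative choice, not a signature from the text), the expressions person(john), (person(john) ∧ person(mary)), ¬parent_of(mary, john), and ∀X (person(X) ⊃ ∃Y parent_of(Y, X)) are all FOL formulas; the last one says that every person has a parent.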
In this chapter we discuss how to build agents capable of finding explanations of unexpected observations. To do that, we divide the actions of our domain into two disjoint classes: agent actions and exogenous actions. As expected, the former are those performed by the agent associated with the domain, and the latter are those performed by nature or by other agents. As usual, we make two simplifying assumptions:
The agent is capable of making correct observations, performing actions, and recording these observations and actions.
Normally the agent is capable of observing all relevant exogenous actions occurring in its environment.
Note that the second assumption is defeasible — some exogenous actions can remain unobserved. These assumptions hold in many realistic domains and are suitable for a broad class of applications. In other domains, however, the effects of actions and the truth-values of observations can only be known with a substantial degree of uncertainty, which cannot be ignored in the modeling process. We comment on such situations in Chapter 11, which deals with probabilistic reasoning.
In our setting a typical diagnostic problem is informally specified as follows:
• A symptom consists of a recorded history of the system such that its last collection of observations is unexpected (i.e., it contradicts the agent's expectations).
• An explanation of a symptom is a collection of unobserved past occurrences of exogenous actions that may account for the unexpected observations.
This notion of explanation is closely connected with our second simplifying assumption.
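The following is a minimal sketch of how such an explanation can be computed in ASP, written in clingo syntax; the toy domain, the predicate names, and the rules are our own illustrative choices and not the book's formulation. The agent turns a switch on at step 0 and, unexpectedly, observes at step 1 that the bulb is not lit; the only way to restore consistency is to assume an unobserved past occurrence of the exogenous action break.

% Steps and the (only) exogenous action of this toy domain.
step(0..1).
exo_action(break).

% Turning the switch on lights the bulb unless the bulb is broken.
holds(bulb_on, T+1) :- occurs(turn_on, T), not holds(broken, T+1), step(T), step(T+1).

% The exogenous action break breaks the bulb; broken bulbs stay broken.
holds(broken, T+1) :- occurs(break, T), step(T), step(T+1).
holds(broken, T+1) :- holds(broken, T), step(T), step(T+1).

% Recorded history: the agent's action and the unexpected observation.
occurs(turn_on, 0).
obs(neg(bulb_on), 1).

% Reality check: the agent's predictions may not contradict its observations.
:- obs(neg(F), T), holds(F, T).

% Candidate explanations: unobserved past occurrences of exogenous actions
% (here, before the current step 1).
{ occurs(A, T) : exo_action(A), step(T), T < 1 }.
expl(A, T) :- exo_action(A), occurs(A, T).

#show expl/2.

Running this program with an ASP solver yields a single answer set containing expl(break, 0): the only way to account for the unlit bulb is to assume that the bulb was broken at step 0. A real diagnostic module would, in addition, prefer minimal explanations and use a complete action theory with inertia for all fluents; the sketch only shows the overall shape of the reasoning.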
We complete the book with a short discussion of an inference mechanism that is very different from the one presented in Chapter 7. It is only applicable to normal logic programs (nlps), and although sound with respect to the answer set semantics of nlps, it is not complete and might not even terminate; however, it has a number of advantages. In particular, it is applicable to nlps with infinite answer sets, and it does not require grounding. The algorithm is implemented in interpreters for the programming language Prolog. The language, introduced in the late 1970s, is still one of the most popular universal programming languages based on logic. Syntactically, Prolog can be viewed as an extension of the language of nlps by a number of nonlogical features. Its inference mechanism is based on two important algorithms called unification and resolution, implemented in standard Prolog interpreters. Unification is an algorithm for matching atoms; resolution uses unification to answer queries of the form “find X such that q(X) is the consequence of an nlp Π.”
We end this chapter with several examples of the use of Prolog for finding declarative solutions to nontrivial programming problems. Procedural solutions to these problems are longer and much more complex.
The Prolog Interpreter
We start by defining unification and SLD resolution — an algorithm used by Prolog interpreters to answer queries to definite programs (i.e., nlps without default negation).
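For example, consider the following small definite program (our own illustration, not an example from the text):

parent(ann, bob).
parent(bob, carol).
ancestor(X, Y) :- parent(X, Y).
ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).

Given the query ?- ancestor(ann, X)., the interpreter unifies the query atom with the head of the first ancestor rule, reducing the query to the goal parent(ann, X); this goal unifies with the fact parent(ann, bob), producing the answer X = bob. On backtracking, the second rule reduces the query to the goals parent(ann, Z), ancestor(Z, X), which eventually produce the answer X = carol.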
This is a book about knowledge representation and reasoning (KRR) — a comparatively new branch of science that serves as the foundation of artificial intelligence, declarative programming, and the design of intelligent agents — knowledge-intensive software systems capable of exhibiting intelligent behavior. Our main goal is to show how a software system can be given knowledge about the world and itself and how this knowledge can be used to solve nontrivial computational problems. There are several approaches to KRR that both compete with and complement each other. The approaches differ primarily by the languages used to represent knowledge and by corresponding computational methods. This book is based on a knowledge representation language called Answer Set Prolog (ASP) and the answer-set programming paradigm — a comparatively recent branch of KRR with a well-developed theory, efficient reasoning systems, methodology of use, and a growing number of applications.
The text can be used for classes in knowledge representation, declarative programming, and artificial intelligence for advanced undergraduate or graduate students in computer science and related disciplines, including software engineering, logic, and cognitive science. It will also be useful to serious researchers in these fields who would like to learn more about the answer-set programming paradigm and its use for KRR. Finally, we hope that it will be of interest to anyone with a sense of wonder about the amazing ability of humans to derive volumes of knowledge from a collection of basic facts.
What follows is a very brief, operational introduction to two currently existing ASP solvers, clingo and DLV. Since the field is developing rapidly, we recommend that users of this information learn about the most current versions of these solvers. To find DLV, go to http://www.dlvsystem.com. To find clingo, go to http://potassco.sourceforge.net/. For quick access to the manuals, just search online for DLV manual or clingo manual.
To find all answer sets of a given program, type
clingo 0 program_name
or
dlv -n=0 program_name
The 0 tells the programs to return all answer sets. If you omit the parameter when calling clingo, the program will return only one answer set; DLV will return all answer sets. Changing the number will return the corresponding number of answer sets. Both systems use - and :- instead of ¬ and ←, respectively. Epistemic disjunction or is denoted by |.
Often we may want to limit what a solver will output when it prints answer sets because complete sets can be large and we may only be interested in a few predicates. When using clingo, it is useful to learn the #show commands. For example, if you had a program with predicate mother(X, Y), and you included the following line in your program:
#show mother/2.
clingo would output only those atoms of the answer sets that are formed by predicate mother.
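For example, suppose the file family.lp (a toy program of our own; the facts and the grandmother rule are purely illustrative) contains

mother(ann, bob).
mother(carol, ann).
grandmother(X, Y) :- mother(X, Z), mother(Z, Y).
#show mother/2.

Running clingo 0 family.lp prints the answer set of the program restricted to the two mother atoms; the derived grandmother atom is not shown.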
The goal of artificial intelligence is to learn how to build software components of intelligent agents capable of reasoning and acting in a changing environment. To exhibit intelligent behavior, an agent should have a mathematical model of its environment and its own capabilities and goals, as well as algorithms for achieving these goals. Our aim is to discover such models and algorithms and to learn how to use them to build practical intelligent systems. Why is this effort important? There are philosophical, scientific, and practical reasons. Scientists are getting closer to understanding ancient enigmas such as the origins and the physical and chemical structure of the universe, and the basic laws of development of living organisms, but still know comparatively little about the enigma of thinking. Now, however, we have the computer — a new tool that gives us the ability to test our theories of thought by designing intelligent software agents. In the short time that this tool has been applied to the study of reasoning, it has yielded a greater understanding of cognitive processes and continues to produce new insights on a regular basis, giving us much hope for the future. On the software engineering front, mathematical models of intelligent agents and the corresponding reasoning algorithms help develop the paradigm of declarative programming, which may lead to a simpler and more reliable programming methodology. And, of course, knowledge-intensive software systems, including decision support systems, intelligent search engines, and robots, are of great practical value.