5 - Ontologies
- Gheorghe Tecuci, Dorin Marcu, Mihai Boicu, and David A. Schum, George Mason University, Virginia
- Book: Knowledge Engineering
- Published online: 05 September 2016
- Print publication: 08 September 2016, pp 155-173
Summary
WHAT IS AN ONTOLOGY?
An ontology is an explicit formal specification of the terms that are used to represent an agent's world (Gruber, 1993).
In an ontology, definitions associate names of entities in the agent's world (e.g., classes of objects, individual objects, relations, hypotheses, problems) with human-readable text and formal axioms. The text describes what a name means. The axioms constrain the interpretation and use of a term. Examples of terms from the ontology of the PhD advisor assessment agent include student, PhD student, professor, course, and publication. The PhD advisor assessment agent is a Disciple agent that helps a PhD student in selecting a PhD advisor based on a detailed analysis of several factors, including professional reputation, learning experience of an advisor's students, responsiveness to students, support offered to students, and quality of the results of previous students (see Section 3.3). This agent will be used to illustrate the various ontology issues discussed in this chapter.
The ontology is a hierarchical representation of the objects from the application domain. It includes both descriptions of the different types of objects (called concepts or classes, such as professor or course) and descriptions of individual objects (called instances or individuals, such as CS580), together with the properties of each object and the relationships between objects.
The underlying idea of the ontological representation is to represent knowledge in the form of a graph (similar to a concept map) in which the nodes represent objects, situations, or events, and the arcs represent the relationships between them, as illustrated in Figure 5.1.
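The graph view described above can be sketched in a few lines of code. The following is a minimal, hypothetical illustration, not Disciple's actual representation: concepts and instances are nodes, `subconcept of` and `instance of` links form the hierarchy, and labeled arcs relate objects, as in the PhD advisor assessment domain. The relations "enrolled in" and "teaches" are assumed for illustration.

```python
# Minimal sketch of an ontology fragment as a graph (illustrative only).

# Concept hierarchy: each concept points to its direct superconcept.
subconcept_of = {
    "PhD student": "student",
    "assistant professor": "professor",
    "student": "person",
    "professor": "person",
}

# Instances with their direct concept, e.g., CS580 is a course.
instance_of = {
    "CS580": "course",
    "Bob Sharp": "PhD student",
    "John Doe": "professor",
}

# Labeled arcs between objects: (object, relation, object).
relations = [
    ("Bob Sharp", "enrolled in", "CS580"),
    ("John Doe", "teaches", "CS580"),
]

def all_concepts_of(name):
    """Walk up the hierarchy to collect every concept covering an entity."""
    concepts = []
    current = instance_of.get(name) or subconcept_of.get(name)
    while current is not None:
        concepts.append(current)
        current = subconcept_of.get(current)
    return concepts

print(all_concepts_of("Bob Sharp"))  # ['PhD student', 'student', 'person']
```

Walking the hierarchy in this way is what lets an agent recognize, for example, that an assertion about any student also applies to Bob Sharp.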
The ontology plays a crucial role in cognitive assistants, being at the basis of knowledge representation, user–agent communication, problem solving, knowledge acquisition, and learning.
First, the ontology provides the basic representational constituents for all the elements of the knowledge base, such as the hypotheses, the hypothesis reduction rules, and the solution synthesis rules. It also allows the representation of partially learned knowledge, based on the plausible version space concept (Tecuci, 1998), as discussed in Section 7.6.
Second, the agent's ontology enables the agent to communicate with the user and with other agents by declaring the terms that the agent understands. Consequently, the ontology enables knowledge sharing and reuse among agents that share a common vocabulary that they understand.
Preface
- pp xv-xx
Summary
BOOK PURPOSE
This is a book on knowledge engineering, the discipline concerned with the development of intelligent agents that use knowledge and reasoning to perform problem-solving and decision-making tasks. The book covers the theory and practice of the main stages in the development of a knowledge-based agent: understanding the application domain, modeling problem solving in that domain, developing the ontology, learning the reasoning rules, and testing the agent. However, it does this by focusing on a special class of agents: cognitive assistants that learn complex problem-solving expertise directly from human experts, support experts and nonexperts in problem solving and decision making, and teach their problem-solving expertise to students. These are learning agents that are taught by their users in ways that are similar to how a student, an apprentice, or a new collaborator is taught, through problem-solving examples and explanations and by supervising and correcting their behavior. Because such agents learn to replicate the problem-solving behavior of their users, we have called them Disciple agents.
This book presents a significant advancement in the theory and practice of knowledge engineering, where many tasks are performed by a typical computer user and a learning agent, with only limited support from a knowledge engineer. To further simplify the development of cognitive assistants by typical users, we have focused on the development of cognitive assistants for evidence-based reasoning. Evidence-based reasoning is at the core of many problem-solving and decision-making tasks in a wide variety of domains, including intelligence analysis, cybersecurity, law, forensics, medicine, physics, chemistry, history, archaeology, education, and many others. Nevertheless, the last part of the book presents Disciple agents for applications that did not involve evidence-based reasoning.
Because knowledge engineering is a practical activity, it is best learned by doing. Therefore, this book presents the theory and methodology of developing cognitive assistants in conjunction with a practical tool, Disciple-EBR, a learning agent shell for evidence-based reasoning (EBR). Consequently, each chapter typically contains a theoretical part presenting general concepts and methods, a methodological part with guidelines on the application of the methods, and a practical part on the actual employment of these methods with Disciple-EBR. It also includes project assignments and review questions.
Acknowledgments
- pp xxi-xxii
11 - Abstraction of Reasoning
- pp 329-337
Summary
Up until this point, the methodology for developing intelligent agents has encouraged the expert to be very explicit and detailed, to provide clear descriptions of the hypotheses (or problems), and to formulate detailed questions and answers that guide the reduction of hypotheses (or problems) to subhypotheses (or subproblems). This is important because it facilitates a clear and correct logic and the learning of the reasoning rules.
The developed agents can solve complex problems through the generation of reasoning trees that can be very large, with hundreds or even thousands of nodes. In such cases, browsing and understanding these reasoning trees become a challenge.
In this section, we will discuss an approach to abstract a large reasoning tree that involves abstracting both hypotheses/problems and subtrees. The goal is to obtain a simpler representation where the abstract tree has fewer nodes and each node has a simpler description. At the same time, however, we want to maintain the correspondence between the abstract tree and the original tree, in order to have access to the full descriptions of the nodes.
STATEMENT ABSTRACTION
By abstraction of a statement (hypothesis or problem), we simply mean a shorter statement summarizing its meaning. Consider, for example, the following hypothesis:
John Doe will stay on the faculty of George Mason University for the duration of the PhD dissertation of Bob Sharp.
Any of the following shorter statements is an abstraction of the preceding hypothesis:
John Doe will stay on the faculty of George Mason University.
John Doe will stay on the faculty.
John Doe will not leave.
The expert needs to define abstractions that are short enough to simplify the display of the reasoning tree while still conveying the meaning of the original hypotheses. One abstraction technique is to eliminate some of the words, as illustrated by the first two of the preceding examples. Additionally, one may abstract phrases by using new words, as illustrated by the last example.
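One simple way to maintain the required correspondence between an abstract node and its full description is to store each user-defined abstraction paired with its original statement. The sketch below is illustrative only (the data structure is an assumption, not Disciple's internal representation):

```python
# Hypothetical sketch: pair each full hypothesis with its user-defined
# abstraction, so the abstract tree can always link back to the
# complete description of the node.

abstraction = {
    "John Doe will stay on the faculty of George Mason University "
    "for the duration of the PhD dissertation of Bob Sharp.":
        "John Doe will stay on the faculty.",
}

def abstract(statement):
    # Fall back to the full statement when no abstraction is defined.
    return abstraction.get(statement, statement)

full = ("John Doe will stay on the faculty of George Mason University "
        "for the duration of the PhD dissertation of Bob Sharp.")
print(abstract(full))  # John Doe will stay on the faculty.
```

Because the mapping is explicit, displaying the abstract tree and expanding a node back to its full hypothesis are both simple lookups.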
The specific hypothesis, “John Doe will stay on the faculty of George Mason University for the duration of the PhD dissertation of Bob Sharp,” is shown also in the upper-left part of Figure 11.1. As discussed in Section 9.10, from each specific hypothesis Disciple-EBR automatically learns a general hypothesis with applicability conditions, which can be further refined.
About the Authors
- pp xxiii-xxiv
1 - Introduction
- pp 1-45
Summary
UNDERSTANDING THE WORLD THROUGH EVIDENCE-BASED REASONING
We can try to understand the world in various ways, an obvious one being the employment of empirical methods for gathering and analyzing various forms of evidence about phenomena, events, and situations of interest to us. This will include work in all of the sciences, medicine, law, intelligence analysis, history, political affairs, current events, and a variety of other contexts too numerous to mention. In the sciences, this empirical work will involve both experimental and nonexperimental methods. In some of these contexts, notably in the sciences, we are able to devise mathematical and logical models that allow us to make inferences and predictions about complex matters of interest to us. But in every case, our understanding rests on our knowledge of the properties, uses, discovery, and marshaling of evidence. This is why we begin this book with a careful consideration of reasoning based on evidence.
What Is Evidence?
You might think this question is unnecessary since everyone knows what evidence is. However, matters are not quite that simple, since the term evidence is not so easy to define and its use often arouses controversy. One problem with the definition of evidence is that several other terms are often used synonymously with it, when in fact there are distinctions to be made among these terms that are not always apparent. Quite unnecessary controversy also arises because some believe that the term evidence originates and has meaning only in the field of law.
Consulting a dictionary does not actually assist us much in defining the term. For example, look up the term evidence in the Oxford English Dictionary and you will be led in a circle: evidence is ultimately defined as being evidence.
A variety of terms are often used as synonyms for the term evidence: data, items of information, facts, and knowledge. When examined carefully, there are some valid and important distinctions to be made among these terms, as we will now discuss.
Evidence, Data, and Information
Consider the terms data and items of information.
Data are uninterpreted signals, raw observations, or measurements, such as the number 6, the color “red,” or the sequence of dots and dashes “... --- ...”.
Information is data equipped with meaning provided by a certain context, such as “6 am,” “red traffic light,” “red tomato,” or the “S O S” emergency alert.
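The distinction can be made concrete with a small sketch. The contexts and meaning strings below are our own illustrative examples in the spirit of the text; the key point is that the same datum yields different information depending on the context that interprets it.

```python
# Data: raw, uninterpreted signals.
data = ["6", "red", "... --- ..."]

# Information: a datum paired with a context that gives it meaning.
# The same datum ("red") means different things in different contexts.
interpret = {
    ("6", "time of day"): "6 am",
    ("red", "traffic light"): "stop",
    ("red", "tomato"): "ripe tomato",
    ("... --- ...", "Morse code"): "SOS emergency alert",
}

print(interpret[("red", "traffic light")])  # stop
print(interpret[("red", "tomato")])         # ripe tomato
```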
12 - Disciple Agents
- pp 338-425
Summary
INTRODUCTION
The agent building theory, methodology, and tool presented in this book evolved over many years, with developments presented in numerous papers and a series of PhD theses (Tecuci, 1988; Dybala, 1996; Hieb, 1996; Keeling, 1998; Boicu, 2002; Bowman, 2002; Boicu, 2006; Le, 2008; Marcu, 2009). Although this book has emphasized the development of Disciple agents for evidence-based reasoning applications, the learning agent theory and technology are applicable and have been applied to a wide range of knowledge-intensive tasks, such as those discussed in Section 1.6.2.
A previous book (Tecuci, 1998) presented the status of this work at that time and included descriptions of Disciple agents for designing plans for loudspeaker manufacturing, for assessing students’ higher-order thinking skills in history or in statistics, for configuring computer systems, and for representing a virtual armored company commander in distributed interactive simulations.
More recent Disciple agents and their applications include Disciple-WA, an agent for the development of military engineering plans; Disciple-COA, for the critiquing of military courses of action; Disciple-COG, for military center of gravity determination; Disciple agents representing virtual experts for collaborative emergency response planning; Disciple-LTA, for intelligence analysis; Disciple-FS, for regulatory compliance in financial services industries; Disciple-WB, for assessing the believability of websites; and Disciple agents for modeling the behavior of violent extremists.
The following sections present four of these agents and their applications. While all illustrate the general agent development approach discussed in this book, they differ in some of their capabilities and appearance, each reflecting a different stage or trajectory in the development of the Disciple approach.
DISCIPLE-WA: MILITARY ENGINEERING PLANNING
The Workaround Planning Problem
The workaround planning problem consists of assessing how rapidly and by what method a military unit can reconstitute or bypass damage to a transportation infrastructure, such as a damaged and/or mined bridge, a blocked tunnel, or a cratered road (Cohen et al., 1998; Jones, 1998).
10 - Rule Refinement
- pp 294-328
Frontmatter
- pp i-iv
Contents
- pp v-xiv
4 - Modeling the Problem-Solving Process
- pp 113-154
References
- pp 433-442
7 - Reasoning with Ontologies and Rules
- pp 202-221
3 - Methodologies and Tools for Agent Design and Development
- pp 83-112
Appendixes
- pp 443-446
Summary
SUMMARY: KNOWLEDGE ENGINEERING GUIDELINES
Knowledge Base Guidelines
Guideline 3.1. Work with only one knowledge base loaded in memory (p. 111)
Guideline 3.2. Create a knowledge base and save successive versions (p. 111)
Modeling Guidelines
Guideline 4.1. Structure the modeling process based on the agent's specification (p. 147)
Guideline 4.2. Define reduction trees in natural language using simple questions (p. 148)
Guideline 4.3. Identify the specific instances, the generic instances, and the constants (p. 148)
Guideline 4.4. Guide the reduction by the possible need of future changes (p. 149)
Guideline 4.5. Learn and reuse reduction patterns (p. 149)
Ontology Development Guidelines
Guideline 6.1. Define similar siblings (p. 186)
Guideline 6.2. Group similar siblings under natural concepts (p. 187)
Guideline 6.3. Recognize that a single subconcept may indicate ontology incompleteness or error (p. 187)
Guideline 6.4. Adopt and follow a naming convention (p. 188)
Guideline 6.5. Name subconcepts based on superconcepts (p. 189)
Guideline 6.6. Represent well-established categories from the real world as concepts (p. 195)
Guideline 6.7. Define concepts and instances to represent knowledge corresponding to n-ary relations (p. 196)
Guideline 6.8. Define feature names that distinguish them from concept names (p. 196)
Guidelines for Rule and Hypothesis Learning
Guideline 9.1. Properly identify all the entities in the example before starting rule learning (p. 285)
Guideline 9.2. Avoid learning from examples that are too specific (p. 286)
Guideline 9.3. Use modeling-based ontology extension before starting rule learning (p. 286)
Guideline 9.4. Carefully define the domains and the ranges of the features (p. 286)
Guideline 9.5. Provide hints to guide explanation generation (p. 288)
Guideline 9.6. Avoid learning rules without explanations (p. 288)
Guideline 9.7. Recognize concepts in the reasoning tree (p. 288)
Guidelines for Rule Refinement
Guideline 10.1. Assess similar hypotheses to refine the rules (p. 321)
Guideline 10.2. Extend the ontology to define failure explanations (p. 321)
Abstraction Guideline
Guideline 11.1. Define short context-dependent hypothesis names for the abstract tree (p. 334)
Planning Guidelines
Guideline 12.1. Use a plausible task ordering when specifying a task decomposition (p. 398)
Guideline 12.2. Specify the planning tree in a top-down and left-to-right order (p. 399)
Guideline 12.3. Define preconditions when reducing an abstract task to a concrete task (p. 399)
Guideline 12.4. Specify the goal of the current task to enable the specification of the follow-on tasks (p. 400)
Index
- pp 447-455
6 - Ontology Design and Development
- pp 174-201
Summary
DESIGN AND DEVELOPMENT METHODOLOGY
Ontology design is a creative process whose first step is determining the scope of the ontology by specifying its main concepts, features, and instances. One approach is to elicit them from a subject matter expert or some other sources, as will be discussed in Section 6.3.
Another approach is to extract a specification of the ontology from the reasoning trees developed as part of the rapid prototyping of the agent. During this phase, the subject matter expert and the knowledge engineer define a set of typical hypotheses (or problems) that the envisioned agent should be able to assess (or solve). Then they actually assess these hypotheses the way they would like Disciple-EBR to assess them. This process identifies very clearly what concepts and features should be present in the ontology to enable the agent to assess those types of hypotheses. This modeling-based ontology specification strategy will be discussed in Section 6.4. Once a specification of the ontology has been developed, one has to complete its design.
Because ontology design and development is a complex process, it makes sense to import relevant concepts and features from previously developed ontologies (including those from the Semantic Web) rather than defining them from scratch. In particular, one may wish to look for general-purpose ontologies, such as an ontology of time, space, or units of measures, if they are necessary to the agent under development. Significant foundational and utility ontologies have been developed and can be reused (Obrst et al., 2012), as discussed in Section 3.2.2.
The actual development of the ontology is performed by using ontology tools such as Protégé (Noy and McGuinness, 2001) or those that will be presented in this section. As will be discussed next, ontology development is an iterative process during which additional concepts, features, and instances are added while teaching the agent to assess hypotheses (or solve problems).
An important aspect to emphasize is that the ontology will always be incomplete. Moreover, one should not attempt to represent all of the agent's knowledge in the ontology. On the contrary, the ontology is intended to represent only the terms of the representation language that are used in the definitions of hypotheses and rules. The more complex knowledge will be represented as rules.
8 - Learning for Knowledge-based Agents
- pp 222-251
Summary
The previous chapters introduced the main knowledge elements from the knowledge base of an agent, which are all based on the notion of concept. This chapter presents the basic operations involved in learning, including comparing the generality of concepts, generalizing concepts, and specializing concepts. We start with a brief overview of several machine learning strategies that are particularly useful for knowledge-based agents.
INTRODUCTION TO MACHINE LEARNING
What Is Learning?
The following are definitions of learning given by some of the most prominent researchers in this field:
• “Learning denotes changes in the system that are adaptive in the sense that they enable the system to do the same task or tasks drawn from the same population more efficiently and more effectively the next time” (Simon, 1983, p. 28).
• “‘Learning’ is making useful changes in the workings of our minds” (Minsky, 1986, p. 120).
• “Learning is constructing or modifying representations of what is being experienced” (Michalski, 1986, p. 10).
• “A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.” (Mitchell, 1997, p. 2).
Given the preceding definitions, we may characterize learning as denoting the way in which people and computers:
• Acquire, discover, and organize knowledge by building, modifying, and organizing internal representations of some external reality.
• Acquire skills by gradually improving their motor or cognitive abilities through repeated practice, sometimes involving little or no conscious thought.
There are two complementary dimensions of learning: competence and efficiency. A system is improving its competence if it learns to solve a broader class of problems and to make fewer mistakes in problem solving. The system is improving its efficiency if it learns to solve the problems from its area of competence faster or by using fewer resources.
Machine learning is the domain of artificial intelligence that is concerned with building adaptive computer systems that are able to improve their performance (competence and/or efficiency) through learning from input data, from a user, or from their own problem-solving experience.
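One of the basic operations mentioned above, comparing the generality of concepts, can be sketched as follows. Treating a concept extensionally as the set of instances it covers (a simplification of the intensional representations actually used in learning), concept A is at least as general as concept B when every instance of B is also an instance of A. The instance names other than Bob Sharp are invented for illustration.

```python
# Concepts represented extensionally as sets of covered instances
# (an illustrative simplification).
student = {"Bob Sharp", "Joan Dean", "Peter Jones"}
phd_student = {"Bob Sharp", "Joan Dean"}

def at_least_as_general(a, b):
    # A is at least as general as B iff B's instances are a subset of A's.
    return b <= a

print(at_least_as_general(student, phd_student))  # True
print(at_least_as_general(phd_student, student))  # False
```

Generalizing a concept enlarges the set of instances it covers, and specializing it shrinks that set, so this subset test is the basic ordering underlying both operations.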
13 - Design Principles for Cognitive Assistants
- pp 426-432
Summary
This book has presented an advanced approach to developing personal cognitive assistants. Although the emphasis in this book has been on cognitive assistants for evidence-based hypothesis analysis, the Disciple approach is also applicable to other types of tasks, as was illustrated by the agents presented in Chapter 12. Moreover, the Disciple approach illustrates the application of several design principles that are useful in the development of cognitive assistants in general. In this chapter, we review these principles, which have been illustrated throughout this book. Each of the following sections starts with the formulation of a principle and continues with its illustration by referring back to previous sections of the book.
LEARNING-BASED KNOWLEDGE ENGINEERING
Employ learning technology to simplify and automate the knowledge engineering process.
It is generally accepted that knowledge engineering is very difficult, involving many creative tasks. One way to simplify this process significantly is to automate as much of the knowledge engineering process as possible. As discussed in Section 3.3.6, the approach taken with Disciple is to replace each knowledge base development activity of the knowledge engineer (e.g., modeling the problem-solving process, ontology development, rule learning, rule refinement) with an equivalent activity that can be performed directly by a subject matter expert and the Disciple agent, with limited or no support from the knowledge engineer (see Figure 3.19, p. 107).
Consider, for example, the modeling of the problem-solving process. A knowledge engineer would need to instruct a subject matter expert how to express his or her reasoning in the divide-and-conquer analysis and synthesis framework. Then the expert and the agent can model the solutions of new problems by themselves. In this process, the agent will support the expert in various ways. For example, the agent may employ previously learned rules to suggest likely reductions of the current problem or hypothesis. Or it may learn and reuse reasoning patterns to suggest reductions to the expert.
Now consider the development and testing of the reasoning rules, which the knowledge engineer does through interviews with the subject matter expert, as discussed in Section 3.1.4. This time-consuming and error-prone task is reduced to several tasks that the subject matter expert and the Disciple agent can easily perform, as discussed in Sections 9.4 and 10.1.2.
2 - Evidence-based Reasoning: Connecting the Dots
- pp 46-82
Summary
In Section 1.4.3, we briefly introduced evidence-based reasoning in various domains (see Figure 1.10, p. 28). In this section, we start by discussing the complexity of evidence-based reasoning using the “connecting the dots” metaphor. We then discuss in more detail evidence-based reasoning in a representative EBR domain, intelligence analysis, and conclude with other examples of evidence-based reasoning. The following chapters will then address the development of such systems and of knowledge-based agents in general.
HOW EASY IS IT TO CONNECT THE DOTS?
The “connecting the dots” metaphor seems appropriate for characterizing evidence-based reasoning. This metaphor may have gained its current popularity following the terrorist attacks in New York City and Washington, D.C., on September 11, 2001. It was frequently said that the intelligence services had failed to connect the dots in ways that might have prevented the catastrophes that occurred. Since then, we have seen and heard this metaphor applied in the news media to inferences in a very wide array of contexts beyond intelligence, including legal, military, and business ones. For example, we have seen it applied to allegedly faulty medical diagnoses; to allegedly faulty conclusions in historical studies; to allegedly faulty or unpopular governmental decisions; and in discussions involving the conclusions reached by competing politicians. What is also true is that the commentators on television and radio, or the sources of written accounts of inferential failures, never tell us what they mean by the phrase “connecting the dots.” A natural explanation is that they have never even considered what this phrase means and what it might involve.
But we have made a detailed study of what “connecting the dots” entails. We have found this metaphor very useful, and quite intuitive, in illustrating the extraordinary complexity of the evidential and inferential reasoning required in the contexts we have mentioned. Listening or seeing some media accounts of this process may lead one to believe that it resembles the simple tasks we performed as children when, if we connected some collection of numbered dots correctly, a figure of Santa Claus, or some other familiar figure, would emerge. Our belief is that critics employing this metaphor in criticizing intelligence analysts and others have very little awareness of how astonishingly difficult the process of connecting unnumbered dots can be in so many contexts (Schum, 1987).