6 - Nonmonotonic Reasoning
- Henry E. Kyburg, Jr, University of Rochester, New York; Choh Man Teng, Institute for Human and Machine Intelligence
- Book: Uncertain Inference
- Published online: 07 December 2009
- Print publication: 06 August 2001, pp 117-151
Summary
Introduction
Classical logic—including first order logic, which we studied in Chapter 2—is concerned with deductive inference. If the premises are true, the conclusions drawn using classical logic are always also true. Although this kind of reasoning is not inductive, in the sense that any conclusion we can draw from a set of premises is already “buried” in the premises themselves, it is nonetheless fundamental to many kinds of reasoning tasks. Beyond the study of formal systems such as mathematics, problems in other domains, such as planning and scheduling, can in many cases also be constrained to be mainly deductive.
Because of this pervasiveness, many logics for uncertain inference incorporate classical logic at the core. Rather than replacing classical logic, we extend it in various ways to handle reasoning with uncertainty. In this chapter, we will study a number of these formalisms, grouped under the banner nonmonotonic reasoning. Monotonicity, a key property of classical logic, is given up, so that an addition to the premises may invalidate some previous conclusions. This models our experience: the world and our knowledge of it are not static; often we need to retract some previously drawn conclusion on learning new information.
Logic and (Non)monotonicity
One of the main characteristics of classical logic is that it is monotonic, that is, adding more formulas to the set of premises does not invalidate the proofs of the formulas derivable from the original premises alone. In other words, a formula that can be derived from the original premises remains derivable in the expanded premise set.
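Monotonicity can be seen directly in a toy prover. The sketch below assumes a simple forward-chaining closure over propositional Horn rules (an illustration, not any formalism from the book): enlarging the premise set can only enlarge the set of derivable formulas.

```python
def derivable(facts, rules):
    """Deductive closure of `facts` under Horn rules (premises, conclusion)."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

rules = [(frozenset({"man"}), "mortal")]
small = derivable({"socrates", "man"}, rules)
large = derivable({"socrates", "man", "greek"}, rules)  # expanded premise set

# Monotonicity: everything derivable before remains derivable after.
assert small <= large
```

A nonmonotonic logic is precisely one where the analogue of this assertion can fail: new premises may defeat old conclusions.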
Uncertain Inference
Coping with uncertainty is a necessary part of ordinary life and is crucial to an understanding of how the mind works. For example, it is a vital element in developing artificial intelligence that will not be undermined by its own rigidities. There have been many approaches to the problem of uncertain inference, ranging from probability to inductive logic to nonmonotonic logic. This book seeks to provide a clear exposition of these approaches within a unified framework. The principal market for the book will be students and professionals in philosophy, computer science, and AI. Among the special features of the book are a chapter on evidential probability, which has not received a basic exposition before; chapters on nonmonotonic reasoning and theory replacement, matters rarely addressed in standard philosophical texts; and chapters on Mill's methods and statistical inference that cover material sorely lacking in the usual treatments of AI and computer science.
Names Index
- pp 291-292
Frontmatter
- pp i-iv
Preface
- pp xi-xii
Summary
This book is the outgrowth of an effort to provide a course covering the general topic of uncertain inference. Philosophy students have long lacked a treatment of inductive logic that was acceptable; in fact, many professional philosophers would deny that there was any such thing and would replace it with a study of probability. Yet, there seems to many to be something more traditional than the shifting sands of subjective probabilities that is worth studying. Students of computer science may encounter a wide variety of ways of treating uncertainty and uncertain inference, ranging from nonmonotonic logic to probability to belief functions to fuzzy logic. All of these approaches are discussed in their own terms, but it is rare for their relations and interconnections to be explored. Cognitive science students learn early that the processes by which people make inferences are not quite like the formal logic processes that they study in philosophy, but they often have little exposure to the variety of ideas developed in philosophy and computer science. Much of the uncertain inference of science is statistical inference, but statistics rarely enter directly into the treatment of uncertainty to which any of these three groups of students are exposed.
At what level should such a course be taught? Because a broad and interdisciplinary understanding of uncertainty seemed to be just as lacking among graduate students as among undergraduates, and because without assuming some formal background all that could be accomplished would be rather superficial, the course was developed for upper-level undergraduates and beginning graduate students in these three disciplines. The original goal was to develop a course that would serve all of these groups.
4 - Interpretations of Probability
- pp 68-97
Summary
Introduction
In Chapter 3, we discussed the axioms of the probability calculus and derived some of its theorems. We never said, however, what “probability” meant. From a formal or mathematical point of view, there was no need to: we could state and prove facts about the relations among probabilities without knowing what a probability is, just as we can state and prove theorems about points and lines without knowing what they are. (As Bertrand Russell said [Russell, 1901, p. 83] “Mathematics may be defined as the subject where we never know what we are talking about, nor whether what we are saying is true.”)
Nevertheless, because our goal is to make use of the notion of probability in understanding uncertain inference and induction, we must be explicit about its interpretation. There are several reasons for this. In the first place, if we are hoping to follow the injunction to believe what is probable, we have to know what is probable. There is no hope of assigning values to probabilities unless we have some idea of what probability means. What determines those values? Second, we need to know what the import of probability is for us. How is it supposed to bear on our epistemic states or our decisions? Third, what is the domain of the probability function? In the last chapter we took the domain to be a field, but that merely assigns structure to the domain: it doesn't tell us what the domain objects are.
There is no generally accepted interpretation of probability.
12 - Scientific Inference
- pp 270-290
Summary
Introduction
We have abandoned many of the goals of the early writers on induction. Probability has told us nothing about how to find interesting generalizations and theories, and, although Carnap and others had hoped otherwise, it has told us nothing about how to measure the support for generalizations other than approximate statistical hypotheses. Much of uncertain inference has yet to be characterized in the terms we have used for statistical inference. Let us take a look at where we have arrived so far.
Objectivity
Our overriding concern has been with objectivity. We have looked on logic as a standard of rational argument: Given evidence (premises), the validity (degree of entailment) of a conclusion should be determined on logical grounds alone. Given that the Hawks will win or the Tigers will win, and that the Tigers will not win, it follows that the Hawks will win. Given that 10% of a large sample of trout from Lake Seneca have shown traces of mercury, and that we have no grounds for impugning the fairness of the sample, it follows with a high degree of validity that between 8% and 12% of the trout in the lake contain traces of mercury.
The parallel is stretched only at the point where we include among the premises “no grounds for impugning. …” It is this that is unpacked into a claim about our whole body of knowledge, and embodied in the constraints discussed in the last three chapters under the heading of “sharpening.”
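The trout figures can be given a rough arithmetic reading. The sketch below uses a textbook normal-approximation interval for a sample proportion, with an assumed (hypothetical) sample size of 900; it is a stand-in for, not a rendering of, the evidential-probability machinery the passage refers to.

```python
from math import sqrt

def proportion_interval(p_hat, n, z=1.96):
    """Normal-approximation interval for a sample proportion (z=1.96 ~ 95%)."""
    margin = z * sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - margin, p_hat + margin

# Hypothetical sample of 900 trout, 10% showing mercury traces:
lo, hi = proportion_interval(0.10, 900)
# margin = 1.96 * sqrt(0.09 / 900) = 1.96 * 0.01, so the interval is
# roughly (0.08, 0.12), in line with the passage's 8% to 12%
```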
11 - Applications
- pp 247-269
Summary
Introduction
We are now in a position to reap the benefits of the formal work of the preceding two chapters. The key to uncertain inference lies, as we have suspected all along, in probability. In Chapter 9, we examined a certain formal interpretation of probability, dubbed evidential probability, as embodying a notion of partial proof. Probability, on this view, is an interval-valued function. Its domain is a combination of elementary evidence and general background knowledge paired with a statement of our language whose probability concerns us, and its range is a set of subintervals of [0, 1]. It is objective. What this means is that if two agents share the same evidence and the same background knowledge, they will assign the same (interval) probabilities to the statements of their language. If they share an acceptance level 1 – α for practical certainty, they will accept the same practical certainties.
It may be that no two people share the same background knowledge and the same evidence. But in many situations we come close. As scientists, we tend to share each other's data. Cooked data is sufficient to cause expulsion from the ranks of scientists. (This is not the same as data containing mistakes; one of the virtues of the system developed here is that no data need be regarded as sacrosanct.) With regard to background knowledge, if we disagree, we can examine the evidence at a higher level: is the item in question highly probable, given that evidence and our common background knowledge at that level?
There are a number of epistemological questions raised by this approach, and some of them will be dealt with in Chapter 12.
Index
- pp 293-298
5 - Nonstandard Measures of Support
- pp 98-116
Summary
Support
As Carnap points out [Carnap, 1950], some of the controversy concerning the support of empirical hypotheses by data is a result of the conflation of two distinct notions. One is the total support given a hypothesis by a body of evidence. Carnap's initial measure for this is his c*; this is intended as an explication of one sense of the ordinary language word “probability.” This is the sense involved when we say, “Relative to the evidence we have, the probability is high that rabies is caused by a virus.” The other notion is that of “support” in the active sense, in which we say that a certain piece of evidence supports a hypothesis, as in “The detectable presence of antibodies supports the viral hypothesis.” This does not mean that that single piece of evidence makes the hypothesis “highly probable” (much less “acceptable”), but that it makes the hypothesis more probable than it was. Thus, the presence of water on Mars supports the hypothesis that there was once life on Mars, but it does not make that hypothesis highly probable, or even more probable than not.
Whereas c*(h, e) is (for Carnap, in 1950) the correct measure of the degree of support of the hypothesis h by the evidence e, the increase of the support of h due to e given background knowledge b is the amount by which e increases the probability of h: c*(h, b ∧ e) – c*(h, b). We would say that e supports h relative to background b if this quantity is positive, and undermines h relative to b if this quantity is negative.
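The sign test on this difference can be made concrete. In the minimal Python sketch below, a toy probability model over possible worlds stands in for Carnap's c*; the model, the atoms h and e, and all numerical weights are hypothetical, chosen only for illustration.

```python
# A finite weighted set of worlds stands in for Carnap's confirmation
# function c*; all numbers here are hypothetical.

def conditional(model, event, given):
    """P(event | given) over a dict mapping worlds to nonnegative weights."""
    num = sum(w for world, w in model.items() if event(world) and given(world))
    den = sum(w for world, w in model.items() if given(world))
    return num / den

def support(model, h, e, b):
    """c*(h, b & e) - c*(h, b): how much e raises h's probability given b."""
    return conditional(model, h, lambda w: e(w) and b(w)) - conditional(model, h, b)

# Worlds are (h-true?, e-true?) pairs with hypothetical weights.
model = {
    (True, True): 0.3, (True, False): 0.2,
    (False, True): 0.1, (False, False): 0.4,
}
h = lambda w: w[0]
e = lambda w: w[1]
b = lambda w: True  # trivial background, for simplicity

s = support(model, h, e, b)  # 0.75 - 0.5 = 0.25 > 0, so e supports h
```

A positive value of `support` corresponds to "e supports h relative to b"; a negative value to "e undermines h relative to b".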
2 - First Order Logic
- pp 21-41
Summary
Introduction
Traditionally, logic has been regarded as the science of correct thinking or of making valid inferences. The former characterization of logic has strong psychological overtones—thinking is a psychological phenomenon—and few writers today think that logic can be a discipline that can successfully teach its students how to think, let alone how to think correctly. Furthermore, it is not obvious what “correct” thinking is. One can think “politically correct” thoughts without engaging in logic at all. We shall, at least for the moment, be well advised to leave psychology to one side, and focus on the latter characterization of logic: the science of making valid inferences.
To make an inference is to perform an act: It is to do something. But logic is not a compendium of exhortations: From “All men are mortal” and “Socrates is a man” do thou infer that Socrates is mortal! To see that this cannot be the case, note that “All men are mortal” has the implication that if Charles is a man, he is mortal, if John is a man, he is mortal, and so on, through the whole list of men, past and present, if not future. Furthermore, it is an implication of “All men are mortal” that if Fido (my dog) is a man, Fido is mortal; if Tabby is a man, Tabby is mortal, etc. And how about inferring “If Jane is a man, Jane is mortal”? As we ordinarily construe the premise, this, too, is a valid inference. We cannot follow the exhortation to perform all valid inferences: There are too many, they are too boring, and that, surely, is not what logic is about.
Contents
- pp v-x
7 - Theory Replacement
- pp 152-174
Summary
Introduction
We form beliefs about the world, from evidence and inferences made from the evidence. Belief, as opposed to knowledge, consists of defeasible information. Belief is what we think is true, and it may or may not be true in the world. On the other hand, knowledge is what we are aware of as true, and it is always true in the world.
We make decisions and act according to our beliefs, yet they are not infallible. The inferences we base our beliefs on can be deductive or uncertain, employing any number of inference mechanisms to arrive at our conclusions, for instance, statistical, nonmonotonic, or analogical. We constantly have to modify our set of beliefs as we encounter new information. A new piece of evidence may complement our current beliefs, in which case we can hold on to our original beliefs in addition to this new evidence. However, because some of our beliefs can be derived from uncertain inference mechanisms, it is inevitable that we will at some point encounter some evidence that contradicts what we currently believe. We need a systematic way of reorganizing our beliefs, to deal with the dynamics of maintaining a reasonable belief set in the face of such changes.
The state of our beliefs can be modeled by a logical theory K, a deductively closed set of formulas. If a formula φ is considered accepted in a belief set, it is included in the corresponding theory K; if it is rejected, its negation ¬φ is in K. In general the theory is incomplete.
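The three cases—accepted, rejected, and undecided—can be illustrated with a brute-force propositional entailment check. In the sketch below, the belief set K is represented by a finite base closed implicitly under entailment; the atoms p and q are hypothetical.

```python
from itertools import product

ATOMS = ["p", "q"]

def entails(base, formula):
    """base |= formula, checked by brute force over all valuations of ATOMS."""
    for values in product([True, False], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, values))
        if all(f(v) for f in base) and not formula(v):
            return False
    return True

base = [lambda v: v["p"]]   # K is the deductive closure of {p}
p = lambda v: v["p"]
q = lambda v: v["q"]
not_q = lambda v: not v["q"]

entails(base, p)      # True: p is accepted, so p is in K
entails(base, q)      # False
entails(base, not_q)  # False: neither q nor its negation is in K
```

That `entails` returns False for both q and ¬q is exactly the incompleteness of K mentioned above: the theory takes no stand on q.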
9 - Evidential Probability
- pp 200-229
Summary
Introduction
The idea behind evidential probability is a simple one. It consists of two parts: that probabilities should reflect empirical frequencies in the world, and that the probabilities that interest us—the probabilities of specific events—should be determined by everything we know about those events.
The first suggestions along these lines were made by Reichenbach [Reichenbach, 1949]. With regard to probability, Reichenbach was a strict limiting-frequentist: he took probability statements to be statements about the world, and to be statements about the frequency of one kind of event in a sequence of other events. But recognizing that what concerns us in real life is often decisions that bear on specific events—the next roll of the die, the occurrence of a storm tomorrow, the frequency of rain next month—he devised another concept that applied to particular events, that of weight. “We write P(a) = p thus admitting individual propositions inside the probability functor. The number p measures the weight of the individual proposition a. It is understood that the weight of the proposition was determined by means of a suitable reference class, …” [Reichenbach, 1949, p. 409]. Reichenbach appreciated the problem of the reference class: “… we may have reliable statistics concerning a reference class A and likewise reliable statistics for a reference class C, whereas we have insufficient statistics for the reference class A·C. The calculus of probability cannot help in such a case because the probabilities P(A, B) and P(C, B) do not determine the probability P(A · C, B)” [Reichenbach, 1949, p. 375]. The best the logician can do is to recommend gathering more data.
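Reichenbach's point about A·C can be verified numerically: two distributions can agree on P(B | A) and P(B | C) yet disagree about P(B | A·C). The weights below are hypothetical, chosen only to make the two single-class statistics coincide.

```python
def cond(dist, event, given):
    """P(event | given) over a dict of world -> weight."""
    num = sum(w for world, w in dist.items() if event(world) and given(world))
    den = sum(w for world, w in dist.items() if given(world))
    return num / den

def make(pb_ac, pb_a, pb_c, pb_rest):
    """Build a distribution over worlds (a, c, b) from per-cell chances of B."""
    cells = {(True, True): (0.2, pb_ac),     # A & C
             (True, False): (0.2, pb_a),     # A & ~C
             (False, True): (0.2, pb_c),     # ~A & C
             (False, False): (0.4, pb_rest)} # ~A & ~C
    dist = {}
    for (a, c), (mass, pb) in cells.items():
        dist[(a, c, True)] = mass * pb
        dist[(a, c, False)] = mass * (1 - pb)
    return dist

d1 = make(0.9, 0.3, 0.3, 0.5)
d2 = make(0.3, 0.9, 0.9, 0.5)

A = lambda w: w[0]
C = lambda w: w[1]
B = lambda w: w[2]
AC = lambda w: w[0] and w[1]

# Both distributions agree on the single reference classes...
cond(d1, B, A), cond(d2, B, A)    # both 0.6
cond(d1, B, C), cond(d2, B, C)    # both 0.6
# ...but not on the combined reference class A.C.
cond(d1, B, AC), cond(d2, B, AC)  # 0.9 versus 0.3
```

As Reichenbach says, the calculus alone cannot decide between 0.9 and 0.3 for the next A·C case; only further data about A·C itself could.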
8 - Statistical Inference
- pp 175-199
Summary
Introduction
We consider a group of puppies, take what we know about that group as a premise, and infer, as a conclusion, something about the population of all puppies. Such an inference is clearly risky and invalid. It is nevertheless the sort of inference we must make and do make. Some such inferences are more cogent, more rational than others. Our business as logicians is to find standards that will sort them out.
Statistical inference includes inference from a sample to the population from which it comes. The population may be actual, as it is in public opinion polls, or hypothetical, as it is in testing an oddly weighted die (the population is then taken to be the hypothetical population of possible tosses or possible sequences of tosses of the die). Statistical inference is a paradigm example of uncertain inference.
Statistical inference is also often taken to include the uncertain inference we make from a population to a sample, as when we infer from the fairness of a coin that roughly half of the next thousand coin tosses we make will yield heads–a conclusion that might be false. Note that this is not probabilistic inference: the inference from the same premises to the conclusion that the probability is high that roughly half of the next thousand tosses will yield heads is deductive and (given the premises) not uncertain at all.
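The deductive character of the probabilistic conclusion can be checked by direct computation, assuming a fair coin and independent tosses (and reading "roughly half" as 450 to 550 heads, a hypothetical gloss): the chance involved is a definite, high number.

```python
from math import comb

def prob_heads_between(n, lo, hi, p=0.5):
    """P(lo <= #heads <= hi) for n independent tosses with per-toss bias p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(lo, hi + 1))

# "Roughly half" read as 450-550 heads out of 1000 tosses of a fair coin:
p_half = prob_heads_between(1000, 450, 550)
# p_half exceeds 0.99: the step from the fair-coin premise to this probability
# statement is deductive; only the claim about the actual tosses is uncertain
```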
The inference from a statistical premise about a population to a nonprobabilistic conclusion about part of that population is called direct inference. The inference from a premise about part of a population to the properties of the population as a whole is called inverse inference.
1 - Historical Background
- pp 1-20
Summary
The theory of Induction is the despair of philosophy–and yet all our activities are based upon it.
Alfred North Whitehead: Science and the Modern World, p. 35.
Introduction
Ever since Adam and Eve ate from the tree of knowledge, and thereby earned exile from Paradise, human beings have had to rely on their knowledge of the world to survive and prosper. And whether or not ignorance was bliss in Paradise, it is rarely the case that ignorance promotes happiness in the more familiar world of our experience—a world of grumbling bellies, persistent tax collectors, and successful funeral homes. It is no cause for wonder, then, that we prize knowledge so highly, especially knowledge about the world. Nor should it be cause for surprise that philosophers have despaired and do despair over the theory of induction: For it is through inductive inferences, inferences that are uncertain, that we come to possess knowledge about the world we experience, and the lamentable fact is that we are far from consensus concerning the nature of induction.
But despair is hardly a fruitful state of mind, and, fortunately, the efforts over the past five hundred years or so of distinguished people working on the problems of induction have come to far more than nought (albeit far less than the success for which they strove). In this century, the debate concerning induction has clarified the central issues and resulted in the refinement of various approaches to treating the issues. To echo Brian Skyrms, a writer on the subject [Skyrms, 1966], contemporary inductive logicians are by no means wallowing in a sea of total ignorance, and continued work promises to move us further forward.
3 - The Probability Calculus
- pp 42-67
Summary
Introduction
The opposing football captains watch as the coin arcs, glinting, through the air before landing on the turf at the referee's feet. Heads. Whatchamacallit U. kicks off.
For thousands of years, people have depended on devices that yield outcomes beyond the control of human agency; it has been a way of consulting the gods, or the fates. For us, the point of tossing a coin to determine who kicks off is that the outcome is a matter of chance. The probability of heads is one-half: The coin could, with equal conformity to the laws of the physical world, have landed tails.
Typically, for probability, matters are not as simple as they seem. In ancient times the outcome of chance events—the toss of a knucklebone or a coin—was often taken to reveal the will of the gods. Even today many people take the outcome of a chance event, at least if they have wagered on it, to be a matter of “luck,” where luck plays the role of the old gods, and can be cajoled, sacrificed to, encouraged with crossed fingers and rabbit's feet. In most cases, however, chance events are understood to be outside of human control, and to yield the outcomes they yield in accord with the laws of probability.
The early probabilists (Pascal, Fermat, the Bernoullis, and Laplace) believed in a deterministic world in which chance events did not really exist. Our belief in any chance event (say the outcome of the toss of a six-sided die) is less than certain only as a consequence of the limitations of our knowledge.
10 - Semantics
- pp 230-246
Summary
Introduction
Uncertain reasoning and uncertain argument, as we have been concerned with them here, are reasoning and argument in which the object is to establish the credibility or acceptability of a conclusion on the basis of an argument from premises that do not entail that conclusion. Other terms for the process are inductive reasoning, scientific reasoning, nonmonotonic reasoning, and probabilistic reasoning. What we seek to characterize is that general form of argument that will lead to conclusions that are worth accepting, but that may, on the basis of new evidence, need to be withdrawn.
What is explicitly excluded from uncertain reasoning, in the sense under discussion, is reasoning from one probability statement to another. Genesereth and Nilsson [Nilsson, 1986; Genesereth & Nilsson, 1987], for example, offer as an example of their “probabilistic logic” the way in which constraints on the probability of Q can be established on the basis of probabilities for P and for P → Q. This is a matter of deduction: as we noted in Chapter 5, it is provable that any function prob satisfying the usual axioms for probability will be such that if prob(P) = r and prob(P → Q) = s, then prob(Q) must lie between s + r − 1 (or 0, whichever is greater) and s. This deductive relation, though often of interest, is not what we are concerned with here. It has been explored by Suppes and Adams [Suppes, 1966; Adams, 1966] as well as by Genesereth and Nilsson.
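The bounds on prob(Q) can be confirmed by brute force over the four truth-value worlds for P and Q. The sketch below (function name and parameter values hypothetical) sweeps the one free parameter that remains once prob(P) = r and prob(P → Q) = s are fixed.

```python
def q_range(r, s, steps=1000):
    """Feasible values of prob(Q) given prob(P) = r and prob(P -> Q) = s.

    Assumes the material conditional and s >= 1 - r, which any coherent
    probability function must satisfy (P -> Q is true whenever P is false).
    """
    # Worlds: P&Q, P&~Q, ~P&Q, ~P&~Q with masses p1, p2, p3, p4.
    p2 = 1 - s       # P -> Q fails only at the P&~Q world
    p1 = r - p2      # forced, since prob(P) = p1 + p2 = r
    values = []
    for i in range(steps + 1):
        p3 = (1 - r) * i / steps   # remaining mass 1-r split between p3, p4
        values.append(p1 + p3)     # prob(Q) = p1 + p3
    return min(values), max(values)

lo, hi = q_range(r=0.7, s=0.8)
# lo = r + s - 1 = 0.5 and hi = s = 0.8, matching the stated bounds
```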