The idea of probability leads in two different directions: belief and frequency. Probability makes us think of the degree to which we can be confident of something uncertain, given what we know or can find out. Probability also makes us think of the relative frequency with which some outcome occurs on repeated trials on a chance setup.
Thus far we have used both ideas almost interchangeably, because the basic rules for calculating with each are virtually identical. But now we have to distinguish between them, because the philosophical and practical uses of these two ideas are very different. The distinction is essential for the rest of this book (and for all clear thinking about probability).
We have been doing all these calculations about probabilities, and have not said a word about what we mean by “probability” itself. Now we are going to set things right. Up to now it has not mattered a great deal what we mean by the word. From now on it will make all the difference.
This chapter is an example of one kind of philosophy, often called analytic philosophy. We will try to come to grips with different concepts associated with the idea of probability. Many students find this chapter the hardest one of all. Not surprising! The distinctions that we have to make have bedeviled probability theorists–including some of the very best–for more than 200 years.
Sometimes a decision problem can be solved without using probabilities or expected value at all. These are situations in which one strategy dominates all others, no matter what happens in the actual world. This is called dominance. It is illustrated by a very famous argument for acting as if you believed in God. Variations on that argument lead to other decision rules.
The expected value rule can be used only when some probabilities are available. Sometimes we are so ignorant that we are not inclined to talk even of the probabilities of different alternatives. That is the extreme end of uncertainty. Yet there may still be more or less reasonable decisions.
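To see the idea in miniature, here is a sketch in Python of a dominance check. The payoff table is made up purely for illustration; the acts, states, and numbers are not from the text.

```python
# A minimal sketch of checking for a dominant act. An act dominates
# another if it does at least as well in every possible state of the
# world, and strictly better in at least one.

def dominates(act_a, act_b):
    """True if act_a weakly dominates act_b with a strict gain somewhere."""
    at_least_as_good = all(a >= b for a, b in zip(act_a, act_b))
    strictly_better = any(a > b for a, b in zip(act_a, act_b))
    return at_least_as_good and strictly_better

# Hypothetical payoffs for each act under each state of the world.
payoffs = {
    "act_1": [10, 5],   # payoff in state 1, payoff in state 2
    "act_2": [8, 5],
    "act_3": [10, 7],
}

for name, row in payoffs.items():
    others = [r for n, r in payoffs.items() if n != name]
    if all(dominates(row, other) for other in others):
        print(f"{name} dominates all other acts")   # act_3 here
```

Notice that the check uses no probabilities at all: it simply compares acts state by state, which is why dominance can guide a decision even under complete ignorance.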
DOMINANCE
It was a dark and foggy night when Peter made his first trip to Holland, the homeland of his parents. His parents gave him enough money to rent a car to see the family, but after that he was practically broke. He was planning to stay with his distant relatives. So he was driving along a road, somewhat lost, when he came to an overhead signpost. Unfortunately, a garbage bag had blown over the front of the sign, obscuring the first three letters of each town. What he saw was:
▪ AVENHAGE ↑
▪ TERDAM →
Peter figured the topmost town must be “'s Gravenhage” (in English, “The Hague,” where the World Court is located). But the second town might be “Amsterdam” or “Rotterdam.” What should he do?
One of the most useful consequences of the basic rules helps us understand how to make use of new evidence. Bayes' Rule is one key to “learning from experience.”
Chapter 5 ended with several examples of the same form: urns, shock absorbers, weightlifters. The numbers were changed a bit, but the structure of the problem was in each case identical.
For example, on page 51 there were two urns A and B, each containing a known proportion of red and green balls. An urn was picked at random. So we knew:
Pr(A) and Pr(B).
Then there was another event R, such as drawing a red ball from an urn. The probability of getting red from urn A was 0.8. The probability of getting red from urn B was 0.4. So we knew:
Pr(R/A) and Pr(R/B).
Then we asked, what is the probability that the urn drawn was A, conditional on drawing a red ball? We asked for:
Pr(A/R) = ? and Pr(B/R) = ?
Chapter 5 solved these problems directly from the definition of conditional probability. There is an easy rule for solving problems like that. It is called Bayes' Rule.
In the urn problem we ask which of two hypotheses is true: Urn A is selected, or Urn B is selected. In general we will represent hypotheses by the letter H.
We perform an experiment or get some evidence: we draw at random and observe a red ball. In general we represent evidence by the letter E.
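Bayes' Rule then computes Pr(H/E) from ingredients we already know: Pr(H/E) = Pr(E/H)Pr(H) / Pr(E). Here is a minimal sketch with the urn numbers above, assuming that "picked at random" means each urn is equally likely.

```python
# A worked sketch of Bayes' Rule on the urn problem above.
# The urn is picked at random, so we take Pr(A) = Pr(B) = 0.5.

pr_A, pr_B = 0.5, 0.5          # prior probabilities of the hypotheses
pr_R_given_A = 0.8             # Pr(R/A): chance of red from urn A
pr_R_given_B = 0.4             # Pr(R/B): chance of red from urn B

# Total probability of drawing a red ball:
pr_R = pr_R_given_A * pr_A + pr_R_given_B * pr_B    # 0.6

# Bayes' Rule: Pr(H/E) = Pr(E/H) * Pr(H) / Pr(E)
pr_A_given_R = pr_R_given_A * pr_A / pr_R           # 2/3
pr_B_given_R = pr_R_given_B * pr_B / pr_R           # 1/3

print(pr_A_given_R, pr_B_given_R)
```

So drawing a red ball shifts the probability of urn A from 0.5 up to 2/3: that is learning from experience in miniature.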
There have been two fundamentally different approaches to probability. One emphasizes the frequency idea. The other emphasizes the belief idea.
Some theorists say that only one of those two ideas really matters. We will call them dogmatists. In this book we are eclectic. Here are two definitions taken from a dictionary:
♦ Eclectic. Adjective. 1. (in art, philosophy, etc.) Selecting what seems best from various styles, doctrines, ideas, methods, etc.
♦ Dogmatic. Adjective. 1. Characterized by making authoritative or arrogant assertions or opinions, etc.
FREQUENCY DOGMATISTS
Some experts believe that all inductive reasoning should be analyzed in terms of frequency-type probabilities. This is a dogmatic philosophy, saying that inductive reasoning should rely on exactly one use of probability. Frequency dogmatists often say that belief-type probabilities “have no role in science.”
BELIEF DOGMATISTS
Some experts believe that all inductive reasoning should be analyzed in terms of belief-type probabilities. This is a dogmatic philosophy, saying that inductive reasoning should rely on exactly one use of probability. Belief dogmatists often say that frequency-type probability “doesn't even make sense.”
OUR ECLECTIC APPROACH
Chapters 16–19 and 22 use the frequency idea. Chapters 13–15 and 21 use the belief idea.
Luckily, most (but not all) data and arguments that a frequency dogmatist can analyze, a belief dogmatist can analyze too. And vice versa. Only in rather specialized situations do the two schools of thought draw really different inferences from the same data.
Statistical hypotheses are compared with data, often collected in carefully designed experiments. Evidence may lead us tentatively to accept or reject hypotheses. Evidence can be good or bad; it can be more or less convincing. When is it significant? What are the underlying ideas about accepting and rejecting hypotheses? This chapter introduces two fundamentally different ways of thinking about these issues, both of which are deeply entrenched in statistical practice. One idea is that of significance tests. Another is the power of a test to detect false hypotheses.
ASTROLOGY
Four members of this class went for coffee after the first meeting. Two of them had the same astrological sign (of the zodiac). There are 12 signs of the zodiac. Is this significant? Were they fated to meet?
We need to consider plausible models and ask: how likely is it that this phenomenon would occur by chance alone?
Theoretical probability model: each person is assigned a sign by a chance setup with equal probability for each sign–just as if you drew your sign from a pack of twelve different cards, say a pack of all the clubs except the ace.
Think of a deck of cards with the aces removed, leaving 4 suits of 12 different cards each (analogous to the 12 signs and the 4 people). If we select at random a card from each suit, what is the probability that we get at least two cards that match in value?
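Under this model the question has a direct answer. Here is a minimal sketch of the computation:

```python
# Probability that at least two of four people share a sign, when each
# of the 12 signs is equally likely and signs are assigned independently.

from fractions import Fraction
from math import perm

people, signs = 4, 12

# Probability that all four people get different signs:
all_different = Fraction(perm(signs, people), signs ** people)   # 55/96

at_least_one_match = 1 - all_different
print(at_least_one_match, float(at_least_one_match))   # 41/96, about 0.43
```

A match among four people happens by chance alone more than 40% of the time, close to a coin flip, so it is hardly significant evidence of fate.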
(b) I myself don't know if it is true, unless I trust the newspaper.
(c) Yes.
(d) Yes.
(e) Once again, I don't know if it is true, unless I trust the newspaper. There might be a big debate among snake scientists. Some suggest that the ball python was not named after its tendency to curl up into a ball. Instead, it was named after the famous explorer and snake expert, Emily Ball. The question may never be settled. We may never be really sure why this snake is called a ball python. We may not know whether what the paper wrote is true, or whether it is false. But we know it is true-or-false.
(f) Yes.
(g) No. The snakes may be attractive to Joe, but probably some other people think the snakes are repulsive or scary. This statement is not what logicians call a proposition, because it expresses an attitude more than a matter of fact.
(h) Yes. This is a proposition about what Joe feels or thinks.
(i) No. It is not even a sentence. Hence it does not express a proposition.
(j) No. The slang expression “I'm not really too thrilled” is more a way of expressing an attitude than stating a fact, so we will not count this as a proposition.
Inductive logic is about risky arguments. It analyzes inductive arguments using probability. There are other kinds of risky arguments. There is inference to the best explanation, and there are arguments based on testimony.
Valid arguments are risk-free. Inductive logic studies risky arguments. A risky argument can be a very good one, and yet its conclusion can be false, even when the premises are true. Most of our arguments are risky.
Begin with the big picture. The Big Bang theory of the origin of our universe is well supported by present evidence, but it could be wrong. That is a risk.
We now have very strong evidence that smoking causes lung cancer. But the reasoning from all that evidence to the conclusion “smoking causes lung cancer” is still risky. It might just turn out that people predisposed to nicotine addiction are also predisposed to lung cancer, in which case our inference, that smoking causes lung cancer, would be in question after all.
After a lot of research, a company concludes that it can make a profit by marketing a special left-handed mouse for personal computers. It is taking a risk.
You want to be in the same class as your friend Jan. You reason that Jan likes mathematics, and so will take another logic class. You sign up for inductive logic. You have made a risky argument.
Inductive logic is unlike deductive or symbolic logic. In deductive reasoning, when you have true premises and a valid argument, the conclusion must be true too. Valid deductive arguments do not take risks.
Inductive logic takes risks. You can have true premises, a good argument, but a false conclusion. Inductive logic uses probability to analyze that kind of risky argument.
Good News
Inductive reasoning is a guide in life. People make risky decisions all the time. Inductive reasoning plays a much larger part in everyday affairs than deductive reasoning does.
Bad News
People are very bad at reasoning about risks. We make a lot of mistakes when we use probabilities.
This book starts with a list of seven Odd Questions. They look pretty simple. But most people get some of the answers wrong. The last group of nine-year-olds I tested did better than a group of professors. Try the Odd Questions. Each one is discussed later in the book.
Practical Aims
This book can help you understand, use, and act on probabilities, risks, and statistics. We live our lives taking chances, acting when we don't know enough. Every day we experience a lot of uncertainties. This book is about the kinds of actions you can take when you are uncertain what to do. It is about the inferences you can draw when your evidence leaves you unsure what is true.
This chapter explains the usual notation for talking about probability, and then reminds you how to add and multiply with probabilities.
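As a preview, here is a minimal sketch of those two operations, using a fair die as an illustrative chance setup (the die example is ours, not the chapter's):

```python
# Two basic rules, sketched on a fair die.
pr_one, pr_two = 1/6, 1/6

# Addition rule, for mutually exclusive outcomes of one trial:
pr_one_or_two = pr_one + pr_two        # Pr(1 or 2) = 1/3

# Multiplication rule, for independent trials:
pr_one_then_two = pr_one * pr_two      # Pr(1, then 2) = 1/36

print(pr_one_or_two, pr_one_then_two)
```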
WHAT HAS A PROBABILITY?
Suppose you want to take out car insurance. The insurance company will want to know your age, sex, driving experience, make of car, and so forth. They ask because they have a question in mind:
What is the probability that you will have an automobile accident next year?
That asks about a proposition (statement, assertion, conjecture, etc.):
“You will have an automobile accident next year.”
The company wants to know: What is the probability that this proposition is true?
The insurers could ask the same question in a different way:
What is the probability of your having an automobile accident next year?
This asks about an event (something of a certain sort happening). Will there be “an automobile accident next year, in which you are driving one of the cars involved”?
The company wants to know: What is the probability of this event occurring?
Obviously these are two different ways of asking the same question.
PROPOSITIONS AND EVENTS
Logicians are interested in arguments from premises to conclusions. Premises and conclusions are propositions. So inductive logic textbooks usually talk about the probability of propositions.
Most statisticians and most textbooks of probability talk about the probability of events.
So there are two languages of probability, propositions and events.
Most of the main ideas about probability come up right at the beginning. Two major ones are independence and randomness. Even more important for clear thinking is the notion of a probability model.
ROULETTE
A gambler is betting on what he thinks is a fair roulette wheel. The wheel is divided into 38 segments, of which:
▪ 18 segments are black.
▪ 18 segments are red.
▪ 2 segments are green, and marked with zeroes.
If you bet $10 on red, and the wheel stops at red, you win $20. Likewise if you bet $10 on black and it stops at black, you win $20. Otherwise you lose. The house always wins when the wheel stops at zero.
Now imagine that there has been a long run–a dozen spins–in which the wheel stopped at black. The gambler decides to bet on red, because he thinks:
The wheel must come up red soon.
This wheel is fair, so it stops on red as often as it stops on black.
Since it has not stopped on red recently, it must stop there soon. I'll bet on red.
The argument is a risky one. The conclusion is, “The wheel must stop on red in the next few spins.” The argument leads to a risky decision. The gambler decides to bet on red. There you have it, an argument and a decision. Do you agree with the gambler?
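One way to test the gambler's reasoning is to simulate a fair wheel, which has no memory. In this sketch a run of five blacks stands in for the dozen, purely so that such runs occur often enough to measure; by independence the run length makes no difference.

```python
# Simulating a fair roulette wheel: 18 red, 18 black, 2 green pockets.
# We estimate the chance of red on spins that follow a run of blacks.

import random

random.seed(1)
pockets = ["red"] * 18 + ["black"] * 18 + ["green"] * 2

streak = 0                              # consecutive blacks so far
reds_after_run = trials_after_run = 0
for _ in range(1_000_000):
    spin = random.choice(pockets)
    if streak >= 5:                     # this spin follows >= 5 blacks
        trials_after_run += 1
        reds_after_run += (spin == "red")
    streak = streak + 1 if spin == "black" else 0

print(reds_after_run / trials_after_run)   # stays near 18/38 = 0.4737
```

The relative frequency of red after a run of blacks stays at about 18/38, exactly as it is after any other history. The wheel does not owe the gambler anything.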
How personal degrees of belief can be represented numerically by using imaginary gambles.
Chapters 1–10 were often deliberately ambiguous about different kinds of probability. That was because the basic ideas usually applied, across the board, to most kinds of probability.
Now we develop ideas that matter a lot for belief-type probabilities. They do not matter so much from the frequency point of view.
THE PROGRAM
There are three distinct steps in the argument, and each deserves a separate chapter.
▪ This chapter shows how you might use numbers to represent your degrees of belief.
▪ Chapter 14 shows why these numbers should satisfy the basic rules of probability. (And hence they should obey Bayes' Rule.)
▪ Chapter 15 shows how to use Bayes' Rule to revise or update personal probabilities in the light of new evidence. This is the fundamental motivation for this group of chapters, 13–15.
In these chapters we are concerned with a person's degrees of belief. We are talking about personal probabilities. But this approach can be used for other versions of belief-type probability, such as the logical perspective of Keynes and Carnap.
Because Bayes' Rule is so fundamental, this approach is often called Bayesian. “Belief dogmatists” are often simply called Bayesians because the use of Bayes' Rule as a model of learning from experience plays such a large part in their philosophy. But notice that there are many varieties of Bayesian thinking. This perspective ranges from the personal to the logical.
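As a preview of the device this chapter develops, here is a minimal sketch of turning imaginary betting odds into a personal probability. The particular odds are hypothetical.

```python
# If you regard odds of a:b against a hypothesis H as fair, the
# corresponding personal probability for H is b / (a + b).

def personal_probability(odds_against):
    a, b = odds_against
    return b / (a + b)

print(personal_probability((3, 1)))   # fair odds 3:1 against -> Pr(H) = 0.25
print(personal_probability((1, 1)))   # even odds -> Pr(H) = 0.5
```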
How do you choose among possible acts? The most common decision rule is an obvious one. Perform the action with the highest expected value. There are, however, a few more paradoxes connected with this simple rule.
RISKY DECISIONS
Logic analyzes reasons and arguments. We can give reasons for our beliefs. We can also give reasons for our actions and our decisions. What is the best thing to do under the circumstances? Inductive logic analyzes risky arguments. It also helps with decision theory, the theory of making risky decisions.
Should I go out in a thunderstorm to fetch a book, even though I am scared of lightning? I go out in a thunderstorm because I believe I left a book outside. I believe it will get wet and be ruined. I also believe I will not be struck by lightning. But I also go outside because I want the book, among other things. Of course, my beliefs are not certainties–I am pretty confident I left the book there. I am pretty sure it will get wet if it is there. I know it is not probable that I will be hit by lightning.
Decisions depend on two kinds of thing:
▪ What we believe.
▪ What we want.
Sometimes we can represent our degrees of belief or confidence by probabilities. Sometimes we can represent what we want by dollar values, or at least by judgments of value, which we call utilities.
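Here is a minimal sketch of how the two ingredients combine into an expected value, using invented numbers for the thunderstorm example above (the small risk of lightning is left out for simplicity):

```python
# Hypothetical beliefs and utilities for the thunderstorm decision.
pr_book_outside = 0.9        # "pretty confident I left the book there"

# Utilities (invented): rescuing the book, a wasted trip,
# and losing the book to the rain.
u_rescued, u_wasted_trip, u_ruined = 20, -2, -20

# Expected value of each act:
ev_go = pr_book_outside * u_rescued + (1 - pr_book_outside) * u_wasted_trip
ev_stay = pr_book_outside * u_ruined + (1 - pr_book_outside) * 0

print(ev_go, ev_stay)   # 17.8 vs -18.0: going out has higher expected value
```

On these numbers, going out to fetch the book is clearly the better act; with different beliefs or different utilities, the same rule could favor staying in.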