“To characterize a person as rational is not to relate him to any order, or system, or framework, which would constrain his activities,” writes David Gauthier. Reason is free, he thinks, to develop its own path to any ends it takes as given. The suggestion appears in the earliest of Gauthier's essays in which he recognizes the freedom of reason from the strict constraints of maximization. And while Gauthier did not intend his words in this sense, he might have put his point about maximization in this form as well: To conceive of practical reason as subservient to a system of ends is to conceive its independence from any determinate path in the pursuit of those ends. The form of reasoning, as well as its content, must be instrumentally supplied. Gauthier's famous claim that instrumental reason sometimes demands the constraint of its own maximizing activity is thus dependent on the recognition of reason's freedom from constraint.
But to say that reason is free from constraint in this sense is to posit reason's constraint in another. Like many committed to understanding practical reason instrumentally, Gauthier accepts the economist's picture of the ends it is reason's purpose to pursue as established by the preferences of the agent whose ends they are. Reason's freedom, Gauthier thinks, is the freedom to serve the preferences of rational agents. Gauthier's recognition of reason's freedom from maximization thus depends on his willingness to accept the idea of reason's constraint, constraint by the preferences of the rational agents it serves.
Much of the response to Morals by Agreement has focused on David Gauthier's most philosophical claims: Is constrained maximization rational? Is it moral? While these traditional questions are important, they might lead us to overlook two features of Gauthier's innovative project. First, he develops Rawls's constructive approach to moral theorizing, seeking to invent new devices and principles that can meet pragmatic and moral demands. In particular, Gauthier offers his constrained maximizer as a new kind of agent who benefits by her moral constraint. Significantly, Morals by Agreement ends with a discussion of Nietzsche's question: “ ‘To breed an animal with the right to make promises – is not this the paradoxical task that nature has set itself in the case of man?’ … Such an animal is able to interact with its world in a new and distinctive way, which we have sought to capture in the conception of constrained maximization.” But mention of Nietzsche and Rawls reminds us that moral innovation is often controversial. As in the case of Rawls's difference principle, Gauthier's principle of constrained maximization has struck some critics as less demanding than morality requires. This leads to the second feature of Gauthier's approach: His appeal to rational choice suggests new ways to address such criticisms, by allowing a sophisticated pragmatic theory to select the best among various approaches (moralized and not) to problematic interaction.
The concept of preference plays a central role in much recent moral and political philosophy. Partly because of its pedigree in such widely admired disciplines as economics and decision theory, its status seems secure. Preferences are taken by various philosophers to provide everything from a starting point for moral inquiry to the sole factor that elected officials should take into account. My aim in this essay is to call that status into question. I shall argue that the concept of preference cannot bear the theoretical weight in normative inquiry that it has been asked to support. The argument has two parts. The first examines the place of actual or “revealed” preference in moral argument and shows how it fails to meet even minimal standards as an account of practical reason. The second part considers more sophisticated accounts of ideal or considered preferences, arguing that although they have enough structure to function in accounts of practical reason, their employment presupposes independent standards. This needn't be a damning criticism, except for the manner in which advocates of preference-based accounts maintain that those accounts do not incorporate any controversial normative claims. Much of the appeal of preference-based accounts of practical reason stems from their promise of providing a normative account of practical reason using minimal formal constraints of consistency and the prior motivations of the agent in question.
My strategy will be to establish and exploit parallels between the role of preference in contemporary moral philosophy and the role of perception in classical empiricism. Empiricist epistemology and utilitarian and contractarian moral philosophy have a common ancestry.
Starting with Adam Smith, and running like a bright thread throughout virtually all the subsequent theoretical literature on political economy, one can mark a preoccupation with the conditions under which public structures can emerge that will permit individuals to cooperatively transact with one another to their mutual advantage. The good news that there are in fact conditions under which such structures do emerge has to be tempered, however, by the bad news (1) that historical processes of institutional development tend to be path-dependent in ways that work against adaptive efficiency and (2) that virtually all institutional arrangements are subject to manipulation by special interests, to the short-term advantage of some, but often to the long-term disadvantage of all. Recent work in political economy has in fact identified deep pressures on human interaction that tend to prevent anything like the full realization of the mutual gains that cooperation can make possible. A truly optimal arrangement would seem, then, to be an ideal that is remote from social reality.
In one sense, there should be nothing surprising about this. After all, the historical record hardly supports any other picture. What is surprising, however, is that many of these conclusions are driven by models of ideally rational beings who have substantial and common knowledge of each other's rationality and preferences and the strategic structure of their interactions. On the accounts offered, Pareto-suboptimality does not flow just from assuming that some are less than fully rational, or possess limited or asymmetrical knowledge. Mutually disadvantageous free-riding and conflict over the distribution of goods are taken to be natural to the way in which even hyperrational and fully informed individuals interact.
In an early article, Gauthier argued that, to act rationally, we must act morally. I tried to refute that argument. Since Gauthier was not convinced, I shall try again.
Gauthier assumes that, to be rational, we must maximize our own expected utility. Though he distinguishes between ‘utility’ and ‘benefit’, this distinction does not affect his main arguments. We can regard him as appealing to the Self-interest Theory.
Many writers have argued that, in self-interested terms, it is always rational to act morally. According to most of these writers, morality and self-interest coincide. But that is not Gauthier's line. Gauthier concedes that acting morally may be, and be known to be, worse for us. He claims that, even in such cases, it is rational to act morally.
If we appeal to the Self-interest Theory, it may seem impossible to defend that claim. How can our acts be rational, in self-interested terms, if we know them to be worse for us? But Gauthier revises the Self-interest Theory. On the standard version of this theory, an act is rational if it will maximize our expected benefit – or be expectably-best for us. On Gauthier's version, we should aim to benefit ourselves not with our acts but only with our dispositions. A disposition is rational if having it will be expectably-best for us. An act is rational if it results from such a disposition.
Besides revising the Self-interest Theory, Gauthier restricts the scope of morality. To act morally, Gauthier claims, we must honour our agreements.
The frequentist agrees that no reasons can be given for inductive inferences, but holds that reasons can be given for inductive behavior, using certain procedures based on the idea of confidence intervals.
The Bayesian is able to attach personal probabilities, or degrees of belief, to individual propositions. The hard-line frequency dogmatist thinks that probabilities can be attached only to a series of events.
Probability, says this dogmatist, just means the relative frequency of some kind of event produced by a chance setup. Or it refers to the propensity of a chance setup to produce events with a certain stable frequency. Or it refers to certain underlying symmetry properties.
At any rate, we cannot talk sensibly about the probability of a single event, for that event either happens or does not happen. It has “probability” 0 or 1, and that is that.
So the frequency dogmatist will never talk about the frequency-type probability that a particular hypothesis is true.
The hypothesis is either true or false, and there is no frequency about it. At most, we can discuss the relative frequency with which hypotheses of a certain kind are true.
Thus far the dogmatic frequentist is happy to agree with Hume.
INDUCTIVE BEHAVIOR
Nevertheless, continues the frequentist, we may sometimes be able to apply a system for making inferences or drawing conclusions such that the conclusions are usually right.
We can talk about the relative frequency with which inferences drawn by a certain method are in fact correct.
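The frequentist's point can be illustrated by simulation. A 95% confidence-interval procedure attaches no probability to any single interval; what can be stated is the relative frequency with which the method succeeds over many uses. The sketch below, a minimal illustration assuming a binomial proportion and the standard normal-approximation interval, checks that coverage in repeated trials comes out near 95%.

```python
import math
import random

def normal_ci(successes, n, z=1.96):
    """Normal-approximation 95% confidence interval for a proportion."""
    p_hat = successes / n
    half = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - half, p_hat + half

def coverage(true_p=0.3, n=200, trials=2000, seed=1):
    """Fraction of repeated experiments whose interval covers the true value."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        successes = sum(rng.random() < true_p for _ in range(n))
        lo, hi = normal_ci(successes, n)
        if lo <= true_p <= hi:
            hits += 1
    return hits / trials

# The method is "usually right": the long-run coverage is close to 0.95,
# even though no single interval has a frequency-type probability of being right.
print(coverage())
```

No individual interval is assigned a probability of correctness; only the procedure has a success frequency, which is exactly the frequentist's notion of inductive behavior.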
Logic is about good and bad reasoning. In order to talk clearly about reasoning, logicians have given precise meanings to some ordinary words. This chapter is a review of their language.
ARGUMENTS
Logicians attach a special sense to the word argument. In ordinary language, it usually takes two to argue. One dictionary defines an argument as:
A quarrel.
A discussion in which reasons are put forward in support of and against a proposition, proposal, or case.
A point or series of reasons presented to support a proposition which is the conclusion of the argument.
Definition (3) is what logicians mean by an argument.
Reasoning is stated or written out in arguments. So logicians study arguments (in sense 3).
An argument thus divides up into:
A point or series of reasons which are called premises, and a conclusion.
Premises and conclusion are propositions, statements that can be either true or false. Propositions are “true-or-false.”
GOING WRONG
The premises are supposed to be reasons for the conclusion. Logic tries to understand the idea of a good reason.
We find arguments convincing when we know that the premises are true, and when we see that they give a good reason for the conclusion.
So two things can go wrong with an argument:
▪ the premises may be false.
▪ the premises may not provide a good reason for the conclusion.
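For arguments built from true-or-false propositions, the second kind of failure can be checked mechanically: an argument is (deductively) valid just in case no assignment of truth values makes every premise true while the conclusion is false. A minimal sketch, using hypothetical propositional examples:

```python
from itertools import product

def is_valid(premises, conclusion, variables):
    """Valid iff no truth assignment makes all premises true and the conclusion false."""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # found a counterexample assignment
    return True

# 'P or Q; not P; therefore Q' -- valid (disjunctive syllogism).
premises = [lambda e: e["P"] or e["Q"], lambda e: not e["P"]]
print(is_valid(premises, lambda e: e["Q"], ["P", "Q"]))       # True

# 'P or Q; P; therefore not Q' -- invalid: both premises can be true while Q is too.
premises2 = [lambda e: e["P"] or e["Q"], lambda e: e["P"]]
print(is_valid(premises2, lambda e: not e["Q"], ["P", "Q"]))  # False
```

Note that validity concerns only the second failing: a valid argument may still have false premises.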
For philosophers, this is the most important question about induction. It is not a problem within inductive logic. It questions the very possibility of inductive reasoning itself.
DAVID HUME
In 1739, David Hume (1711–1776), the Scottish philosopher, published A Treatise of Human Nature, one of the half-dozen most influential books of Western philosophy. He was twenty-eight years old at the time. In 1748, he published An Enquiry Concerning Human Understanding.
These books, especially the second, include the classic statement of what came to be called the problem of induction.
Hume's problem about induction is only a small part of a very general theory of knowledge. Here we study just this one aspect of Hume's philosophy.
SKEPTICISM
In ordinary English, a skeptic is:
♦ Someone who habitually doubts accepted beliefs.
♦ A person who mistrusts other people or their ideas.
♦ Someone who rejects traditional beliefs, such as religious beliefs.
PHILOSOPHICAL SKEPTICISM
Philosophers attach a far more sweeping sense to the idea of skepticism. A philosophical skeptic is someone who claims to:
♦ Doubt that any real knowledge or sound belief about anything is possible.
There are more specialized types of philosophical skepticism, depending upon what kind of knowledge is in doubt. Think of any field of knowledge or belief X, where X may be religious, or scientific, or moral. X may be knowledge about other people, or about the reality of the world around us, or even knowledge about yourself.
Consistent personal betting rates satisfy the basic rules of probability. Consistency in this sense is called coherence.
Personal probabilities and betting rates are all very well. But thus far they have no structure, no rules–in fact, not much meaning. Now we give one argument that betting rates ought to satisfy the basic rules for probability. We have already had three thought experiments in Chapter 13. Here are two more.
FOURTH THOUGHT EXPERIMENT: SETS OF BETTING RATES
A group of beliefs can be represented by a set of betting rates.
Imagine yourself advertising a set of betting rates. For each of the propositions A, B, …, K in the set, you offer betting rates p(A), p(B), …, p(K).
In this imaginary game, you are prepared to bet, say,
on A at rate p(A), or
against A at rate 1 − p(A).
FIFTH THOUGHT EXPERIMENT: SIMPLE INCONSISTENCY
These are personal betting rates. Couldn't you choose any fractions you like?
Of course. But you might be inconsistent.
For example, suppose you are concerned with just two possibilities, B (for “below zero”) and ~B:
B: On the night of next March 21, the temperature will fall below 0°C at the Toronto International Airport meteorological station.
~B: On the night of next March 21, the temperature will not fall below 0°C at the Toronto International Airport meteorological station.
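What inconsistency costs you can be made concrete with a small calculation. Suppose you are prepared to bet on any proposition at your advertised rate (paying rate × stake for a payoff of the stake if the proposition is true). If your rates for B and ~B do not sum to 1, an opponent who takes you up on both bets guarantees you a loss whatever the temperature turns out to be. A minimal sketch, assuming unit stakes:

```python
def payoff_bet_on(event_occurs, rate, stake=1.0):
    """Net payoff of betting ON an event: pay rate*stake, receive stake if it occurs."""
    return (stake if event_occurs else 0.0) - rate * stake

def worst_case(rate_b, rate_not_b):
    """Your best outcome if forced to bet on B and on ~B at your advertised rates."""
    outcomes = []
    for b in (True, False):  # B happens, or it doesn't
        total = payoff_bet_on(b, rate_b) + payoff_bet_on(not b, rate_not_b)
        outcomes.append(total)
    return max(outcomes)  # the best you can do across the two outcomes

# Coherent rates sum to 1: you break even either way.
print(round(worst_case(0.6, 0.4), 10))   # 0.0

# Incoherent rates summing to 1.2: you lose 0.2 whether or not it freezes.
print(round(worst_case(0.7, 0.5), 10))   # -0.2
```

This sure-loss contract is the classic "Dutch book", and avoiding it is precisely what coherence requires.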
We have seen that relative frequencies converge on theoretical probabilities. But how fast? When can we begin to use an observed relative frequency as a reliable estimate of a probability? This chapter gives some answers. They are a little more technical than most of this book. For practical purposes, all you need to know is how to use the three boxed Normal Facts below.
EXPERIMENTAL BELL-SHAPED CURVES
On page 191 we had the result of a coin-tossing experiment. The graph was roughly in the shape of a bell. Many observed distributions have this property.
Example: Incomes. In modern industrialized countries we have come to expect income distributions to look something like Curve 1 on the next page, with a few incredibly rich people at the right end of the graph. But in feudal times there was no middle class, so we would expect the income distribution in Curve 2. It is “bimodal”–it has two peaks.
Example: Errors. We can never measure with complete accuracy. Any sequence of “exact” measurements of the same quantity will show some variation. We often average the results. We can think of this as a sample mean. A good measuring device will produce results that cluster about the mean, with a small sample standard deviation. A bad measuring device gives results that vary wildly from one another, giving a large standard deviation.
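The contrast between good and bad measuring devices can be simulated. The sketch below assumes two hypothetical devices measuring the same quantity, each with Gaussian error of an assumed size; both sample means land near the true value, but the sample standard deviations differ sharply.

```python
import random
import statistics

def measure(true_value, noise_sd, n=500, seed=7):
    """Simulate n repeated measurements of true_value with Gaussian error."""
    rng = random.Random(seed)
    return [true_value + rng.gauss(0, noise_sd) for _ in range(n)]

true_length = 10.0  # hypothetical quantity being measured

good = measure(true_length, noise_sd=0.05)  # precise device: results cluster tightly
bad = measure(true_length, noise_sd=1.0)    # sloppy device: results vary wildly

# Both sample means sit close to the true value...
print(round(statistics.mean(good), 2), round(statistics.mean(bad), 2))
# ...but the sample standard deviations differ by a factor of about twenty.
print(round(statistics.stdev(good), 3), round(statistics.stdev(bad), 3))
```

Averaging helps with either device, but the spread of the results, not the mean alone, tells you which device to trust.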
The most important new idea about probability is the probability that something happens, on condition that something else happens. This is called conditional probability.
CATEGORICAL AND CONDITIONAL
We express probabilities in numbers. Here is a story I read in the newspaper. The old tennis pro Ivan was discussing the probability that the rising young star Stefan would beat the established player Boris in the semifinals. Ivan was set to play Pete in the other semifinal match. He said,
The probability that Stefan will beat Boris is 40%.
Or he could have said,
The chance of Stefan's winning is 0.4.
These are categorical statements, no ifs and buts about them. Ivan might also have this opinion:
Of course I'm going to win my semifinal match, but if I were to lose, then Stefan would not be so scared of meeting me in the finals, and he would play better; there would then be a 50–50 chance that Stefan would beat Boris.
This is the probability of Stefan's winning in his semifinal match, conditional on Ivan losing the other semifinal. We call it the conditional probability. Here are other examples:
Categorical: The probability that there will be a bumper grain crop on the prairies next summer.
Conditional: The probability that there will be a bumper grain crop next summer, given that there has been very heavy snowfall the previous winter. […]
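Conditional probability relates to categorical probability by a simple rule: the probability of A given B is the probability of A-and-B divided by the probability of B. A minimal sketch of the grain-crop example, using hypothetical numbers:

```python
def conditional(p_a_and_b, p_b):
    """Pr(A | B) = Pr(A and B) / Pr(B), defined only when Pr(B) > 0."""
    if p_b == 0:
        raise ValueError("cannot condition on a zero-probability event")
    return p_a_and_b / p_b

# Hypothetical figures for the prairie-crop example:
p_snow = 0.40            # very heavy snowfall the previous winter
p_crop_and_snow = 0.20   # bumper crop AND heavy snowfall

# The categorical probability of a bumper crop might be, say, 0.30;
# conditional on heavy snowfall it rises to 0.5.
print(conditional(p_crop_and_snow, p_snow))  # 0.5
```

The "given that" clause picks out which categorical probabilities enter the ratio: the joint event on top, the conditioning event on the bottom.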
Inductive logic is risky. We need it when we are uncertain. Not just uncertain about what will happen, or what is true, but also about what to do. Decisions need more than probability. They are based on the value of possible outcomes of our actions. The technical name for value is utility. This chapter shows how to combine probability and utility. But it ends with a famous paradox.
ACTS
Should you open a small business?
Should you take an umbrella?
Should you buy a Lotto ticket?
Should you move in with someone you love?
In each case you settle on an act. Doing nothing at all counts as an act.
Acts have consequences.
You go broke (or maybe found a great company).
You stay dry when everyone else is sopping wet (or you mislay your umbrella).
You waste a dollar (or perhaps win a fortune).
You live happily ever after (or split up a week later).
You do absolutely nothing active at all: that counts as an act, too. Some consequences are desirable. Some are not. Suppose you can represent the cost or benefit of a possible consequence by a number–so many dollars, perhaps. Call that number the utility of the consequence.
Suppose you can also represent the probability of each possible consequence of an act by a number.
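Combining the two numbers is then straightforward: the expected utility of an act is the sum, over its possible consequences, of probability times utility. A minimal sketch of the umbrella decision, with assumed probabilities and utilities:

```python
def expected_utility(outcomes):
    """Expected utility of an act: sum of probability * utility over consequences.

    `outcomes` is a list of (probability, utility) pairs; probabilities must sum to 1.
    """
    total_p = sum(p for p, _ in outcomes)
    if abs(total_p - 1.0) > 1e-9:
        raise ValueError("probabilities must sum to 1")
    return sum(p * u for p, u in outcomes)

# Hypothetical utilities (units of inconvenience) with a 30% chance of rain:
take_umbrella = [(0.3, -1), (0.7, -1)]    # you carry it, rain or shine
leave_it_home = [(0.3, -20), (0.7, 0)]    # soaked if it rains, free hands if not

print(expected_utility(take_umbrella))  # -1.0
print(expected_utility(leave_it_home))  # -6.0
```

On these assumed numbers, taking the umbrella has the higher expected utility, so that is the act the rule recommends; with a sufficiently low chance of rain the recommendation flips.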