In this paper, we show how to represent a non-Archimedean preference over a set of random quantities by a nonstandard utility function. Non-Archimedean preferences arise when some random quantities have no fair price. Two common situations give rise to non-Archimedean preferences: random quantities whose values must be greater than every real number, and strict preferences between random quantities that are deemed closer in value than every positive real number. We also show how to extend a non-Archimedean preference to a larger set of random quantities. The random quantities that we consider include real-valued random variables, horse lotteries, and acts in the theory of Savage. In addition, we weaken the state-independent utility assumptions made by the existing theories and give conditions under which the utility that represents preference is the expected value of a state-dependent utility with respect to a probability over states.
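A minimal illustration of the two situations, in notation not taken from the paper: work in a hyperreal field *R containing an infinitesimal e > 0, that is, 0 < e < r for every real r > 0. If Y is judged strictly better than X but by less than every positive real amount, no real-valued utility can separate them, whereas the nonstandard assignment

U(Y) = U(X) + e

represents the strict preference for Y over X. Likewise, a random quantity whose value exceeds every real number has no fair real price but can receive an infinite hyperreal utility such as 1/e.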
We report a mathematical result that casts doubt on the possibility of recalibration of probabilities using calibration curves. We then discuss how to interpret this result in the light of behavioral research.
Gordon Belot argues that Bayesian theory is epistemologically immodest. In response, we show that the topological conditions that underpin his criticisms of asymptotic Bayesian conditioning are self-defeating. They require extreme a priori credences regarding, for example, the limiting behavior of observed relative frequencies. We offer a different explication of Bayesian modesty using a goal of consensus: rival scientific opinions should be responsive to new facts as a way to resolve their disputes. We also address Adam Elga’s rebuttal to Belot’s analysis, which focuses attention on the role that the assumption of countable additivity plays in Belot’s criticisms.
Let κ be an uncountable cardinal. Using the theory of conditional probability associated with de Finetti (1974) and Dubins (1975), subject to several structural assumptions for creating sufficiently many measurable sets, and assuming that κ is not a weakly inaccessible cardinal, we show that each probability that is not κ-additive has conditional probabilities that fail to be conglomerable in a partition of cardinality no greater than κ. This generalizes a result of Schervish, Seidenfeld, & Kadane (1984), which established that each finite but not countably additive probability has conditional probabilities that fail to be conglomerable in some countable partition.
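A standard example of the countable case, essentially due to Dubins (1975) and given here only as an illustration rather than as the paper's general construction: let the sample space consist of pairs (i, j) with i in {0, 1} and j a natural number, and let P(i = 0) = P(i = 1) = 1/2. Given i = 1, let j have the countably additive distribution P(j = n | i = 1) = 2^{-n}; given i = 0, let j have a purely finitely additive "uniform" distribution with P(j = n | i = 0) = 0 for every n. For the event A = {i = 1} and the countable partition {j = n},

P(A | j = n) = (1/2 · 2^{-n}) / (1/2 · 2^{-n} + 1/2 · 0) = 1 for every n,

yet P(A) = 1/2. Conglomerability would require P(A) to lie between the infimum and supremum of P(A | j = n) over the partition, so it fails here.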
The Sleeping Beauty problem has spawned a debate between “thirders” and “halfers” who draw conflicting conclusions about Sleeping Beauty's credence that a coin lands heads. Our analysis is based on a probability model for what Sleeping Beauty knows at each time during the experiment. We show that conflicting conclusions result from different modeling assumptions that each group makes. Our analysis uses a standard “Bayesian” account of rational belief with conditioning. No special handling is used for self-locating beliefs or centered propositions. We also explore what fair prices Sleeping Beauty computes for gambles that she might be offered during the experiment.
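A toy rendering of how the modeling choice drives the answer (not the specific models developed in the paper): if the relevant possibilities at an awakening are taken to be the three cells (Heads, Monday), (Tails, Monday), (Tails, Tuesday), treated as equally likely, then

P(Heads | awake) = 1/3,

whereas if the prior P(Heads) = 1/2 is retained and "I am awake now" is modeled as an event of probability 1 under both Heads and Tails, conditioning leaves

P(Heads | awake) = 1/2.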
Statistical decision theory, whether based on Bayesian principles or other concepts such as minimax or admissibility, relies on minimizing expected loss or maximizing expected utility. Loss and utility functions are generally treated as unit-less numerical measures of value for consequences. Here, we address the issue of the units in which loss and utility are settled and the implications that those units have on the rankings of potential decisions. When multiple currencies are available for paying the loss, one must take explicit account of which currency is used as well as the exchange rates between the various available currencies.
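A small numerical illustration of the point, with figures chosen here only for concreteness: let two equally likely states s1 and s2 fix the exchange rate at 2 dollars per euro in s1 and 0.5 dollars per euro in s2. Decision A loses 1 dollar in each state; decision B loses 1 euro in each state. Settled in dollars,

E[loss of A] = 1 and E[loss of B] = (1/2)(2) + (1/2)(0.5) = 1.25,

so A is preferred; settled in euros,

E[loss of A] = (1/2)(0.5) + (1/2)(2) = 1.25 and E[loss of B] = 1,

so B is preferred. The ranking of the two decisions depends on the currency in which the loss is settled.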
This important collection of essays is a synthesis of foundational studies in Bayesian decision theory and statistics. An overarching topic of the collection is understanding how the norms for Bayesian decision making should apply in settings with more than one rational decision maker and then tracing out some of the consequences of this turn for Bayesian statistics. There are four principal themes to the collection: cooperative, non-sequential decisions; the representation and measurement of 'partially ordered' preferences; non-cooperative, sequential decisions; and pooling rules and Bayesian dynamics for sets of probabilities. The volume will be particularly valuable to philosophers concerned with decision theory, probability, and statistics, as well as to statisticians, mathematicians, and economists.
According to Mark Rubinstein (2006) ‘In 1952, anticipating Kenneth Arrow and John Pratt by over a decade, he [de Finetti] formulated the notion of absolute risk aversion, used it in connection with risk premia for small bets, and discussed the special case of constant absolute risk aversion.’ The purpose of this note is to ascertain the extent to which this is true, and at the same time, to correct certain minor errors that appear in de Finetti's work.
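For reference, the notions at issue, stated in their now-standard Arrow-Pratt form rather than in de Finetti's notation, are the coefficient of absolute risk aversion

A(x) = -u''(x) / u'(x),

the approximation for the risk premium of a small bet with variance sigma^2 about wealth x,

pi ≈ (1/2) A(x) sigma^2,

and the fact that constant absolute risk aversion A(x) = alpha > 0 characterizes, up to positive affine transformation, the exponential utility u(x) = -exp(-alpha x).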
It has long been known that the practice of testing all hypotheses at the same level (such as 0.05), regardless of the distribution of the data, is not consistent with Bayesian expected utility maximization. According to de Finetti's “Dutch Book” argument, procedures that are not consistent with expected utility maximization are incoherent and they lead to gambles that are sure to lose no matter what happens. In this paper, we use a method to measure the rate at which incoherent procedures are sure to lose, so that we can distinguish slightly incoherent procedures from grossly incoherent ones. We present an analysis of testing a simple hypothesis against a simple alternative as a case-study of how the method can work.
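To see why a fixed level conflicts with expected utility maximization (a textbook observation, not the paper's incoherence-rate machinery): with prior probability pi_0 on the simple hypothesis H0, densities f0 and f1, and losses L_I for a false rejection and L_II for a false acceptance, the Bayes rule rejects H0 exactly when

f1(x) / f0(x) > (pi_0 L_I) / ((1 - pi_0) L_II),

a cutoff that varies with the prior, the losses, and the distributions, whereas a fixed level such as 0.05 holds the type I error probability constant regardless of those ingredients.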
In this chapter we discuss the sensitivity of the minimax theorem to the cardinality of the set of pure strategies. In this light, we examine an infinite game due to Wald and its solutions in the space of finitely additive (f.a.) strategies.
Finitely additive joint distributions depend, in general, upon the order in which expectations are composed out of the players' separate strategies. This is connected to the phenomenon of “non-conglomerability” (so-called by deFinetti), which we illustrate and explain. It is shown that the player with the “inside integral” in a joint f.a. distribution has the advantage.
In reaction to this asymmetry, we propose a family of (weighted) symmetrized joint distributions and show that this approach permits “fair” solutions to fully symmetric games, e.g., Wald's game. We develop a minimax theorem for this family of symmetrized joint distributions using a condition formulated in terms of a pseudo-metric on the space of f.a. strategies. Moreover, the resulting game can be solved in the metric completion of this space. The metrical approach to a minimax theorem is contrasted with the more familiar appeal to compactifications, and we explain why the latter appears not to work for our purposes of making symmetric games “fair.” We conclude with a brief discussion of three open questions relating to our proposal for f.a. game theory.
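The order-dependence can be seen in a standard rendering of Wald's game, sketched here rather than taken from the chapter's development: each player names a natural number and the larger number wins, so the payoff to player I is K(i, j) = sign(i - j). Let each player use a purely finitely additive "uniform" strategy (mu for player I, lambda for player II) assigning probability 0 to every finite set. With the integral over j on the inside, j exceeds any fixed i with lambda-probability 1, so

∫ [ ∫ K(i, j) dlambda(j) ] dmu(i) = -1,

while with i on the inside the same reasoning gives +1. The value of the iterated integral is thus determined by whose strategy occupies the inside position.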
INTRODUCTION
In this essay we derive results for finitely additive (mixed) strategies in two-person, zero-sum games with bounded payoffs.
For Savage (1954) as for deFinetti (1974), the existence of subjective (personal) probability is a consequence of the normative theory of preference. (DeFinetti achieves the reduction of belief to desire with his generalized Dutch-Book argument for previsions.) Both Savage and deFinetti rebel against legislating countable additivity for subjective probability. They require merely that probability be finitely additive. Simultaneously, they insist that their theories of preference are weak, accommodating all but self-defeating desires. In this chapter we dispute these claims by showing that the following three cannot simultaneously hold:
i. Coherent belief is reducible to rational preference, i.e. the generalized Dutch-Book argument fixes standards of coherence.
ii. Finitely additive probability is coherent.
iii. Admissible preference structures may be free of consequences, i.e. they may lack prizes whose values are robust against all contingencies.
I. INTRODUCTION
One of the most important results of the subjectivist theories of Savage and deFinetti is the thesis that, normatively, preference circumscribes belief. Specifically, these authors argue that the theory of subjective probability is reducible to the theory of reasonable preference, i.e. coherent belief is a consequence of rational desire. In Savage's (1954) axiomatic treatment of preference, the existence of a quantitative subjective probability is assured once the postulates governing preference are granted. In deFinetti's (1974) discussion of prevision, avoidance of a (uniform) loss for certain is thought to guarantee agreement with the requirements of subjective probability (sometimes called the avoidance of “Dutch Book”).
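In de Finetti's formulation, the requirement is this (a standard statement, included here for reference): previsions P(X_1), ..., P(X_n) for bounded random quantities are coherent just in case no choice of real stakes c_1, ..., c_n makes the net gain

G = sum_i c_i (X_i - P(X_i))

uniformly negative, i.e. bounded above by some -epsilon < 0 in every state. Coherence in this sense is equivalent to the previsions being the expectations of the X_i under some finitely additive probability.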
In the seventeen years between the first (1954) and second (1971) editions of his book The Foundations of Statistics, Savage reported a change in the “climate of opinion” about foundations. That change, he said, “would obliterate rather than restore” his earlier thinking about the relationship between the two major schools of statistics that were the subject of his inquiry. What in the early 1950s started out for Savage as a task of building Bayesian expected utility foundations for common, so-called Frequentist statistics – which for Savage included the Fisher-Neyman-Pearson-Wald program that was dominant in the British-American school from the 1930s – revealed itself, instead, to be an impossibility. Contrary principles separated Bayesian decision theory from what practicing statisticians of the day were taught to do. Significance tests, tests of hypotheses, and confidence intervals give quantitative indices, such as confidence levels, that only accidentally and approximately cohere with the Bayesian theory that Savage hoped might elucidate and justify them.
Since the second edition of The Foundations of Statistics and Savage's premature death (both in 1971), we have come to understand much better the extent of the conflict between Bayesian and Frequentist statistical principles of evidence. Some limitations in Frequentist methods, which existed only in the lore of practicing statisticians, gained theoretical footing through the Bayesian point of view. Consider the very general problem of how, within the Frequentist program, to deal with conditional inference, e.g., whether or not to condition on an ancillary statistic.
Applying the theory (of personal probability) naively one quickly comes to the conclusion that randomization is without value for statistics. This conclusion does not sound right; and it is not right. Closer examination of the road to this untenable conclusion does lead to new insights into the role and limitations of randomization but does by no means deprive randomization of its important function in statistics.
L. J. Savage (1961)
Though we all feel sure that randomization is an important invention, the theory of subjective probability reminds us that we have not fully understood randomization.… The need for randomization presumably lies in the imperfection of actual people and, perhaps, in the fact that more than one person is ordinarily concerned with an investigation.
L. J. Savage (1962)
Randomization has thus been a puzzle for Bayesian theory for many years. In this essay, we give our current views on this subject.
There are two principal arguments for randomization that we are familiar with. The first is to support a randomization-analysis of the data. This notion goes back to Fisher, and is exposited in a series of papers by Kempthorne (1955, 1966, 1977). It asks whether what is observed is surprising given all the other designs that might have been randomly selected and data that might have been observed, but were not. By its appeal to what did not occur, such an analysis violates the likelihood principle; hence, it is not compatible with Bayesian ideas.
This investigation combines two questions for expected utility theory:
When do the shared preferences among expected utility maximizers conform to the dictates of expected utility?
What is the impact on expected utility theory of allowing preferences for prizes to be state-dependent?
Our principal conclusion (Theorem 4) establishes very restrictive necessary and sufficient conditions for the existence of a Pareto, Bayesian compromise of preferences between two Bayesian agents, even when utilities are permitted to be state-dependent and identifiable. This finding extends our earlier result (Theorem 2, 1989a), which applies provided that all utilities are state-independent. A subsidiary theme is a decision theoretic analysis of common rules for “pooling” expert probabilities.
Against the backdrop of “horse lottery” theory (Anscombe and Aumann 1963) and subject to a weak Pareto rule, we show, generally, that there is no Bayesian compromise between two Bayesian agents even when state-dependent utilities are entertained in an identifiable way. The word “identifiable” is important because if state-dependence is permitted merely by dropping the Anscombe-Aumann axiom (Axiom 4 here) for “state-independence,” then although a continuum of possible Bayesian compromises emerges, this comes at the cost of an extreme underdetermination of an agent's personal probability and utility given the agent's preferences. Instead, when state-dependence is monitored through (our version of) the approach of Karni, Schmeidler, and Vind (1983), the general impossibility of a Bayesian, Pareto compromise in preferences reappears.
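A toy numerical check (ours, not the paper's argument) shows how a naive compromise can already fail the weak Pareto rule. Let there be two states s1, s2 and two prizes a, b, with Agent 1 holding P1(s1) = 0.8, u1(a) = 1, u1(b) = 0, and Agent 2 holding P2(s1) = 0.2, u2(a) = 0, u2(b) = 1. For the acts f = (a on s1, b on s2) and g = (b on s1, a on s2),

EU_1(f) = 0.8 > 0.2 = EU_1(g) and EU_2(f) = 0.8 > 0.2 = EU_2(g),

so both agents strictly prefer f to g. Yet the "averaged" agent with P(s1) = 0.5 and u(a) = u(b) = 0.5 is indifferent between every pair of acts, and so fails to respect the shared strict preference. This illustrates the kind of obstacle the theorem addresses.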
When can a Bayesian investigator select an hypothesis H and design an experiment (or a sequence of experiments) to make certain that, given the experimental outcome(s), the posterior probability of H will be lower than its prior probability? We report an elementary result which establishes sufficient conditions under which this reasoning to a foregone conclusion cannot occur. Through an example, we discuss how this result extends to the perspective of an onlooker who agrees with the investigator about the statistical model for the data but who holds a different prior probability for the statistical parameters of that model. We consider, specifically, one-sided and two-sided statistical hypotheses involving i.i.d. Normal data with conjugate priors. In a concluding section, using an “improper” prior, we illustrate how the preceding results depend upon the assumption that probability is countably additive.
EXPECTED CONDITIONAL PROBABILITIES AND REASONING TO FOREGONE CONCLUSIONS
Suppose that an investigator has his or her designs on rejecting, or at least making doubtful, a particular statistical hypothesis H. To what extent does basic inductive methodology insure that, without violating the total evidence requirement, this intent cannot be sure to succeed? We distinguish two forms of the question:
Can the investigator plan an experiment so that he or she is certain it will halt with evidence that disconfirms H?
Can the investigator plan an experiment so that others are certain that the investigator will halt with evidence that disconfirms H?
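The elementary identity behind answers to the first question is the law of total probability: if the experiment yields data X and probability is countably additive, then

P(H) = E[ P(H | X) ],

so the posterior probability of H cannot be strictly below (or strictly above) the prior for every possible outcome, and reasoning to a foregone conclusion is ruled out for the investigator's own probabilities. The onlooker's version of the question, and the merely finitely additive case, require the further analysis described above.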
We review the axiomatic foundations of subjective utility theory with a view toward understanding the implications of each axiom. We consider three different approaches, namely, the construction of utilities in the presence of canonical probabilities, the construction of probabilities in the presence of utilities, and the simultaneous construction of both probabilities and utilities. We focus attention on the axioms of independence and weak ordering. The independence axiom is seen to be necessary to prevent a form of Dutch Book in sequential problems.
Our main focus is to examine the implications of not requiring the weak order axiom. We assume that gambles are partially ordered. We consider both the construction of probabilities when utilities are given and the construction of utilities in the presence of canonical probabilities. In the first case we find that a partially ordered set of gambles leads to a set of probabilities with respect to which the expected utility of a preferred gamble is higher than that of a dispreferred gamble. We illustrate some comparisons with theories of upper and lower probabilities. In the second case, we find that a partially ordered set of gambles leads to a set of lexicographic utilities, each of which ranks preferred gambles higher than dispreferred gambles.
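A small example of the first construction, offered here only for illustration: with two states and a fixed utility, suppose the preference data determine only the set of probabilities

P = { p : 0.4 <= p(s1) <= 0.6 }.

The gamble f paying 1 on s1 and 0 on s2 and the gamble g paying 0 on s1 and 1 on s2 are then incomparable, since E_p[f] > E_p[g] when p(s1) > 0.5 and the reverse holds when p(s1) < 0.5; by contrast, a gamble h paying 0.3 in both states is dispreferred to f, because E_p[f] >= 0.4 > 0.3 = E_p[h] for every p in the set.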
I. INTRODUCTION: SUBJECTIVE EXPECTED UTILITY [SEU] THEORY
The theory of (subjective) expected utility is a normative account of rational decision making under uncertainty.
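Its central claim is the subjective expected utility representation: a rational agent's preferences over acts can be represented by a personal probability p over states and a utility u over consequences, so that act f is weakly preferred to act g just in case

sum_s p(s) u(f(s)) >= sum_s p(s) u(g(s)),

with the sum replaced by an integral for infinite state spaces.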