This Element situates epistemic game theory at the intersection of decision theory, game theory, and interactive epistemology. It provides an overview and a critical assessment of some of the most classical results and contributions in the field: the epistemic characterization of Nash equilibrium, the epistemic interpretation of mixed actions, rationalizability in static games, and sub-game perfect equilibrium in dynamic games. The Element furthermore discusses more recent contributions that highlight the importance of correlated beliefs in games, as well as experimental and empirical findings on higher-order strategic reasoning.
What price should you be willing to pay for a tiny probability of an astronomically large gain, or to avoid a tiny probability of an astronomically large loss? Should you be willing to pay any finite price, if the potential gains or losses are large enough? Fanaticism says you should, while anti-fanaticism says you should not. Focusing on morally motivated decision-making, this Element explores arguments for and against both positions, ultimately defending the intermediate view that rationality permits a range of dispositions toward extreme risks, while ruling out the most comprehensive forms of both fanaticism and anti-fanaticism. The final section considers practical implications, arguing that under real-world circumstances any view satisfying a minimal principle of rationality must very often rank options by expected value, and thus sometimes give great weight to intuitively small probabilities, but that we nonetheless retain rational flexibility in sufficiently extreme cases.
Disagreement is a common feature of the social world. For various reasons, however, we sometimes need to resolve a disagreement into a single set of opinions. This can be achieved by pooling the opinions of the individuals who make up the group. In this Element, we provide an opinionated survey of some ways of pooling opinions: linear pooling, multiplicative pooling (including geometric pooling), and pooling through imprecise probabilities. While we give significant attention to the axiomatic approach to evaluating pooling strategies, we also evaluate them in terms of the epistemic and practical goals they might meet. In doing so, we connect opinion pooling to some philosophical problems in social epistemology and the philosophy of action, illuminating different perspectives one might take when figuring out how to pool opinions for a given purpose. This title is also available as Open Access on Cambridge Core.
We often care not only about what happens to us, but when it happens to us. We prefer that good experiences happen sooner, rather than later, and that our suffering lies in our past, rather than our future. Common sense suggests that some ways of caring about time are rational, and others are not, but it is surprisingly challenging to provide justifying explanations for these tendencies. This Element is an opinionated, nontechnical guided tour through the major arguments for and against different kinds of so-called time bias.
Beliefs come in degrees, and we often represent those degrees with numbers. We might say, for example, that we are 90% confident in the truth of some scientific hypothesis, or only 30% confident in the success of some risky endeavour. But what do these numbers mean? What, in other words, is the underlying psychological reality to which the numbers correspond? And what constitutes a meaningful difference between numerically distinct representations of belief? In this Element, we discuss the main approaches to the measurement of belief. These fall into two broad categories, epistemic and decision-theoretic, with divergent foundations in the theory of measurement. Epistemic approaches explain the measurement of belief by appeal to relations between belief states themselves, whereas decision-theoretic approaches appeal to relations between beliefs and desires in the production of choice and preferences.
For most of its history, decision theory has investigated the rational choices of humans under the assumption of static preferences. Human preferences, however, change. In recent years, decision theory has increasingly acknowledged the reality of preference change throughout life. This Element provides an accessible introduction and new contributions to the debates on preference change. It is divided into three chapters. In the first chapter, the authors discuss what preference change is and whether we can integrate it into decision theory. In the second chapter, they present models of preference change, including a novel proposal of their own. In the third and final chapter, they discuss how we can rationally choose a course of action when our preferences might change. Both the transformative experience literature and recent work on choosing for changing selves are discussed.
This Element offers an accessible but technically detailed review of expected utility theory (EU), which is a model of individual decision-making under uncertainty that is central for both economics and philosophy. The Element's approach falls between the history of ideas and economic methodology. At the historical level, it reviews EU by following its conceptual evolution from its original formulation in the eighteenth century through its transformations and extensions in the mid-twentieth century to its more recent supersession by post-EU theories such as prospect theory. In reconstructing the history of EU, it focuses on the methodological issues that have accompanied its evolution, such as whether the utility function and the other components of EU correspond to actual mental entities. On many of these issues, no consensus has yet been reached, and in this Element the author offers his view on them.
Evolutionary game theory originated in population biology from the realisation that frequency-dependent fitness introduced a strategic element into evolution. Since its development, evolutionary game theory has been adopted by many social scientists and philosophers to analyse interdependent decision problems played by boundedly rational individuals. Its study has led to theoretical innovations of great interest for the biological and social sciences. For example, theorists have developed a number of dynamical models which can be used to study how populations of interacting individuals change their behaviours over time. This introductory Element covers the two main approaches to evolutionary game theory: the static analysis of evolutionary stability concepts, and the study of dynamical models, their convergence behaviour, and their rest points. It also explores the many fascinating and complex connections between the two approaches.
The Nash bargaining problem provides a framework for analyzing problems where parties have imperfectly aligned interests. This Element reviews the parts of bargaining theory most important in philosophical applications, and to social contract theory in particular. It discusses rational choice analyses of bargaining problems that focus on axiomatic analysis, according to which a solution of a given bargaining problem satisfies certain formal criteria, and strategic bargaining, according to which a solution results from the moves of ideally rational and knowledgeable claimants. Next, it discusses the conventionalist analyses of bargaining problems that focus on how members of a society can settle into bargaining conventions via learning and focal points. In the concluding section, this Element discusses how philosophers use bargaining theory to analyze the social contract.
Suppose that you prefer A to B, B to C, and C to A. Your preferences violate Expected Utility Theory by being cyclic. Money-pump arguments offer a way to show that such violations are irrational. Suppose that you start with A. Then you should be willing to trade A for C and then C for B. But then, once you have B, you are offered a trade back to A for a small cost. Since you prefer A to B, you pay the small sum to trade from B to A. But now you have been turned into a money pump. You are back to the alternative you started with but with less money. This Element shows how each of the axioms of Expected Utility Theory can be defended by money-pump arguments of this kind. This title is also available as Open Access on Cambridge Core.
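The money-pump scenario described above can be simulated in a few lines. This is an illustrative sketch, not code from the Element; the trade sequence and the fee of 1 unit are assumptions made for the example:

```python
# Cyclic preferences: the agent prefers A to B, B to C, and C to A.
# Each pair reads (preferred, dispreferred).
prefers = {("A", "B"), ("B", "C"), ("C", "A")}

def accepts_trade(current, offered):
    """The agent accepts a trade iff it strictly prefers the offered item."""
    return (offered, current) in prefers

holding, money = "A", 0.0
# The bookie offers: A -> C (free), C -> B (free), then B -> A for a fee.
for offered, fee in [("C", 0.0), ("B", 0.0), ("A", 1.0)]:
    if accepts_trade(holding, offered):
        holding = offered
        money -= fee

# The agent is back where it started, but poorer: a money pump.
assert holding == "A" and money == -1.0
```

Each individual trade looks favourable by the agent's own lights, yet the sequence leaves it holding the original alternative minus the fee, which is the sense in which cyclic preferences are exploitable.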
Drawing and building on the existing literature, this Element explores the interesting and challenging philosophical terrain where issues regarding cooperation, commitment, and control intersect. Section 1 discusses interpersonal and intrapersonal Prisoner's Dilemma situations, and the possibility of a set of unrestrained choices adding up in a way that is problematic relative to the concerns of the choosers involved. Section 2 focuses on the role of precommitment devices in rational choice. Section 3 considers the role of resoluteness in rational choice and action. And Section 4 delves into some related complications concerning the nature of actions and the nature of intentions.
Evidential Decision Theory is a radical theory of rational decision-making. It recommends that instead of thinking about what your decisions *cause*, you should think about what they *reveal*. This Element explains in simple terms why thinking in this way makes a big difference, and argues that doing so makes for *better* decisions. An appendix gives an intuitive explanation of the measure-theoretic foundations of Evidential Decision Theory.
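The contrast between what a decision *causes* and what it *reveals* can be made concrete with a Newcomb-style calculation. This is a hypothetical illustration; the predictor accuracy and payoff figures are assumptions for the example, not taken from the Element:

```python
# Newcomb's problem: a near-perfect predictor fills an opaque box with
# $1,000,000 iff it predicted you would take only that box.
accuracy = 0.99
payoffs = {("one_box", "predicted_one"): 1_000_000,
           ("one_box", "predicted_two"): 0,
           ("two_box", "predicted_one"): 1_001_000,
           ("two_box", "predicted_two"): 1_000}

def evidential_value(act):
    # EDT weights outcomes by P(prediction | act): choosing an act is
    # evidence the predictor foresaw it, though the choice does not
    # cause the prediction.
    match = "predicted_one" if act == "one_box" else "predicted_two"
    miss = "predicted_two" if act == "one_box" else "predicted_one"
    return accuracy * payoffs[(act, match)] + (1 - accuracy) * payoffs[(act, miss)]

# One-boxing has higher "news value" than two-boxing.
assert evidential_value("one_box") > evidential_value("two_box")
```

Conditioning on the act changes the probability assigned to each prediction, which is why the evidential calculation favours one-boxing even though taking both boxes causally dominates.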
The main aim of this Element is to introduce the topic of limited awareness, and changes in awareness, to those interested in the philosophy of decision-making and uncertain reasoning. While it has long been of interest to economists and computer scientists, this topic has only recently been subject to philosophical investigation. Indeed, at first sight limited awareness seems to evade any systematic treatment: it is beyond the uncertainty that can be managed. On the one hand, an agent has no control over what contingencies she is and is not aware of at a given time, and any awareness growth takes her by surprise. On the other hand, agents apparently learn to identify the situations in which they are more and less likely to experience limited awareness and subsequent awareness growth. How can these two sides be reconciled? That is the puzzle we confront in this Element.
An agent often does not have precise probabilities or utilities to guide resolution of a decision problem. I advance a principle of rationality for making decisions in such cases. To begin, I represent the doxastic and conative state of an agent with a set of pairs of a probability assignment and a utility assignment. Then I support a decision principle that allows any act that maximizes expected utility according to some pair of assignments in the set. Assuming that computation of an option's expected utility uses comprehensive possible outcomes that include the option's risk, no consideration supports a stricter requirement.
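The permissive decision principle described above can be sketched as follows. The notation and the toy numbers are my own, not the author's, and for brevity the two admissible pairs here share a utility assignment and differ only in their probabilities:

```python
# An act is permissible iff it maximizes expected utility relative to
# at least one (probability, utility) pair in the agent's set.
acts = ["safe", "risky"]
states = ["s1", "s2"]

util = {"safe":  {"s1": 1.0, "s2": 1.0},
        "risky": {"s1": 3.0, "s2": 0.0}}

# The agent's doxastic-conative state: a set of (probability, utility) pairs.
pairs = [({"s1": 0.2, "s2": 0.8}, util),
         ({"s1": 0.6, "s2": 0.4}, util)]

def eu(act, prob, u):
    return sum(prob[s] * u[act][s] for s in states)

def permissible(act):
    return any(eu(act, p, u) == max(eu(a, p, u) for a in acts)
               for p, u in pairs)

# "safe" maximizes EU under the first pair, "risky" under the second,
# so the principle permits both.
assert permissible("safe") and permissible("risky")
```

The example shows the principle's permissiveness: because different pairs in the set rank the acts differently, more than one act can count as rational.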
Our beliefs come in degrees. I'm 70% confident it will rain tomorrow, and 0.001% sure my lottery ticket will win. What's more, we think these degrees of belief should abide by certain principles if they are to be rational. For instance, you shouldn't believe that a person's taller than 6ft more strongly than you believe that they're taller than 5ft, since the former entails the latter. In Dutch Book arguments, we try to establish the principles of rationality for degrees of belief by appealing to their role in guiding decisions. In particular, we show that degrees of belief that don't satisfy the principles will always guide action in some way that is bad or undesirable. In this Element, we present Dutch Book arguments for the principles of Probabilism, Conditionalization, and the Reflection Principle, among others, and we formulate and consider the most serious objections to them.
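The Dutch Book mechanism gestured at above can be shown in a few lines of arithmetic. This is a minimal sketch of the argument for Probabilism, with credences invented for the example:

```python
# An agent whose credences in "rain" and "not rain" sum to more than 1
# violates Probabilism and can be booked.
cred_rain, cred_not_rain = 0.7, 0.5  # sum to 1.2

# A bet paying 1 if E is fair, by the agent's lights, at price credence(E).
# The bookie sells the agent both bets.
price_paid = cred_rain + cred_not_rain  # agent pays 1.2 in total

# Exactly one of the two bets pays out 1, whatever the weather.
net_if_rain = 1 - price_paid      # about -0.2
net_if_not_rain = 1 - price_paid  # about -0.2

# The agent loses money in every possible state: a sure loss.
assert net_if_rain < 0 and net_if_not_rain < 0
```

Because each bet is acceptable to the agent given its own credences, yet the package guarantees a loss, the incoherent credences are exposed as guiding action in a way that is bad by the agent's own standards.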