Laurie Paul argues that, when it comes to many of your most significant life-changing decisions, the principles of rational choice are silent. That is because, in these cases, you anticipate that one of your choice options would yield a transformative experience. We argue that such decisions are best seen as ones in which you anticipate awareness growth. You do not merely lack knowledge about which possible outcome will arise from a transformative option; you lack knowledge about what the possible outcomes are. We show how principles of rational choice can be extended to cases of anticipated awareness growth.
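A minimal toy sketch may help fix the distinction (this is our own illustration, with arbitrary numbers, not the authors' formal framework): under ordinary uncertainty the agent's outcome space is fixed, whereas awareness growth enlarges the space itself, so no prior expected-utility calculation could have anticipated the new outcome.

    # Toy illustration (not the authors' formalism, numbers arbitrary):
    # ordinary uncertainty vs. awareness growth in expected utility.

    def expected_utility(credences, utilities):
        """Standard expected utility over a fixed space of outcomes."""
        return sum(credences[o] * utilities[o] for o in credences)

    # Ordinary uncertainty: the agent knows the possible outcomes,
    # but not which one will obtain.
    credences = {"love_it": 0.6, "regret_it": 0.4}
    utilities = {"love_it": 10.0, "regret_it": -5.0}
    print(expected_utility(credences, utilities))  # 4.0

    # Awareness growth: an outcome the agent could not previously
    # conceive of enters the space; credence must be reallocated to it,
    # and no earlier calculation took its utility into account.
    credences = {"love_it": 0.5, "regret_it": 0.3, "unconceived": 0.2}
    utilities = {"love_it": 10.0, "regret_it": -5.0, "unconceived": 2.0}
    print(expected_utility(credences, utilities))  # 3.9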
The main aim of this Element is to introduce the topic of limited awareness, and changes in awareness, to those interested in the philosophy of decision-making and uncertain reasoning. While this topic has long been of interest to economists and computer scientists, it has only recently been subject to philosophical investigation. Indeed, at first sight limited awareness seems to evade any systematic treatment: it lies beyond the kind of uncertainty that can be managed. On the one hand, an agent has no control over what contingencies she is and is not aware of at a given time, and any awareness growth takes her by surprise. On the other hand, agents apparently learn to identify the situations in which they are more or less likely to experience limited awareness and subsequent awareness growth. How can these two sides be reconciled? That is the puzzle we confront in this Element.
The Repugnant Conclusion is an implication of some approaches to population ethics. It states, in Derek Parfit's original formulation,
For any possible population of at least ten billion people, all with a very high quality of life, there must be some much larger imaginable population whose existence, if other things are equal, would be better, even though its members have lives that are barely worth living. (Parfit 1984: 388)
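To see why totalist arithmetic delivers this verdict, consider a small calculation (the numbers are ours, chosen only for concreteness): if overall value is total welfare, a vast population of barely-worthwhile lives can outscore a large population of excellent ones.

    # Illustrative totalist arithmetic (our numbers, for concreteness only).
    # Under total utilitarianism, value = population size * average welfare.

    pop_A = 10_000_000_000       # ten billion people...
    wel_A = 100.0                # ...each with a very high quality of life
    pop_Z = 2_000_000_000_000    # a vastly larger population...
    wel_Z = 1.0                  # ...with lives barely worth living

    total_A = pop_A * wel_A      # 1.0e12
    total_Z = pop_Z * wel_Z      # 2.0e12
    print(total_Z > total_A)     # True: Z comes out 'better', which
                                 # many find repugnant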
Many examples of calibration in climate science raise no alarms regarding model reliability. We examine one example and show that, because it employs classical hypothesis testing, it involves calibrating a base model against data that are also used to confirm the model. This runs counter to the ‘intuitive position’ (in favor of use-novelty and against double counting). We argue, however, that aspects of the intuitive position are upheld by some methods, in particular, the general cross-validation method. We also discuss how cross-validation relates to other prominent classical methods, such as the Akaike information criterion and the Bayesian information criterion.
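As a rough illustration of the model-selection methods the article compares (a generic sketch on synthetic data, not the article's climate example), both leave-one-out cross-validation and AIC penalize complexity, so neither simply rewards the model that best fits the data it was calibrated on.

    # Generic sketch (synthetic data, not the article's climate example):
    # leave-one-out cross-validation vs. AIC for choosing polynomial degree.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 30)
    y = 2 * x + rng.normal(scale=0.1, size=x.size)  # true model is linear

    def loo_cv_error(degree):
        """Mean squared leave-one-out prediction error."""
        errs = []
        for i in range(x.size):
            mask = np.arange(x.size) != i
            coef = np.polyfit(x[mask], y[mask], degree)
            errs.append((np.polyval(coef, x[i]) - y[i]) ** 2)
        return np.mean(errs)

    def aic(degree):
        """AIC under Gaussian errors: n*log(RSS/n) + 2k."""
        coef = np.polyfit(x, y, degree)
        rss = np.sum((np.polyval(coef, x) - y) ** 2)
        k = degree + 2                 # coefficients plus noise variance
        return x.size * np.log(rss / x.size) + 2 * k

    for d in (1, 3, 6):
        print(d, round(loo_cv_error(d), 4), round(aic(d), 1))
    # Both criteria favour the low-degree model: raw in-sample fit alone
    # would reward the most flexible polynomial, but the held-out error
    # and the 2k penalty guard against double counting the fitting data.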
This article considers a puzzling conflict between two positions that are each compelling: (a) it is irrational for an agent to pay to avoid ‘free’ evidence, and (b) rational agents may have imprecise beliefs. An important aspect of responding to this conflict is resolving the question of how rational (imprecise) agents ought to make sequences of decisions; we make explicit what the key alternatives are and defend our own approach. We then endorse a resolution of the puzzle: we privilege decision theories that merely permit avoiding free evidence over those that make avoiding free evidence obligatory.
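Behind position (a) stands I. J. Good's classic result that a precise expected-utility maximizer never strictly prefers to decline cost-free evidence. A toy numeric check (our own numbers, purely illustrative):

    # Toy check of Good's theorem for a precise Bayesian (our numbers):
    # cost-free evidence never has negative expected value.

    p_h = 0.5                          # prior credence in hypothesis H
    likelihoods = {"e": (0.8, 0.3),    # P(e|H), P(e|not-H)
                   "not_e": (0.2, 0.7)}
    utilities = {("act", True): 10, ("act", False): -8,
                 ("refrain", True): 0, ("refrain", False): 0}

    def best_eu(p):
        """Expected utility of the best act given credence p in H."""
        return max(p * utilities[(a, True)] + (1 - p) * utilities[(a, False)]
                   for a in ("act", "refrain"))

    # Decide now, without looking at the evidence:
    eu_ignore = best_eu(p_h)

    # Look first, then decide on the updated credence:
    eu_look = 0.0
    for e, (l_h, l_not) in likelihoods.items():
        p_e = p_h * l_h + (1 - p_h) * l_not   # P(e)
        posterior = p_h * l_h / p_e           # P(H|e) by Bayes' theorem
        eu_look += p_e * best_eu(posterior)

    print(eu_look >= eu_ignore)        # True: looking is (weakly) better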
There has been much recent interest in imprecise probabilities, models of belief that allow unsharp or fuzzy credence. There have also been some influential criticisms of this position. Here we argue, chiefly against Elga (2010), that subjective probabilities need not be sharp. The key question is whether the imprecise probabilist can make reasonable sequences of decisions. We argue that she can. We outline Elga's argument and clarify the assumptions he makes and the principles of rationality he is implicitly committed to. We argue that these assumptions are too strong and that rational imprecise choice is possible in the absence of these overly strong conditions.
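The flavor of the sequential-choice worry can be conveyed schematically (a reconstruction with illustrative payoffs, not Elga's exact presentation): an agent with interval-valued credence in H faces two bets in sequence, each individually rejectable on her credal state, yet accepting both guarantees a gain.

    # Schematic Elga-style bet sequence (payoffs illustrative).
    # Credence in H is the interval [0.1, 0.8].
    credence_interval = (0.1, 0.8)

    def eu_range(payoff_h, payoff_not_h):
        """Range of expected values across the credal interval."""
        lo, hi = credence_interval
        vals = [p * payoff_h + (1 - p) * payoff_not_h for p in (lo, hi)]
        return min(vals), max(vals)

    # Bet A: lose 10 if H, win 15 otherwise. Bet B: the mirror image.
    print(eu_range(-10, 15))   # (-5.0, 12.5): accepting or rejecting
    print(eu_range(15, -10))   # (-7.5, 10.0): both seem permissible
    # Yet accepting both bets pays +5 whether or not H holds, so a rule
    # that licenses rejecting each bet in sequence forgoes a sure gain.
    # The article's question is which decision rules for imprecise
    # agents avoid this verdict.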
Richard Rudner famously argues that the communication of scientific advice to policy makers involves ethical value judgments. His argument has, however, been rightly criticized. This article revives Rudner's conclusion by strengthening both of his lines of argument: we generalize his initial assumption regarding the form in which scientists must communicate their results, and we complete his ‘backup’ argument by appealing to the difference between private and public decisions. Our conclusion that science advisors must, for deep-seated pragmatic reasons, make value judgments is further bolstered by reflections on how the scientific contribution to policy is far less straightforward than the Rudner-style model suggests.