
Thoroughly Modest Believing: Immodesty to the Rescue?

Published online by Cambridge University Press:  09 March 2026

David Christensen*
Affiliation:
Brown University, USA

Abstract

In response to the self-undermining problem for modest accounts of rational belief, some have proposed that an agent may rationally lose confidence in the truth of these accounts, while continuing to believe as the accounts prescribe. Such agents believe akratically. Many reject the possibility of rational akrasia. Others have defended it—at least in cases where an agent rationally sees her own beliefs as more accurate than rational alternatives would be. This paper argues that akrasia can be rational, but that defending rational akrasia based on an agent’s views about accuracy cannot succeed. Fortunately, however, the defense is not necessary.

Information

Type
Symposium
Creative Commons
CC BY-NC-ND
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (http://creativecommons.org/licenses/by-nc-nd/4.0), which permits non-commercial re-use, distribution, and reproduction in any medium, provided that no alterations are made and the original article is properly cited. The written permission of Cambridge University Press or the rights holder(s) must be obtained prior to any commercial use and/or adaptation of the article.
Copyright
© The Author(s), 2026. Published by Cambridge University Press on behalf of The Canadian Journal of Philosophy, Inc

1. Motivation for Epistemic Modesty

A prominent strain in the epistemological literature on disagreement, and on higher-order evidence in general, argues for the importance of a certain kind of epistemic modesty. The motivation for the idea is especially clear in the context of fields—like philosophy—where persistent disagreement is rife, even among the field’s most able practitioners. So, for example, suppose that Jocko is a philosopher with strong opinions on a number of controversial issues. He knows there are other philosophers—of equal or better training, professional qualifications, and so on—who reject each of those opinions. By Jocko’s own lights, he’s gotten right what they have gotten wrong. So, if Jocko is minimally self-reflective, he ought to wonder how he has managed to pull this off.Footnote 1

The question becomes particularly pointed in light of the way disagreement is distributed in philosophy. David Bourget, analyzing philosophers’ answers to the 2020 PhilPapers Survey, notes how “messy” philosophy is: looking at philosophers’ opinions, “we don’t see large clusters or patterns emerge despite our best efforts to group similar respondents together.”Footnote 2 Bourget notes that “if you pick a random philosopher, you’re pretty likely to disagree with them on roughly half of the 100 questions of the survey.”Footnote 3 If there were a couple of major schools of thought, then Jocko might think himself lucky enough, or well-enough educated, to belong to the large group of philosophers who were consistently getting things right. But this is not the way things are in philosophy. As Bourget puts it, it’s as if each philosopher is their own school of thought. So, if Jocko is actually correct about a high percentage of the issues on which he has strong opinions, he must either be extremely lucky or be possessed of truly exceptional philosophical talent!

This is where some in the literature on disagreement (“Conciliationists”) urge a certain kind of modesty. It seems that it would be irrational for Jocko to be confident that he’s either extremely lucky or possessed of truly exceptional philosophical talent—and irrational in a distinctly immodest way. But if Jocko avoids irrationally immodest self-assessment, it would seem that coherence requires that he not be confident in his controversial philosophical views.Footnote 4 So losing confidence in these views is an outgrowth of taking an appropriately modest view of his own cognitive abilities—it’s modest in the sense of taking seriously evidence of his own possible cognitive failure.Footnote 5

Similar conclusions can be drawn from cases involving other kinds of “higher-order evidence,” which I’ll understand roughly as evidence that directly bears on one’s reliability as a believer. A standard example, due to Adam Elga, involves hypoxia, or oxygen deprivation. Hypoxic people are highly prone to making cognitive errors, even though they feel perfectly clear-headed. So suppose that Jocko is taking part in a psychology experiment. He’s put in a special chamber, is given a calculation to do, and arrives confidently at a certain conclusion. He is then told about the effects of hypoxia and informed that the chamber he’s in has low enough oxygen pressure to render him hypoxic. He could, of course, insist that he’s special—that he has the strength of mind to reason well even while hypoxic. But if he has an appropriately modest view of his own powers, he will see it as fairly likely that his calculation was in error, and he’ll become much less confident in the conclusion he reached.

For a more everyday example, suppose I get evidence that American men tend to harbor implicit bias against women, or members of certain ethnic or racial groups. Even if they sincerely avow equality of the sexes and races, it turns out that they evaluate papers differently if they have white male-sounding names on the top of them than if they have female- or Black-sounding names. Now, I have never detected this type of bias in myself introspectively—when I grade papers, it seems to me that I’m just reacting directly to the quality of the papers.Footnote 6 But suppose someone explained the psychology to me, and suggested that I ought to use anonymous grading, where it was practical, and I replied, “Nah, I’m confident that my grades are accurate, so there’s really no point.” I would be failing to take seriously evidence of my own cognitive fallibility, and in that sense, I’d be exhibiting immodesty. I should be at least somewhat less confident in the accuracy of my grades if I do not grade anonymously.

And this line of thought should apply to our philosophical beliefs, even apart from actual disagreement about those particular beliefs. If there’s one thing that’s clear from the whole history of philosophy, it’s that any interesting philosophical position that’s advanced is going to be disputed once philosophers consider it. Clearly, humans are just not very good at discerning interesting philosophical truths. So if I’ve come up with a cool new theory while sitting alone in my armchair, then even if it seems right to me after much careful thought, and even if no one has disagreed with it yet, still, my confidence in my cool new theory should be moderated by my awareness of the ample evidence suggesting that I, as a human indulging in philosophy, am pretty likely to mess up. To brush off that evidence, and be completely confident in my cool new theory, would again betray an immodest refusal to take seriously evidence of my own cognitive fallibility.Footnote 7

Reflections of this sort suggest that something like the following (admittedly vague) principle applies to rational belief:Footnote 8

Modest Believing: Rational degrees of confidence appropriately reflect evidence of the believer’s cognitive fallibility.

Spelling out precisely what “appropriate reflection” amounts to turns out to be quite tricky. But it should be understood here in a way that generally requires having significantly lower confidence in claims that seem right to you when you think about them directly, if you have strong evidence that you are likely to believe inaccurately about those topics. So, conciliatory views about disagreement would count as paradigmatic instances of Modest Believing. But exactly how to characterize that “higher-order” evidence, and how it should interact with one’s ordinary evidence, is something I cannot explore here.Footnote 9 For present purposes, it should suffice to stipulate that Modest Believing should be understood in a way that would, for most of us philosophers, generally preclude having high confidence in many controversial, interesting philosophical beliefs.

There have been a host of objections raised to various versions of Modest Believing. Some worry that its consequences in areas like philosophy are excessively skeptical. Some worry that modest accounts of rationality can make it rationally required for agents to violate other epistemic norms. For example, I may do a logic proof correctly and then get higher-order evidence that my thinking in doing the proof was messed up. So if I believe modestly, I may end up believing the premises of a clearly valid argument while doubting the conclusion.Footnote 10 And there are other worries as well. Here, I’d like to look at a worry that seems to strike at the heart of Modest Believing: that the kind of reflective self-doubt that motivates the principle ends up—if it’s applied consistently and thoroughly—undermining the very principle it’s supposed to motivate.

2. Modesty about Modesty?

Clearly, Modest Believing is itself a controversial and interesting philosophical claim. So if it’s a true claim, then no one is rational to be very confident that it’s true! For those of us who go around defending Modest Believing, this fact can seem a little…awkward. And it has caused a number of people to think that Modest Believing must be false. The objection has been pressed in several ways, but its versions share a central worry: that it cannot be rational to believe as Modest Believing requires, while simultaneously doubting that Modest Believing is correct—that is, while doubting that the beliefs one forms in conformance with the principle are rational.Footnote 11

Adam Elga (2010) responded to this worry in a way inspired by David Lewis’s (1971) paper “Immodest Inductive Methods.” He suggested that the Conciliatory View about disagreement (an instance of Modest Believing) should be tweaked: restricted, so as not to apply to itself.

The idea is something like this: Let us call a way of forming beliefs—we can think of it as a function from evidential situations to doxastic states—an “inductive method.” Modest Believing is not an inductive method itself (it’s not a function). Rather, it is a (partial) account of rationality: it is a claim about which inductive methods are rational. And we could tweak this claim so that it said that a rational inductive method will allow higher-order evidence to affect confidence in most sorts of claims, but will not allow higher-order evidence to reduce confidence in the correctness of the tweaked version of Modest Believing itself. So suppose that Twyla is a generally modest believer whose inductive method satisfies Tweaked Modest Believing. She’ll have only modest confidence in, say, Two-Boxing in Newcomb’s problem because she knows that expert opinion on the topic is divided, even if Two-Boxing seems clearly right to her when she thinks about the arguments directly. But she’ll be fully confident of the correctness of Tweaked Modest Believing, even if tons of experts on rationality tell her it’s false. This would allow Twyla to keep believing (mostly) as Modest Believing requires, without doubting that her own beliefs are rational.Footnote 12

Of course, this idea invites worries about whether the tweaked view’s treatment of itself is consistent with the general motivations behind the view. Obviously, the type of thinking that leads philosophers to endorse claims in epistemology is essentially similar to the type of thinking that leads them to endorse views about Newcomb’s problem. If Twyla should not be confident in Two-Boxing because she realizes that humans are unreliable in coming up with the truth about interesting philosophical questions, or that a significant number of decision theorists endorse One-Boxing, then it would seem that the same sorts of considerations should apply to Tweaked Modest Believing. If Twyla were fully confident in Tweaked Modest Believing, we might well ask how she should explain the fact that she arrived at such an interesting and ambitious truth about rationality while so many other epistemologists had been misled. It’s hard to see how an appropriately modest view of herself would square with her maintaining full confidence in Tweaked Modest Believing.

Now, Lewis gave an argument that any inductive method had to be Immodest about itself, on pain of a certain kind of inconsistency. And if any consistent inductive method had to output confidence in the claim that it was a rational method, then it might be argued that Twyla was, in a sense, forced into the seemingly immodest position of being confident in her own interesting and controversial philosophical view.Footnote 13 But at this point, it’s important to notice that the condition on inductive methods that Lewis called “Immodesty” was not that an inductive method should output full confidence that it was rational. The condition was that the method expect the credences it outputs to be at least as accurate as the credences output by any competing method.Footnote 14 Here’s how he put it intuitively:

Call a method immodest if it estimates that it is at least as accurate as any of its rivals. It would be unreasonable to adopt any but an immodest method. (Lewis 1971, 54)

As Lewis emphasized, this is a schema that leaves open exactly how “accuracy” will be understood. But putting that issue temporarily aside, it’s worth noticing that it’s possible for an inductive method that satisfies even untweaked Modest Believing to also satisfy a Lewisian Immodesty requirement.

Here’s an example to illustrate the general point: Suppose, for the sake of argument, that Modest Believing is true. And suppose that Modini is a beginning philosophy student whose inductive method—the function that correctly describes her pattern of believing—is rational and satisfies Modest Believing, untweaked. When Modini hears about Newcomb’s problem, and thinks about the arguments directly, she is sharply drawn toward Two-Boxing—say, she’s .95 confident that it’s correct. But then she learns from the PhilPapers survey that while a bit over 70% of decision theorists also favor Two-Boxing, over 20% favor One-Boxing, with the rest undecided or favoring some combination of views.Footnote 15 So her confidence in Two-Boxing modestly drops to a bit over .7—say, to .75.

Modini has also thought a bit about epistemology by herself and has arrived at high confidence in the correctness of Modest Believing. But then she attends a meeting of the Advanced Philosophy Club at her college and meets a group of senior philosophy students who confidently espouse a very different theory about rational belief. They claim that rational belief requires a certain kind of authenticity: making up one’s mind on the facts as one sees them, rather than spinelessly caving in to epistemic peer pressure. And they even support their position by quoting a famous philosopher:

The man of authentic self-confidence is the man who relies on the judgment of his own mind. Such a man is not malleable; he may be mistaken, he may be fooled in a given instance, but he is inflexible in regard to the absolutism of reality, i.e., in seeking and demanding truth… There is only one source of authentic self-confidence: reason.Footnote 16

A rational mind does not work under compulsion; it does not subordinate its grasp of reality to anyone’s orders, directives, or controls; it does not sacrifice its knowledge, its view of the truth, to anyone’s opinions, threats, wishes, plans.Footnote 17

Let us formulate the theory of rationality espoused by members of the Advanced Philosophy Club as follows:

Authentic Believing: Rational degrees of confidence reflect the agent’s unmalleable confidence in the judgment of their own mind, and are not influenced by considerations of the agent’s cognitive fallibility.

I take it that Authentic Believing is a false theory of rationality. But if we suppose that Modini has good reason to think that the members of the Club are more likely than she is to get things right about rationality, then Modini, given her inductive method, will become much less confident in Modest Believing—perhaps she’ll end up giving greater credence to this false theory.

Suppose, now, that although Modini no longer has confidence in the truth of Modest Believing, she keeps believing in the way Modest Believing calls rational. Unlike Twyla, she faces no pressure to think that she’s managed to find the truth about rationality where so many others have failed; in this sense, her modesty is preserved. But also unlike Twyla, thoroughly Modest Modini will end up doubting that the beliefs she forms in accordance with her own inductive method are rational: after all, they certainly do not satisfy Authentic Believing. Can Modini’s way of believing be rational?Footnote 18

Modini is now exhibiting what is sometimes called epistemic akrasia. The notion of epistemic akrasia has been formulated in several different ways, but I’ll understand it here, roughly, as believing in a way that one believes to be irrational. And many have thought that this cannot be a rational way to believe. Akratic beliefs, it is claimed, will not make sense from the agent’s own perspective; they violate a basic coherence condition on rational believing.Footnote 19

But others have made the case that while akratic beliefs are usually irrational, they can in fact be rational—given certain special conditions. In particular, some have argued that epistemic akrasia can be rational in cases where the agent has good reason to expect the rationality of their beliefs to diverge from the accuracy of their beliefs. David Christensen (2022, 17) writes:

We should allow that, in some cases, agents rationally give credence to views of rationality on which it can be rational to expect rationality and accuracy to diverge in their own case. Once we see that, we can see how akrasia may be the most rational response to certain evidential situations; and we can also see how akratic beliefs may make perfectly good sense, even from the akratic agent’s own perspective.

The idea is this: In most ordinary cases, it is indeed irrational to believe akratically. But insofar as there are cases where one has good reason to expect that the rationality of one’s beliefs will come apart from the accuracy of one’s beliefs, it can be rational to be epistemically akratic, having the beliefs one expects to be accurate, rather than the ones one expects to be rational.Footnote 20

And in fact, Modini’s situation would seem to fit naturally into this category. She believes modestly because she’s sensitive to evidence suggesting that certain aspects of her thinking (such as believing interesting, controversial philosophical theses on the basis of her assessment of the direct arguments) are likely to be inaccurate. This is why she now has low confidence in Modest Believing. Of course, she now sees this credence as likely irrational because it’s not “Authentic.” It does reflect Modini’s evidence of her own cognitive fallibility. But from her perspective, if she were to have authentic credences in controversial philosophical claims, where she has good evidence that the direct “judgments of her own mind” are likely to misfire, she’d end up with less accurate (even if more rational) beliefs. So her akratic beliefs make sense, from her own perspective, after all.

We might illustrate this line of thought by imagining a conversation between Modini and one of the dudes from the Advanced Philosophy Club:

Modini: Well, you folks are much more expert in philosophy than I am, so you are probably right. I guess Authentic Believing is probably correct.

Philbro: Haha, seriously, Modini? I mean, how could that be rational? You just literally started believing Authentic Believing on exactly the sort of grounds that Authentic Believing—your own new view—says are irrational! I mean, amirite?

Modini: No, yeah. OK, fair. But…if I ignore evidence of my own fallibility, I’ll end up with inaccurate beliefs. Guess I’d rather be right than rational, given the choice.

Philbro: Haha, sure, Modini. Guess we can’t all have it both ways!

Modini: <walks away without commenting on the likelihood of Philbro’s strong controversial beliefs being mostly accurate>Footnote 21.

Modini’s inductive method reflects epistemic modesty about her own cognitive prowess. But this does not show that Modini’s inductive method would fail to be Immodest in the Lewisian sense: We may suppose that Modini’s method would not output credences that it itself would expect to be less accurate than some particular alternative credences. So it turns out that the sort of epistemic modesty that’s expressed by Modest Believing is fully consistent with one’s inductive method being Immodest, in Lewis’s intuitive sense.

Moreover, the point goes beyond the simple one about consistency. For it seems that there’s a nice way in which Lewisian Immodesty has actually come to the rescue of Modest Believing. In thinking about how Modini’s akrasia could be rational, we noted that she should expect her own credences to be more accurate than the credences called rational by Authentic Believing. That helped make clear why Modini was rational to keep believing modestly, despite thinking that doing so was probably irrational. And it made clear how Modini’s akratic beliefs could make sense from her own perspective. So insofar as Modini’s inductive method is Immodest in Lewis’s sense, it seems that we have a nice intuitive explanation of how Modini can consistently and rationally believe as Modest Believing requires, not only with respect to controversial beliefs in general, but even with respect to the truth of Modest Believing.Footnote 22

3. Worries about the Rescue Plan

Suppose we find Modini’s answer to Philbro’s challenge a reasonable and intuitively satisfying one. That would explain how Modini’s thorough modesty could be rational, despite inducing akrasia. But the next question we might ask is whether this response can serve as a paradigm for explaining the general rationality of thoroughly conforming to Modest Believing. In order for it to do that, it will have to turn out that Modini’s sort of response is generally available to rational agents who believe modestly, but who have higher-order evidence against the truth of principles like Modest Believing. So, in particular, we should ask: Can we generally assume, for agents in this type of position, that their inductive methods will output the expectation that the credences they output are at least as accurate as any other particular credences? Before answering this question, it’s worth pausing over a point mentioned above: that our intuitive Immodesty principle lacks a definition of accuracy.

There is surely something very attractive about the general idea that a rational inductive method will not output, say, belief in Incompatibilism while also outputting the expectation that a belief in Compatibilism would be more accurate. If we think about belief categorically, there’s a natural understanding of accuracy which makes Immodesty compelling: Beliefs are accurate when they are true. So, for example, it would make little sense for an inductive method to output both a categorical belief that P, and also a belief that a belief that ~P would be more accurate. Something like this idea, I think, is at the heart of the intuitive appeal of Immodesty principles. And so, if invoking Immodesty is to help explain the intuitive rationality of certain cases of epistemic akrasia, this sort of idea is at the heart of our explanation.

But we should notice that things get complicated when we apply the idea of accuracy to credences. What is it for a credence in P of, say, .7, to be accurate?

There are, it turns out, various accounts of what accuracy of credences amounts to. Most start with a very natural idea: that we should count, for example, a credence of .7 in P as more accurate than a credence of .6 if P is true, and as less accurate if P is false. More generally, if we represent truth values by assigning 1 to truths and 0 to falsehoods, credences are more accurate to the extent that they are closer to the actual truth value, and more inaccurate to the extent that they are farther away. But even if this is agreed upon, there remains the question of how exactly degrees of accuracy relate to these distances.

The simplest way of relating them would seem to be by just taking inaccuracy to be measured by the straight distance between the (number representing the) agent’s credence in P and the (number representing the) truth value of P. But while this might seem the obviously right account, it actually fits quite badly with Immodesty principles. For example, consider what the rational credence should be for H (“Heads on the next flip”) when one has excellent evidence that the relevant coin is biased so as to come up heads 60% of the time. Absent any special information about the next flip, it’s clear that the rational credence for H would be .6. But if I have .6 credence in H, and if I think of accuracy in straight-distance terms, then for me, a credence of 1 would be more expectedly accurate!Footnote 23 Clearly, there is nothing irrational about having credences that fail to be Immodest in the straight-distance sense of accuracy.

For this reason, advocates of Immodesty requirements focus on other measures of accuracy. The most popular is the Brier score, which measures inaccuracy by squaring the distance between the credence and the truth value. This may seem less natural initially. But it solves the problem described above: if an agent’s credences obey the laws of probability, then if we analyze expected accuracy given her credences, in Brier score terms, her own credences will come out as most expectedly accurate. And the Brier measure of accuracy has been supported by a number of sophisticated arguments. So one might well agree with Lewis’s intuitive formulation of Immodesty, and also hold that the correct theory of credence-accuracy is given by the Brier score.
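To make the contrast concrete, here is a minimal numerical sketch in Python (my own illustration; the function names are invented for this purpose, and nothing in the code comes from Lewis or the accuracy literature). It reproduces the straight-distance arithmetic of footnote 23 and shows that, on the Brier measure, the agent’s own .6 credence comes out most expectedly accurate:

```python
# Expected inaccuracy of a candidate credence y in H ("Heads"), from the
# standpoint of an agent whose own credence in H is p. If H is true, the
# candidate's distance from the truth value (1) is (1 - y); if H is
# false, its distance from the truth value (0) is y.

def expected_straight_distance(p: float, y: float) -> float:
    """Expected straight-distance (linear) inaccuracy: |truth - y|."""
    return p * (1 - y) + (1 - p) * y

def expected_brier(p: float, y: float) -> float:
    """Expected Brier (squared-distance) inaccuracy: (truth - y)**2."""
    return p * (1 - y) ** 2 + (1 - p) * y ** 2

p = 0.6  # rational credence in Heads, given the known 60% bias
for y in [0.0, 0.5, 0.6, 0.75, 0.95, 1.0]:
    print(f"y = {y:.2f}   straight: {expected_straight_distance(p, y):.3f}"
          f"   Brier: {expected_brier(p, y):.3f}")

# Straight distance: credence 1.0 scores .400, beating the agent's own
# .6 credence (.480), so the agent's credence is not self-recommending.
# Brier: the agent's own .6 credence scores .240, the minimum among the
# candidates (propriety), so it IS self-recommending.
```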

So now, let us return to our question: Suppose that arbitrary modest agent Armod is in a situation much like Modini’s. Armod is a beginner in philosophy; his inductive method satisfies Modest Believing; and he believes that Modest Believing is true. Then, he learns about the views of the students in the Advanced Philosophy Club and loses confidence in the truth of Modest Believing in favor of confidence in Authentic Believing, but he keeps on believing modestly. Our question, now, is this: Will we, in general, be able to explain the intuitive rationality of Armod’s having akratic credences by noting that he will expect his own credences to be more accurate than the credences that would be called rational by Authentic Believing?

I think that there is reason to doubt that this sort of explanation will generally be available.

We might first notice that, insofar as this explanation would require Armod to think about his credences in terms of Brier scores, it would require a great deal of intellectual sophistication on his part. And if Armod lacks this degree of sophistication, he will not be able to rationalize his credences in the suggested way. But it is implausible that the rationality of Armod’s believing modestly, while being convinced that rational beliefs must be Authentic, would require that he somehow have a grip on the mathematics of assessing expected accuracy using Brier scores. Surely Armod’s failure to believe Authentically makes perfect sense, from his own perspective, even if he has never heard of Brier scores, and even if he does not even know what the square of a distance is.Footnote 24

This observation does suggest to me that our intuitive explanation of how akrasia can be rational is insufficiently general, since it’s hard to see how thoughts about accuracy could help Armod’s akratic credences make sense from his own perspective. But one might point out that the difficulty only arises if we consider Armod as cognitively limited. And one might worry that the problem in the example may be an artefact of Armod’s sub-ideal rationality.Footnote 25 So let us ask: Is there a reason to worry about our explanation that does not derive from positing cognitive limitations in the agent—a reason that would even apply to a maximally rational Armod?

I think that there is. There is a deeper reason to worry about our rescue plan, and it derives not from positing cognitive limitations in an agent, but from thoroughgoing Modest Believing itself. To see it, begin by considering the reason that Modini’s response to Philbro was intuitively satisfying. It was satisfying because it was intuitively rational for her to believe in a way that, by her lights, aimed at accuracy rather than at rationality. But even if Armod is mathematically adept, can we assume that he rationally expects his own (Modest) credences to be more accurate than Authentic (and thus rational) credences would be? As noted earlier, there are a number of different accounts of what the accuracy of credences comes to—not everyone takes Brier scores to be the correct measure. So whether Armod will expect his Modest credences to be more accurate will, of course, depend on how Armod understands accuracy.

Now, different accounts of what accuracy amounts to for credences are, of course, philosophical claims, just like different accounts of what rational believing amounts to. And—given a thorough application of Modest Believing—what Armod is rational to believe about philosophical claims can be affected by what he knows about the beliefs of others. So it will be instructive to look at a case where the opinions of others give Armod reason to have a good deal of credence in a theory of credence-accuracy that does not play well with Immodesty. Let us suppose, then, that the members of the Advanced Philosophy Club have been discussing credential accuracy and that they have come to have high confidence in the straight-distance measure of inaccuracy (perhaps they are impressed by Patrick Maher’s (2002) defense of the naturalness of the straight-distance measure). So Armod modestly comes to have pretty high confidence that the straight-distance measure of accuracy is correct.

Suppose now that, like Modini, Armod originally had .95 credence in Two-Boxing, but modestly changed it to .75 on the basis of disagreement. He now thinks that his credence in Two-Boxing is irrational, since it’s not Authentic; so he’s akratic. But how should he think about the accuracy of his Modest credence? Insofar as he’s highly confident in the straight-distance measure of accuracy, he’ll expect his Modest credence to be less accurate than an Authentic credence of .95 would be!Footnote 26

Does this mean that Armod’s .75 credence in Two-Boxing is irrational? No—and we can see why if we think back to the biased coin case. It is not a rational requirement that one’s credences be expectedly most straight-distance accurate. (If it were, the only possible rational credences would be 1, .5, and 0. So on the straight-distance measure, even Armod’s possible Authentic .95 credence in Two-Boxing would be less expectedly accurate than absolute certainty about Two-Boxing.)
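For readers who want the reasoning behind this parenthetical claim spelled out, here is a quick derivation, in my own notation, of how expected straight-distance inaccuracy behaves:

```latex
% Expected straight-distance inaccuracy of a candidate credence y in P,
% from the standpoint of an agent whose own credence in P is p:
\[
  \mathbb{E}[I(y)] \;=\; p\,(1 - y) + (1 - p)\,y \;=\; p + (1 - 2p)\,y .
\]
% This is linear in y: it is minimized at y = 1 when p > .5, at y = 0
% when p < .5, and is flat when p = .5. So a credence can expect itself
% to be most straight-distance accurate only if it is 1, 0, or .5.
```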

Does this mean that Armod’s .75 credence in Two-Boxing does not make sense from his own perspective? Again, the answer is no, and we can see why by thinking about the biased coin case. Suppose you think of credential accuracy in straight-distance terms, and have .6 credence that the coin will come up Heads. While from your perspective, a credence of 1 would be more expectedly accurate, this does not mean that your .6 credence fails to make sense from your own perspective. If you have a straight-distance conception of credential accuracy, there’s nothing untoward about credences that fail to maximize expected accuracy. Similarly, Armod’s seeing his own Modest credence as less expectedly accurate than an Authentic credence would be does not itself mean that his credence is irrational, or that it fails to make sense from his own perspective.

However, if it can be rational for Armod to have credences he thinks are less accurate than certain other credences would be, then our intuitive explanation for the rationality of akrasia would seem to be in trouble. It cannot be that Armod’s akratic credences are rational in virtue of Armod’s thinking—or being rational to think—that his own Modest credences are more accurate than rational (Authentic) credences would be, because he is not rational to think that. And if that’s right, then it seems that we cannot, in general, rest the intuitive rationality of akratic beliefs on the agent’s being rational to see their own beliefs as more expectedly accurate than rational beliefs would be.

To underline this point, it’s worth noticing that there is not even universal agreement among professional philosophers about the general idea that unites the Brier score and the straight-distance measure of accuracy—that is, the idea that credences closer to the truth value of the relevant proposition are more accurate. Some have suggested that calibration—matching the general frequencies of truth for “relevantly similar” propositions—can be the criterion of “rightness” for credences, or play the conceptual role that truth plays in other contexts. (See Lange (1999) and van Fraassen (1983).) Alan Hájek (unpublished) has argued that objective chances are to credences what truth is to categorical beliefs. And Scott Sturgeon (2020) offers a sophisticated argument that credences (except for very high or low ones) are simply not the kind of thing that can be accurate in the way that categorical beliefs can be accurate. He argues (Sturgeon 2019) that to the extent one calls some distance-based measure “accuracy,” one is not really capturing any pretheoretic understanding of accuracy for credences.

The upshot of these considerations is this: The idea that Lewis-style Immodesty principles can come to the rescue of principles like Modest Believing does not pan out. Principles like Modest Believing can, if followed consistently, lead to agents believing akratically. But we cannot, in general, explain the rationality of these cases of akrasia, or show why the akratic beliefs make sense from the agent’s own perspective, by invoking the idea that the agents should expect their credences to be more accurate than other credences—including the ones they see as rational—would be. Once we take seriously the implications of thoroughly applying Modest Believing, we can see that it not only leads to akrasia, but that it also sabotages our rescue plan, since it entails that, in certain cases, it is not rational for agents to think of accuracy in the way that our rescue plan would require.

4. Where Does this Leave Modest Believing?

The strategy of invoking agents’ views on the accuracy of their beliefs, in order to explain how they can be rationally akratic, comes to grief over the fact that there’s no guarantee that agents are rational to have the kinds of sophisticated views about accuracy that would be needed to underwrite the strategy. To me, this suggests that the basic problem with the rescue strategy is that it’s over-intellectualized. So, is there something else—something simpler—that we can say to help make sense of the idea that Armod is believing rationally when he continues to believe modestly, even after he strongly doubts that Modest Believing is a correct account of rational belief, and thus thinks that his Modest beliefs are probably irrational?

I think that there is, once we take a suitably de-intellectualized view of rational believing. To begin with, it’s clear that in general, reacting rationally to evidence does not need to involve any thoughts at all about the accuracy of any credences that one might form on the basis of that evidence.Footnote 27 When you observe many rolls of a die and see it land with two spots up 1/6 of the time, it’s rational to have 1/6 credence that the next roll will be a two. But this is not somehow explained by it being rational for you to expect that a 1/6 credence in rolling a two will be more accurate than other credences would be. If anything, the explanation runs the other way: If you happen to be philosophically sophisticated, and if you also think about the accuracy of possible credences in terms of Brier scores, then the fact that you currently have 1/6 credence in rolling a two will explain your expecting that credence to be most accurate. But this expectation is surely a highly optional add-on. After all, even if you were for some reason convinced of the straight-distance measure of accuracy (on which a credence of 0 would maximize expected accuracy), it would still be rational for you to be 1/6 confident of a two on the next roll. And this would also be true if you thought that credences were not the kind of thing that even could be accurate or inaccurate.
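For what it’s worth, the figures behind that parenthetical can be checked with the same simple formula used above (my calculation, not the original text’s):

```latex
% With credence p = 1/6 that the next roll is a two, the expected
% straight-distance inaccuracy E[I(y)] = p(1 - y) + (1 - p)y gives:
\[
  \mathbb{E}[I(\tfrac{1}{6})] \;=\; \tfrac{1}{6}\cdot\tfrac{5}{6} + \tfrac{5}{6}\cdot\tfrac{1}{6} \;=\; \tfrac{10}{36} \approx .28,
  \qquad
  \mathbb{E}[I(0)] \;=\; \tfrac{1}{6}\cdot 1 + \tfrac{5}{6}\cdot 0 \;=\; \tfrac{1}{6} \approx .17 .
\]
% So on the straight-distance measure, credence 0 is indeed more
% expectedly accurate than the rational 1/6 credence.
```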

I think that essentially the same point holds about higher-order evidence, where the relevant evidence is—as we epistemologists might describe it—about the reliability or accuracy of one’s thinking. Suppose you have observed many people doing calculations while hypoxic, and learn that while they feel perfectly clear-headed, they only reach correct answers about 1/6 of the time. If you then get evidence that you are currently hypoxic yourself, and you are given a calculation to do, you may react rationally by being much less confident than you would otherwise be in the answer that seems correct to you when you consider the problem directly, ending up with about a 1/6 credence in that answer. In this kind of case, you may need to have some concept related to getting things right or wrong, in order for the higher-order evidence to have its rational impact. But it would be enough to realize that hypoxic people often become highly confident of wrong answers. No fancy thoughts about the accuracy of various possible credences you might end up with need come into the picture at all.

So, with this in mind, let us think about what happens when someone gets strong evidence that one of their credences is irrational. To simplify things, let us suppose that the person’s credence is actually rational to begin with. Worries about the rationality of epistemic akrasia amount to thinking that there’s something irrational about the person maintaining their original credence, while coming to see that credence as irrational.

In the vast majority of cases, this seems exactly right. For example, suppose that stubborn Stu knows about the cognition-degrading effects of hypoxia, and so he takes beliefs reached by calculating while hypoxic to be usually irrational. He does a calculation and becomes highly confident in its conclusion (let us call it C), but then he’s informed that he is currently hypoxic. So he’s now gotten evidence that his high confidence in C is likely irrational because of hypoxia. If Stu acknowledges that this credence is likely irrational due to hypoxia, yet akratically maintains his high credence undiminished, this is irrational. But notice that his evidence of hypoxia-induced irrationality is simultaneously evidence that the thinking behind his high confidence in C was of the sort likely to lead to high confidence in false claims. So we should not suppose that Stu’s seeing his confidence as probably irrational is playing any essential role in making it irrational for him to retain high confidence in C.

To reinforce this point, we might imagine a variant of the case where Stu does not even have the concept of belief-rationality, but is informed that he’s hypoxic. In this variant case, Stu would not draw the conclusion that his high confidence in C was likely irrational. But given what he knows about the effects of hypoxia, he’d still be irrational to remain highly confident in C. Or we could even imagine a case where Stu does have the concept of rationality, but is convinced of some extreme variant of Authentic Believing, on which authenticity is sufficient, as well as necessary, for rationality, so that his original confidence in C would be rational, whether or not he was affected by hypoxia. Nonetheless, it would still be irrational for Stu to remain highly confident in C after finding out he was hypoxic.

Similar lessons apply in other paradigmatic cases of akrasia. Horowitz’s (2014) Sleepy Detective Sam comes to believe that Lucy is the thief, only to get evidence that he reasons sloppily while sleepy, and is almost always wrong about which culprit the evidence supports. As Horowitz argues, it would be irrational for Sam to remain confident in Lucy’s guilt while thinking that his evidence probably did not support her guilt. But notice that, as in the hypoxia case, the kind of evidence Sam gets that his belief is irrational is also evidence that rationally requires him to reduce confidence in Lucy’s guilt, quite independent of any beliefs he has about rationality per se (so even if Sam accepted extreme Authentic Believing, he would not be rational to remain confident in Lucy’s guilt). And the same goes for cases where agents get evidence that their beliefs are irrational due to emotional bonds, or implicit bias, or superstition, or wishful thinking. So even if akrasia is not intrinsically irrational, it seems that paradigm cases of akrasia do indeed involve irrationality.

In Armod’s case, however, things are different. When he gets evidence that his Modest level of confidence in Two-Boxing is irrational because it’s not Authentic, that evidence does not have the kind of implications that evidence of hypoxia had for Stu, or evidence of sleepiness had for Sam. And insofar as irrationality as Armod sees it (that is, the credence’s lacking authenticity) does not have these sorts of implications, there is simply nothing irrational in Armod having a credence he sees as irrational.

What is it, then, that makes so many cases of akrasia irrational? Why does evidence of a belief’s irrationality so often call for revising it?Footnote 28 The answer will have to lie in what the agent’s conception of rationality is like—what they take rationality to consist in.

Suppose, for example, that rationality in fact requires respecting explanatory relations. And suppose that an agent also has a conception of rationality which properly reflects this—so that the beliefs the agent’s conception deems rational are ones that in fact respect explanatory relations. If the agent has some belief based on explanatory considerations (say, that Lucy is the thief), and she gets evidence that her high credence in Lucy’s guilt is irrational due to the influence of a factor that likely disrupts assessment of explanatory relations, then maintaining her high credence in Lucy’s guilt becomes irrational because, given her conception of rationality, the evidence of irrationality is also evidence that Lucy’s guilt is not explanatorily supported.

On many conceptions of rationality, the rationality of beliefs will, correctly, require things like respecting induction, respecting explanatory relations, or respecting mathematical and logical relations. On conceptions of rationality which accurately reflect these features, evidence that one’s belief is irrational will often amount to evidence that one’s belief violates induction, explanatory relations, or mathematical or logical relations. And because evidence of such violations typically directly requires revision, belief-revision will often be rationally required in these cases.

By contrast, on the Authentic Believing conception of rationality, evidence that one’s belief is irrational has no such built-in implications. So when Armod gets evidence that his belief is irrational, that evidence does not also function as the sort of evidence that would undermine the rationality of a belief even absent considerations of rationality per se. So there is no need for Armod to have the kind of sophisticated thought that Modini expressed to Philbro. He does not need to think that while his lowered confidence in Two-Boxing is irrational, an Authentic .95 credence would be expectedly less accurate. As we have seen, this thought may not be available to him, for various reasons: He might not be up to the Brier-style math that would vindicate this thought. He might believe in the straight-distance measure for the accuracy of credences. He might have sophisticated reasons for rejecting the idea that credences are the kind of thing that could be accurate. Or he might simply lack any conception of credential accuracy. But none of this matters. Armod does not need to have any thoughts at all about the accuracy of his credence in order for that credence to be rational. Armod has responded to his evidence rationally, and responding rationally does not require entertaining fancy thoughts about credential accuracy.

Now, it is true that when we epistemologists theorize about why the evidence Stu got for the irrationality of his hypoxic belief has different implications from the evidence Armod got for the irrationality of his Inauthentic belief, we might naturally say that the former, but not the latter, was evidence of the relevant credence’s inaccuracy. But we should be careful not to over-interpret this way of talking.

Consider what we might say to contrast two possible pieces of evidence given to an agent who begins with an initial credence of 1/6 in two spots coming up on the next roll of a die. The first possible bit of evidence is that the die is red. The second possible bit of evidence is that the die has just come up two in 1/3 of the last 99 throws. It would be natural for epistemologists to contrast these two pieces of possible evidence by saying that while the first has no implications for the accuracy of the agent’s initial credence, the second suggests that a higher credence would be more accurate. And this may even seem to explain why only the second bit of evidence rationally requires revision of the agent’s original credence.

But notice that not all epistemologists would say this. Those who went for the straight-distance measure of accuracy, or who denied that credences could be accurate, would certainly not put the contrast this way. But they would presumably agree that only the second bit of possible evidence would rationalize a change in credence. And, more importantly, even if an epistemologist does explain the difference in the import of the two bits of evidence in accuracy terms, they should not think that the difference in import depends on the agent’s entertaining thoughts about, or even grasping, the right measure of credential accuracy.

The real lesson of the cases that are natural to describe in terms of expected divergences between accuracy and rationality, then, is not that akrasia can be rational so long as an agent can rationalize it by invoking this sort of divergence. The real lesson is that, on certain conceptions of rationality, believing one’s credence to be irrational simply does not rationally call for revising that credence. Our inclination to think that evidence for the irrationality of a credence automatically calls for revising it is understandable. Typical instances of irrationality are like those caused by biases, fatigue, or hypoxia, where evidence of a credence’s irrationality constitutes, in addition, evidence that would require revision even absent considerations of rationality. But evidence of irrationality per se need not have these implications. To apply the lesson to our central example: given what Authentic Believing says, there’s no reason to think that a rational inductive method would take evidence that one’s credence was not Authentic to mandate revision of that credence, even if one believed that Authentic Believing was a true account of rationality. Rationally believing that one of one’s credences is irrational need have no more rational impact on that credence than believing that a die is red has rational impact on one’s credence that it will show two spots up on the next throw.

5. Conclusion

This paper has sought to make room for accounts of rationality that recognize the natural implications of an agent’s taking a suitably modest view of their own cognitive capacities. Accounts along these lines, like Modest Believing, should be thoroughly modest—which means that they should apply to beliefs about themselves. And so agents whose beliefs count as rational on such accounts will sometimes believe akratically.

But this need not be a problem for these accounts, since akrasia can be rational in some cases. Most clearly, it can be rational in cases where the agent has good reason to expect that accuracy and rationality come apart. And one might see this sort of feature as key to rescuing modest views: the modest agent will Immodestly (in the Lewisian sense) expect their own credences to be more accurate than the credences they see as rational. But this is actually not a great description of the full range of cases where akrasia can be rational.

After all, a truly thorough modesty will apply to an agent’s views on accuracy, just as much as it applies to their views on rationality, or Newcomb’s problem, or anything else. And the results of this instance of modesty may not end up jibing with the “rationally expecting accuracy and rationality to come apart” formulation, or with the agent seeing her own credences as more expectedly accurate than the ones she sees as rational.

Fortunately, though, the rationality of thorough epistemic modesty does not need to be underwritten by any such fancy thoughts. In order to understand how thorough modesty can be rational, we only need to see that there is no reason to suppose that a rational inductive method would always take the irrationality of a credence to mandate changing it. So while Lewisian Immodesty cannot, in the end, come to the rescue of thoroughly Modest Believing, it turns out, happily, that no rescue is required in the first place.

Acknowledgements

Thanks to Nomy Arpaly, Zach Barnett, Jennifer Carr, Sophie Horowitz, Chris Meacham, Elizabeth Miller, Mark Moyer, Josh Shechter, and an anonymous referee for helpful discussion of these issues and/or comments on drafts of this paper. Thanks also to audiences at the Humility and Arrogance in Inquiry conference at ASU, the University of Vermont, and UMass-Amherst. Special thanks to Orfeas Zormpalas for discussions that pushed me to think harder about the relationship between epistemic akrasia and conceptions of accuracy.

David Christensen is Professor of Philosophy at Brown University. His work focuses on epistemology, and especially on the rational implications of a subject’s reflection on their own beliefs.

Footnotes

1 See Walker (2024, Ch. 1) for a recent and detailed development of this line of argument, which rightly emphasizes that many philosophical disputes are what he calls “multi-proposition disputes” where more than two positions are advocated, each with a minority share of supporters. As Walker puts it, a philosopher who believed he generally got such issues right would have to think of himself as an “Über Epistemic Superior.”

3 “[P]hilosophers’ views are quite varied. If there were two big “schools of thought” that dictated views across the board, the histogram would show a tall bar at 0 and another at the distance between the two schools of thought. What we’ve got is the opposite of this—everyone is their own school of thought.” https://www.facebook.com/PhilPapersPlus?locale=es_LA (accessed 7/25/2023).

4 This does not mean he cannot advocate for certain views or even “have” them as his own views. It only means that he cannot rationally be confident that his own views are correct. See Barnett (2019).

5 A number of philosophers have connected Conciliationism to modesty. Others have made different connections between positions on disagreement and various intellectual virtues and vices. For a nice catalogue of these connections, and an interesting empirical investigation of them, see Beebe and Matheson (2022).

6 For illuminating philosophical discussions of the “Bias Blind Spot,” see Ballantyne (2019, Ch. 5) or Kelly (2022, Ch. 4). The former is explicitly directed toward implications for epistemic modesty.

7 For detailed arguments along these lines, see Kornblith (2010, 2013) and Ballantyne (2019, Ch. 6).

8 Here and throughout, I’ll use “belief” to refer both to degrees of confidence and categorical beliefs. I’ll mostly concentrate on degrees of confidence (credences), but will disambiguate when needed. And I’ll be talking about rationality in a distinctively epistemic sense, not about practical rationality.

9 Detailed work on this is done by proponents of “Conciliatory” views on disagreement or “Calibrationist” views of higher-order evidence in general.

10 Thus, “rational” as understood here will refer to the most rational doxastic response to a certain evidential situation, which may yet violate some principle of rationality.

11 I will not attempt here to survey different ways of pressing the problem, or the various responses that have been offered to it; instead, I’ll concentrate on what I take to be the most powerful version of the problem, and the most promising line of response.

12 This is a very quick sketch of Elga’s suggestion, and I’ve taken some liberties in my formulation, including being fussy about distinguishing between inductive methods and claims about inductive methods.

13 Elga’s suggestion is that the tweaked view is not guilty of arbitrariness in exempting itself from its own scope, since the alternative would be inconsistency.

14 I’ve put this roughly; a more precise specification would require that the beliefs licensed by the competing method be specified “transparently”—for example, as “.3 credence in P” rather than “the credence in P that God would have.”

17 Rand (1967, 8). Note that I’m not claiming that the members of the Club have interpreted this famous philosopher correctly.

18 One might worry that Authentic Believing is so wacky that we should not see Modini as having a belief about rationality at all, but just about “what’s called rationality.” But surely significant disagreement about rationality is possible, without changing the subject. And while someone who claimed that rational beliefs were just those that involved rats might well be just talking past us, Authentic Believing is hardly like that.

It’s also worth noting that the Modini example could easily be developed with a less wacky theory that takes frankly non-truth-related factors to help determine what’s rational to believe—for example, a theory, currently defended by serious philosophers, on which moral considerations affect what’s epistemically rational to believe. However, the point of the Modini example requires Modini to give credence to a false theory of rationality. So the example of Authentic Believing can make its point without presupposing the falsity of a theory of rationality that has current defenders.

19 See, e.g., Littlejohn (2018), Titelbaum (2019), or Smithies (2019, § 9.5). I’m concentrating on formulating akrasia in terms of rationality because Modest Believing is a theory of rational belief. Similar arguments could, I think, be made for analogues of Modest Believing concerning justified belief, or what one “ought” to believe, in a distinctively epistemic sense of “ought.” But insofar as there are any important differences between these issues, getting into them here would take us too far afield.

20 This basic strategy for defending the possibility of rational akrasia originates in Horowitz (2014). Other defenses along these lines include Barnett (2021) and Christensen (2016, 2021). Different defenses can be found in Lasonen-Aarnio (2020), Weatherson (2019), and Hawthorne et al. (2021), the latter two of which criticize the formulation of the accuracy-based strategy in Horowitz (2014). The example of Modini owes its basic strategy to an example from Barnett (2021). See also Arpaly (2021) for an early use of Ayn Rand’s philosophy as a source of rational akrasia. It’s worth noting that while the Modini example involves belief in a false theory of rationality, many of the examples of rational akrasia offered in the literature do not.

21 One might worry that Modini’s apparent nonchalance indicates that she’s not really thinking of her beliefs as irrational, on the grounds that our notion of rationality requires some sort of pro-attitude. While I would not endorse such a view, it’s worth noting that we could change the story so that Modini feels more troubled. Even if her akrasia is accompanied by some angst, I would argue that Modini’s akratic beliefs are the most rational ones possible in her evidential situation.

22 One might note that the rationality of Modini’s akrasia depends on her being rationally misled about rationality. Of course, any pair of beliefs of the form “P” and “It’s not rational for me to believe P” can be rational only if the agent is misled about rationality: If the second belief were true, the first one would not be rational. But this logical limitation on the scope of rational akrasia is no cause for worry about our defense of Modest Believing. The self-undermining worry is that if Modest Believing were correct, an agent could be rationally misled into believing an incompatible account of rationality, and so, if she continued to believe in accordance with Modest Believing, she would be forced to regard her own beliefs as irrational. Cases where an agent is not misled about rationality do not give rise to this sort of self-undermining. Thanks to an anonymous referee for prompting me to make this clear.

23 The figuring is simplest if we think in terms of inaccuracy: distance from the truth. I think it’s .6 probable that H is true. So my expected inaccuracy for my credence is figured as follows: .6 × .4 (the probability that H is true × the distance from truth if H is true) + .4 × .6 (the probability that H is false × the distance from truth if H is false) = .48. But my expectation of inaccuracy for a credence of 1 would be .6 × 0 + .4 × 1 = .4. So my .6 credence is more expectedly inaccurate—that is, less expectedly accurate—than credence 1!
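To put the same calculation in general form (a small formalization of the arithmetic above; the symbols p and c are my notation): let p be my probability for H and c the credence being evaluated. On the straight-distance measure, the expected inaccuracy of c is

\[ p(1 - c) + (1 - p)c \;=\; p + (1 - 2p)c, \]

which is linear in c. Whenever p > .5, it strictly decreases as c rises toward 1 (here, from .48 at c = .6 down to .4 at c = 1). So the straight-distance measure always rates an extreme credence as expectedly more accurate than the moderate credence one actually holds; in the standard jargon, it is an “improper” scoring rule.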

24 This is of course not to deny that we, as epistemologists, can apply some measure of expected accuracy to an agent’s inductive method, and assess its rationality based on whether it satisfies a Brier-based Immodesty condition. This is like us assessing an agent’s beliefs for probabilistic coherence, and taking that as a rational requirement. When we do either of those things, we are looking at certain formal properties of an agent’s belief system, properties that we take to be in part constitutive of rationality. And we may even stipulate that Armod’s credences are Brier-Immodest. But in doing this, we do not (and should not) assume that the agent can self-administer our formal tests. We can, for example, use a probabilistic coherence condition to show that there’s something wrong with an agent’s having higher credence in P than in (P v Q); but our doing this certainly does not imply or presuppose that the agent herself has some grip on the mathematics of probabilistic coherence.
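For concreteness, here is a minimal sketch of the kind of formal test we might apply from the outside (it uses the standard Brier score; the formulation is mine, and nothing here assumes Armod can carry it out). Writing v for the proposition’s truth value (1 or 0), the Brier inaccuracy of credence c is (c − v)^2, so the expected inaccuracy of c, computed from probability p, is

\[ p(1 - c)^2 + (1 - p)c^2, \]

which is uniquely minimized at c = p: setting the derivative −2p(1 − c) + 2(1 − p)c to zero yields c = p. That is the sense in which Brier-Immodest credences expect themselves to be more accurate than any alternative. And, as with the requirement that one’s credence in P not exceed one’s credence in (P ∨ Q), we can verify this from the outside without attributing the mathematics to the agent.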

25 Thanks to Sophie Horowitz for bringing this worry to my attention.

26 The straight-distance account of accuracy would say this, given Armod’s .75 credence in Two-Boxing: the expected inaccuracy of his present credence is .75 × .25 + .25 × .75 = .375. The expected inaccuracy of an Authentic (.95) credence for him is lower: .75 × .05 + .25 × .95 = .275. So the Authentic credence would have significantly greater expected accuracy.
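In general form (the same linear measure formalized in note 23 above; the notation is mine): with probability .75 for Two-Boxing, the expected inaccuracy of credence c is

\[ .75(1 - c) + .25c \;=\; .75 - .5c, \]

which falls steadily as c rises: .375 at c = .75, .275 at the Authentic c = .95, and .25 at c = 1. On the straight-distance account, then, moving toward the Authentic credence, and indeed all the way to certainty, always looks like an improvement in expected accuracy.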

27 In fact, it need not involve any thoughts at all about beliefs or credences, period. The idea that this sort of reflection must play a part in rational believing can be encouraged by loose talk of “adopting” this belief rather than that one, or “choosing” a certain credence—as if coming to be in one doxastic state rather than another were a voluntary action, like putting on a blue shirt rather than a purple one. For a perceptive look at the trouble this can cause, and an exploration of how being clear on this point affects the “ethics of belief,” see Arpaly (2023).

28 Thanks to an anonymous referee for prompting me to address this question more fully.

References

Arpaly, N. (2021). Interview at 3:16. https://www.3-16am.co.uk/articles/in-praise-of-desire-and-some (accessed 8/2/23).
Arpaly, N. (2023). Practical reasons to believe, epistemic reasons to act, and the baffled action theorist. Philosophical Issues, online first. 10.1111/phis.12239.
Ballantyne, N. (2019). Knowing our limits. Oxford University Press. 10.1093/oso/9780190847289.001.0001.
Barnett, Z. (2019). Philosophy without belief. Mind, 128(509), 109–138. 10.1093/mind/fzw076.
Barnett, Z. (2021). Rational moral ignorance. Philosophy and Phenomenological Research, 102, 645–664. 10.1111/phpr.12684.
Beebe, J., & Matheson, J. (2022). Measuring virtuous responses to peer disagreement: The intellectual humility and actively open-minded thinking of Conciliationists. Journal of the American Philosophical Association, 9(3), 1–24.
Christensen, D. (2016). Disagreement, drugs, etc.: From accuracy to akrasia. Episteme, 13, 397–422. 10.1017/epi.2016.20.
Christensen, D. (2021). Akratic (epistemic) modesty. Philosophical Studies, 178, 2191–2214. 10.1007/s11098-020-01536-6.
Christensen, D. (2022). Epistemic akrasia: No apology required. Noûs, 58, 54–76. 10.1111/nous.12441.
Elga, A. (2010). How to disagree about how to disagree. In Feldman, R., & Warfield, T. (Eds.), Disagreement. Oxford University Press.
Hájek, A. (unpublished). A puzzle about partial belief. http://fitelson.org/coherence/hajek_puzzle.pdf (accessed 8/1/23).
Hawthorne, J., Isaacs, Y., & Lasonen-Aarnio, M. (2021). The rationality of epistemic akrasia. Philosophical Perspectives, 35, 206–228. 10.1111/phpe.12144.
Horowitz, S. (2014). Epistemic akrasia. Noûs, 48, 718–744. 10.1111/nous.12026.
Kelly, T. (2022). Bias: A philosophical study. Oxford University Press. 10.1093/oso/9780192842954.001.0001.
Kornblith, H. (2010). Belief in the face of controversy. In Feldman, R., & Warfield, T. (Eds.), Disagreement. Oxford University Press.
Kornblith, H. (2013). Is philosophical knowledge possible? In Machuca, D. (Ed.), Disagreement and skepticism (pp. 260–276). Routledge.
Lange, M. (1999). Calibration and the epistemological role of Bayesian conditionalization. Journal of Philosophy, 96, 294–324. 10.2307/2564680.
Lasonen-Aarnio, M. (2020). Enkrasia or evidentialism? Learning to love mismatch. Philosophical Studies, 177, 597–632. 10.1007/s11098-018-1196-2.
Lewis, D. (1971). Immodest inductive methods. Philosophy of Science, 38, 54–63. 10.1086/288339.
Littlejohn, C. (2018). Stop making sense? On a puzzle about rationality. Philosophy and Phenomenological Research, 96, 257–275. 10.1111/phpr.12271.
Maher, P. (2002). Joyce’s argument for probabilism. Philosophy of Science, 69(1), 73–81. 10.1086/338941.
Rand, A. (1967). What is capitalism? In Capitalism: The unknown ideal. Signet.
Rand, A. (1971). The age of envy. In Schwartz, P. (Ed.), Return of the primitive: The anti-industrial revolution. Penguin, 1999.
Smithies, D. (2019). The epistemic role of consciousness. Oxford University Press. 10.1093/oso/9780199917662.001.0001.
Sturgeon, S. (2019). Epistemology, Pettigrew style: A critical notice of Accuracy and the Laws of Credence, by Richard Pettigrew. Mind, 128(512), 1319–1336. 10.1093/mind/fzy029.
Sturgeon, S. (2020). The rational mind. Oxford University Press. 10.1093/oso/9780198845799.001.0001.
Titelbaum, M. (2019). Return to reason. In Skipper, M., & Steglich-Petersen, A. (Eds.), Higher-order evidence: New essays. Oxford University Press. 10.1093/oso/9780198829775.003.0011.
van Fraassen, B. (1983). Calibration: A frequency justification for personal probability. In Cohen, R. S., & Laudan, L. (Eds.), Physics, philosophy and psychoanalysis. D. Reidel.
Walker, M. (2024). Outlines of skeptical-dogmatism: On disbelieving our philosophical views. Lexington Books.
Weatherson, B. (2019). Normative externalism. Oxford University Press. 10.1093/oso/9780199696536.001.0001.