A Satisficing Theory of Epistemic Justification

Abstract There is now a significant body of literature on consequentialist ethics that proposes satisficing instead of maximizing accounts. Even though epistemology has recently witnessed widespread discussion of teleological and consequentialist theories, a satisficing account has surprisingly not yet been developed. The aim of this paper is to do just that. The rough idea is that epistemic rules are justified if and only if they satisfice the epistemic good, i.e., reach some threshold of epistemic value (which varies with practical context), and believing is justified if and only if it follows said rules. I argue that this alternative to the implicitly established way of thinking in maximizing terms has significant advantages. First, maximizing epistemic value can be unreasonably demanding; second, a satisficing theory can make finding reasonable rules for belief formation and sustenance much more accessible; and third, a satisficing approach is a better alternative to both general subjectivist and maximizing objectivist attempts to spell out epistemic blame.

1. Structure I situate my theory in a framework of teleological epistemology (TE). To develop a satisficing theory of epistemic justification, it will be crucial to specify TE. Since there is still a shortage of precise formulations, I will develop general principles of epistemic justification before I specify my satisficing theory. A brief sketch of TE will be given in the second section; a general principle of indirect epistemic justification will be developed in the third section. In section 4, I will develop a satisficing theory on this basis. Section 5 shows how this framework applies to an episode in the history of science, gives three motivations for a satisficing theory, and discusses how to determine the threshold for justified believing. Sections 6 to 8 reply to objections of relativism, permissivism, and arbitrariness.

1 The term 'veritism' is introduced in Goldman (1999). It is also sometimes called "the Jamesian goal" after James ([1896] 2013), "the twin cognitive good" (Carter, Jarvis, and Rubin 2014), "value t-monism" (Pritchard 2010), or "veritistic value monism" (Ahlstrom-Vij 2013).
2 Berker (2013, 344ff.) slices the first two pillars for a different purpose into a theory of final value and a theory of overall value.
3 For example, Ronzoni (2010, 455) and Williams (1988, 21).
4 Ahlstrom-Vij and Dunn (2018) allow even consequentialists to restrict the consequence set to that of a single agent (i.e., restricting social trade-offs). Similarly, I think that some restrictions of the consequence set can be conceptualized not as genuine restrictions that the right puts on the good (not side constraints) but as part of the axiology. For example, advocating for the 'truth-now-goal' (cf. Foley 1993) has a clear teleological structure but avoids the implausible epistemic trade-off of my justifiably believing an obvious falsehood now in order to gain more epistemic value later. For the problems of epistemic trade-offs in a teleological framework see, e.g., Firth (1981), Fumerton (1995), and Berker (2013). For a recent argument for not allowing epistemic trade-offs as part of an argument for teleological nonconsequentialism, see Littlejohn (2018, 37-40). Ahlstrom-Vij and Dunn's (2018) restrictions are an answer to trade-offs for the consequentialist.
5 In today's normative ethics, 'teleology' is typically used in a broad sense, comparable to mine (cf. Portmore 2005, 96 n6; Rawls 1971). In epistemology, for a similar broad use see Littlejohn (2018), Wedgwood (2018), and, to some extent, Berker (2013). For using deontology broadly and teleology narrowly instead, see Kagan (1998) and Klausen (2009).
6 For example, Feldman (1988) and Briesen (2016).
7 But see Berker (2013) and Ahlstrom-Vij and Dunn (2018).
I start by formulating a principle of direct objective epistemic justification, since it is the most straightforward and all others can be viewed as adding various restrictions or extensions to it.
(EJ-DO) Principle of Direct Objective Epistemic Justification: For all subjects S and propositions p: believing that p is epistemically justified for S if and only if believing that p promotes the correct epistemic goal(s). This is a modified version based on considerations by Briesen (2016, 281), Chisholm ([1966] 1989), Feldman (1988, 248), and Klausen (2009, 163). Without going into too many details, I want to mention three points of departure.
First, to ensure generality in my formulation, I left out proposing a relevance condition or a specific epistemic goal. Second, Feldman's explications have quite a deontological flavor, whereas Klausen's are strictly consequentialist. Klausen even goes as far as allowing for interpersonal trade-offs. EJ-DO (and my other upcoming principles) are teleological but stay mostly uncommitted about the number of restrictions, except that they rule out interpersonal trade-offs, since I am concerned with agent-centered epistemology.
Third, Feldman (1988, 248) suggests the following principle: "For any proposition p and person S, if S considers p then S is epistemically obligated to try his best to bring it about that S believes p if and only if p is true." This principle implies that if p is false, then S is not obligated to try to believe that p. It does not imply that if p is false, then S is obligated to avoid believing that p. S is still permitted to believe, since not being obligated does not imply not being permitted. Feldman, of course, does want an obligation to try to avoid believing falsehoods. Therefore, his principle, as it stands, is too weak. It permits everyone to believe everything; it merely does not obligate everyone to believe anything. Invoking the twin cognitive good is another way of fixing Feldman's principle, something he does afterwards without recognizing that it is not implied by his quoted principle.
I will now build on EJ-DO to develop a principle of indirect justification. I develop my satisficing account in a framework of indirect justification because it is the implicitly presupposed metaepistemology for most theories of justification and I can, as such, cover the most ground. 8 For instance, process reliabilism is considered a version of rule consequentialism, i.e., indirect justification (e.g., Driver 2018, 114; Firth 1981, 12; Goldman 1986; Kornblith 2018, 70; Resnik 1994). The case for evidentialism is similar. For instance, Feldman argues that justification is entirely explained by evidential support (Feldman 1988, 254) but "one's epistemic goal is to get at the truth" (255). Again, one has this intermediate layer of evidential support that mediates between justified believing and the truth goal.
In ethics, the most prominent representative of indirect justification is 'rule-consequentialism.' Rule-consequentialists state that there is some fundamental rule (or set of rules) whose correctness is defined via conduciveness to the ethical good. Moreover, the acts of a person are morally right if and only if that person follows the rule(s). Hooker (2016, 6.1), the main contemporary proponent of rule-consequentialism, explicates actualist (i.e., objective) rule-consequentialism as follows: "An act is wrong if and only if it is forbidden by rules the acceptance of which would actually result in the greatest good." Using the same technicalities as in the formulation of EJ-DO, this transfers to epistemology as follows: (EJ-IO) Principle of Indirect Objective Epistemic Justification: For all exhaustive sets of rules R: R is epistemically justified if and only if following R promotes the epistemic good and for all subjects S and propositions p: believing that p is epistemically justified for S if and only if believing that p follows R.
EJ-IO extends the assignment of deontic properties from beliefs to rules. Restrictions can come in again on the level of the epistemic good, by explicating it, for example, as 'believing truths and avoiding error (now, on matters of relevance, for S, …).' With 'exhaustive sets of rules R' (henceforth simply R), I refer to a complete rule set of belief evaluation, one that correctly assigns being justified or not being justified to every belief in question. As a contrast, take an incomplete R c consisting only of the rule not to believe contradictions. It is reasonable to think that R c promotes the epistemic good and is thus justified. Now, suppose you consistently believe that the earth is flat based on insufficient evidence, ignoring counterevidence, and not trying to gather further evidence. Since you followed the justified rule R c , and rule following is sufficient for justification, your belief that the earth is flat would be justified. That cannot be correct. It is not correct because even though you did not violate R c , you violated other rules which you should have considered as well.

4.a The basic idea of a satisficing approach
So far, EJ-IO uses the vague notion of 'promotion.' But what epistemologists typically suppose is that only those R are justified that maximize the epistemic good or, at least, are better than all alternatives in promoting epistemic value. Going for less than value maximization parallels the discussion of 'satisficing deontic theories' in normative ethics, and since there is no elaborate account of a satisficing theory in epistemology, it is advisable to take a brief look at normative ethics.
Michael Slote (1984) introduces a form of act consequentialism as 'satisficing consequentialism' in contrast to the traditional view of 'maximizing consequentialism' proposed by Sidgwick (1874). 9 Fundamentally, Slote proposes that some acts promote the ethical good sufficiently to be right even if they are not maximizing. Transferred to epistemology, this amounts to the following: reaching some threshold of epistemic value is sufficient for a belief, rule, or method of justification to be justified. As a result, one obtains more relaxed principles of belief formation and sustenance with lower epistemic standards, thereby allowing for a wider variety of rational agents than a maximizing approach does.
Note that this account diverges substantially from the basic idea of Simon (1956) in rational choice theory or Gigerenzer and Goldstein's (1996) considerations of bounded rationality. Slote's account and mine are not replies to the problem of limited information. Even if one knows that F-ing promotes the ethical or epistemic good better than G-ing, one is justified in G-ing as long as G-ing satisfices value. You are morally permitted to buy a present for your mother even if it does not maximize utility and you know that it does not.
Furthermore, Slote and I attempt to explicate conditions for right actions or right ways of believing, respectively, and our accounts operate in an explicitly teleological argumentative structure, including an associated value theory, which a theory of bounded rationality does not. I would even argue that reframing Gigerenzer and Goldstein's (1996) account as a naturalized epistemology in TE simply makes for a maximizing theory. It attempts to show, by modeling, that violating rules of traditional rationality in favor of simplicity increases inferential speed and predictive accuracy. Given that the epistemic good is to increase inferential speed and predictive accuracy, what is really shown is that such an alternative set of rules is aimed at maximizing epistemic value. Something similar can be said in the case of Simon's account. Since my theory is, however, properly satisficing (in that it is not aiming at any form of value maximization), I claim to offer a genuinely new approach. The precise formulation based on EJ-IO is as follows: (Satisficing EJ-IO) Satisficing Principle of Indirect Objective Epistemic Justification: For all exhaustive sets of rules R: R is epistemically justified if and only if following R satisfices the epistemic good and for all subjects S and propositions p: believing that p is epistemically justified for S if and only if believing that p follows R.
Note that Satisficing EJ-IO does not propose contradicting deontic properties for beliefs in the case of competing exhaustive sets of rules. Satisficing EJ-IO should not be understood as allowing for beliefs to be absolutely justified and unjustified simultaneously. The correct understanding is as proposing a form of relativism: satisficing EJ-IO allows that some beliefs are justified relative to some R but unjustified relative to some R'.

4.b Two wrong perspectives on a satisficing theory
It is frequently maintained that reliabilism in epistemology is analogous to rule consequentialism in ethics (e.g., Driver 2018, 114; Firth 1981, 12; Goldman 1986; Kornblith 2018, 70; Resnik 1994). Ahlstrom-Vij and Dunn even propose an analogy between reliabilism and satisficing consequentialism. If this were correct, then a satisficing theory would have been present in epistemology all along without recognition; it would merely be reliabilism. They write: Another view says that right beliefs must instead lead to some threshold level of epistemic value. This is the view of the reliabilist-a process can generate justification while failing to be maximally reliable-and it is in this respect analogous to the satisficing consequentialist in ethics. (2018, 5) I think that this is not the best analogy from ethics. Consider a very strict process-reliabilist rule, such as the following: believing that p is justified for S at t if and only if S's believing that p at t results from a maximally reliable cognitive belief-forming process. Such a rule will in fact not maximize epistemic value, since the rule is much too strict and excludes far too many true beliefs. On the other hand, take a more relaxed rule: believing that p is justified for S at t if and only if S's believing that p at t results from a sufficiently reliable cognitive belief-forming process. After specifying the threshold of sufficient reliability, such a rule is a much better candidate for maximizing epistemic value without demanding maximal reliability. This should point to the fact that there is a disconnect between the maximization of epistemic value and being a maximally reliable process. If one identifies the two with each other, then one confuses different layers of TE.
It could be objected that Ahlstrom-Vij and Dunn's point is more basic. A maximally reliable belief-forming process produces only true beliefs but any reasonable version of process reliabilism always falls short of this ideal. As such, any reasonable version of process reliabilism allows for beliefs to be justified but not true. This is what classifies them as satisficing theories. In this case, however, the exact same could be said of any version of indirect justification. Reliabilism, evidentialism, and any rule-based deontic theory would be a satisficing theory. However, this would draw upon a flawed analogy from ethics, since this would suggest that rule consequentialism in ethics constitutes a satisficing theory as well but, typically, it is not conceived as such. It is also misleading to pick out process reliabilism as something special by associating it with a satisficing theory, since such a use of the term 'satisficing theory' would not differentiate reliabilism from any other theory of indirect epistemic justification.
Ahlstrom-Vij and Dunn's explication can partly be explained by the following disanalogy between a satisficing account in ethics and a satisficing explication of reliabilism in epistemology: Slote developed satisficing consequentialism explicitly for act consequentialism (i.e., a direct form of ethical justification), but reliabilism is structurally a form of indirect justification. As already indicated, to draw an analogy to indirect justification in epistemology, one has to propose that rules are epistemically justified as long as they meet some threshold of realizing epistemic value. This is what makes beliefs aim only indirectly at the epistemic good. For reliabilism this reads as follows: reliabilism is a method of justification which is itself justified by promoting (maximizing or satisficing) epistemic value, and beliefs are justified by being formed according to reliabilism.
Let me move to a second wrong perspective on a satisficing theory. Consider a method of justification, call it R p , stating that if a cognitive belief-forming process reaches some threshold x of reliability, then the beliefs formed by this process are justified. 10 Suppose further that both believing based on some source A and some source B follows R p . Furthermore, A is more reliable than B. Then, choosing B still follows R p and is justified even though there is a more reliable source available. If R p is truly better than all alternatives in maximizing epistemic value, then one could view this as a maximizing theory on the level of principles, but as a satisficing theory on a lower level.
One must object that R p by itself does not truly maximize value. For a maximizing theory there is a need for an additional condition which specifies the following: if multiple belief-acquiring options are above the threshold x, then one has to choose the one with the highest reliability. Not adding this condition makes for a truly satisficing theory, since it allows for less than maximizing epistemic value. As such, one version of a satisficing theory is simply one that omits this condition. Both believing based on A and believing based on B follow R p . R p proposes the ideal threshold for reliability, and this might be a good reason why choosing B as a source is justified and the beliefs formed accordingly are as well. If the general idea of this case is convincing, then one might consider it already a first motivation for a satisficing approach to epistemology.
This motivation for a satisficing approach is especially striking if one formulates the case on the level of processes: if a belief-forming process A is above the ideal threshold for reliability x, then it is reasonable to think that beliefs formed according to A are justified, despite there being another more reliable belief-forming process.
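The structural contrast between R p alone and R p supplemented with the maximizing condition can be sketched as a small toy model. The function names, source labels, and reliability numbers below are purely illustrative and carry no theoretical weight; the sketch only makes the deontic difference between the two selection rules explicit:

```python
# Toy model of section 4.b: sources with reliabilities, a threshold x,
# and two selection rules built on the same underlying rule R_p.

def permissible_sources(sources, threshold):
    """Satisficing R_p: every source at or above the threshold is permissible."""
    return [name for name, r in sources.items() if r >= threshold]

def required_source(sources, threshold):
    """R_p plus the maximizing condition: among the sources above the
    threshold, only the most reliable one may be used."""
    above = {name: r for name, r in sources.items() if r >= threshold}
    if not above:
        return None
    return max(above, key=above.get)

sources = {"A": 0.98, "B": 0.96, "C": 0.80}  # illustrative reliabilities
x = 0.95  # illustrative threshold

print(permissible_sources(sources, x))  # satisficing: both A and B are permissible
print(required_source(sources, x))      # maximizing: only A
```

On the satisficing reading, believing based on B remains justified even though the more reliable A is available; the maximizing condition would single out A alone.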

5.a A maximizing approach is too demanding
The aim of this subsection is twofold. My exposition so far is quite abstract because my focus is on explicating a general framework. In this section, I will show how my framework applies to a case in the history of science. Secondly, I will generalize the case as a first motivation for a satisficing approach. The general idea is that value maximization is too demanding.
Han Li gives an interesting case as a motivation for his theory of epistemic supererogation, which I want to extend into a motivation for a satisficing theory. He writes: Take, for example, Albert Einstein's famous theory of general relativity. […] [It] was an epistemic achievement of the highest order. But it seems that much of the evidence that supported Einstein's theory was well known to physics of the time. Probably every sufficiently well-educated physicist was in position to justifiably believe in the theory of general relativity before it was actually discovered. If all this is correct, should we say, then, that all these scientists were actually irrational in failing to believe in the theory of general relativity? This seems far too harsh a verdict. (2018)

Li speaks from a perspective of virtue epistemology and is fundamentally concerned with the explication of the rational person, not with justification. However, Li's case can be adopted as a motivation for satisficing TE, insofar as it shows that maximizing is too demanding. Let me show first that this very general description of the case is actually not problematic for a maximizing theory. Sure, we would not want Einstein's colleagues to be epistemically obligated to believe the theory of special relativity (SR) or general relativity (GR) as soon as the evidence was available. However, a maximizing theory can also accommodate this verdict. Usual versions of maximizing evidentialism or reliabilism do not suppose that any belief produced by a reliable belief-forming process, or any belief sufficiently backed up by evidence, ought to be formed. The reason is, as mentioned earlier, that obligations to believe have to be restricted to questions of relevance, or the truth-goal has to be relativized to propositions under consideration. Before anyone worked out GR with its associated propositions, various of those propositions were not even under serious consideration.
Consequently, for many of the propositions associated with GR, there were no obligations to believe, even under the presupposition of a maximizing theory.
The more interesting case, contrary to Li's focus on matters before their discovery, concerns the false beliefs that were still sustained by Einstein's colleagues contrary to the evidence (interpreted in the right way). To make this case more specific, let me add some historical details on the development of SR and GR and how they compare to Henri Poincaré's relativity theory. There is some disagreement among historians of physics as to what extent Einstein's and Poincaré's theories were similar (cf. Darrigol 2004), but at least the following is agreed upon: like Einstein's SR (1905), Poincaré (1904) built on the mathematics of Lorentzian electrodynamics and recognized the invariance of the Maxwell-Lorentz equations under the Lorentz transformations. He developed a principle of relativity almost identical to Einstein's, understood the relativity of simultaneity, and recognized that the measurements of the velocity of light were identical in different inertial frames. One main difference on the path to GR, however, was their treatment of Euclidean geometry, which I will discuss soon.
What separated them were not merely some details in their proposals but rather a different epistemological framework, which resulted in a different outcome. 11 Poincaré took up part of the Kantian tradition insofar as he relied on organizing principles to structure perception and empirical data. This he shares with Einstein. The fundamental difference is that for Einstein those organizing principles were freely created (cf. Miller 1984, 40-41), whereas for Poincaré, they were synthetic a priori innate principles (1898; [1902] 1952). This led Einstein to a more flexible framework. It made Poincaré (1898, 41-42) believe that "geometry is not an experimental science; experience forms merely the occasion for our reflecting upon the geometrical ideas which pre-exist in us." Poincaré took a conventionalist approach to geometry (Poincaré 1898; also see Hagar and Memmo 2013, 363). Consequently, he believed until his death that the choice to add time as a fourth dimension is at best an instrumentally useful convention. He even went as far as thinking that the laws of mechanics cannot be empirically disconfirmed (cf. Miller 1984, 41). For Einstein, on the other hand, axiomatic geometry was epistemologically on a par with physics. He wrote: "Geometry thus completed is evidently a natural science; we may in fact regard it as the most ancient branch of physics" ([1921] 1970, 8). As such, he believed that empirical testing would decide between competing geometries, something Poincaré denied. In 1908, Poincaré argued for the privileged position of Euclidean geometry, whereas Einstein was driving towards GR with its rejection of Euclidean geometry (Miller 1984, 42).
We can see here two different methods of justification at work. Roughly summarized, one method (Einstein's) had fewer a priori aspects and was thus more flexible in accommodating empirical counterevidence, allowing Einstein to realize the failures of Euclidean geometry, whereas the other method (Poincaré's) integrated more a priori aspects and was thus more static, making Poincaré ultimately retain the Euclidean picture. Both methods were, however, excellent in getting the physics of relativity right in a way that was truly exceptional, realizing an amount of epistemic value on the subject matter that clearly surpassed that of their colleagues, even though, as Darrigol (2004, 619) states, "Einstein provided the version that is now judged better." 12 Analyzing this episode in maximizing terms would result in judging that Poincaré's epistemological method was not justified, even considering his extraordinary epistemic results, simply because Einstein, by using his method, got some additional parts of the picture right. This seems much too harsh a verdict, which thus speaks against a maximizing theory. A satisficing theory evaluates both Einstein's and Poincaré's epistemological methods as justified, considering the exceptional epistemic consequences they yielded, which is the preferable verdict.
To generalize this motivation: the method of justification that is better than all alternatives might turn out to be highly complex and nearly impossible to follow for the average rational person or even the highly trained epistemologist. Going for the highest possible standard for evaluating methods might be too demanding and this can motivate lowering the threshold for picking out a method of justification.

5.b The threshold
The Einstein Case motivates the claim that epistemic value maximization demands too much of us. But there is still the question of how far one should be allowed to lower the threshold. I will argue that this varies with practical context and depends on the amount of epistemic good needed, one's ability to do epistemic good, and the position one is in. I want to start by putting forward a typical case of pragmatic encroachment on epistemic standards (cf. Stanley 2005).
Egg Case: Suppose there is an exceptional method of belief formation M 1 that is 0.95 reliable 13 in some domain D. Suppose that for a subject S, M 1 is reading the grocery list she wrote yesterday to determine whether there are eggs in her fridge. Suppose there is another method M 2 , which is 0.99 reliable for S in D: trusting the testimony of her daughter's direct perception via a phone call. Suppose further that there is a method M 3 , which is 0.999 reliable for S in D: S could go back home to check herself, eliminating potential errors (buying a detector that checks for fake eggs, etc.). Finally, method M 0 has 0.9 reliability for S in D: trusting her recent memory from yesterday based on her sense perception. M 0 through M 3 are all the methods available.
What should S believe? We need to know more about the threshold. In normative ethics, McKay (2021) recently argued that a satisficing theory must determine the threshold via three factors: the amount of good needed, one's ability to do good, and the uniqueness of one's position. The reasons to choose these three factors are quite analogous in epistemology and can be motivated as follows: (a) Amount of good needed: If there were a specific amount of good needed in the world, then agents acting as satisficers would be just good enough to bring about this ideal state of affairs.
This is the notoriously trickiest part for any satisficing theory in ethics. For many standard ethical theories (e.g., classical utilitarianism), there is no maximally good state of the world. There is no natural upper limit. As McKay (2021) admits, this makes a satisficing theory in ethics challenging. Fortunately, epistemologists are in a better position. James ([1896] 2013) already noticed that there is some variability in how much we (ought to) value believing truths compared to avoiding error, and practical considerations inform us as such. Some contexts require higher reliability, stronger evidential support, etc., than others. This brings me to the Egg Case. I suggest that S should not form beliefs based on M 0 , but not because 0.9 reliability is bad. Rather, switching from M 0 to M 1 gives S an increase in reliability of 0.05 just by looking at the grocery list. It is thus reasonable to choose M 1 over M 0 (you can adjust the percentages if you have different intuitions). It also seems perfectly reasonable not to call the daughter, since S's memory is sufficiently reliable given the situation, but it seems also perfectly reasonable to call her just to make sure. Both M 1 and M 2 are reasonable. Only M 3 seems to be unreasonable and excessive. Hence, in the Egg Case for S, the threshold for a method to be justified is 0.95 reliability. But if the importance of having eggs increases (maybe S needs to prepare a dinner for important dinner guests), M 1 would start to be unreasonable because the value of avoiding error increases. This is the practically informed context relativity of the threshold. Notice that the lesson here is not that only M 3 maximizes value because it has the highest reliability, using M 3 is wrong, and thus we should not be maximizers. This would be the mischaracterization of a satisficing theory warned against in section 4.b. M 3 is actually not maximizing value, because it values avoiding error too highly and believing truths not highly enough. If we were to demand such high epistemic standards for our methods of justification in such contexts, we would end up hardly believing anything and miss out on many relevant truths.

13 A similar story can be told for strong evidence or high confidence.
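The two moving parts of the Egg Case can be captured in a toy model. A method is ruled justified only if (i) its reliability meets a stakes-sensitive threshold and (ii) using it is not excessively costly given the stakes; the second condition is what excludes M 3 in the everyday case despite its top reliability. All numbers, costs, and cutoffs below are my own illustrative choices, not part of the theory:

```python
# Toy model of the Egg Case (section 5.b). Reliabilities follow the text;
# the practical costs and the stakes cutoffs are purely illustrative.

methods = {
    # name: (reliability, practical cost of using the method)
    "M0": (0.90, 0.0),   # trust yesterday's memory
    "M1": (0.95, 0.1),   # reread the grocery list
    "M2": (0.99, 0.3),   # call the daughter
    "M3": (0.999, 5.0),  # drive home and check oneself
}

def justified(stakes):
    """Return the methods justified at a given level of practical stakes.

    Higher stakes raise both the reliability threshold (avoiding error
    matters more) and the cost one can reasonably be asked to bear."""
    threshold = 0.95 if stakes < 0.5 else 0.99
    cost_budget = 1.0 if stakes < 0.5 else 10.0
    return [m for m, (r, c) in methods.items()
            if r >= threshold and c <= cost_budget]

print(justified(0.1))  # everyday shopping: ['M1', 'M2']
print(justified(0.9))  # important dinner guests: ['M2', 'M3']
```

This reproduces the verdicts in the text: at low stakes M 1 and M 2 are both fine while M 0 falls short and M 3 is excessive; once the stakes rise, M 1 drops out and the more reliable methods become the justified ones.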
An additional advantage of this picture with a variable threshold is that it solves the notoriously tricky problem of lottery beliefs for veritists. Many commentators (cf., e.g., Littlejohn 2012, 79) have raised the following worry: the belief that l (you lose in a large enough fair lottery) is intuitively unjustified, but since the chance of winning can be set arbitrarily low, the threshold for believing would need to be arbitrarily high for this belief to come out as unjustified. Following a rule with such a high threshold would make almost all of our beliefs come out as unjustified. That must be wrong.
My response is that lottery beliefs, too, have practical implications, and believing that l would make you throw away your ticket. This, however, is unreasonable 14 because there is no practical advantage to throwing away your ticket but there is a disadvantage if you win. As such, the value of accurately believing that l if it is true approaches 0 and the value of accurately believing that l is false if it is false approaches infinity. Thus, you are not justified in believing that l. 15 In sum, the amount of good needed explains the range of the threshold by the value we put on believing truths versus avoiding error, which is informed by the context of our practical demands. My explication of a satisficing theory and some maximizing theories can give such a range. 16 However, satisficing theories additionally propose that this range is relative to the subject based on two factors: (b) One's ability to do good: ought implies can, but this grades off.
In the Egg Case, consider a person S 2 who forms his beliefs based on his shopping lists with only 0.9 reliability. Furthermore, S 2 could also use M 2 (calling his daughter with 0.99 reliability), and these are all methods available. Now 0.9 (i.e., M 0 -just using her memory) was not good enough for S 1 because the jump to 0.95 was too easy. Thus, we set the threshold at 0.95. For S 1 , however, getting over 0.95 would mean applying M 2 , which is significantly more costly. We conclude that for S 2 , 0.9 is sufficient. (Again, you can modify the numbers if you have different intuitions.) S 2 is justified to believe based on 0.9 reliability while S 1 is not, and this is explained by their different abilities to do epistemically good. 17 We get the same subject relativity in the Einstein Case. Einstein, Poincaré, and their scientific peers all had different abilities to achieve epistemically good outcomes. Einstein and Poincaré were 14 This part is quite similar to Littlejohn's solution. 15 Of course, your credences in l still ought to be very high. However, in some contexts and for some propositions high credences will not imply full belief. See Dorst (2017, 186-92) for such a proposal. 16 Recently, Dorst (2017, 188) argues quite analogously in a Lockean framework that the magnitudes of the values of true and false belief determine the threshold, and this threshold varies with context and proposition in question. The threshold varies, but Dorst's theory is still a maximizing one (as he recognizes) since for every proposition p in a specified context c there is only one correct threshold, the one that maximizes value. 17 This strategy is quite similar to Rogers's (2010) and Chappell's (2019) strategies for their satisficing theory. They only allow nonmaximization if the maximizing strategy is too costly or an undue burden. extraordinary individuals, so we demand epistemically more from them. 
We lower the threshold for the average scientist, but we still expect them to adjust their views to the high standards of the community and the expertise of their peers once the results become more accessible.
(c) Unique position: Some are simply in a better position to do good than others.
In the epistemic realm, this applies especially to scientists and public science communicators. We expect scientists to take extraordinary responsibility for accuracy. Einstein's approach, even though supererogatory, is still not beyond expectation for a scientist, because we expect epistemic supererogation from scientists. If we think that science is one of our best knowledge-generating sources, then it is reasonable to think that in a mature, progressive science at least the median scientist fulfills their epistemic demands. This is of course open to further evaluation. 18 Poincaré clearly fulfilled such demands, and thus his epistemic methods were justified.
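For concreteness, the subject-relative thresholds of the Egg Case can be rendered as a toy model. This is only an illustrative sketch, not part of the paper's formal apparatus: the `Method` type, the cost numbers, and the `cost_budget` cutoff are invented here to make the "too easy a jump" versus "too costly an upgrade" reasoning explicit.

```python
from typing import List, NamedTuple

class Method(NamedTuple):
    name: str
    reliability: float  # stands in for expected epistemic value
    cost: float         # practical effort of applying the method

def subject_threshold(methods: List[Method], cost_budget: float) -> float:
    """Subject-relative satisficing threshold: the best reliability the
    subject can reach without exceeding the practical cost budget.
    Cheap upgrades raise the bar (the jump is 'too easy'); costly
    upgrades do not (ought implies can, and this comes in degrees)."""
    affordable = [m for m in methods if m.cost <= cost_budget]
    return max(m.reliability for m in affordable)

# S1 can cheaply upgrade from memory (M0) to the shopping list (M1).
s1 = [Method("M0: memory", 0.90, 1.0),
      Method("M1: shopping list", 0.95, 1.0),
      Method("M2: call daughter", 0.99, 5.0)]
# S2's shopping list is only 0.9 reliable; the only upgrade is costly M2.
s2 = [Method("M1: shopping list", 0.90, 1.0),
      Method("M2: call daughter", 0.99, 5.0)]

print(subject_threshold(s1, cost_budget=2.0))  # 0.95: S1 must use M1
print(subject_threshold(s2, cost_budget=2.0))  # 0.9: 0.9 suffices for S2
```

On this toy picture the same 0.9-reliable method satisfices for S 2 but not for S 1, purely because of their different practical options; the numbers, like those in the text, can be modified to match different intuitions.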
This version of a satisficing theory can also answer Bradley's (2006) well-known objection to ethical satisficing consequentialism. Bradley's criticism, transferred to (ego-centered) epistemology, is as follows: if one's total epistemic value is above the threshold n, one could permissibly believe a random falsehood as long as one's total epistemic value does not drop below n. The presented theory does not fall prey to this objection. We cannot simply choose a method of justification with lower (expected) epistemic value unless this choice is motivated by its greater practicability, given one's ability to do good and one's unique position.

5.c Simplifying rule finding
As in ethics, so in epistemology, it is contentious what counts as a rule. This goes together with the most iconic criticism of rule consequentialism: that it just collapses into act consequentialism. The idea is as follows. Sometimes an act promotes the good but violates a general good-promoting rule such as "Don't steal." Thus, adding an exception clause yields a rule that is better at promoting value. Adding exception clause after exception clause in the same fashion will then ultimately make the act prescriptions of rule consequentialism and act consequentialism coextensive. The standard reply (cf. Hooker 2016, 8) is that adding that many exception clauses simply makes consequentialism impractical; it plausibly even contributes to misapplications of exception clauses due to complexity. The lesson: rules should be simple! Interestingly, we find similar reasoning in epistemology. Williamson (2002, 223) argues against indirect subjective principles of epistemic justification by pointing out that a rule such as "Add salt when the water boils" is far superior to "Do what appears to you to be adding salt when what appears to you to be water appears to you to boil." It is not that advocates of the simpler rule deny that we sometimes mistake salt for a similar-looking ingredient; it is rather that the simple rule "Add salt!" already motivates one to check whether it really is salt. Williamson then transfers this to epistemology: "Just as we can follow the rule 'Add salt when the water boils', so we can follow the rule 'Proportion your belief in a proposition to its probability on your evidence'. Although we are sometimes reasonably mistaken or uncertain as to what our evidence is and how probable a proposition is on it, we often enough know enough about both to be able to follow the rule" (2002, 223). This is not only an argument against subjective principles of epistemic justification; it also motivates the view that it is valuable to have rules that are easy to follow.
One advantage of a satisficing theory is that it is easier to follow than a maximizing one. A satisficing theory can do away with the task of evaluating which exhaustive set of rules is better than all alternatives in promoting epistemic value. Finding some R that satisfices epistemic value is already sufficient. This makes it much easier to find an R to follow: as soon as some R is good enough, you do not have to look any further for better alternatives. 18 For example, degenerate research programs (Lakatos 1974).

5.d Epistemic blameworthiness
Two related objections arise to the motivation of section 5.c, i.e., simplifying rule finding. First, one might worry that it mixes up objective principles of epistemic justification with the pragmatics of rule application or rule selection. Second, one might worry that epistemic rules, in particular, are not something (rational) agents explicitly and consciously follow, and that considerations about the practicality of rule following are thus beside the point.
I reply that at least for those philosophers who think that the practicality of rules has some bearing on whether a rule is right or wrong (see the discussion of Hooker and Williamson in the last section), the motivation has force. If, however, one is from the opposing camp, then I will concede this one motivation. Still, even if one thinks that an idealized theory of epistemic justification should be stripped of all subjectivist and relativist aspects, and of all practical considerations of rule finding, then, I will argue, a satisficing theory is still valuable for spelling out the concept of 'epistemic blameworthiness.' Differentiating wrongness from blameworthiness is widespread in ethics (cf. Hooker 2016, 6.1). 19 Recently, Driver (2018, 118) argued that our critical practice warrants such a separation in epistemology as well. Kvanvig ([2005] 2014, 361) argues that it is reasonable to differentiate Epistemic Blameworthiness, with its subjectivist aspects, from purely objective Justification. Furthermore, Singer (2018) argues that adopting this differentiation in epistemology is an important lesson one should draw from ethical consequentialists to avoid common objections. Thus, one might reformulate the proposed satisficing deontic theory as an explication of the notion of Epistemic Blameworthiness but stick with a maximizing deontic theory when explicating Justification. This preserves a completely nonrelativist handling of justification as an ideal epistemic theory, but it also preserves the motivation from section 5.a that a maximizing theory is too demanding, such that rational agents are at least not epistemically blameworthy if they are value satisficers.
A satisficing explication of Epistemic Blameworthiness also has advantages over a completely subjective explication. Merely believing that your beliefs maximize the epistemic good cannot be sufficient for not being epistemically blameworthy. In the Flat Earth Case, suppose you believed that you did everything epistemically right by believing that the earth is flat. You should still be epistemically blamed for holding such a belief, since you made too many mistakes. 20 But as soon as one restricts subjectivity to something like "you are not epistemically blameworthy if and only if it is justified to believe that your beliefs maximize the epistemic good," one is just back to equating blameworthiness with justification. Consequently, for spelling out epistemic blameworthiness, mere believing is not good enough, but justified believing goes too far. Now, there might be a middle ground, but it is very hard to spell out correctly. A satisficing explication of Epistemic Blameworthiness does this in a very natural way: the epistemic standards are lower than the standards for justified believing but higher than mere believing. 19 Note, however, that if one differentiates blameworthiness from wrongness in ethics, then blameworthiness is typically explicated as a form of expectabilist consequentialism (cf. Hooker 2016, 6.1), i.e., a subjective form of wrongness, or it is explicated as objective wrongness plus some additional conditions, especially control and knowledge conditions. What I am suggesting is quite different: it ties blameworthiness to not satisficing value and wrongness to not maximizing value. I am not aware of any theory in ethics that does that. 20 For the implausibility of purely subjectivist principles of epistemic justification, also cf. Alston ([1985] 1989, 88-89).
6. Objection 1: Contradictory instructions
I want to turn now to three expected objections. What is ultimately expected from a complete normative theory in epistemology is that it gives clear instructions about obligations to believe, or at least about justified believing. There is a worry that a satisficing theory cannot live up to this demand. If justified R justifies believing that p and justified R' does not, how should an agent decide whether to believe that p? What we cannot accept is a complete epistemic normative theory that gives contradictory instructions for belief formation and sustenance. I call this the Contradictory Instructions Objection:
Contradictory Instructions Objection: For all subjects S and propositions p: the correct overall normative theory of belief formation and sustenance should neither imply that S is obligated to believe that p at t and obligated not to believe that p at t, nor that S is permitted to believe that p at t and not permitted to believe that p at t.
To address this objection, a detour to the intersection of the third and the fourth pillar of TE is necessary; that is, it has to be addressed how to get from justifications to obligations to believe. The most straightforward principle is as follows:
(BN-So) Straightforward Norm of Belief Formation and Sustenance, Obligation: For all subjects S and propositions p: S is epistemically obligated to believe that p if and only if S is justified (according to satisficing EJ-IO) in believing that p. S is epistemically obligated not to believe that p if and only if S is not justified in believing that p.
I formulated this principle in terms of obligations because in epistemology it is not sufficient to speak merely about permissions. If there were no obligations to believe, then you would always be permitted to withhold belief and would not get to your goal of believing truths; this would defy the whole teleological motivation. This, however, puts some restrictions on the correct epistemic goal. If the goal were merely believing truths and avoiding error, then it seems that one would be obligated to believe all kinds of propositions that one has not even considered and that are of no relevance, simply because doing so increases epistemic value. Thus, there needs to be some kind of relevance condition, 21 or some kind of restriction to propositions under consideration. 22 Furthermore, one cannot omit the additional condition for obligations not to believe. Without that condition, S would merely not be obligated to believe that p but would still be allowed to believe that p (which Feldman did not recognize, as mentioned earlier) if there were no justification not to believe that p. This would be too weak. BN-So is as elegant as it is intuitive. What speaks for it is that it draws the most straightforward connection from the epistemic goal to belief obligations. However, it could be troublesome in connection with a satisficing deontic theory, since it seems to run into the Contradictory Instructions Objection: for conflicting exhaustive sets of rules that satisfice epistemic value, there will be some p that S is obligated to believe according to R and obligated not to believe according to R'. This outcome can be circumvented if a person simply chooses one and only one R at any given time. After all, the basic idea of a satisficing theory is to give the subject the freedom to choose between satisficing R and R'. Thus, by choosing precisely one R at t, no contradictory instructions arise.
The question now arises, however, whether we can live with the resulting Permissivism and relativism, which will be the topics of the next two sections.

7. Objection 2: Permissivism and arbitrariness
The relativist consequences of a satisficing theory imply 'Permissivism,' i.e., a violation of what Feldman ([2006] 2011) calls the 'Uniqueness' thesis: that "a body of evidence justifies at most one proposition out of a competing set of propositions […] and that it justifies at most one attitude toward any particular proposition." Suppose there are two evidentialist methods that satisfice epistemic value but justify contradictory sets of beliefs (see section 5.1); then Uniqueness is violated because the same body of evidence (i) justifies competing propositions relative to different R and (ii) justifies more than one attitude toward some propositions, because it can obligate one to believe that p relative to R and obligate one not to believe that p relative to R', and thus justify two doxastic states towards p, i.e., believing and withholding belief.
First, if one is already convinced by a Permissivist picture, then this further empowers a satisficing framework. As Li (2018, 351) recognizes: "Many philosophers have found the uniqueness thesis to be intuitively implausible, but there are relatively few fully developed epistemic theories that can explain why it is false." A satisficing theory of epistemic justification does just that: since there will be closely related evidentialist rules that satisfice value but justify different sets of beliefs, a satisficing theory of epistemic justification can explain in a very natural way why Uniqueness is false. 23 But what if one thinks violating Uniqueness is wrong? While this is not the place to settle the Permissivism debate, I want to argue that the Permissivism of a satisficing theory can be quite reasonable. It is useful to differentiate two versions of violating Uniqueness: Inter-Personal Permissivism, i.e., violating Uniqueness across persons, and Intra-Personal Permissivism, i.e., violating Uniqueness within one single person (cf. Kelly 2014). Most Permissivists defend Inter-Personal Permissivism. For instance, Schoenfield (2014) argues that it is reasonable for epistemic standards to vary between individuals (also cf. Kelly 2014; Podgorski 2016; Simpson 2017). Simply adopting a veritist framework with a twin cognitive goal can already make a form of Permissivism plausible, since, arguably, the balancing of the value of believing truths and avoiding error is not exhausted by epistemic reasons.
Intra-Personal Permissivism, on the other hand, is much less defended. 24 Allowing one single rational agent to have varying epistemic standards seems unintuitive, and the presented satisficing theory sanctions such Permissivism. The worry can be put most bluntly with what I call the 'Arbitrary Switching Objection.' Suppose there are two competing R that satisfice value for S. What would prevent S from arbitrarily following R at t and R' at t' in the exact same epistemic situation? Nothing, it seems, since both satisfice value. Maybe S believes p based on R on weekdays and ¬p based on R' on weekends. The objection is: if Satisficing EJ-IO sanctions this, then there must be something wrong with it. White (2005) challenges Permissivism by arguing precisely that it leads to some unacceptable arbitrariness of one's doxastic attitudes (also cf. Kolodny 2007, 248). In response to White's arguments, Permissivists typically try to avoid arbitrariness at least of epistemic standards, 25 but it seems that Satisficing EJ-IO has arbitrariness built into it. It is also resilient to Jackson's (2021) solution for the Intra-Personal Permissivist. She argues from cases of supererogation to permissive switching: if supererogatory reflection on your beliefs suggests revising a belief, then such a change is permitted but not required, because it is supererogatory. Thus, revising and not revising are both permissible. Whatever we think about this solution, it does not work for the present Arbitrary Switching Objection, because in her cases the switching is based on some epistemic procedure, such as rational reflection, whereas arbitrary switching is not.
What is then the solution? Ye (2019) pushes Permissivism further than Jackson and concludes that arbitrariness is simply fine. His answer is, in short, that there is nothing unreasonable about choosing an action between two permissible actions arbitrarily, and analogously the same holds for belief. This solution works for a satisficing theory as well, but I do not even have to go as far as Ye.
First, contrary to Ye, my view is not committed to Permissivism about credences. For example, in lottery cases your credence that your ticket will lose can be extremely high without you believing that it will lose. Since practical considerations impinge on the balancing of the truth goal but not necessarily on credences, there is no arbitrariness of credences. As noted earlier, following Dorst (2017), this can still preserve Lockeanism, since the threshold is variable. As such, the arbitrariness of choosing epistemic standards in my framework is already weakened, since uniquely tailoring credences to evidence is preserved.
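The Dorst-style idea that the magnitudes of the values of true and false belief fix a variable Lockean threshold can be illustrated with a back-of-the-envelope calculation. The function and the numbers below are my own illustration, not Dorst's formalism:

```python
def lockean_threshold(value_true: float, cost_false: float) -> float:
    """Minimum credence for belief when a true belief is worth
    `value_true` and a false belief costs `cost_false`: believe p only
    if credence * value_true outweighs (1 - credence) * cost_false."""
    return cost_false / (value_true + cost_false)

# Everyday proposition: truth and error matter about equally.
print(lockean_threshold(1.0, 1.0))       # 0.5

# Lottery proposition l: truly believing l is nearly worthless, while
# falsely believing l (tossing a winning ticket) is disastrous, so the
# threshold approaches 1 and even a 0.999 credence does not yield belief.
print(lockean_threshold(0.01, 1000.0))   # ≈ 0.99999
```

This makes vivid how credences can stay uniquely tailored to the evidence while the belief threshold, driven by practical value magnitudes, varies with the proposition and context.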
Second, what about the arbitrariness of choosing rules that satisfice epistemic value? Similar to current responses to arbitrariness (cf. Schoenfield 2014, 199; Meacham 2016, 472-73), I respond that any R still singles out one specific belief; believing is not arbitrary (also see section 6.1). Ye (forthcoming; also cf. White 2005, 452; Feldman 2007, 205-6) objects that this move just pushes the arbitrariness onto the epistemic standards or the rule choice. Note, however, that since I, contrary to Ye, explain the choice of epistemic standards via practical considerations, the choice of epistemic standards is not arbitrary. It is not arbitrary whether I believe that there are eggs in my fridge based on a higher-effort/high-reliability method or a lower-effort/slightly-lower-reliability method. There is a practical trade-off, and this trade-off can explain the R (with its implied epistemic standards) that one chooses. So, going beyond Ye, there is not simply an analogy between permissive action and permissive belief: permissive actions straightforwardly lead to permissive beliefs, and since there is nothing strange about cases of permissive action, there is nothing strange about permissive believing either. If I buy eggs on weekdays by consulting my shopping list and on weekends by calling my daughter, I am not irrational, because both are reasonable things to do. If I form beliefs about the eggs on weekdays by consulting my shopping list and on weekends by calling my daughter, I am not irrational either, as long as both methods satisfice value. 26 Note further that the presented theory actually blocks completely arbitrary switching; it just does not block epistemically arbitrary switching.
If I consult my shopping list and thus believe that I have eggs in my fridge, but afterwards call my daughter (higher reliability, stronger evidence) and she says that I do not, then I cannot simply switch back to looking at my shopping list and believe that I have eggs in my fridge after all. If I have the results of applying various methods of justification available, then I am not allowed to base my belief on an inferior one, because this would not be practically motivated: there is no practical advantage in trusting the lower-reliability method. Now, one might worry that such belief switching is much stranger in complex, interdependent belief systems. I do not think so, and this is independent of a satisficing theory. Consider the following case from personal experience: when thinking about the question of scientific realism (at least for some scientific theories), some days I found myself to be more of a realist based on abductive reasoning, and other days I was more of an antirealist based on avoiding inflationary metaphysics. Nothing changed in the evidence I had, but my methods of justification were different. I still think both positions are quite reasonable, and I would not view myself as unreasonable for preferring one over the other. 27 If cases such as this are reasonable epistemic practice, then they suggest that there is generally nothing wrong with switching.

8. Objection 3: Relativism
As a last objection, I want to reply to more general relativist concerns. I will show that even a maximizing theory plausibly runs into relativism and violates Uniqueness, and thus the worries about a satisficing theory should be reconsidered.
A maximizing principle of epistemic justification proposes that R is justified if and only if it maximizes value or at least promotes epistemic value better than all alternatives R'. Such a principle might still justify conflicting R in the following cases: (i) Two (or more) conflicting R promote the correct epistemic goal equally well but better than all alternatives. (ii) Two (or more) conflicting R promote the correct epistemic goal better than all alternatives but are incommensurable or incomparable in their promotion of the correct epistemic goal between themselves.
Maximizing EJ-IO would still justify those conflicting R in both cases, and one gets all the consequences of violating Uniqueness. Some conditions need to be fulfilled, however.
Merely not knowing which one of two conflicting R is better than all alternatives in promoting the epistemic good is not yet problematic. The only thing that matters is whether those R are actually better or not, since we are arguing in a framework of objective justification. Thus, the statement in (i) can only mean that R justifies a subject to believe one set of propositions and R' justifies a subject to believe a different set of propositions and it is exactly equally valuable to believe one set as it is to believe the other. If, for example, the correct explication of veritism is to improve the ratio of true over false beliefs for a reasonably sized set of beliefs, then both sets would have to have the exact same ratio. In real-life cases that might not happen too often.
The same line of reasoning applies to (ii). It is not an objection, in a framework of objective justification, that two R are merely incommensurable given what S knows. But what could objective incommensurability mean? Since it is clearly defined how rules derive their value, namely, only by maximizing the epistemic good, the most plausible way for objective incommensurability to arise is some variability of correct epistemic goods. For instance, there might be no justification for how much weight one should put on believing truths compared to avoiding error, or at least there might be some permissible spectrum. Then some R could be better at promoting one epistemic good and some R' better at promoting another. If those goods are incommensurable, then both R would be objectively incommensurable.
If cases of kind (i) or (ii) truly exist, then even a maximizing theory would violate Uniqueness and would have relativist consequences. These consequences would be limited in scope compared to a satisficing theory; still, if one had reservations about a satisficing theory because one wants to preserve a completely nonrelativistic, objectivist theory of justification, then such reservations have to be reconsidered if a maximizing theory cannot deliver that either. The difference between a maximizing and a satisficing theory of epistemic justification is then a matter of degree, not of kind, regarding their relativist consequences.

Conclusion
I put forward a satisficing theory of indirect epistemic justification in a framework of teleological epistemology, stating that rules or methods of epistemic justification are justified if and only if they satisfice the epistemic good, i.e., reach some threshold of epistemic value (which varies with practical context), and that believing is justified if and only if it follows said methods or rules. I argued that, by drawing the correct analogy from normative ethics, a genuine satisficing approach has to be understood as putting forward a form of subjective relativism and Permissivism. There is some leeway for rationality where it is up to the subject to choose between different methods or rules of epistemic justification as long as epistemic value is satisficed. The threshold varies with practical context and furthermore depends on the amount of epistemic good needed, one's ability to do epistemic good, and the position one is in.
I gave three motivations: (i) a maximizing approach is too demanding (Einstein Case, Egg Case), whereas a satisficing theory can give the right verdict; (ii) a satisficing theory can make finding reasonable rules for belief formation and sustenance more accessible; and (iii) a satisficing approach has major advantages for spelling out the concept of Epistemic Blameworthiness, since, contrary to the maximizing objectivist, it can preserve the intuition that epistemic standards for epistemic blameworthiness are lower than those for justified believing, but contrary to the general subjectivist, it does not lead to implausibly low epistemic standards.
I argued that the framework implies violating Uniqueness, which Permissivists will regard as a strength because a satisficing theory can naturally explain this violation. For opponents of Permissivism, I argued that the resulting Intra-Personal Permissivism is weaker than expected because it does not imply Credence Permissivism, and is, furthermore, a direct consequence from a plausible kind of Permissivism about action. Finally, I argued that a maximizing alternative is most likely not able to avoid all relativist consequences either and will violate Uniqueness as well, so reservations against a satisficing theory should be reconsidered.