Much work in moral psychology examines moral judgment. For example, we ask people whether they think that a certain action is permissible. Or we ask how likely they think it is that a certain action is the right thing to do. But we can also ask about moral decisions – how do people decide what to do in moral contexts? Decisions characteristically depend on judgments, but decisions go beyond judgment to initiate action. I might judge that the right thing to do is to give to Save the Children, but it’s a further question whether I will decide to write a check.
The topic of moral decision making is vast in scope. In this chapter, we limit our theoretical treatment by focusing largely on expected utility theory, the mainstay model of decision theory generally. To capture a broader swathe of moral decision making, we present an augmentation of the standard outcome-based expected utility hypothesis: an action-based account. We describe how estimates of utility based on the value of actions explain moral decision making alongside outcome-based estimates. We also discuss recent advances in the science of moral decision making that support this augmented expected utility model.
7.1 The Foundations for a Theory of Moral Decision Making: Expected Utility Theory
The most prominent theory of decision making is expected utility theory (EUT), and we will investigate moral decision making from this starting point. Notoriously, much of human decision making does not conform to expected utility theory. There is an entire tradition of work on this, but one representative example is that people overweight small risks – people effectively treat a 1 percent chance of some event as having a significantly higher probability (e.g., 4 percent). Insurance companies capitalize on this human vulnerability (Kahneman, 2011, provides an accessible review). Nonetheless, expected utility theory provides a useful starting point from which to articulate key aspects of moral decision making.
The theory of expected utility relies on a specific collection of components: the set of options available to the decision-maker, the decision-maker’s expectations (probabilities) that a particular choice will result in a specific outcome, the subjective values or utilities the decision-maker assigns to each potential outcome, and a simple computation that combines these utilities and expectations.
The most straightforward way to explain this framework is to consider different bets that one might take involving money. I assign higher utility (meaning I consider it more valuable) to an outcome in which I receive $100 than to an outcome in which I receive $90. This preference arises from my valuation of money: for things I value, I generally prefer having more to having less. Consequently, when faced with a decision between (A) receiving $100 and (B) receiving $90, the appropriate choice is A. However, if the decision involves choosing between (C) a 10 percent chance of receiving $100 and (D) a 90 percent chance of receiving $90, then the better option is D. In the first case, EUT serves as a normative theory – it specifies the decisions one ought to make given one’s expectations and utilities. But EUT also forms the basis for a descriptive theory, and indeed, in these examples, it is highly probable that individuals would opt for A over B and D over C.
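The computation behind choosing D over C is simple expected value arithmetic. Here is a minimal sketch (not from the chapter itself; the function name is ours) that treats utility as linear in dollars, an assumption that holds only for a decision-maker who values money linearly:

```python
# Expected utility of an option: sum over its possible outcomes of
# p(outcome | option) * u(outcome). Utilities here are just dollar amounts.
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

bet_C = [(0.10, 100), (0.90, 0)]  # 10 percent chance of $100, else nothing
bet_D = [(0.90, 90), (0.10, 0)]   # 90 percent chance of $90, else nothing

print(expected_utility(bet_C))  # 10.0
print(expected_utility(bet_D))  # 81.0 -> D maximizes expected utility
```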
7.1.1 Outcome-Based EUT
Certainly, not all choices revolve around money, which is part of why decision theory expresses the personal worth attributed to outcomes using the concept of “utility” rather than monetary measures. To take a familiar example from decision theory that incorporates probabilities, consider being faced with a decision about whether to take an umbrella with you to work. You place a very low utility on an outcome in which you don’t have your umbrella and it rains; in that case, you get soaked. This brings you no happiness at all, so the utility of this outcome for you is 0. If you do have your umbrella and it does not rain (you lugged your umbrella around for nothing), this is also a low-value outcome. Still, it’s better than not having your umbrella when it does rain – let’s say your utility for this outcome is 15. The outcome in which you don’t take your umbrella and it doesn’t rain would certainly be better than this. Let’s say that this has the utility 70 for you. Finally, you’ll be very happy with your decision to take your umbrella when it in fact rains; let’s say your utility for this outcome is 90.
Now, to decide whether or not to take the umbrella, you consult the weather forecast, which leads you to think the chance of rain is 75 percent. This scenario can be represented with a decision tree (see Figure 7.1). Given these probabilities and values, EUT dictates that you should take the umbrella (Footnote 1). However, if the chance of rain is only 5 percent, then EUT says that you should not take the umbrella (Footnote 2). We can dub this approach to decision theory outcome-based EUT. Once more, while we have presented this illustration as a normative theory of decision making (on which, in the first scenario, taking the umbrella is the recommended course of action), outcome-based EUT may also accurately describe how individuals arrive at decisions given these utilities and expectations.

Figure 7.1 A decision tree reflecting outcome-based EUT; u(o) = utility of this outcome, for example, the utility of the outcome in which you have your umbrella and it rains is 90.
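For readers who want the arithmetic behind Figure 7.1, here is a hedged sketch using the utilities from the text (the function names are ours, not the chapter’s):

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

def best_option(p_rain):
    # Utilities from the text: umbrella & rain = 90, umbrella & dry = 15,
    # no umbrella & rain (soaked) = 0, no umbrella & dry = 70.
    take = [(p_rain, 90), (1 - p_rain, 15)]
    leave = [(p_rain, 0), (1 - p_rain, 70)]
    return "take" if expected_utility(take) > expected_utility(leave) else "leave"

print(best_option(0.75))  # take  (EU 71.25 vs. 17.5)
print(best_option(0.05))  # leave (EU 18.75 vs. 66.5)
```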
7.1.2 Moral Decisions and Outcome-Based EUT
Already with this outcome-based EUT model of decision making we can accommodate morally important decisions of a utilitarian bent (Footnote 3). Indeed, as Baron writes: “Utilitarianism is the natural extension of expected-utility to decisions for many people. The utilitarian normative model here is to base the decision on the expected number of deaths” (Chapter 8, this volume). Consider the following case, which we call “Triage.” An ambulance driver knows that he is the only person who can address life-threatening injuries from an avalanche. There are two clusters of people in need, and he only has time to attend to one cluster. One cluster consists of two people on the north side of the mountain; the other consists of five people on the east side. Given the information available, he expects that if he goes to the north side he can save the two people there, and if he goes to the east side he can save the five people there. Since the ambulance driver values human life, and more is better than less, he should (normative EUT) and will (descriptive EUT) decide to go to the east side. The situation changes if he learns that the injuries of the two people on the north side are such that he has a 95 percent chance of saving each of them, while the injuries of the five people on the east side are such that he has only a 1 percent chance of saving each of them. In that case, he should and will factor these probabilities into his calculation and decide to go instead to the north side.
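The revised calculation comes down to expected lives saved; a minimal sketch of the arithmetic, under the probabilities given above:

```python
# Expected lives saved under the revised probabilities in Triage.
north = 2 * 0.95  # 1.90 expected lives saved
east = 5 * 0.01   # 0.05 expected lives saved
print("north" if north > east else "east")  # north
```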
Outcome-based EUT accommodates variations in moral decision making. For instance, one person might assign a higher utility to saving cats than to saving dogs, and another person might have the reverse utility assignments; in such a case, if the first person were confronted with a dilemma that required sacrificing one dog or one cat, she should prefer to save the cat (we should expect the second person to prefer saving the dog). Outcome-based EUT can also accommodate subtler phenomena, such as when people represent actions as associated with entirely different outcomes. For example, the decision of whether to tell a joke or not might be associated with expectations about the outcome of hurting the target of the joke (a moral consideration), or expectations about the outcome of making the target of the joke laugh (a nonmoral consideration). As such, two people considering the same action, the joke, might weigh expectations about different outcomes with very different intrinsic moral stakes (Moore et al., 2011). As work on indirect speech indicates, whether people are aware of it or not, such moral and nonmoral characterizations of the same event are common, for example, in the form of euphemistic language (Bandura et al., 1996; Chakroff et al., 2015; Orwell, 1946). An outcome can also be represented as the result of different causes or different motivations for approach or avoidance (Janoff-Bulman et al., 2009), construals that potentially have different implications for moral judgment and decision making. For example, an accident holding up traffic might be construed as the outcome of two cars colliding or of one person’s careless texting.
7.1.3 Moral Decisions and Issues for Outcome-Based EUT
Examples like Triage are actually abundant in our everyday lives when we make decisions about how to produce the most good. However, and famously, there is a wide range of cases in which people’s decisions (or, at any rate, their reports of what they would decide) diverge from the dictates of outcome-based EUT. That is, the decision recommended by outcome-based EUT is not the decision that people make. The closest match to a case like Triage is the “Footbridge” trolley case, in which the agent has to decide whether to throw a man in front of a train to prevent the deaths of five other people. Given a focus on outcomes and the utility placed on human life, the utilitarian outcome-based EUT model suggests that one will and should throw the man in front of the train. But this is not what most people say they would do (only around 30 percent; e.g., Greene et al., 2009; Petrinovich & O’Neill, 1996). Most people say that they would not throw the man in front of the train.
Cases like the Footbridge example are exceptions to the happy harmony we see between normative and descriptive versions of outcome-based EUT for cases like Triage. Other examples in which people’s (reported) decisions diverge from outcome-based EUT include protected values (e.g., Baron & Spranca, 1997), the act/omission distinction (Cushman & Young, 2011; Henne et al., 2019; Spranca et al., 1991; Willemsen & Reuter, 2016), and status quo bias (Ritov & Baron, 1992). In these and many other cases, people’s decisions diverge from what outcome-based EUT normatively prescribes. One option is to maintain that outcome-based EUT is the correct normative theory and that where people diverge from it, they are simply being irrational (cf. Baron, 1994; Greene, 2008; Singer, 2005). On this approach, one might dispense with EUT as descriptively useful for exceptional cases. An alternative option, which we will explore in this section, is to elaborate and augment EUT in ways that capture some of the apparently exceptional cases. (In Section 7.2, we will explore challenges to rational moral decision making.)
7.1.4 Moral Decisions and Action-Based EUT
Although there is an elegant simplicity to outcome-based EUT, it seems ill-equipped to explain many everyday moral decisions. In particular, many nonutilitarian decisions seem difficult to accommodate within outcome-based EUT. Perhaps EUT need not be so elegantly simple. As we’ve hinted in Section 7.1.3, in addition to assessing actions according to the utility of their outcomes, we might also assign utility to actions themselves. In other words, agents sometimes assign utility to outcomes (i.e., states of affairs), but agents also sometimes assign utility (Footnote 4) to an action-type. Such an augmentation of EUT is at least somewhat plausible. Indeed, Cushman (2013) proposed a dual-system account of morality linking valuation of actions and outcomes, model-free and model-based learning, and automatic and controlled processing. Here, we detail an action-based augmented EUT. Consider the following stylized experiment (cf. Batson et al., 1997; Fischbacher & Föllmi-Heusi, 2013). Participants are told to flip a coin in private and then report the result to the experimenter. If they report that the coin came up heads, they get $2; if they report that it came up tails, they get $1. In these kinds of experiments, reports of the coin coming up heads are more frequent than would be expected by chance. However, many participants do report that the coin came up tails, receiving $1 rather than $2. What is going on with them? Does this mean that they don’t care about money? Or that they have a bizarre preference for less over more? Drawing such conclusions would be extreme, and it would fail to make broader sense of the agent’s overall decisions. Instead of attributing incoherence to the participants, we might conclude that these subjects assign low (or negative) utility to a type of action – lying (see, e.g., Gaus & Thrasher, 2022, pp. 43, 78; Nichols, 2021, pp. 224–225). And the reason they assign low utility to those types of actions is plausibly that they accept a moral rule against lying. We will refer to this augmented EUT framework as action-based EUT. We can frame it as a normative theory of decision making according to which, when deciding which action to take, agents should calculate the utilities assigned both to the outcomes of the actions and to the types of actions they are.
Further reason for thinking that people assign low utility to actions that constitute rule violations comes from recent research in economics on “naïve rule following.” In one experiment, participants undertook a computer-based task involving moving balls into either of two buckets. They were informed that placing a ball in the yellow bucket would earn them $0.10, while placing it in the blue bucket would yield $0.05. The earnings were displayed on the screen after each ball placement. The rule was specified minimally: After learning about the earnings, participants were instructed, without further elaboration, “The rule is to place the balls into the blue bucket” (Kimbrough & Vostroknutov, 2018, p. 148). The rule thus contradicts their financial interests, and no rationale for it is provided. Despite this, in five distinct countries (the USA, Canada, the Netherlands, Italy, and Turkey), over 70 percent of participants demonstrated instances of naïve rule adherence. That is, they put the balls in the blue bucket, despite the fact that this entailed monetary losses. This again suggests that assignments of utility are not limited to outcomes, since many people apparently assign low utility to actions that constitute rule breaking (for a different kind of evidence that seems to support action-based EUT, see Białek et al., 2014; Gawronski et al., 2017).
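One way to formalize this reading of the bucket task: a rule follower attaches a disutility to the act of breaking the internalized rule, large enough to outweigh the $0.05 payoff difference. The cost figure below is invented for illustration, not estimated from the data:

```python
# Hypothetical action-based reading of the bucket task.
payoff = {"yellow": 0.10, "blue": 0.05}
rule_violation_cost = 0.08  # assumed disutility of breaking the rule (made up)

def utility(bucket):
    penalty = rule_violation_cost if bucket == "yellow" else 0.0
    return payoff[bucket] - penalty

print(max(payoff, key=utility))  # blue, despite the lower monetary payoff
```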
Although we think the foregoing analysis provides reason to adopt action-based EUT, one might challenge this interpretation (Footnote 5). Perhaps people only conform to rules because they recognize the potential costs of breaking rules. For instance, in the bucket study we have described, maybe participants follow the arbitrary rule because they think there is really a hidden cost to breaking the rule; for example, perhaps participants fear that the experimenter will punish rule breakers in some way. Although this is a possible interpretation, and maybe some participants really do have those thoughts, we think that, in many cases, the influence of internalized rules is more direct. Consider rules of etiquette. I put the fork to the left of the plate because I learned a rule to that effect. I don’t know how or why the rule came into place. And when I set the table, I think “the fork goes here” but never “following the fork-on-the-left rule has lower potential costs (or higher potential benefits).” The simpler, direct account of the value placed on rules also seems to have the advantage of efficiency. Less information needs to be stored, less information needs to be retrieved, and less time is required to follow the rule. So if I simply internalize the rule “put balls in blue bucket,” with a low utility assigned to violations, I don’t have to think further about the motivations of the experimenter, and that in itself is an advantage.
We have already acknowledged that some participants might think about the potential costs of breaking the rules, but we have also maintained that many participants likely do not engage in such extra thinking. This generates some testable hypotheses. If some people follow the rule without considering costs, and others follow the rule by considering costs, we should expect processing differences. One way to test this hypothesis would be through retrospective protocol analysis (Ericsson & Simon, 1984). For instance, after the instructions, participants who put the first ball in the blue bucket might be asked to report their thoughts prior to doing so. We might find that some participants explicitly report having thoughts about potential costs of breaking the rule, while others simply say something about the rule. A key question is whether those who mention the costs take longer to put the ball in the bucket than those who merely mention the rule. If such a difference were found, it would support the idea that there are different processes in the two cases. Furthermore, such a finding would be consistent with humans’ aversion to cognitive effort even at the cost of accuracy (Johnson & Payne, 1985; Zenon et al., 2019), their ubiquitous use of biases and heuristics (Kahneman et al., 1982), and their struggle when asked to respond without the interference of automatic action (e.g., word-reading in the Stroop test; MacLeod, 1991).
If we grant that people assign utility to actions themselves (in addition to assigning utility to the outcomes of actions), we gain a powerful way of accommodating the exceptional instances mentioned earlier. People’s commitment to moral rules might shape the utilities they have for different kinds of actions. Part of the reason that people would not push the man in front of the train is that the moral rules they endorse lead them to assign a low utility to actions like pushing people to their deaths. Something similar holds for less dramatic moral issues, like stealing, cheating, and lying. Some participants in the coin-flip study described earlier plausibly endorse the rule that one should not lie, and this leads them to assign a low utility to actions in which they lie. We can build this into our decision tree. Suppose that a subject in one of these studies is aware that if he lies, there is a 75 percent chance of receiving $5, and if he tells the truth, there is only a 25 percent chance of getting $5. We can suppose that getting $5 would yield 3 units of utility. Now, imagine that for actions that fall under the type lying he assigns a low utility, say −2, and for actions that fall under the type truth telling he assigns a higher utility, say 2. Under those circumstances, we can complete our decision tree (see Figure 7.2) and compute that telling the truth yields a greater expected utility than lying, despite the fact that the anticipated monetary gain is smaller (Footnote 6).

Figure 7.2 A decision tree reflecting action-based EUT; u(o) = utility of this outcome, for example, the utility of the outcome in which you lie and get $5 is 1 (i.e., –2 + 3).
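A hedged sketch of the computation behind Figure 7.2, with the action utility added to every branch of the corresponding option (the numbers are the ones stipulated in the text; the function name is ours):

```python
def expected_utility(action_utility, outcomes):
    # The utility of the action-type accrues whichever outcome occurs,
    # so it can be added once, outside the probability-weighted sum.
    return action_utility + sum(p * u for p, u in outcomes)

eu_lie = expected_utility(-2, [(0.75, 3), (0.25, 0)])    # 0.25
eu_truth = expected_utility(2, [(0.25, 3), (0.75, 0)])   # 2.75
print("tell the truth" if eu_truth > eu_lie else "lie")  # tell the truth
```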
With action-based EUT, we can partially reharmonize the normative and descriptive decision theories. Consider the participant faced with options about whether to lie in the experiment. Given the actions and outcomes of his options from above (and in Figure 7.2), he assigns a very low utility to actions in which he lies, so he will and should decide in favor of option B (telling the truth) rather than option A (lying). Importantly, though, we need not think that the utility regarding lying would always overwhelm other factors in moral decision making. Suppose the options are: (C) lie and receive $500,000, or (D) tell the truth and receive $0. In that case, the utility he assigns to the money will exceed the utility he assigns to telling the truth; as a result, he will and should choose (C).
These examples are simple but may be scaled up. We are proposing that the action types on the table as choices, such as lying and truth telling, have their own utility values for a person, shaped by moral norms. And those values can then resonate through moral decision making.
7.1.5 Moral Decision Making and Actions
Thus, action-based EUT has resources to accommodate many cases that cannot be captured by outcome-based EUT. The EUT framework also allows us to appreciate another way in which rules might impact moral decision making. The initial branch of a decision tree outlines the options under consideration by the decision-maker (see Figure 7.1). If a potential option is not represented by the agent, it becomes unavailable for selection. If I’m trying to decide where to go to dinner, and I don’t know about some new excellent restaurant, then there is no chance that I will select that option. It isn’t available on my decision tree. It’s plausible that for many rules, once they are internalized, they effectively prune the decision tree. After internalizing a moral rule prohibiting stealing, that option might not even show up on the decision tree in many contexts where stealing is in fact a possible option (see, e.g., Phillips & Cushman, 2017). When I go to the hardware store to buy nails, it would be easy to put the nails into my pocket (now that I think of it), but at the time that I was in the hardware store, it never occurred to me that I might steal the nails. Or, to hark back to the trolley cases, remember when you were innocent of philosophical examples. Imagine walking across a bridge when you notice five people on the tracks below who will be hit by a train. You also see a man with a large backpack looking over the bridge. Here’s something that would never occur to you – Should I push this man off the bridge to stop the train? That is not a live option for you. It’s excluded from your option set.
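In decision-tree terms, pruning happens before any utilities are compared: prohibited action-types never enter the option set. A toy sketch, with invented names, of how such a filter might precede the expected-utility step:

```python
# Internalized prohibitions remove options before any EU comparison occurs.
internalized_prohibitions = {"steal the nails", "push the man off the bridge"}

def generate_options(candidate_actions):
    return [a for a in candidate_actions if a not in internalized_prohibitions]

candidates = ["buy the nails", "steal the nails", "leave empty-handed"]
print(generate_options(candidates))  # ['buy the nails', 'leave empty-handed']
```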
7.2 The Science of Moral Decision Making
7.2.1 Implications for Moral Values and Behaviors: Fairness
We now turn from an abstract characterization of moral decision making to more concrete scientific work on the topic. What are the implications of action-based EUT for moral values and behaviors? Here, we discuss a selection of findings that illustrate some of these implications – with recognition that much more empirical work needs to be done.
One prominent area of study concerns how people decide to fairly allocate resources, or how they define fairness. Without question, some fairness norms are universal, including preferences for impartiality, equitable allocations, and reciprocity (Blake et al., 2015; Deutsch, 1975; McAuliffe et al., 2017). In one sense, fairness seems action-agnostic; for example, things are made equal in countless situations. However, allocations that are considered fair have also been found to vary among people and across situations – fairness seems to emerge from a variety of actions (Deutsch, 1975; Niemi et al., 2017; Niemi & Young, 2017; Rawls, 1971, 2001; Trivers, 1971). For example, in some cases, the fair action is giving more to some than others, in order to reduce disparities in need or compensate work; in other cases, the fair action is impartial. There are even individual differences in the emotional underpinnings of fairness values (e.g., need-based fairness is more morally praised by people higher in empathy; Niemi & Young, 2017), as well as divergent neural underpinnings (social and nonsocial cognition; Niemi et al., 2017).
The actions that comprise what people consider fair are diverse. Indeed, the variety of actions that can figure in a person’s decisions about fairness helps explain why what counts as fairness is subject to ongoing social disagreement. An action-based model of decision making fits with this moral diversity. It also fits with the universality and order we observe: Humans treat a fairly limited set of higher-order action types as potentially fair. At a coarser grain, internalized fairness norms guide which actions are considered relevant for a decision-maker aiming for fairness (e.g., impartiality or equality?). And while people endorse such broad, abstract terms as fairness-relevant and very morally important, at a finer grain, people making decisions that will be evaluated for fairness must ultimately assess the utility of particular actions, including ways of distributing resources or designing procedures.
7.2.2 Interpersonal Moral Decision Making
In some theories of moral psychology, severity or seriousness is fundamental to morally relevant events. Moral evaluations involve norm-violating events, actions that make an impact – unlike the choice to carry an umbrella. What especially matters in a morally relevant decision is how our decision affects others, as the various ways of allocating fairly illustrate. Even decisions that might seem personal (e.g., should I get a divorce; should I move away; should I answer that text?) involve calculation of the expected utility of the decision not only for me but for the people who will be affected. In turn, the decisions of others affect what I choose to do.
Thus, moral decision making entails negotiating the value of one’s own actions, the other’s actions, and outcomes – shared and nonshared. How are these different inputs to moral decisions weighed? Possibly, because people presume others are (also) self-interested and playing by the same “rules,” moral decision making involves representation of twin decision trees, with outcomes for the other person weighed against outcomes for the self. This possibility is supported by the literature on perspective taking and theory of mind in moral decision making, which suggests that people value more than self-interest when making moral judgments and decisions. People’s valuations of others are also reflected in these decisions. In particular, perspective taking can be viewed as a process that enables socially attuned moral decision making, and behavioral and neural evidence supports the possibility that perspective taking is a crucial process during moral cognition, including allocation of blame and praise (Buckholtz & Marois, 2012; Greene & Haidt, 2002; Moll et al., 2002; Young et al., 2010; Young et al., 2007; Yoder & Decety, 2014). Furthermore, other fMRI and behavioral work (Niemi et al., 2017; Niemi & Young, 2017) shows that perspective taking may be behind variation we see in people’s moral evaluations of different types of fairness, including the extent to which they consider reciprocity and impartiality praiseworthy. Taken together, this research suggests that moral decision making incorporates the perspectives of others.
Different values recruit perspective taking to varying degrees during moral decision making, as do different individuals. Individual differences in empathy and concern for others are associated with numerous kinds of moral decision making, from resource allocation problems to harm dilemmas. Research with the interpersonal orientation task (Van Lange, 1999; Van Lange et al., 1997) over the last few decades indicates that when people are given the choice among three options – an equal allocation of valuable points between the self and an anonymous other (e.g., 450/450), an individualistic allocation that maximizes points to self (e.g., 550/450), or a competitive allocation that minimizes the other’s points (e.g., 420/320) – people typically choose the equal allocation rather than the self-interested options. Context matters, though. Among business school students, choice of the individualistic or competitive options increases relative to noneconomics students (Van Lange et al., 2011). The fact that we see divergent decision making based on decision-makers’ capacity or desire to adopt others’ perspectives suggests that, on average, moderation of self-interest by concern for others may be a vulnerable aspect of moral decision making.
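One common way to model such choices, consistent with the twin-decision-trees idea above, is to score each allocation as one’s own payoff plus a weight on the other’s payoff, minus a penalty on inequality. The weights below are invented for illustration; the allocations are the ones cited in the text:

```python
# Hypothetical social-value-orientation style scoring of the three allocations.
options = {
    "equal": (450, 450),
    "individualistic": (550, 450),
    "competitive": (420, 320),
}

def choose(w_other, w_inequality):
    def score(name):
        own, other = options[name]
        return own + w_other * other - w_inequality * abs(own - other)
    return max(options, key=score)

print(choose(1.0, 2.0))  # equal: values the other and dislikes inequality
print(choose(0.0, 0.0))  # individualistic: maximizes own payoff only
```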
Other research, with individuals with psychopathy, indicates that their moral decision making may lack the emotionally aversive responses to harm that people lower in psychopathy demonstrate, leading them to choose the equivalent of “pushing the man off the footbridge” (in this case, smothering a crying baby; Glenn et al., 2010). Accordingly, people high, compared to low, in psychopathy demonstrate a dampened neural response in brain areas for representation of affect when viewing another person’s pain (i.e., pictures of injured others; Decety et al., 2013); however, they show no such reduction, relative to nonpsychopathic individuals, when viewing pain described as having happened to themselves. Certainly, humans are on a spectrum of sensitivity to harm (e.g., a caring continuum; Marsh, 2019) and are sometimes concerned with different outcomes altogether in morally relevant decision making. However, typically, the agent does not make moral decisions in a purely self-interested way. The other person’s outcome (harm or benefit) is referenced and, if present and severe enough, activates emotional responses that give value to the action for the decision-maker.
7.2.3 The In-Group and Moral Decision Making
People’s moral decisions have weighty consequences for social life, as they effectively divide people into moral communities (Graham et al., 2009; Haidt, 2012). In turn, moral communities bind together through shared moral conceptualizations of actions. People’s group-level moral commitments contain rules that factor into calculations of the expected utility of their individual decisions – the moral codes of in-groups both prune the options for actions and shape the interpretation of actions. For example, membership in a moral community that collectively values empathy and equality might contribute to an interpretation of charitable giving as a way to achieve fairness. Likewise, membership in a group of revolutionaries might turn an act of vandalism into bravery.
The influence of group structure on human psychology has been described for decades. Clearly, decision making molds to the group through a variety of cognitive mechanisms, as shown in research on group conformity, group polarization, and groupthink. Even given minimal, completely arbitrary cues to group membership, people readily identify with a group (Tajfel, 1982); the phenomenon of minimal group formation has been observed from childhood through adulthood (Dunham et al., 2011). Mature moral cognition involves countless examples of in-group-based decision making, often referred to as in-group bias. For example, participants have been found to favor others who share their political orientation in moral decision making about whether people accused of sexual misconduct should be reprimanded (Klar & McCoy, 2021); and they favor close others over distant others in their causal explanations for moral violations (Niemi et al., 2023). Indeed, a cluster of moral values proposed in moral foundations theory (Graham et al., 2011), referred to as binding values, is concerned with maintaining the bonds of relationships and groups, rather than an individual’s obligations to other individuals. Binding values, such as loyalty and respect for authority, by their nature mold decision making to benefit fellow group members and relationship partners, even at the cost of harming an individual.
The influence of the group on moral decision making also limits the influence of empathy and perspective taking (Bloom, 2017). Research on dehumanization, prejudice, and stereotyping shows that affect may be blunted in response to morally relevant needs of out-group members (Zaki & Cikara, 2015; see also Chapter 14 in this volume). If the people affected by one’s moral decisions are not viewed as people, then representations of the value of the action and outcome of the decision are unlikely to incorporate rich representations of the outcome’s value for the affected person. In that case, instead of weighing and negotiating twin decision trees, the decision-maker’s self-interested expectations about utility might overpower the effects of empathy and perspective taking on moral decision making.
Neglect of the outcome during moral decision making has also been observed to vary with the ideological groups with which people identify (Hannikainen et al., 2017). Moral prohibition of actions appears to be more likely for people with more conservative values, as these values tend to involve rules about concrete actions – for example, sexual activity, food taboos, unpatriotic gestures, and disgusting behavior. While actions and intentions are typically both factored into moral judgments, it is possible that, sometimes, individuals do not need more than the act itself to be able to comment on its wrongness. There is much room for research that maps the influence of group norms on moral decision making, including how evaluations of the utility of both actions and outcomes are influenced by moral communities and factored into moral decision making.
7.2.4 The Implementation of Decisions and Representations of Action
The structure of moral decision making can be further illuminated by considering the possibility that moral decision making is sometimes outcome-based and sometimes action-based: (1) people ignore outcomes and make morally relevant decisions based on actions alone, as described earlier; and (2) people overlook the value of actions and decide to pursue a morally relevant outcome. Decisions considered “moral” or “ethical” might require neglect of either actions or outcomes. For example, a parent faced with the decision to keep their unvaccinated child out of school or vaccinate their child and send them to school might ignore the direct outcome of the choice on the child’s education and base their decision on an action: injecting the child with the vaccine.
Neglecting the discrete actions and focusing on the big picture, the outcome, during deliberate moral decision making also presents issues. As the research on implementation intentions (Gollwitzer, 1999) indicates, a person who decides on an intended outcome, such as “get more involved with charity this year” or “stop being mean to my brother,” is more likely to reach that outcome if it is broken down into actionable steps. Leaving morally relevant goals as abstract outcomes might inspire action, but the outcome is more likely if the concrete actions associated with the goal are realistically evaluated.
When people negotiate a morally relevant decision, their decision may focus on evaluation of the outcome or of the prerequisite actions. Research on event segmentation (Kurby & Zacks, 2008; Zacks & Swallow, 2007) finds that people are capable of splitting up events in time into finer and coarser segments. For example, a bride might assign value to each of the following as high-stakes decision outcomes: the engagement party, the bachelorette party, the catering tasting, the venue selection, the wedding ceremony, the honeymoon, and, finally, being married. A relative of the bride might see things differently, assigning moral weight and high utility to just one outcome: the bride being married. Action-based EUT would suggest that the bride, compared to her relative, perceives more decision trees before being married and, therefore, sees far more options, each associated with valuable actions and outcomes. According to event segmentation theory, both parsing events into smaller action units and parsing them into larger units reflecting goals are crucial to everyday perception, and it is not unusual for people to segment events in roughly similar ways. People may differ, however, in the “grain” at which they break down one event. Like the bride focused on the many actions before each of the outcomes involved in becoming married, people focused on subgoals of an event describe actions and use more precise verbs (Kurby & Zacks, 2008). By contrast, like the relative of the bride, when people focus on coarser-grained events, different features are perceived: objects and more precise nouns.
It is theorized that fine- and coarse-grained event segmentation reflects the capacity and function of working memory, chunking information into cognitively manageable actions and outcomes. Neural research on narrative interpretation and recall demonstrates that short event boundaries reflect activity in sensory regions, whereas longer events reflect activity in “high-level” cognitive areas responsible for abstract models of situations (Baldassano et al., 2017). Actions and outcomes are represented differently in the brain, but they are tied together when we make sense of the world.
Actions are nested inside events, but this hierarchical organization doesn’t necessarily translate to order between people. When required to negotiate decisions, the bride and her relative may find it difficult to see eye-to-eye about what is the current goal. The utility of the one shared, highly valued outcome, marriage, might be complicated by the proliferation of action and outcome utility estimates experienced by the bride. At any given point in time before the wedding, it is more likely that decision making will be focused on the outcome for the relative and on some action for the bride. According to Vallacher and Wegner (1989), the target of focus matters, in terms of competence and morality. The authors proposed that action focus versus outcome focus reflects a social-personality dimension of “personal agency”: When we are low-level agents, we are detail-oriented and concerned about mechanism; when we are high-level agents, we see meaning, implications, consequences. Low-level action identification is proposed to be more likely when a person is in unfamiliar territory, feeling their way through one step at a time. High-level action identification, by contrast, is proposed to emerge when a person has some expertise. The authors suggest that low-level and high-level action identification directly relate to moral decision making, with high-level action identification necessary for the kind of causal reasoning and understanding of abstract moral implications that prevent impulsive offenses.
7.2.5 Pruning Options through Valuation of Actions
Unlike the Footbridge problem (push or don’t push the man to save five lives), moral dilemmas in real life often involve more than yes/no choices such as harm vs. don’t harm or be fair vs. don’t be fair. People, guided by instincts, norms, and habits, instead face dilemmas over options for how to act that have various and sometimes unclear moral implications (which is why they are dilemmas). Recalling the example given earlier in this chapter, a parent trying to decide whether to keep their unvaccinated child out of school or vaccinate their child and send them to school might consider several important factors (e.g., disease risk, effects on education, social development, religious concerns), each of which has morally relevant value to the parent.
In order to make a decision like this among confusing morally relevant options, a person may transform the options so that they don’t conflict with their own values. The possibility that people alter their options and associated actions, implicitly and deliberately, in order to facilitate decision making is supported by research on moral decision making. Research indicates that people do tend to export their value systems when judging or making decisions about others. That is, they believe that what is good and right for them is good and right for the other person; no alternative is possible (Newman et al., 2014). This suggests that if a person is attempting to take someone’s perspective in order to estimate an outcome (i.e., in empathy-guided moral decision making), the perspective they take will ultimately bear a resemblance to their own. In this vein, research on moral hypocrisy shows that people are inconsistent moral judges. When they violate a moral commitment, they may judge themselves more favorably and their behaviors as more morally permissible than someone else who carries out that violation (e.g., Batson et al., 2002; Conway & Peetz, 2012; Graham et al., 2015; Valdesolo & DeSteno, 2007, 2008).
The importance of people’s transformation of actions into morally acceptable options is consonant with people’s thinking about omissions during decision making. Moral norms transform omissions, the absence of an action, into legitimate moral and immoral options. For example, when a person does nothing in the face of suffering, this may be perceived as a decision associated with bad character, such as callousness or cowardice (Duff, 1993). Moral norms (e.g., to reduce suffering in others) make nonactions just as influential as actions in decision making.
7.3 Conclusion
In this chapter, we’ve described how action-based EUT accommodates moral decision making, in terms of actions, options, and learning. We focused on EUT to show that, by incorporating action, EUT can explain a great deal of moral decision making. We acknowledge, however, that there is a wide world of moral decisions to explain. Some of them, for example, may be better represented by other accounts, including game theory (Binmore, 2011). Furthermore, the scientific findings reported here suggest that there are important individual differences and situational influences on moral decision making. At this point, there are still unanswered questions regarding the integration of EUT and reinforcement learning models. Nevertheless, it is clear that enormous headway has been made over (at least) the last half century in the study of moral judgment and decision making, and the prospects for an increasingly evidence-based understanding of the topic are strong.

