Much work in moral psychology examines moral judgment. For example, we ask people whether they think that a certain action is permissible. Or we ask how likely they think it is that a certain action is the right thing to do. But we can also ask about moral decisions – how do people decide what to do in moral contexts? Decisions characteristically depend on judgments, but decisions go beyond judgment to initiate action. I might judge that the right thing to do is to give to Save the Children, but it’s a further question whether I will decide to write a check.
The topic of moral decision making is vast in scope. In this chapter, we limit our theoretical treatment by focusing largely on expected utility theory, the mainstay model of decision theory generally. To capture a broader swathe of moral decision making, we present an augmentation of the standard outcome-based expected utility hypothesis: an action-based account. We describe how estimates of utility based on the value of actions explain moral decision making alongside outcome-based estimates. We also discuss recent advances in the science of moral decision making that support this augmented expected utility model.
7.1 The Foundations for a Theory of Moral Decision Making: Expected Utility Theory
The most prominent theory of decision making is expected utility theory (EUT), and we will investigate moral decision making from this starting point. Notoriously, much of human decision making does not conform to expected utility theory. There is an entire tradition of work on this, but one representative example is that people overweight small risks – people effectively treat a 1 percent chance of some event as having a significantly higher probability (e.g., 4 percent). Insurance companies capitalize on this human vulnerability (Kahneman, 2011 provides an accessible review). Nonetheless, expected utility theory provides a useful starting point from which to articulate key aspects of moral decision making.
Expected utility theory relies on four components: the set of options available to the decision-maker, the decision-maker’s expectations about how likely each choice is to produce a given outcome, the subjective values (utilities) that the decision-maker assigns to each potential outcome, and simple computations over these utilities and expectations.
The easiest way to explain this framework is with different bets that one might take involving money. I assign higher utility (meaning I consider it more valuable) to an outcome in which I receive $100 than to an outcome in which I receive $90. This preference arises from my valuation of money: for things I value, I generally prefer more to less. Consequently, when faced with a decision between (A) receiving $100 and (B) receiving $90, the appropriate choice is A. However, if the decision is between (C) a 10 percent chance of receiving $100 and (D) a 90 percent chance of receiving $90, then the better option is D. In the first case, EUT serves as a normative theory – it specifies the decisions one ought to make given one’s expectations and utilities. But it also forms the basis for a descriptive theory, and indeed, in the examples just given, it is highly likely that people would opt for A over B and D over C.
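The arithmetic behind these bets can be sketched in a few lines of code. This is our illustration, not part of the chapter’s original presentation; it assumes, as the example does, that utility is linear in dollars.

```python
# Expected value of each bet: probability of winning times the payoff.
# Labels A-D mirror the options in the text.
bets = {
    "A": (1.00, 100),  # certain $100
    "B": (1.00, 90),   # certain $90
    "C": (0.10, 100),  # 10% chance of $100
    "D": (0.90, 90),   # 90% chance of $90
}

expected = {name: p * payoff for name, (p, payoff) in bets.items()}
# A beats B outright, while D (expected value 81) beats C (expected value 10).
```

The comparison makes vivid why a high chance at a slightly smaller prize (D) rationally dominates a small chance at a slightly larger one (C).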
7.1.1 Outcome-Based EUT
Of course, not all choices revolve around money, and this is part of why decision theory expresses the personal value attributed to outcomes in terms of “utility” rather than in monetary units. To take a familiar example from decision theory that incorporates probabilities, consider being faced with a decision about whether to take an umbrella with you to work. You place a very low utility on an outcome in which you don’t have your umbrella and it rains; in that case, you get soaked. This brings you no happiness at all, thus the utility of this outcome for you is 0. If you do have your umbrella, and it does not rain (you lugged your umbrella around for nothing), this is also a low-value outcome. Still, it’s better than not having your umbrella when it does rain – let’s say your utility for this outcome is 15. The outcome in which you don’t take your umbrella and it doesn’t rain would certainly be better than this. Let’s say that this has the utility 70 for you. Finally, you’ll be very happy with your decision to take your umbrella when it in fact rains; let’s say your utility for this outcome is 90.
Now, to decide whether or not to take the umbrella, you consult the weather forecast, which leads you to think the chance of rain is 75 percent. This scenario can be represented with a decision tree (see Figure 7.1). Given these probabilities and values, EUT dictates that you should take the umbrella.Footnote 1 However, if the chance of rain is only 5 percent, then EUT says that you should not take the umbrella.Footnote 2 We can dub this approach to decision theory outcome-based EUT. Again, although we have presented this illustration as part of a normative theory of decision making (in the first scenario, taking the umbrella is what you should do), outcome-based EUT may also describe how individuals actually arrive at decisions, given the stated utilities and expectations.
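To make the computation concrete, here is a minimal sketch (ours, not the chapter’s) of the umbrella decision using the four utilities above:

```python
def expected_utility(p_rain, u_rain, u_no_rain):
    """Expected utility of an option, weighting each outcome by its probability."""
    return p_rain * u_rain + (1 - p_rain) * u_no_rain

# Utilities from the example: (utility if it rains, utility if it doesn't).
TAKE = (90, 15)   # umbrella + rain = 90; umbrella + no rain = 15
LEAVE = (0, 70)   # no umbrella + rain = 0; no umbrella + no rain = 70

for p_rain in (0.75, 0.05):
    eu_take = expected_utility(p_rain, *TAKE)
    eu_leave = expected_utility(p_rain, *LEAVE)
    choice = "take umbrella" if eu_take > eu_leave else "leave it"
    print(f"p(rain)={p_rain}: EU(take)={eu_take:.2f}, EU(leave)={eu_leave:.2f} -> {choice}")
```

At a 75 percent chance of rain, EU(take) = 71.25 beats EU(leave) = 17.50; at 5 percent, the ordering reverses (18.75 vs. 66.50), matching the two verdicts in the text.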

Figure 7.1 A decision tree reflecting outcome-based EUT; u(o) = utility of this outcome, for example, the utility of the outcome in which you have your umbrella and it rains is 90.
7.1.2 Moral Decisions and Outcome-Based EUT
Already with this outcome-based EUT model of decision making we can accommodate morally important decisions of a utilitarian bent.Footnote 3 Indeed, as Baron writes: “Utilitarianism is the natural extension of expected-utility to decisions for many people. The utilitarian normative model here is to base the decision on the expected number of deaths” (Chapter 8, this volume). Consider the following case, which we call “Triage.” An ambulance driver knows that he is the only person who can address life-threatening injuries from an avalanche. There are two clusters of people in need, and he only has time to attend to one cluster. One cluster consists of two people on the north side of the mountain; the other cluster consists of five people on the east side of the mountain. Given the information available, he has the expectation that if he goes to the north side he can save the two people there and if he goes to the east side he can save the five people there. Since the ambulance driver values human life, and more is better than less, he should (normative EUT) and will (descriptive EUT) decide to go to the east side. The situation changes if he learns that the injuries for the people at the north are such that for each of these two people, he has a 95 percent chance of saving that person, and that the injuries for the people at the east are such that for each one of these five people, he has a 1 percent chance of saving them. In that case, he should and will factor these probabilities into his calculation and decide to go instead to the north side.
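The Triage reasoning amounts to comparing expected numbers of lives saved. The following sketch is our illustration of that calculation, using the probabilities given in the text:

```python
def expected_lives_saved(n_people, p_save_each):
    # With an independent chance of saving each person,
    # the expected number saved is n * p.
    return n_people * p_save_each

# Initially both clusters are certain rescues: 5 > 2, so go east.
certain_north = expected_lives_saved(2, 1.0)
certain_east = expected_lives_saved(5, 1.0)

# With the revised probabilities: north = 2 * 0.95 = 1.9, east = 5 * 0.01 = 0.05.
north = expected_lives_saved(2, 0.95)
east = expected_lives_saved(5, 0.01)
# Now the north side yields the higher expected number of lives saved.
```

Factoring in the probabilities flips the recommendation from east to north, just as the driver’s decision flips in the example.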
Outcome-based EUT accommodates variations in moral decision making. For instance, one person might assign a higher utility to saving cats than to saving dogs, and another person might have the reverse utility assignments; in such a case, if the first person were confronted with a dilemma that required sacrificing one dog or one cat, she should prefer to save the cat (we should expect the second person to prefer saving the dog). Outcome-based EUT can also accommodate subtler phenomena, such as when people represent actions as associated with entirely different outcomes. For example, the decision of whether to tell a joke or not might be associated with expectations about the outcome of hurting the target of the joke (a moral consideration), or expectations about the outcome of making the target of the joke laugh (a nonmoral consideration). As such, two people considering the same action, the joke, might weigh expectations about different outcomes with very different intrinsic moral stakes (Moore et al., 2011). As work on indirect speech indicates, whether people are aware of it or not, such moral and nonmoral characterizations of the same event are common, for example, in the form of euphemistic language (Bandura et al., 1996; Chakroff et al., 2015; Orwell, 1946). An outcome can also be represented as the result of different causes or different motivations for approach or avoidance (Janoff-Bulman et al., 2009), construals that potentially have different implications for moral judgment and decision making. For example, an accident holding up traffic might be construed as the outcome of two cars colliding or one person’s careless texting.
7.1.3 Moral Decisions and Issues for Outcome-Based EUT
Examples like Triage are actually abundant in our everyday lives when we make decisions about how to produce the most good. However, and famously, there is a wide range of cases in which people’s decisions (or, at any rate, their reports of what they would decide) diverge from the dictates of outcome-based EUT. That is, the decision recommended by outcome-based EUT is not the decision that people make. The closest match to a case like Triage is the “Footbridge” trolley case, in which the agent has to decide whether to throw a man in front of a train to prevent the deaths of five other people. Given a focus on outcomes and the utility placed on human life, the utilitarian outcome-based EUT model suggests that one will and should throw the man in front of the train. But this is not what most people say they would do (only around 30 percent, e.g., Greene et al., 2009; Petrinovich & O’Neill, 1996). Most people say that they would not throw the man in front of the train.
Cases like the Footbridge example are exceptions to the happy harmony we see between normative and descriptive versions of outcome-based EUT for cases like Triage. Other examples in which people’s (reported) decisions diverge from outcome-based EUT include protected values (e.g., Baron & Spranca, 1997), the act/omission distinction (Cushman & Young, 2011; Henne et al., 2019; Spranca et al., 1991; Willemsen & Reuter, 2016), and status quo bias (Ritov & Baron, 1992). In these and many other cases, people’s decisions diverge from what outcome-based EUT normatively prescribes. One option is to maintain that outcome-based EUT is the correct normative theory and that where people diverge from outcome-based EUT, they are simply being irrational (cf. Baron, 1994; Greene, 2008; Singer, 2005). On this approach, one might dispense with EUT as descriptively useful for exceptional cases. An alternative option, which we will explore in this section, is to elaborate and augment EUT in ways that will capture some of the apparently exceptional cases. (In Section 7.2, we will explore challenges to rational moral decision making.)
7.1.4 Moral Decisions and Action-Based EUT
Although there is an elegant simplicity to outcome-based EUT, it seems ill-equipped to explain many everyday moral decisions. In particular, many nonutilitarian decisions seem difficult to accommodate within outcome-based EUT. Perhaps EUT need not be so elegantly simple. As we’ve hinted in Section 7.1.3, in addition to assessing actions according to the utility of their outcomes, we might also assign utility to actions themselves. In other words, agents sometimes assign utility to outcomes (i.e., states of affairs), but agents also sometimes assign utilityFootnote 4 to an action-type. Such an augmentation of EUT is at least somewhat plausible. Indeed, Cushman (2013) proposed a dual-system account of morality linking valuation of actions and outcomes, model-free and model-based learning, and automatic and controlled processing. Here, we detail an action-based augmented EUT. Consider the following stylized experiment (cf. Batson et al., 1997; Fischbacher & Föllmi-Heusi, 2013). Participants are told to flip a coin in private and then report the result to the experimenter. If they report that the coin came up heads, they get $2; if they report that it came up tails, they get $1. In these kinds of experiments, reports that the coin came up heads are more frequent than chance would predict. However, many participants do report that the coin came up tails, receiving $1 rather than $2. What is going on with them? Does this mean that they don’t care about money? Or that they have a bizarre preference for less over more? Drawing such conclusions would be extreme, and it would fail to make broader sense of the agent’s overall decisions. Instead of attributing incoherence to the participants, we might conclude that these subjects assign low (or negative) utility to a type of action – lying (see, e.g., Gaus & Thrasher, 2022, pp. 43, 78; Nichols, 2021, pp. 224–225). And the reason they assign low utility to those types of actions is plausibly because they accept a moral rule against lying. We will refer to this augmented EUT framework as action-based EUT. We can frame this as a normative theory of decision making according to which, when deciding which action to take, agents should calculate the utilities assigned both to the outcomes of the actions and to the types of actions they are.
Further reason for thinking that people assign low utility to actions that constitute rule violations comes from recent research in economics on “naïve rule following.” In one experiment, participants undertook a computer-based task involving moving balls into either of two buckets. They were informed that placing a ball in the yellow bucket would earn them $0.10, while placing it in the blue bucket would yield $0.05. The earnings were displayed on the screen following each ball placement. The rule was specified minimally: After learning about the earnings, participants were instructed, without further elaboration, “The rule is to place the balls into the blue bucket” (Kimbrough & Vostroknutov, 2018, p. 148). Thus, the rule runs against participants’ financial interests, and no rationale for the rule is provided. Despite this, in five distinct countries (the USA, Canada, the Netherlands, Italy, and Turkey), over 70 percent of participants demonstrated instances of naïve rule adherence. That is, they put the balls in the blue bucket, despite the fact that this entailed monetary losses. This again suggests that assignments of utility are not limited to outcomes, since many people apparently assign low utility to actions that constitute rule breaking (for a different kind of evidence that seems to support action-based EUT, see Białek et al., 2014; Gawronski et al., 2017).
Although we think the foregoing analysis provides reason to adopt action-based EUT, one might challenge this interpretation.Footnote 5 Perhaps people only conform to rules because they recognize the potential costs of breaking rules. For instance, in the bucket study we have described, maybe participants follow the arbitrary rule because they think there is really a hidden cost to breaking the rule; for example, perhaps participants fear that the experimenter will punish rule breakers in some way. Although this is a possible interpretation, and maybe some participants really do have those thoughts, we think that, in many cases, the influence of internalized rules is more direct. Consider rules of etiquette. I put the fork to the left of the plate because I learned a rule to that effect. I don’t know how or why the rule came into place. And when I set the table, I think “the fork goes here” but never “following the fork-on-the-left rule has lower potential costs (or higher potential benefits).” The simpler, direct account of the value placed on rules also seems to have an advantage of efficiency. Less information needs to be stored, less information needs to be retrieved, and less time is required to follow the rule. So if I simply internalize the rule “put balls in blue bucket,” with a low utility assigned to violations, I don’t have to think further about the motivations of the experimenter, and that in itself is an advantage.
We have already acknowledged that some participants might think about the potential costs of breaking the rules but also maintained that many participants likely do not engage in such extra thinking. This generates some testable hypotheses. If some people follow the rule without considering costs, and some follow the rule through considering costs, we should expect processing differences. One way to test this hypothesis would be through retrospective protocol analysis (Ericsson & Simon, 1984). For instance, after the instructions, for the first ball, participants who put the ball in the blue bucket might be asked to report their thoughts prior to putting the ball in the bucket. We might find that some participants explicitly report having thoughts about potential costs of breaking the rule, and others might simply say something about the rule. A key question is whether those who mention the costs would have taken longer to put the ball in the bucket than those who merely mentioned the rule. If such a difference were found, this would support the idea that there are different processes in the two cases. Furthermore, such a finding would be consistent with humans’ aversion to cognitive effort at the cost of accuracy (Johnson & Payne, 1985; Zenon et al., 2019), their ubiquitous use of biases and heuristics (Kahneman et al., 1982), and their struggle when requested to respond without the interference of automatic action (e.g., word-reading in the Stroop test; MacLeod, 1991).
If we grant that people assign utility to actions themselves (in addition to assigning utility to the outcomes of actions), we gain a powerful way of accommodating the exceptional instances mentioned earlier. People’s commitment to moral rules might shape the utilities they have for different kinds of actions. Part of the reason that people would not push the man in front of the train is that the moral rules that they endorse lead them to assign a low utility to actions like pushing people to their deaths. Something similar holds for less dramatic moral issues, like stealing, cheating, and lying. Some participants in the coin-flip study described earlier plausibly endorse the rule that one should not lie, and this leads them to assign a low utility to actions in which they lie. We can build this into our decision tree. Suppose that a subject in one of these studies is aware that if he lies, there is a 75 percent chance of receiving $5, and if he tells the truth, there is only a 25 percent chance of getting $5. We can suppose that getting $5 would yield 3 units of utility. Now, imagine that for actions that fall under the type, lying, he assigns a low utility, say −2, and for actions that fall under the type, truth telling, he assigns a higher utility, say, 2. Under those circumstances, we can complete our decision tree (see Figure 7.2) and compute that telling the truth yields a greater expected utility compared to lying, despite the fact that the anticipated monetary gain is smaller.Footnote 6

Figure 7.2 A decision tree reflecting action-based EUT; u(o) = utility of this outcome, for example, the utility of the outcome in which you lie and get $5 is 1 (i.e., –2 + 3).
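The computation just described can be sketched in code (our illustration; as in the figure caption, each branch’s utility folds the action utility into the outcome utility):

```python
# Action utilities set by internalized rules: lying = -2, truth telling = +2.
# Getting the $5 is worth 3 units of utility; getting nothing is worth 0.
ACTION_UTILITY = {"lie": -2, "tell truth": 2}
MONEY_UTILITY = {True: 3, False: 0}  # True = receive the $5

def expected_utility(action, p_money):
    """Action-based EU: weight each branch's combined action + outcome utility."""
    eu = 0.0
    for got_money, p in ((True, p_money), (False, 1 - p_money)):
        eu += p * (ACTION_UTILITY[action] + MONEY_UTILITY[got_money])
    return eu

eu_lie = expected_utility("lie", 0.75)           # 0.75*1 + 0.25*(-2) = 0.25
eu_truth = expected_utility("tell truth", 0.25)  # 0.25*5 + 0.75*2 = 2.75
# Truth telling wins despite the smaller expected monetary gain.
```

Because the action utility is constant across an action’s branches, it simply shifts that action’s expected utility up or down, which is how the rule against lying can outweigh a modest monetary advantage.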
With action-based EUT, we can partially reharmonize the normative and descriptive decision theories. Consider the participant faced with the choice of whether to lie in the experiment. Given the actions and outcomes of his options described above (and in Figure 7.2), he assigns a low utility to actions in which he lies. Since he assigns a very low utility to such actions, he will and should decide to tell the truth rather than lie. Importantly, though, we need not think that the utility regarding lying would always overwhelm other factors in moral decision making. Suppose the options are: (C) lie and receive $500,000, or (D) tell the truth and receive $0. In that case, the utility he assigns to the money will exceed the utility he assigns to telling the truth; as a result, he will and should choose (C).
These examples are simple but may be scaled up. We are proposing that the action types on the table as choices, such as lying and truth-telling, have their own utility values to a person, shaped by moral norms. And those can then resonate through moral decision making.
7.1.5 Moral Decision Making and Actions
Thus, action-based EUT has resources to accommodate many cases that cannot be captured by outcome-based EUT. The EUT framework also allows us to appreciate another way in which rules might impact moral decision making. The initial branch of a decision tree outlines the options under consideration by the decision-maker (see Figure 7.1). If a potential option is not represented by the agent, it becomes unavailable for selection. If I’m trying to decide where to go to dinner, and I don’t know about some new excellent restaurant, then there is no chance that I will select that option. It isn’t available on my decision tree. It’s plausible that for many rules, once they are internalized, they effectively prune the decision tree. After internalizing a moral rule prohibiting stealing, that option might not even show up on the decision tree in many contexts where stealing is in fact a possible option (see, e.g., Phillips & Cushman, 2017). When I go to the hardware store to buy nails, it would be easy to put the nails into my pocket (now that I think of it), but at the time that I was in the hardware store, it never occurred to me that I might steal the nails. Or, to hark back to the trolley cases, remember when you were innocent of philosophical examples. Imagine walking across a bridge when you notice five people on the tracks below who will be hit by a train. You also see a man with a large backpack looking over the bridge. Here’s something that would never occur to you – Should I push this man off the bridge to stop the train? That is not a live option for you. It’s excluded from your option set.
7.2 The Science of Moral Decision Making
7.2.1 Implications for Moral Values and Behaviors: Fairness
We now turn from an abstract characterization of moral decision making to more concrete scientific work on the topic. What are the implications of action-based EUT for moral values and behaviors? Here, we discuss a selection of findings that illustrate some of these implications – with recognition that much more empirical work needs to be done.
One prominent area of study concerns how people decide to fairly allocate resources, or how they define fairness. Without question, some fairness norms are universal, including preferences for impartiality, equitable allocations, and reciprocity (Blake et al., 2015; Deutsch, 1975; McAuliffe et al., 2017). In one sense, fairness seems action-agnostic; for example, things are made equal in countless situations. However, allocations that are considered fair have also been found to vary among people and across situations – fairness seems to emerge from a variety of actions (Deutsch, 1975; Niemi et al., 2017; Niemi & Young, 2017; Rawls, 1971, 2001; Trivers, 1971). For example, in some cases, the fair action is giving more to some than others, in order to reduce disparities in need or compensate for work; in other cases, the fair action is impartial. There are even individual differences in the emotional underpinnings of fairness values (e.g., need-based fairness is more morally praised by people higher in empathy; Niemi & Young, 2017), as well as divergent neural underpinnings (social and nonsocial cognition; Niemi et al., 2017).
The actions that comprise what people consider fair are diverse. Indeed, the variety of actions that can figure in a person’s decisions about fairness helps explain why what counts as fairness is subject to ongoing social disagreement. An action-based model of decision making fits with this moral diversity. It also fits with the universality and order we observe: Humans treat a fairly limited set of higher-order action types as candidates for fairness. At a coarser grain, internalized fairness norms guide which actions are considered relevant for a decision-maker aiming for fairness (e.g., impartiality or equality?). But while people endorse such broad, abstract categories as fairness-relevant and very morally important, at a finer grain, people making decisions that will be evaluated for fairness must ultimately assess the utility of particular actions, including ways of distributing resources or designing procedures.
7.2.2 Interpersonal Moral Decision Making
In some theories of moral psychology, severity or seriousness is fundamental to morally relevant events. Moral evaluations involve norm-violating events, actions that make an impact – unlike the choice to carry an umbrella. What especially matters in a morally relevant decision is how our decision affects others, as illustrated by the various ways of allocating resources fairly. Even decisions that might seem personal (e.g., should I get a divorce; should I move away; should I answer that text?) involve calculation of the expected utility of the decision not only for me but for the person who will be affected. In turn, the decisions of others affect what I choose to do.
Thus, moral decision making entails negotiating the value of one’s own actions, the other’s actions, and outcomes – shared and nonshared. How are these different inputs to moral decisions weighed? Possibly, because people presume others are (also) self-interested and playing by the same “rules,” moral decision making involves representation of twin decision trees, with outcomes for the other person weighed against outcomes for the self. This possibility is supported by the literature on perspective taking and theory of mind in moral decision making, which suggests that people value more than self-interest when making moral judgments and decisions. People’s valuations of others are also reflected in these decisions. In particular, perspective taking can be viewed as a process that enables socially attuned moral decision making, and behavioral and neural evidence supports the possibility that perspective taking is a crucial process during moral cognition, including allocation of blame and praise (Buckholtz & Marois, 2012; Greene & Haidt, 2002; Moll et al., 2002; Young et al., 2010; Young et al., 2007; Yoder & Decety, 2014). Furthermore, other fMRI and behavioral work (Niemi et al., 2017; Niemi & Young, 2017) shows that perspective taking may be behind the variation we see in people’s moral evaluations of different types of fairness, including the extent to which they consider reciprocity and impartiality praiseworthy. Taken together, this research suggests that moral decision making incorporates the perspectives of others.
Different values recruit perspective taking to varying degrees during moral decision making, as do different individuals. Individual differences in empathy and concern for others are associated with numerous kinds of moral decision making, from resource allocation problems to harm dilemmas. Research with the interpersonal orientation task (Van Lange, 1999; Van Lange et al., 1997) over the last few decades indicates that, given the choice among three options – an equal allocation of valuable points between the self and an anonymous other (e.g., 450/450), an individualistic allocation that maximizes points to the self (e.g., 550/450), or a competitive allocation that minimizes the other’s points (e.g., 420/320) – people typically choose the equal allocation rather than the self-interested options. Context matters, though. Among business school students, choice of the individualistic or competitive options increases relative to noneconomics students (Van Lange et al., 2011). The fact that we see divergent decision making based on decision-makers’ capacity or desire to adopt others’ perspectives suggests that, on average, moderation of self-interest by concern for others may be a vulnerable aspect of moral decision making.
Other research with individuals with psychopathy indicates that their moral decision making may lack the emotionally aversive responses to harm that people lower in psychopathy demonstrate, leading them to choose the equivalent of “pushing the man off the footbridge” (in this case, smothering a crying baby; Glenn et al., 2010). Accordingly, people high, compared to low, in psychopathy demonstrate a dampened neural response in brain areas for representation of affect (Decety et al., 2013) when viewing another person’s pain (i.e., pictures of injured others); however, they show no reduction, relative to nonpsychopathic individuals, when viewing pain described as having happened to themselves. Certainly, humans are on a spectrum of sensitivity to harm (e.g., a caring continuum; Marsh, 2019) and are sometimes concerned with different outcomes altogether in morally relevant decision making. However, typically, the agent does not make moral decisions in a purely self-interested way. The other person’s outcome (harm or benefits) is referenced and, if present and severe enough, activates emotional responses that give value to the action for the decision-maker.
7.2.3 The In-Group and Moral Decision Making
People’s moral decisions have weighty consequences for social life, as they effectively divide people into moral communities (Graham et al., 2009; Haidt, 2012). In turn, moral communities bind together through shared, moral conceptualizations of actions. People’s group-level moral commitments contain rules that factor into calculations of the expected utility of their individual decisions – the moral codes of in-groups both prune the options for actions and shape the interpretation of actions. For example, membership in a moral community that collectively values empathy and equality might contribute to an interpretation of charitable giving as a way to achieve fairness. Likewise, membership in a group of revolutionaries might turn an act of vandalism into bravery.
The influence of group structure on human psychology has been described for decades. Decision making clearly molds to the group through a variety of cognitive mechanisms, as shown in research on group conformity, group polarization, and groupthink. Even given minimal, completely arbitrary cues to group membership, people readily identify with a group (Tajfel, Reference Tajfel1982); the phenomenon of minimal group formation has been observed from childhood through adulthood (Dunham et al., Reference Dunham, Baron and Carey2011). Mature moral cognition involves countless examples of in-group-based decision making, often referred to as in-group bias. For example, participants have been found to favor others who share their political orientation in moral decision making about whether people accused of sexual misconduct should be reprimanded (Klar & McCoy, Reference Klar and McCoy2021); and they favor close others over distant others in their causal explanations for moral violations (Niemi et al., Reference Niemi, Doris and Graham2023). Indeed, a cluster of moral values proposed in moral foundations theory (Graham et al., Reference Graham, Nosek, Haidt, Iyer, Koleva and Ditto2011), referred to as binding values, is concerned with maintaining the bonds of relationships and groups, rather than an individual’s obligations to other individuals. Binding values, such as loyalty and respect for authority, by their nature mold decision making to benefit fellow group members and relationship partners, even at the cost of harming an individual.
The influence of the group on moral decision making represents a factor limiting the influence of empathy and perspective taking (Bloom, Reference Bloom2017). Research on dehumanization, prejudice, and stereotyping shows that affect may be blunted in response to morally relevant needs of out-group members (Zaki & Cikara, Reference Zaki and Cikara2015; see also Chapter 14 in this volume). If the people affected by one’s moral decisions are not viewed as people, then representations of the value of the action and outcome of the decision are unlikely to incorporate rich representations of the outcome’s value for the affected person. In that case, instead of weighing and negotiating twin decision trees, the decision-maker’s self-interested expectations about utility might overpower the effects of empathy and perspective taking on moral decision making.
Neglect of the outcome during moral decision making has also been observed to vary with the ideological groups with which people identify (Hannikainen et al., Reference Hannikainen, Miller and Cushman2017). Moral prohibition of actions appears to be more likely for people with more conservative values, as these values tend to involve rules about concrete actions (e.g., sexual activity, food taboos, unpatriotic gestures, disgusting behavior). While actions and intentions are typically both factored into moral judgments, it is possible that, sometimes, individuals need no more than the act itself to judge its wrongness. There is much room for research that maps the influence of group norms on moral decision making, including how evaluations of the utility of both actions and outcomes are influenced by moral communities and factored into moral decision making.
7.2.4 The Implementation of Decisions and Representations of Action
The structure of moral decision making can be further illuminated by considering the possibility that moral decision making is sometimes outcome-based and sometimes action-based: (1) people ignore outcomes and make morally relevant decisions based on actions alone, as described earlier; and (2) people overlook the value of actions and decide to pursue a morally relevant outcome. Decisions considered “moral” or “ethical” might require neglect of either actions or outcomes. For example, a parent faced with the decision to keep their unvaccinated child out of school or vaccinate their child and send them to school might ignore the direct outcome of the choice on the child’s education and base their decision on an action: injecting the child with the vaccine.
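These two modes can be expressed in the chapter’s augmented expected utility framing, in which an option’s value combines the probability-weighted utility of its outcomes with a utility attached to the action itself. The sketch below is purely illustrative – the `expected_utility` helper and all numeric values are assumptions for demonstration, not the authors’ formal model:

```python
def expected_utility(outcomes, action_utility=0.0):
    """Augmented EU: probability-weighted outcome utilities plus a
    utility term for the action itself (illustrative assumption)."""
    return sum(p * u for p, u in outcomes) + action_utility

# Footbridge-style example: pushing leads to the better outcome (five
# lives saved vs. one), but the action itself carries strongly
# negative value for most deciders.
push = expected_utility([(1.0, 5.0)], action_utility=-10.0)
dont_push = expected_utility([(1.0, 1.0)], action_utility=0.0)

# With a sufficiently aversive action value, the action term dominates
# and "don't push" is chosen despite the worse outcome.
assert push < dont_push
```

Setting `action_utility` to zero recovers the standard outcome-based calculation; the action term is what lets the model also express purely action-driven choices.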
Neglecting discrete actions and focusing on the big picture – the outcome – during deliberate moral decision making also presents issues. As the research on implementation intentions (Gollwitzer, Reference Gollwitzer1999) indicates, a person who decides on an intended outcome, such as “get more involved with charity this year” or “stop being mean to my brother,” is more likely to reach that outcome if it is broken down into actionable steps. Leaving morally relevant goals as abstract outcomes might inspire action, but the outcome is more likely to be attained if the concrete actions associated with the goal are realistically evaluated.
When people negotiate a morally relevant decision, their decision may focus on evaluation of the outcome or the prerequisite actions. Research on event segmentation (Kurby & Zacks, Reference Kurby and Zacks2008; Zacks & Swallow, Reference Zacks and Swallow2007) finds that people are capable of splitting up events in time into finer and coarser segments. For example, a bride might assign value to each of the following, as high-stakes decision outcomes: the engagement party, the bachelorette party, the catering tasting, the venue selection, the wedding ceremony, the honeymoon, and, finally, being married. A relative of the bride might see things differently, assigning moral weight and high utility to just one outcome: the bride being married. The action-based EUT would suggest that the bride, compared to her relative, perceives more decision trees before being married and, therefore, sees exponentially more options, each associated with valuable actions and outcomes. According to event segmentation theory, both parsing events into smaller action units and parsing them into larger units reflecting goals are crucial to everyday perception, and it is not unusual for people to segment events in roughly similar ways. People may differ, however, in the “grain” at which they break down one event. Like the bride focused on the many actions before each of the outcomes involved in becoming married, people focused on subgoals of an event describe actions and use more precise verbs (Kurby & Zacks, Reference Kurby and Zacks2008). By contrast, when people focus on coarser-grained events, as the bride’s relative does, they perceive different features: objects, described with more precise nouns.
It is theorized that fine- and coarse-grained event segmentation reflects the capacity and function of working memory, chunking information into cognitively manageable actions and outcomes. Neural research on narrative interpretation and recall demonstrates that short event boundaries reflect activity in sensory regions, whereas longer events reflect activity in “high-level” cognitive areas responsible for abstract models of situations (Baldassano et al., Reference Baldassano, Chen, Zadbood, Pillow, Hasson and Norman2017). Actions and outcomes are represented differently in the brain, but they are tied together when we make sense of the world.
Actions are nested inside events, but this hierarchical organization does not necessarily translate into agreement between people. When required to negotiate decisions, the bride and her relative may find it difficult to see eye-to-eye about what the current goal is. The utility of the one shared, highly valued outcome, marriage, might be complicated by the proliferation of action and outcome utility estimates experienced by the bride. At any given point in time before the wedding, it is more likely that decision making will be focused on the outcome for the relative and on some action for the bride. According to Vallacher and Wegner (Reference Vallacher and Wegner1989), the target of focus matters, in terms of competence and morality. The authors proposed that action focus versus outcome focus reflects a social-personality dimension of “personal agency”: When we are low-level agents, we are detail-oriented and concerned about mechanism; when we are high-level agents, we see meaning, implications, and consequences. Low-level action identification is proposed to be more likely when a person is in unfamiliar territory, feeling their way through one step at a time. High-level action identification, by contrast, is proposed to emerge when a person has some expertise. The authors suggest that low-level and high-level action identification directly relate to moral decision making, with high-level action identification necessary for the kind of causal reasoning and understanding of abstract moral implications that prevent impulsive offenses.
7.2.5 Pruning Options through Valuation of Actions
Unlike the Footbridge problem (push or don’t push the man to save five lives), moral dilemmas in real life often have more than yes/no choices such as harm vs. don’t harm, or be fair vs. don’t be fair. People, guided by instincts, norms, and habits, instead face dilemmas over options for how to act that have various and sometimes unclear moral implications (which is why they are dilemmas). Recalling the example given earlier in this chapter, a parent trying to decide whether to keep their unvaccinated child out of school or vaccinate their child and send them to school might consider several important factors (e.g., disease risk, effects on education, social development, religious concerns) each of which has morally relevant value to the parent.
In order to make a decision like this among confusing morally relevant options, a person may transform the options so that they do not conflict with their own values. The possibility that people alter their options and associated actions, implicitly and deliberately, in order to facilitate decision making is supported by research on moral decision making. Research indicates that people do tend to export their value systems when judging or making decisions about others. That is, they believe that what is good and right for them is good and right for the other person; no alternative seems possible (Newman et al., Reference Newman, Bloom and Knobe2014). This suggests that if a person attempts to take the perspective of someone else in order to estimate an outcome (i.e., in empathy-guided moral decision making), the perspective they take will ultimately bear a resemblance to their own. In this vein, research on moral hypocrisy shows that people are inconsistent moral judges. When they violate a moral commitment, they may judge themselves more favorably and their behaviors as more morally permissible than someone else who carries out the same violation (e.g., Batson et al., Reference Batson, Thompson and Chen2002; Conway & Peetz, Reference Conway and Peetz2012; Graham et al., Reference Graham, Meindl, Koleva, Iyer and Johnson2015; Valdesolo & DeSteno, Reference Valdesolo and DeSteno2007, Reference Valdesolo and DeSteno2008).
The importance of people’s transformation of actions into morally acceptable options is consistent with people’s thinking about omissions during decision making. Moral norms transform omissions, the absence of an action, into legitimate moral and immoral options. For example, when a person does nothing in the face of suffering, this may be perceived as a decision associated with bad character, such as callousness or cowardice (Duff, Reference Duff1993). Moral norms (e.g., to reduce suffering in others) can make nonactions just as influential as actions in decision making.
7.3 Conclusion
In this chapter, we’ve described how action-based EUT accommodates moral decision making, in terms of actions, options, and learning. We focused on EUT to show that, by incorporating action, EUT can explain a great deal of moral decision making. We acknowledge, however, that there is a wide world of moral decisions to explain. Some of them, for example, may be better represented by other accounts, including game theory (Binmore, Reference Binmore2011). Furthermore, the scientific findings reported here suggest that there are important individual differences and situational influences on moral decision making. At this point, there are still unanswered questions regarding the integration of EUT and reinforcement learning models. Nevertheless, it is clear that enormous headway has been made over (at least) the last half century in the study of moral judgment and decision making, and the prospects of an increasingly evidence-based understanding of the topic are strong.
In the fall of 2021, many hospitals in the United States were overwhelmed with COVID-19 patients, most of whom had refused to be vaccinated against the disease. Many of these nonvaccinators appealed to moral principles concerning freedom and rights, which they took to outweigh the consequences of their decision. They claimed the right to make decisions about their own bodies and the right to freedom from government control over personal behavior. Some politicians supported these views even to the point of trying to prohibit schools and private businesses from imposing mandates for mask wearing or vaccination. Note that the expected consequences of nonvaccination are bad for everyone. Vaccination reduces the probability of serious illness for the individual, and it reduces the probability of an infected person, even one without symptoms, transmitting the disease to others. If effects on others are morally relevant, then nonvaccination is not only individually irrational but also immoral, unless some other moral principle outweighs these effects.
This case is an example of a frequent conflict between moral principles that people advocate and try to follow, on the one hand, and the expected consequences of following those principles, on the other. The moral principles at issue are inconsistent with moral principles based on utilitarianism, which holds that choice options should be evaluated in terms of their expected consequences for all those affected, but this is not all that makes these principles irrational. The choice of nonvaccination for oneself conflicts with expected utility theory (discussed later in the chapter) as applied to individual choices; it is a losing gamble. And opposing vaccinations for others is simply harmful to them, which by itself is inconsistent with any concept of morality.
Apparent examples of this sort of inconsistency in the real world have been extensively documented. In many cases, the analysis of expected consequences is based on economics rather than utilitarian analysis, but the conclusions of economic analysis are generally consistent with those that utilitarian analysis would imply.Footnote 1 Apparent inconsistencies have been found in allocation of resources to large humanitarian tragedies (Bhatia et al., Reference Bhatia, Walasek, Slovic and Kunreuther2021); in insurance decisions by firms and individuals (Johnson et al., Reference Johnson, Hershey, Meszaros and Kunreuther1993); in excessive attention to some risks coupled with neglect of others (Breyer, Reference Breyer1993; Kunreuther & Slovic, Reference Kunreuther and Slovic1978; Sunstein, Reference Sunstein2002); in tax policy (McCaffery, Reference McCaffery1997); in economic policies concerning trade, price controls, and wages (Caplan, Reference Caplan2007); and elsewhere.
All these realistic cases (and many more) support the argument that people’s moral judgments, when put into practice, can lead to consequences that people themselves would consider worse on the whole than what might have been achieved. But the real world is complicated. It is possible that the principles can have a utilitarian defense after all. For example, many of these apparently self-defeating policies arise through the functioning of institutions, such as legislatures and courts, that are imperfect yet better than any feasible alternatives, so that any attempt to overturn their results would, in the long run, make matters worse as a result of weakening these institutions. It thus becomes reasonable to ask whether people really apply nonutilitarian principles when they make moral judgments. One way to answer this question is to do psychology experiments, and those are the main topic of this chapter. At issue is the question of whether we can demonstrate truly irrational and nonutilitarian reasoning in hypothetical or real judgments under controlled conditions and, if so, whether we can learn something about the determinants of these judgments.
8.1 Normative, Descriptive, and Prescriptive “Models” in Experimental Psychology
Since the nineteenth century, psychologists have studied reasoning in contexts in which right answers are defined by some formal theory, such as the logic of syllogisms. A common finding is that reasoning did not conform well to the model; thus, Henle (Reference Henle1962) begins by pointing out that “[t]he question of whether logic is descriptive of the thinking process, or whether its relation to the thinking process is normative only, seems to be easily answered. Our reasoning does not, for example, ordinarily follow the syllogistic form, and we do fall into contradictions” (p. 366). Around the same time (the 1950s and 1960s), others were comparing human judgments to other normative models, including probability and statistics (Bruner et al., Reference Bruner, Goodnow and Austin1956; Chapman & Chapman, Reference Chapman and Chapman1969; Meehl, Reference Meehl1954). In retrospect, we can think of such research as comparing “descriptive models” – psychological accounts of what people are doing – to normative models. Strictly speaking, the term “model” implies some sort of formal system; a few such systems exist, but the term is used even when they do not.
Kahneman and Tversky (Reference Kahneman and Tversky1979; Tversky, Reference Tversky1967; Tversky & Kahneman, Reference Tversky and Kahneman1981) began to apply this approach to decisions as well as judgments (and their 1979 paper proposed a true descriptive model that accounted fairly well for choices among simple gambles). Their normative model was expected utility theory in the form proposed by Savage (Reference Savage1954), in which both probability and utility were subjective (even if numerical probabilities were included in problem statements). (See also Chapter 7 in this volume.) Given this normative model, researchers could not always determine whether a given decision conformed to the model or not. For example, one person might prefer $10 for sure over a gamble with a 0.6 probability of $25 and a 0.4 probability of $0. Another person might prefer the gamble. The former person’s utility for $25 might be less than twice as high as her utility for $10, and she might think of 0.6 as “essentially an even chance,” so that her subjective probability of winning would be closer to 0.5. Thus, for her the expected utility (subjective probability times subjective utility) of the gamble would be less than that of $10 for sure.
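One possible set of numbers consistent with this description can make it concrete. The square-root utility function and the 0.5 subjective probability below are assumptions chosen only to match the verbal example:

```python
def u(x):
    # A concave utility function (assumption): u(25) = 5 is less than
    # twice u(10) ≈ 3.16, as the example requires.
    return x ** 0.5

subjective_p = 0.5  # "essentially an even chance," per the example

eu_gamble = subjective_p * u(25) + (1 - subjective_p) * u(0)  # 2.5
eu_sure = u(10)                                               # ≈ 3.16

# For this person, the sure $10 has higher expected utility than the
# gamble, with no violation of the normative model.
assert eu_gamble < eu_sure
```

Because both the probability and the utility are subjective, a single choice like this one cannot by itself show that the chooser violates expected utility theory.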
To overcome this problem and show that choices were inconsistent with the normative model, Tversky and Kahneman (Reference Tversky and Kahneman1981) emphasized the use of framing effects, in which the same choice, in terms of consequences and their probabilities, was offered in different words. If subjects made different choices in the two versions, then they could not be following the normative model, which concerns consequences and probabilities. A classic example was the Asian disease problem, in which some subjects were told that an Asian disease was approaching and 600 deaths would be expected if nothing was done. In one version, the subjects chose between “200 saved” and a 0.33 chance to save 600. In another version the choice was between “400 die” and a 0.67 probability that 600 would die. Most subjects in the “saved” condition chose “200 saved,” and most subjects in the “die” condition chose the gamble.
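The two frames are equivalent in expected consequences, which a quick computation confirms (using the exact fractions 1/3 and 2/3 behind the rounded probabilities):

```python
at_risk = 600

# "Saved" frame: 200 saved for sure vs. a 1/3 chance of saving all 600.
sure_frame_saved = 200
gamble_frame_saved = (1 / 3) * 600

# "Die" frame, converted to expected lives saved: 400 die for sure vs.
# a 2/3 chance that all 600 die.
sure_frame_saved_die = at_risk - 400
gamble_frame_saved_die = at_risk - (2 / 3) * 600

# Every option in every frame amounts to 200 expected lives saved.
assert sure_frame_saved == sure_frame_saved_die
assert abs(gamble_frame_saved - gamble_frame_saved_die) < 1e-9
```

Since all four options come to the same expected consequences, any systematic preference reversal between frames must come from the wording, not from the outcomes and probabilities themselves.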
This experiment had two properties that have received little attention in the extensive literature about it. One is that it is essentially a moral problem, not an individual choice like the money gambles used in other studies. It is moral because it is a decision about the well-being of other people. Research on decisions had slipped from a focus on expected utility to a focus on utilitarianism. Utilitarianism is the natural extension of expected utility to decisions for many people.Footnote 2 The utilitarian normative model here is to base the decision on the expected number of deaths (usually assuming that the subjective probabilities match the given probabilities).
The second property of the Asian disease problem concerns strong preferences for the two options. The expected utilities of the two options are close. Thus, strong preferences for different options violate a feature of utilitarianism (and other moral theories), which is to treat all lives equally. In gains, for example, a strong preference for “save 200” implies that the extra 400, beyond the 200 saved, are given less weight than twice that of the first 200 lives. Slovic (e.g., Reference Slovic2007) has explored this finding of unequal treatment extensively. One way to think of this phenomenon is in terms of the curve relating total disutility to number of deaths. People tend to make decisions as if the slope of this curve decreases: the millionth death matters less than the tenth, or the first.
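Any concave disutility curve produces this decreasing-slope pattern; the square-root form below is an arbitrary assumption used only for illustration:

```python
def disutility(n):
    # Concave total disutility of n deaths (illustrative assumption).
    return n ** 0.5

def marginal(n):
    # Additional disutility contributed by the nth death.
    return disutility(n) - disutility(n - 1)

# The slope decreases: later deaths add less disutility than earlier
# ones, so the millionth death "matters less" than the tenth or first.
assert marginal(1) > marginal(10) > marginal(1_000_000)
```

Under the utilitarian normative model the curve would instead be linear, so that every life carries equal weight.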
Here is another example of the move from individual to moral decisions. The pertussis vaccine used to prevent whooping cough in the 1980s would often cause a disease very much like the one it prevented but at a much lower probability. Despite the clear benefits, many people resisted (and still resist) vaccination (Asch et al., Reference Asch, Baron, Hershey, Kunreuther, Meszaros, Ritov and Spranca1994; Sherman et al., Reference Sherman, Vallen, Finkelstein, Connell, Boland and Feemster2021). Ritov and Baron (Reference Ritov and Baron1990) found, in a laboratory study, that many people would not want such a vaccine, because (presumably) they would not want to cause the disease through their action. Ritov and Baron also found that people would also oppose requiring the vaccine as a public health measure. The individual decision was purely a matter of self-interest, but the public health decision was moral, because it concerned the well-being of other people.
Note that, in this case, the self-interested decision is irrational (from the perspective of expected utility) because omission of the vaccine increases personal risk. Could we say that the moral omission is also irrational because it means that more people will be sick? Some moral systems have a rule against using people as means to help others, and it could be argued that those who suffer from the side effects will serve as means to prevent disease in a greater number. Yet it seems inconsistent to say that the decision that is rational for each individual is immoral when applied to the population.
In these examples, the general approach of comparing laboratory decisions to normative models can be, and has been, extended from individual decisions to moral decisions, often with the implicit use of utilitarianism as a normative model. Further research, some of which I review here, finds that the departures from utilitarianism are systematic. As noted, some of these departures result from distortions in the way people think about quantities. Many others result from the application of nonutilitarian principles to the problems of interest.
These principles may be absolute or “prima facie,” that is, considerations that can be overridden by other considerations (Ross, Reference Ross1930/2002). Examples are: “We have a right to control our bodies,” “Do no harm” (meaning do no harm through action, as opposed to omission), “Do not use people as means to achieve better outcomes for others,” or “Do not kill innocent people.” They are often called “moral intuitions” (Hare, Reference Hare1981) or “moral heuristics” (Sunstein, Reference Sunstein2005). “Heuristics” originally referred to weak methods that might be helpful in solving problems, such as: “Do you know a related problem?” (Polya, Reference Polya1945), but the term was used by Tversky and Kahneman to refer to judgment tasks. An example is judging the probability that someone is a member of a group by the similarity of that person to prototypical members of the group, thus ignoring other relevant attributes such as the size of the group (Tversky & Kahneman, Reference Tversky and Kahneman1974).
When this view is extended to moral judgments, other problems arise. In principle, a heuristic is a “fast and frugal” method that often works but sometimes does not.Footnote 3 In morality, though, some of these heuristics seem to become hardened into rules that people knowingly apply, believing that they constitute the best possible moral judgments. Theologians and philosophers defend these rules as normative in this way (e.g., the rule about not using people as means). Such rules are often part of deontology, a class of moral systems based on rules, rights, and duties, which go beyond simply bringing about the best expected consequences for all.Footnote 4 Thus, much of the research on moral judgment focuses not so much on heuristics but on the contrast between deontological rules and utilitarianism. In this research literature, the terms “deontological” and “utilitarian” are not meant to imply that choices are based on representations of either system, just that they are consistent with what those choices would be.Footnote 5
Here I use the term “(moral) intuition” for what others call moral heuristics. I think it captures the idea that the relevant moral principles tend to be evoked immediately upon presentation of a moral problem, without any extra effort to look for other relevant considerations.
All normative models are controversial to some degree, including Bayesian probability theory and expected utility theory (e.g., Ellsberg, Reference Ellsberg1961), but utilitarianism seems more controversial than most of the others that are studied psychologically, in part because it yields conclusions that seem to conflict with strong moral intuitions held by philosophers and psychology researchers as well as by experimental subjects. Hare (Reference Hare1981) has dealt with this conflict explicitly and in depth. His approach turns out to be surprisingly relevant to experimental psychology (as I discuss later).
But there are other reasons for looking for biases relative to a utilitarian normative model, even for those who do not accept utilitarianism as truly normative. Specifically, if people consistently violate the utilitarian standard in the same biased way (as in favoring harmful omissions over less harmful acts), we should not be surprised if the real consequences turn out to be clearly worse than if the utilitarian standard were followed.Footnote 6 As I suggested, many examples in the real world can be explained in terms of such biases. Thus, the study of violations of utilitarianism can at least help us understand why things in the real world are not as good as they could be. If the violation of utilitarian standards is the result of following nonutilitarian moral rules, then we at least learn the potential cost of following those rules.
Of course, much more could be said in defense of utilitarianism (e.g., Hare, Reference Hare1981, whose other work is nicely summarized by Singer, Reference Singer2002), but this is not the place for it.Footnote 7
In the 1980s, a third type of “model” became apparent, which was called “prescriptive” (Bell et al., Reference Bell, Raiffa and Tversky1988).Footnote 8 The idea is that normative models specify a standard and prescriptive models tell us what to do in order to do better by that standard. The distinction arose because the idea of “decision analysis” was exactly to bring decisions into conformity with various forms of expected utility theory, but decision analysis had many techniques that were not part of that theory, and most of them lead to approximations at best, for example, ways of estimating probability and utility of outcomes. Expected utility theory tells us about the mathematical relations among inputs and outputs. Decision analysis is not the only prescription for good decision making (where good is defined in terms of expected utility). Others are “choice architecture” (Thaler & Sunstein, Reference Thaler and Sunstein2008) and various educational programs.
As an example of the distinction, consider the concept of division in arithmetic. The formal definition (normative model) is (roughly): if A/B=C, then A=BC; we define division (A/B) in terms of multiplication. But this definition does not tell us how to do it. Many people now learn “long division” as a prescriptive procedure to divide large decimal numbers using successive approximations, starting with the leftmost digits in the dividend. To “understand long division” is to see why this procedure leads to a normatively good conclusion (exact for integers but sometimes just a good approximation).
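The contrast can be put in a few lines of code: some procedure (here Python’s built-in integer division, standing in for long division) produces a candidate answer, and the normative definition certifies it:

```python
A, B = 391, 17

# Prescriptive: a procedure for finding the quotient (integer division
# stands in for the long-division algorithm here).
C = A // B

# Normative: the definition of division says the answer must satisfy
# A = B * C (exact when B divides A evenly, as it does here).
assert A == B * C
```

The `assert` plays the role of the normative model: it says nothing about how `C` was found, only whether the result meets the standard.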
For utilitarianism as a normative model, various prescriptive models have been proposed. One is simply to start with knowledge of the normative idea and then ask if it is obvious which option is better, for example, when the matter involves substantial benefits to some people at the cost of small inconvenience to others (e.g., vision tests for driver’s licenses). Another is decision analysis itself when it includes estimates of utilities for different groups of people who are affected. (This is close to “welfare economics.”) Another is to follow a rule, such as: “Do not commit adultery.” Hare (Reference Hare1981) suggests that, in real cases where adultery is an issue, any attempt to evaluate probabilities and utilities will be so biased by emotional factors that it is likely to lead to erroneous and very harmful consequences. (J. S. Mill made similar arguments.) Thus, the best way to conform to the utilitarian standard is sometimes to try to do something else. Moral intuitions may also be prescriptive in this sense. Hare (following Sidgwick) thinks they usually are, but the research described here suggests that some are more harmful than helpful. We might say that a major prescriptive question for utilitarians is how to bring up (and educate) children so that they do not fall prey to the harmful intuitions.
The approach to the study of moral judgment discussed here is an extension of the general approach to the study of judgments and decisions just summarized. We look for biases, that is, systematic departures from normative models. We try to explain these in terms of descriptive models. Based on what we have found, we then (ideally) try to design prescriptions for fixing these biases. This approach makes the study of moral judgment part of applied psychology, part of an attempt to make things better, like clinical psychology or educational psychology, except that we try to make the normative model, the standard for success, explicit. Alternative approaches, not discussed here, involve description “for its own sake,” without any attempt at evaluation and improvement.
8.2 Methods and Biases
In this section I will discuss several experimental methods and possible biases, organized by method rather than substantive topic, although I comment on the normative approach to some of the topics. All of these methods are potentially capable of showing that judgments or hypothetical decisions are nonutilitarian.
It is worth noting that essentially all of the nonutilitarian biases I describe here are the result of processes also found in nonmoral situations. Cushman and Young (Reference Cushman and Young2011) and Greene (Reference Greene and Sinnott-Armstrong2008) have argued explicitly for the parallelism between “cognitive biases” and patterns found in moral judgment.
8.2.1 Framing Effects
A framing effect, as noted already, is found when two equivalent cases yield different responses. An example from moral psychology is the effect on fairness judgments of describing tax rate differences as surcharges or bonuses, while holding the resulting tax payments constant. McCaffery and Baron (Reference McCaffery and Baron2004), inspired by classroom demonstrations reported by Thomas Schelling, presented subjects with examples like the following (edited for simplicity):
Childless Surcharge
Low income: A married couple with $25,000 total income and two children pays $3,000 in taxes, as a couple.
The same couple, if it had no children, would pay a surcharge of $1,000.
High income: A married couple with $100,000 total income and two children pays $30,000 in taxes, as a couple.
The same couple, if it had no children, would pay a surcharge of $3,000.
Child Bonus
Low income: A married couple with $25,000 total income and no children pays $4,000 in taxes, as a couple.
The same couple, if it had two children, would get a bonus of $1,000.
High income: A married couple with $100,000 total income and no children pays $33,000 in taxes, as a couple.
The same couple, if it had two children, would get a bonus of $3,000.
For the Childless Surcharge, most subjects judged the surcharge as too high for the low-income family and too low for the high-income family. For the Child Bonus, they judged the bonus as too low for the low-income family and too high for the high-income family. Yet the high-income bonus and surcharge are the same, as are the low-income bonus and surcharge. Although the question is about fairness, a moral property, nothing here depends on utilitarianism as such. The intuitions about fairness that drive this result cannot be consistent with utilitarianism because they lead to different consequences depending on the description.
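The equivalence of the two frames can be verified directly from the figures above; in each income bracket, a couple's total tax is identical under either description:

```latex
\begin{align*}
\text{Low income, childless:}\quad & \$3{,}000 + \$1{,}000 = \$4{,}000 \;(\text{surcharge frame}) = \$4{,}000 \;(\text{bonus frame})\\
\text{High income, childless:}\quad & \$30{,}000 + \$3{,}000 = \$33{,}000 \;(\text{surcharge frame}) = \$33{,}000 \;(\text{bonus frame})\\
\text{Low income, with children:}\quad & \$3{,}000 \;(\text{surcharge frame}) = \$4{,}000 - \$1{,}000 \;(\text{bonus frame})\\
\text{High income, with children:}\quad & \$30{,}000 \;(\text{surcharge frame}) = \$33{,}000 - \$3{,}000 \;(\text{bonus frame})
\end{align*}
```

Only the description changes; every couple's tax bill is the same in both frames.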
In another example of a framing effect, Harris and Joyce (Reference Harris and Joyce1980) told subjects that a group of partners had opened a business (e.g., selling plants at a flea market). The partners took turns operating the business, so different amounts of income were generated while each partner was in control, and different costs were incurred. Subjects favored equal division of profits when they were asked about division of profits, and they favored equal division of expenses when asked about expenses. Because expenses and profits were unequal in different ways, their two judgments conflicted. This result depends on an intuitive principle of equality, and, again, the inconsistency does not depend on utilitarianism.
A more complex framing effect concerns the effect of marriage (McCaffery & Baron, Reference McCaffery and Baron2004). When asked directly, many subjects favor “marriage neutrality,” which means that marriage does not affect the total taxes paid. People also favor progressive taxation, which means that those with higher incomes pay a higher percentage in taxes. Finally, people tend to favor “couples neutrality,” which means that couples with the same income pay the same tax, regardless of which earner makes more. Careful reflection (left as an exercise for the reader) implies that these three principles are incompatible. One of them must give.Footnote 9 This is, like the child bonus/surcharge, a logical inconsistency, hence a form of framing effect, which involves focusing on the question that is asked, an “isolation effect” (discussed later).
8.2.2 Contrast of Utilitarian and Nonutilitarian Options
Other methods involve asking subjects to decide between two options, one of which is consistent with utilitarianism and the other of which is not. The nonutilitarian option deviates by exemplifying a particular bias.
8.2.2.1 Omissions
A great deal of research has concerned action/omission dilemmas such as the vaccine case already described, in which people are more willing to accept the harms caused by omission than the harms caused by action. Although Ritov and Baron (Reference Ritov and Baron1990) coined the term “omission bias” as a name for this bias, that term was misleading. A simple bias toward omission would be a bias toward the default, whatever it is. Although a default bias does exist, it plays a minor role in the bias at issue (Baron & Ritov, Reference Baron and Ritov1994). Another determinant is the amplification effect, in which the consequences of action are simply given more weight than those of omission. If both options involve gains rather than losses, the amplification effect induces a bias toward action, which can be large enough to overcome the default bias.
Recent studies have tended to concern a set of dilemmas originally designed by philosophers as extreme cases on which to test, and try to explain, their moral intuitions (e.g., Foot, 1967/1978). In the simple trolley case, a runaway trolley is headed toward five people and will kill them if nothing is done. You can divert the trolley onto another track where it will kill only one person. Most people think diversion is the best response. In the “footbridge” version, the only way to stop the trolley is to push a large man off a footbridge, so that he falls on the track and blocks the trolley, being killed in the process. Most people resist this solution, and many experiments have tried to examine and explain this sort of difference.
A potential issue for experiments like these is what question to ask. In many experiments, the researcher asks: “Is it acceptable to push the man …?” The problem with this is that “acceptable” applies only to a single option, and utilitarians (and others concerned with decisions) must ask the question “compared to what?” The relevant question for us is which option is better, morally. Deontology often makes distinctions between what is permitted, forbidden, or morally required. Because these categories apply to options, not choices, it is possible for both options in a choice to be acceptable, or both forbidden. Other alternatives that work for everyone are: “Which option should [the agent] choose?” and “Which option should you choose [if you were the agent]?” Some studies have replaced “should” with “would.” This may be interesting, but some people say, explaining themselves, “I know that I should do it, but I could not bring myself to actually do it” (Baron, Reference Baron1992).
8.2.2.2 The Nature of “Omission Bias”
The literature has identified two major determinants of “omission bias”: deontological rules and the use of a limited concept of causality.Footnote 10
Rules favoring omission are more common than those favoring action (Baron & Ritov, Reference Baron, Ritov, Bartels, Bauman, Skitka and Medin2009). Rules that prescribe acts are usually conditional on some role. A physician, once accepting a patient, is morally and legally obliged to try to save the patient’s life (unless instructed otherwise) but a rule requiring anyone to try to save every life at risk is impossible to take seriously. Likewise, a rule against performing abortions is easier to follow than a rule requiring prevention of miscarriages in similar situations.
“Utilitarian moral dilemmas” often involve rules of this sort, such as prohibitions against killing, or tampering with human genes that affect future generations. When these rules are understood to be absolute (as discussed in Section 8.2.2.3), we would expect that subjects would object to action regardless of how beneficial its consequences are. These results are found (Baron & Ritov, Reference Baron, Ritov, Bartels, Bauman, Skitka and Medin2009). Thus, one determinant of the usual bias favoring omissions over less harmful acts is the presence of specific rules that are understood to be absolute (or nearly so).
Another determinant concerns causality (Cushman, Reference Cushman2008). We can (loosely) classify judgments of causality into two categories. One category, which includes “but for” causality, may be called “make a difference.” You are (perhaps partially) causally responsible for some outcome if something under your control could have made a difference in whether the outcome occurred or not. This view does not distinguish acts and omissions as such. It is often applied to tort law, especially lawsuits against someone who is supposed to take care to avoid harming others. Utilitarianism implies make-a-difference causality, at least when the options are clearly laid out and both possible.Footnote 11
The other category might be called direct causality. By this view, you are causally responsible for some outcome if there is a chain of events between your action and the outcome, with each link in the chain following some known principle of causality, such as the laws of physics (but any science will do). By this view, people may sometimes be held morally responsible for outcomes that they could not have avoided. (Spranca et al., Reference Spranca, Minsk and Baron1991, report a few instances of this.) Young children tend to consider outcomes only, thus judging that an act is wrong if it causes harm by accident (Piaget, Reference Piaget1932). The apparent bias toward harmful omissions over less harmful acts seems to be closely related to direct causality.
Supporting a role for perceived direct causality, Baron and Ritov (Reference Baron and Ritov1994, Experiment 4) compared the original vaccination case (in which vaccination deaths were side effects) with a “vaccine failure” case, in which the deaths that result if the vaccination is chosen are caused by its failure to prevent the natural disease. The bias against vaccination (action) was much stronger in the original condition than in the vaccine failure condition.
Royzman and Baron (Reference Royzman and Baron2002) compared cases in which an action caused direct harm with those in which an action caused harm as a side effect (i.e., “caused” only in the make-a-difference sense). For example, in one case, a runaway missile is heading for a large commercial airliner. A military commander can prevent collision with the airliner either by interposing a small plane between the missile and the large plane or by asking the large plane to turn, in which case the missile would hit a small plane now behind the large one. The indirect case (the latter) was preferred. In Study 3, subjects compared indirect action, direct action, and omission (i.e., doing nothing to prevent the missile from striking the airliner). Subjects strongly preferred omission to direct action but only weakly preferred omission to indirect action. Baron and Ritov (Reference Baron, Ritov, Bartels, Bauman, Skitka and Medin2009, Study 3) found similar results; they also found that perceived action was the main determinant of bias against action.
Greene et al. (Reference Greene, Cushman, Stewart, Lowenberg, Nystrom and Cohen2009) found that direct causality is a matter of degree. The most resistance to action occurred when a physical effect of action (hands-on pushing a man) caused a death, compared to cases in which the causal link between action and outcome involved more steps.
In sum, it seems that the bias against beneficial action is the result of at least two factors other than default bias: the perception of direct causality, as opposed to make-a-difference causality; and the commitment to particular rules that prohibit certain actions.
All of these studies, it should be noted, are consistent with sometimes extreme individual differences, with some subjects making the utilitarian response almost all the time. These subjects apparently do follow make-a-difference causality. In some experiments, we have found subjects who equate inaction with standing by in the face of evil, as with those German citizens who tolerated Hitler (e.g., Spranca et al., Reference Spranca, Minsk and Baron1991).
Note that some of these studies also ask about “blame” or “responsibility.” The latter term is ambiguous between causal, moral, and legal meanings (Malle, Reference Malle2021). The former is sometimes subsumed under the term “punishment,” which is examined more directly (and less ambiguously) in other experiments (later in this chapter).
8.2.2.3 Protected Values
Some deontological rules are taken to be absolute (Baron & Ritov, Reference Baron, Ritov, Bartels, Bauman, Skitka and Medin2009). Tetlock (e.g., Reference Tetlock2003) has used the term “sacred values” for essentially the same phenomenon, and Roth (Reference Roth2007) has used the term “repugnant transactions” for moral prohibitions on transactions such as a live donor selling a kidney. These protected values (PVs) are thus “protected” from trade-offs with other considerations. Some PVs are based on religion, but many are held by atheists, such as rules against cloning or genetic engineering of humans. In such cases, people say they should not violate the rule (usually a prohibition) no matter how great the benefits are. However, when asked to try hard to think of cases in which the benefits would be great enough, or when given some possible counterexamples, most people admit that the rules are not in fact absolute, so they seem to be absolute only as a result of insufficient reflection (Baron & Leshner, Reference Baron and Leshner2000; Tetlock et al., Reference Tetlock, Mellers and Scoblic2017).
Protected values may function as heuristics that serve the purpose of avoiding further thought about whether some trade-off is warranted (Hanselmann & Tanner, Reference Hanselmann and Tanner2008). Thus, they appear to be nonutilitarian. However, J. S. Mill (Reference Mill1859) argued that we should follow certain moral rules even if it seems clear that the consequences of breaking them in some situation would be better than those of following the rule. Suppression of free speech was an example. The idea here is that our own judgments about expected consequences in such cases are not trustworthy; we are subject to self-serving biases and ordinary error. We do not need to deceive ourselves in order to follow such rules. When asked to join a terrorist cell, a person today might think to himself:
It seems that the cause is just, and that the total harm of the deaths that we would cause would be much smaller than the harm we would prevent by carrying through the plan. But I know that almost all the terrorists throughout history have drawn just this conclusion, and the vast majority of them have been incorrect. Thus, it is probably best if I don’t join.
Note that everything is conscious here. No self-deception is needed.
Thus, in experiments on PVs, it is worth giving subjects ways to express apparent PVs that are actually consistent with utilitarianism. Baron and Leshner (Reference Baron and Leshner2000, Experiment 2) included the following, among other nonexclusive options for responses to possible PV items such as “cutting all the trees in an old-growth forest”:
(1) I cannot imagine any situations in which this is acceptable. (38)
(2) I can imagine situations in which the benefits are great enough to justify this, but these situations do not happen in the real world. (7)
(3) There are situations in the real world in which the benefits are great enough, but people cannot recognize these situations, so it is best never to do this. (9)
(4) This is unacceptable as a general rule, but we should make exceptions to it if we are sure enough. (28)
The percentages of choices are shown in parentheses. In this experiment, only the first response (chosen 38 percent of the time) represented a true PV; the Mill-type responses (2 and 3) were relatively rare, so it seems that apparent PVs are not usually the result of a Mill-type justification but are truly nonutilitarian principles. Note that our claim here is only that PVs exist with sufficient prevalence to matter; both subjects and items differed substantially in the prevalence of true PVs.
8.2.2.4 Parochialism and Self-interest
From a utilitarian perspective – as well as many other perspectives – a major bias in people’s reasoning is parochialism (Baron, Reference Baron, Goodman, Jinks and Woods2012a, Reference Baron2012b; also called “in-group bias”). The technical use of the term refers to a class of experimental social-dilemma games (Bornstein & Ben-Yossef, Reference Bornstein and Ben-Yossef1994). In a social dilemma, each player can help other players in the group at some cost to himself, and the total benefit to the group is greater than the cost. This is called “cooperation.” Examples in the real world are widespread, from doing one’s job without shirking, to following rules (e.g., rules for income taxes) even when there is no chance of getting caught breaking them, to contributing to charities. Parochialism arises when each player’s behavior can affect an in-group and an out-group, and some players are willing to help the in-group at some personal cost while hurting the out-group even more (perhaps as a result of ignoring the out-group).
Consider voting as an example. “Cooperation” means voting for the candidate or proposition that is best for those who are relevant to your vote, which could be you and your family, your compatriots, or everyone in the world. Defection in this example is not voting. Voting has a cost. It is well known, but not well understood, that the probability of being the pivotal (decisive) voter is so low that, even if you gain a large amount of money from your side winning, the expected return of voting is, like that of a lottery ticket, not worth the cost.
However, if you care enough about other people, taking their utilities as part of your own, with some weight for each other person, then voting can be worth the cost (Edlin et al., Reference Edlin, Gelman and Kaplan2007). Given this mathematical fact, a situation could arise in which it is not worth voting if all you care about is yourself, not quite worth voting if you care about your nation, but well worth voting if you care about humanity. If you are rational, you would then vote for candidates or proposals that are best for humanity. Otherwise, voting is not worth the cost. The same applies to many other forms of political action. Possible current examples of policies that affect the world are climate change, refugees, migration, population pressure on natural resources, fisheries, biodiversity, world peace, world trade, and the strength of international institutions that attempt to regulate these matters. Nationalist policies often work to the detriment of adequate attention to these issues. In its general form, parochialism is a candidate for the most harmful departure from utilitarian decision making.
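The arithmetic behind this point can be sketched in stylized form (the symbols here are illustrative, not Edlin et al.'s exact notation): with N affected people, the probability of casting the pivotal vote is roughly proportional to 1/N, so a purely self-interested expected benefit vanishes as N grows, while an altruistic one does not, because the aggregate benefit scales with N:

```latex
p_{\text{pivotal}} \approx \frac{c}{N}, \qquad
EV_{\text{selfish}} \approx \frac{c}{N}\,B \xrightarrow[N \to \infty]{} 0, \qquad
EV_{\text{altruistic}} \approx \frac{c}{N}\,\bigl(\alpha N \bar{B}\bigr) = c\,\alpha\,\bar{B}.
```

Here $B$ is the voter's own benefit, $\bar{B}$ the average benefit to each affected person, and $\alpha$ the weight given to each other person's utility. Because the $N$s cancel, the altruistic expected value stays bounded away from zero and can exceed the cost of voting even in very large electorates.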
“Cosmopolitanism” is sometimes used as a technical term for the attitude of caring about the world. Although this attitude sounds as idealistic and fanciful as the John Lennon song “Imagine,” in fact it is fairly common in the modern world (Buchan et al., Reference Buchan, Brewer, Grimalda, Wilson, Fatas and Foddy2011; Buchan et al., Reference Buchan, Grimalda, Wilson, Brewer, Fatas and Foddy2009). Arguably, it could arise as a result of reflection (Singer, Reference Singer1982). What principle can justify caring about some people but ignoring others? Answers can be offered, but when we reflect on them (without bias toward inherited opinions) they may seem weak. Surely this sort of reasoning was part of what has led people to oppose slavery and to promote women’s political and legal rights. The absence of it allows parochialism to exist.
Other sorts of reasoning lead to parochialism (Baron, Reference Baron, Goodman, Jinks and Woods2012a, Reference Baron2012b). People think they have a duty to support their nation because their nation has given them the vote, or in return for what their nation has done for them. (Of course, most nations do not tell their citizens, even naturalized citizens, that this is expected, and it is well known that some voters, especially in a nation of immigrants, are concerned with particular foreign countries to which they are tied in some way.)
8.2.3 Attending to Irrelevant Attributes or Ignoring Relevant Ones
Kahneman and Frederick (Reference Kahneman, Frederick, Gilovich, Griffin and Kahneman2002) proposed that many biases can be explained in terms of “attribute substitution.” Two options differ in terms of two or more attributes. Some attributes are normatively relevant and some are not, but the latter are easier to use and typically correlated (imperfectly) with the relevant ones. So people use the irrelevant ones and sometimes ignore the relevant ones completely.
8.2.3.1 Allocation
A great deal of research has examined how people think they should allocate benefits and burdens. Allocations can be local, such as the distribution of grades in a class or housework among those living together. But I focus here on policy. These issues include income, wealth, taxes, criminal penalties, tort fines, insurance, and compensation. Much of this research has examined the principles that people use for allocation decisions (e.g., Deutsch, Reference Deutsch1975). These include equality (everyone gets the same); contribution (to each according to their contribution, also called “equity”); need (to each according to need); and maximization (e.g., maximization of total wealth – economic efficiency – or of total utility). But punishment also raises questions about distribution. What principle should determine criminal or tort penalties? The same goes for compensation for misfortune, whether at the hands of nature or of someone else’s harmful act; compensation is provided by insurance, social insurance, or tort penalties.
Utilitarian theory implies that distributions of goods (e.g., of income or wealth) should be based on maximization of utility, but this principle brings in two other considerations: declining marginal utility of most goods, and incentive effects. A given amount of money has more utility to the poor than to the rich (i.e., the marginal utility of money declines: the slope of the curve relating utility to money decreases as the amount of money increases). Hence, other things being equal, utility would be maximized if we took from the rich and gave to the poor until everyone is equal. However, this would prevent the use of income as an incentive for work (and has “transaction costs” of its own). Hence, maximization requires a compromise between equality and contribution. Such a principle is useless for psychology experiments. Even if it were possible to calculate the optimum trade-off, ordinary people would have no way of knowing the result. However, experiments can show deviations from any such model, even nonutilitarian models that incorporate similar assumptions. Such deviations can be explained in terms of simple heuristics such as equality, or demonstrated by framing effects, such as those described earlier.
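A minimal numerical illustration, assuming logarithmic utility purely for concreteness (nothing in the argument depends on this particular function):

```latex
u(w) = \ln w \;\Rightarrow\; u'(w) = \tfrac{1}{w}, \qquad
u'(\$10{,}000) = 10^{-4} \;>\; u'(\$100{,}000) = 10^{-5},
```

so a dollar transferred from the richer to the poorer person raises total utility, and it continues to do so until incomes are equal – ignoring, as the text notes, incentive effects and transaction costs.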
People sometimes prefer equality over maximization even when lives, rather than money, are at stake.Footnote 12 Several studies (e.g., Ubel & Loewenstein, Reference Ubel and Loewenstein1996) have presented subjects with allocation dilemmas of the following sort: Two groups of 100 people each are waiting for liver transplants. Members of group A have a 70 percent chance of survival after transplantation, and members of group B have a 30 percent chance. How should 100 livers – the total supply – be allocated between the two groups? The simple utilitarian answer is “all to group A,” but only a minority of the subjects chose this allocation. People want to give some livers to group B, even if fewer than half. Many want to give half. Many people are willing to trade lives for fairness to the two named groups. Yet there is surely some third group that is not in the scheme at all, so inequality is inevitable. (Such results are found even when group membership is unknown to anyone: Baron, Reference Baron1995.)
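The utilitarian calculation behind “all to group A” is simple expected-value arithmetic on the numbers given:

```latex
\text{All to A: } 100 \times 0.70 = 70 \text{ expected survivors}; \qquad
\text{Half to each: } 50 \times 0.70 + 50 \times 0.30 = 50 \text{ expected survivors}.
```

The equal split thus sacrifices 20 expected lives for equal treatment of the two named groups.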
8.2.3.2 Compensation and Deterrence in Tort Law and Criminal Law
Compensation is justified by declining marginal utility. If you have a house fire that requires construction work, or an illness that requires expensive treatment, your utility for money increases. You have an immediate need for more of it. Insurance, including medical insurance and social insurance (such as unemployment compensation), is a scheme for transferring money from those who have a lower utility for money, those who pay insurance premiums or taxes, to those who have a higher utility for it. Like progressive taxes, compensation should be limited when limiting it can provide an incentive for reducing risks. For example, fire insurance could require installation of fire extinguishers. Health insurance may cost more for smokers, but this is consistent with utilitarianism only if this incentive effect actually causes people not to smoke.
Tort penalties and criminal penalties are justified by incentive effects, that is, by the principle of deterrence. If you know that you are likely to be punished or fined for some behavior (including omissions, in some situations), then you are less likely to engage in that behavior. Penalties “send a message” to the person penalized and to others: “Don’t do this.”Footnote 13
Experiments (e.g., Baron & Ritov, Reference Baron and Ritov1993) are hardly needed to demonstrate that these principles are not followed in the real world. Many people still advocate health insurance in which people pay premiums according to their individual “risk” even when that risk is beyond the individual’s control, hence not subject to incentive effects. (This practice is partially banned in the United States.) Compensation is often provided to relatives for “wrongful death” (but not for other deaths), even when the death in question reduces the utility of money for them. And tort penalties are often levied even when the incentive effect leads to more harmful behavior (e.g., a lawsuit over the rare side effects of a beneficial vaccine causes the company to withdraw the vaccine entirely; see Baron & Ritov, Reference Baron and Ritov1993).
Likewise, criminal punishments are often inconsistent with the principle of deterrence (Carlsmith et al., Reference Carlsmith, Darley and Robinson2002). Preferences for penalties are based more on the heinousness of the offense than on factors that should affect deterrence. For example, by utilitarian theory, the severity of punishment should be higher when the probability of detection is lower; this way, potential offenders are risking a larger loss in the unlikely event that they get caught. But probability of detection plays little role. Littering is lightly penalized but rarely detected.
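The deterrence principle invoked here can be stated as a simple expected-penalty relation: what deters is the product of the probability of detection and the severity of the penalty, so holding deterrence constant requires severity to rise as detection probability falls:

```latex
\text{expected penalty} = p_{\text{detect}} \times S;
\qquad \text{constant deterrence} \;\Rightarrow\; S \propto \frac{1}{p_{\text{detect}}}.
```

By this logic, a rarely detected offense such as littering should carry a relatively severe penalty – contrary both to actual practice and to subjects’ preferences.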
8.2.4 Comparison of Moral Judgments to Consequence Judgments
In some cases, such as the vaccination case described, the utilitarian answer is fairly clear. When the answer cannot be specified, a simple alternative approach for experimenters is to ask the subject which option, on the whole, has the best overall consequences for everyone affected. When the subject gives one answer to that question and a different answer to the question of what should be done, then we have pretty good evidence that the subject is giving a nonutilitarian answer, and we can go on to explore the reasons for this discrepancy.
Baron et al. (Reference Baron, Ritov and Greene2013) asked subjects what was best for their nation (or national group, in the case of Arab and Jewish Israelis), what was best on the whole, what was best for the other group (in Israel), and what their moral duty was. Many subjects thought it was their duty to go against their own judgment of what was best on the whole, in the direction of parochialism (in-group bias), and they indicated that they would do their duty in a real vote.
Baron and Jurney (Reference Baron1993) asked subjects if they would vote for various reforms. In one experiment, 39 percent of the subjects said they would vote for a large tax on gasoline (to reduce global warming). Of those who would vote against the tax, 48 percent thought that on the whole it would do more good than harm; this group was thus admitting that they were not following their own perception of overall consequences. Of those subjects who would vote against the tax despite judging that it would do more good than harm, 85 percent cited the unfairness of the tax as a reason for voting against it (for instance, the burden would fall more heavily on people who drive a lot), and 75 percent cited the fact that the tax would harm some people (e.g., drivers). The latter subjects were apparently unwilling to harm some people in order to help others, even when they saw the benefit as greater than the harm. This effect may be related to “omission bias.” Unlike other results summarized here, the principle in question is nonutilitarian but is endorsed by other moral theories. Yet, its application in the real world can make things worse.
8.2.5 Isolation Effects
In “isolation” effects, people attend only (or primarily) to data or issues immediately before them (Camerer, Reference Camerer, Kahneman and Tversky2000; Kahneman & Lovallo, Reference Kahneman and Lovallo1993; Read et al., Reference Read, Loewenstein and Rabin1999). These effects are related to, or identical to, what others have called a focusing effect (Idson et al., Reference Idson, Chugh, Bereby-Meyer, Moran, Grosskopf and Bazerman2004; Jones et al., Reference Jones, Frisch, Yurok and Kim1998; Legrenzi et al., Reference Legrenzi, Girotto and Johnson-Laird1993). People know about indirect effects but do not consider them, or do not consider them enough. The idea came from the theory of mental models in reasoning (Legrenzi et al., Reference Legrenzi, Girotto and Johnson-Laird1993): People reason from mental models, and when possible they use a single, simple, model that represents just the information they are given. Other factors are ignored or underused.
McCaffery and Baron (Reference McCaffery and Baron2006) found apparent isolation effects in evaluation of taxes and other policies. For example, people prefer “hidden” taxes, such as a tax on corporations, without thinking about where the money comes from (employees, consumers, stockholders). If people are asked who actually pays, they realize that such taxes are not “free.” Caplan (Reference Caplan2007) reports similar effects for policies such as rent control, which have an immediate desirable effect on prices but an undesirable secondary effect on the supply of housing. Often people seem to evaluate policies (such as long prison sentences) in terms of their intended effects, even if those are not their main effects. These evaluations, working through the political system, affect real policies.
8.3 Moral Rules and Intuitions
Many demonstrations of nonutilitarian biases, or their cognitive bias cognates, seem to result from intuitive responses rather than any sort of reflection. At issue is whether these biases would be reduced if people engaged in more thinking, or more thinking of a certain sort.
Hare (Reference Hare1981; see www.utilitarianism.net/ for additional citations) proposed a related account. In defending utilitarianism, he proposed a two-level theory of moral thinking, with an intuitive and “critical” level. The critical level, which is a normative model in the sense discussed earlier, is utilitarian and is rarely approximated in human thinking but also rarely needed. Optimal decisions at this level are what would result if the decision maker could sympathetically represent to herself the preferences of all those affected and reach a decision as if the conflicts among their preferences were conflicts among her own preferences. At this level the distinction between case-by-case decisions and moral principles almost disappears, since each case specifies the decision for other cases that are similar in morally relevant ways and becomes a principle for just those cases (however few or many there may be). The principles and decisions accepted through such idealized reflection are those that would be rationally accepted by anyone, even if that person in real life would lose from application of the principle. The principles are thus universal, but each principle (unlike heuristics or intuitive rules) need not be simple. It includes all morally relevant features of a given case (those that could in principle affect the choice).
Hare argues that the term “moral” implies such universal agreement (an idea he attributes to Kant). Roughly, the idea is that we would balk at calling a principle “moral” if I applied it when you were in one position (e.g., the loser) and I was in another (the winner) but would not apply it if we switched positions (including with each position all its relevant features). This idea is embodied in the “veil of ignorance” (Rawls, Reference Rawls1971), which is a possible prescriptive intervention (Huang et al., Reference Huang, Bernhard, Barak-Corren, Bazerman and Greene2021).
Hare’s intuitive level, as I noted earlier, consists of intuitions that could be prescriptive principles worth following, or at least worth considering. But it also includes intuitions that may be the cause of harmful biases, such as “do no harm,” if it is understood as referring to actions but not omissions.
8.3.1 Intuitions and Dual Systems
Many approaches to reasoning have relied on the idea of dual systems, intuitive and reflective, with at least the intuitive level being similar to Hare’s. That system, by various accounts, is automatic (uncontrollable but also free of demands on cognitive resources), driven by emotion (or affect), and based on associations rather than rules. The reflective system is controllable and requires some effort. Because it is controllable, it may or may not become active after the intuitive system has begun its work. In principle, if the subject knows that reasoning is required, the reflective system could begin right away. Kahneman (Reference Kahneman2011) argues that a corrective version of this theory, in which reflection begins after results of intuition are available and can function to correct the intuition, is relevant to a variety of tasks studied in the heuristics-and-biases tradition. The corrective theory has also been proposed as an account of moral judgment by Joshua Greene (e.g., Reference Greene and Sinnott-Armstrong2008, p. 44, although elsewhere Greene is less specific about the ordering of events in time).
Several lines of evidence seem to support the dual system theory for moral judgment. First, response times (RTs) for “personal” dilemmas, those that involve direct killing, such as the footbridge dilemma, are longer, especially when subjects choose the utilitarian option.
A common finding in choice tasks is that RT is longer when the response is rarely made or when the options are similar in attractiveness (hence conflicting). These factors alone can explain the RT differences that have been found. Note that the corrective model implies that RT is longer for utilitarian than for deontological responses when their probability is equal (which is also where the two responses are maximally conflicting). Baron and Gürçay (Reference Gürçay and Baron2017), in a meta-analysis of 26 experiments, estimated this RT for each response by assuming that each subject had an “ability” to make utilitarian responses, and each dilemma had a “difficulty” for making that response. (Thus, the footbridge problem is more “difficult” than the simple trolley case.) The two choices would be equally likely when ability was equal to difficulty, according to our measures. A plot of RT for each choice as a function of ability minus difficulty indeed showed the longest RT when this difference was zero, but, at this point, the utilitarian responses took no longer than the deontological responses. Rosas et al. (Reference Rosas, Bermúdez and Aguilar-Pardo2019) also found that RTs were determined mainly by conflict. These results are inconsistent with any form of the corrective model.
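The ability–difficulty analysis just described can be summarized with a Rasch-style formulation (a reconstruction for illustration; the notation, with ability a_s and difficulty b_d, is mine rather than the authors’):

```latex
% Probability that subject s gives the utilitarian response to dilemma d,
% where a_s is the subject's "ability" and b_d is the dilemma's "difficulty":
P(\text{utilitarian} \mid s, d) = \frac{1}{1 + e^{-(a_s - b_d)}}
```

The two responses are equally likely (P = 0.5) exactly when a_s = b_d, the point of maximal conflict; the corrective model predicts that utilitarian responses should be slower than deontological responses at that point, which is what the meta-analysis failed to find.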
Baron and Gürçay (Reference Gürçay and Baron2017) also noted that subjects who made more utilitarian responses had longer RTs on everything, a result consistent with the claim that reflection-impulsivity is correlated with utilitarian responding. Why this happens may depend on developmental processes that have already occurred before the experiment. For example, people who are generally reflective may come to favor utilitarian solutions over time.Footnote 14
Other results concern the effects of time pressure or cognitive load, which, in some studies, seem to affect utilitarian responses but not deontological responses. A general problem with these studies is that the effects vary for different dilemmas, not only in magnitude but also in direction (as also found by Gürçay & Baron, Reference Gürçay and Baron2017, despite finding no overall effect of time pressure vs. instructions to reflect). For example, to deal with load or time pressure, subjects may skip or skim the less salient parts of the printed description, and those may vary with how the dilemma is described. Researchers should at least test effects in ways that take into account the variance across dilemmas as well as across subjects, and should try different ways of ordering the information in the dilemma; most researchers have not done this (for an exception, see Patil et al., Reference Patil, Zucchelli, Kool, Campbell, Fornasier, Calò and Cushman2021).
These sorts of results concerning time pressure and cognitive load have been difficult to replicate (e.g., Bago & De Neys, Reference Bago and De Neys2019). Rosas and Aguilar-Pardo (Reference Rosas and Aguilar-Pardo2020) found that utilitarian responses can occur under extreme time pressure. Moreover, studies that track the position of the pointer (mouse) during experiments with moral dilemmas do not show any tendency to switch from utilitarian to deontological responses during the time (usually 10–20 seconds) that the subject deliberates (Gürçay & Baron, Reference Gürçay and Baron2017; Koop, Reference Koop2013).
In sum, the most plausible account is that, when presented with dilemmas that pit utilitarian and deontological responses against each other, people are aware of the conflict as soon as they understand the dilemma. Yet more reflective people, for as yet unknown reasons, are more likely to favor the utilitarian resolution of the conflict. This kind of account in terms of individual differences in reflection is not far from Greene’s two-system account, but it does not assume any sequential effects involving suppression of an early response by a later one, and it is thus consistent with the known results, and with versions of dual system theory that assume that the systems work in parallel rather than sequentially (e.g., Sloman, Reference Sloman1996). It is clear by any account that people differ in some sort of reflectiveness, and these differences are related to differences in responses to at least some moral dilemmas (Patil et al., Reference Patil, Zucchelli, Kool, Campbell, Fornasier, Calò and Cushman2021).
8.4 Future Directions
Much remains to be learned about moral reasoning. The reader who has gotten this far will not be surprised that I think this topic should be part of cognitive psychology, which has been studying reasoning more generally for well over a century. Many of the methods of psychology remain to be fully applied to moral reasoning. But moral reasoning is important for practical purposes too. It is tied up with politics and public policy.
Political judgments of citizens are often moral judgments (see Chapter 22, this volume). These merit special attention because the actions or omissions of citizens affect other citizens and noncitizens at home and abroad. Many of the world’s problems, within and among nations, can be traced to policies approved by citizens. The utilitarian argument I made earlier applies here. If citizens collectively follow nonutilitarian moral intuitions, then we should not be surprised if the final results they influence are deficient, for all those affected, everywhere.
Differences in thinking about politics arise in individual development (as studied by Adelson, Kohlberg, and many others; see, for example, Adelson, Reference Adelson1971, and Kohlberg, Reference Kohlberg1963) and in cultural evolution. Hallpike’s (Reference Hallpike2004) analysis, which is analogous to that of Kohlberg, suggests that something like developmental stages occurred over the course of cultural evolution, with the earlier stages still present. Early peoples, those who still live as they did, and young children do not distinguish among morality, laws and social conventions, and etiquette; these are just “the way we do things.” With the growth of cities and writing, codified laws came to exist and were soon “written in stone” or in parts of what is now the Old Testament. Similar developments may occur in early adolescence (depending on culture; see Haidt et al., Reference Haidt, Koller and Dias1993). The development of a concept of morality, independent of and outside of laws and conventions, came relatively recently in human history, possibly only a few thousand years ago, after writing became generally used. The concept of morality, like other concepts such as “science,” is not fully developed in human cultures. And the developments so far are not well understood by many people. For most who make the distinction between morality and convention now, it comes in adolescence. The existence of a concept of morality raises the possibility of rational thought about what it should be.
It is apparent that culture has a large effect not only on moral beliefs but also on how (or whether) people reason about them, or about other issues such as politics (Baron et al., Reference Baron, Isler, Yilmaz, Ottati and Stern2023). A question of interest is how cultural traditions persist over generations and over historical time (even within generations) and how they change. Attitudes toward homosexuality, for example, have changed enormously in the last 50 years, in some countries. And it is clear that there are cultural influences on beliefs about what good thinking is. One way to study cultural change over time is to examine written documents, both for their content and for the type of reasoning they exhibit. Some of this sort of work has been done (Suedfeld, Reference Suedfeld1985; Suedfeld et al., Reference Suedfeld, Guttieri, Tetlock and Post2003), but it has been confined to documented records of groups, such as legislators who make speeches, that are not representative of any larger cultural tradition.
It is clear that education can be designed to encourage rational thinking (e.g., Baron, Reference Baron1993). Liberal education at the university level is often explicit in its attempts to encourage questioning, consideration of diverse views, and understanding of the nature of expert knowledge. Many secondary schools do this too (e.g., Metz et al., Reference Metz, Baelen and Yu2020).
Several efforts have been made to teach moral thinking in a way that views it as a type of thinking rather than a set of rules. Kohlberg, in particular, encouraged widespread experimentation with moral discussion in high schools around the world (Snarey & Samuelson, Reference Snarey, Samuelson, Nucci and Narvaez2008). Much of this work disappeared with Kohlberg’s death and with claims that his ideas were biased against women (claims that were consistently shown to be unfounded, as Snarey and Samuelson point out).
Education is one important domain where people’s thinking can be influenced. Others, probably to a lesser extent, are journalism and politics itself. Ultimately, individuals and cultures change from a variety of influences, and we cannot expect applied research on one domain or another to provide the key. Change is slow, but the world would benefit if people’s moral thinking became more rational.
A central task for moral psychology is articulating the structure of the moral categories to which individuals, human or otherwise, are assigned in the context of everyday moral reasoning. Of these categories, two stand out as especially foundational: moral agents and moral patients (Gray & Wegner, Reference Gray and Wegner2009; Schein & Gray, Reference Schein and Gray2018). Moral agents can commit moral wrongs and be held morally responsible for their actions; moral patients can be morally wronged and their interests given moral consideration.Footnote 1 What research on moral categorization aims to understand, broadly speaking, is the basis on which individuals are assigned to these categories. How do we go about determining whether an individual has moral agency or moral patiency? Addressing this question leads directly to the study of mind perception, that is, the attribution of mental capacities and traits (Epley & Waytz, Reference Epley, Waytz, Fiske, Gilbert and Lindzey2010; Gray et al., Reference Gray, Young and Waytz2012). The link between moral categorization and mind perception is not surprising, given the extent to which social cognition in general, and moral cognition in particular, involves the representation of other minds. As we will see in this chapter, however, the interplay between attributions of moral status and attributions of mindedness is multifaceted and complex. Attaining a proper understanding of the connection between these types of attribution may require moving beyond standard models of mind perception, which tend to focus either on the representation of mental capacities or the representation of mental traits (i.e., mental capacities that are regularly exercised), to a hybrid model that encompasses both capacities and traits. (For clarification of the distinction between capacities and traits, see Section 9.1.1.)
The structure of the chapter is as follows. Section 9.1 is an overview of research on mind perception, focusing on alternative accounts of the attribution of mental capacities and traits. Sections 9.2 and 9.3 address the role of mind perception in the categorization of individuals as moral patients and moral agents, focusing on how the attribution of mental states and capacities influences the attribution of moral status. Section 9.4 concludes the chapter with a brief discussion of how empirical study of the causal nexus between mind perception and moral categorization bears on philosophy, law, and research on artificial intelligence (AI).
9.1 Mind Perception
For social beings like us, few distinctions are more basic than the distinction between things that have minds (e.g., people, pets) and things that do not (e.g., pebbles, biscuits). This makes sense, given that the ability to think about minds plays an essential role in social cognition, enabling us to predict, understand, and influence the behavior of others (Waytz et al., Reference Waytz, Gray, Epley and Wegner2010). No less important than determining whether something has a mind, however, is determining what kind of mind it has. The idea here is that people tend to think of psychological capacities and traits as clustering together in a particular way and that the pattern of clustering reveals the intuitive ontology of the mental, or what Gray et al. (Reference Gray, Gray and Wegner2007) call the “dimensions of mind perception.” Theories of mind perception vary with respect to the number of dimensions they posit and the mental features that lie on those dimensions.
9.1.1 Two-Dimensional Models
The idea that we perceive minds in multiple dimensions has deep philosophical roots. In discussions of the metaphysics of consciousness, for example, it has long been assumed that some mental states (the phenomenal type) are intrinsically linked to conscious experience, whereas other mental states (the intentional type) are not (Block, Reference Block1995; Chalmers, Reference Chalmers1995; Nagel, Reference Nagel1974). The typological distinction between phenomenal and intentional states figured prominently in later work on consciousness that turned away from metaphysics proper toward folk metaphysics. It was argued, for example, that the psychological origin of the “hard problem” of explaining how brain activity gives rise to conscious experience could be traced to features of our cognitive architecture that make it difficult for us to think about the phenomenal mind in mechanistic terms, at least in an intuitively satisfying way (Arico et al., Reference Arico, Fiala, Goldberg and Nichols2011; Robbins & Jack, Reference Robbins and Jack2006). Early work in the experimental philosophy of consciousness also validated a basic distinction in folk metaphysics between phenomenal and intentional aspects of mind – a precursor to the idea that mind perception operates in two dimensions, only one of which is tied to conscious experience (Knobe & Prinz, Reference Knobe and Prinz2008).
In psychology, the basic premise of two-dimensional accounts of mind perception is that people intuitively think of mental capacities and traits as belonging to one of two categories or clusters. Some of these accounts focus primarily on the representation of mental traits, whereas other accounts focus primarily on the representation of mental capacities. On this point it is important to note that, though the terms capacity and trait are sometimes used interchangeably, they are not synonymous. To illustrate the distinction with an example: Having the capacity for cooperation entails having the potential to interact with others in a cooperative way (Vetter, Reference Vetter2013), whereas having the trait of cooperativeness entails possessing the capacity for cooperation and exercising that capacity on a regular basis (Vollmer, Reference Vollmer1993). In general, traits are grounded in, but not identical to, behavioral capacities. Cooperativeness, for example, is grounded in the capacity to engage in a certain kind of prosocial behavior (i.e., cooperation). Since it is possible to have a capacity without regularly exercising it, however, mere possession of a capacity does not entail possession of traits constitutively linked to that capacity. Thus, the ascription conditions for mental traits (like cooperativeness) are more restrictive than the ascription conditions for the mental capacities in which those traits are grounded (like cooperation).
A variety of trait-based models of mind perception have been advanced by social psychologists. These models originate in research on person perception rather than research on mind perception, which tends to focus on mental capacities and exclude mental traits – even though mental traits are at least as constitutive of the mind as mental capacities are, and arguably more so. The warmth–competence model distinguishes between traits like friendliness, helpfulness, and trustworthiness and traits like intelligence, creativity, and efficacy (Fiske et al., Reference Fiske, Cuddy and Glick2006; Fiske et al., Reference Fiske, Cuddy, Glick and Xu2002). The agency–communion model exhibits a similar dichotomy, with traits like ambition, dominance, and independence on the agency side of the ledger and traits like cooperativeness, trustworthiness, and nurturance on the communion side (Abele & Wojciszke, Reference Abele and Wojciszke2007). Interestingly, research on these two models suggests that the warmth/communion dimension has separable moral and nonmoral components and that traits associated with the moral component, like trustworthiness, play a more important role in person perception than traits associated with the nonmoral component, like friendliness (Brambilla & Leach, Reference Brambilla and Leach2014; Goodwin et al., Reference Goodwin, Piazza and Rozin2014). In the dehumanization literature, the human nature–human uniqueness model distinguishes between universal (human nature) traits like emotionality, interpersonal warmth, and curiosity and culturally variable (human uniqueness) traits like civility, refinement, and moral sensibility (Haslam, Reference Haslam2006; Haslam & Loughnan, Reference Haslam and Loughnan2014; see also Chapter 14 in this volume).
By contrast with accounts that emphasize the representation of mental traits, the dominant account of mind perception in moral psychology, the experience–agency model, focuses on the representation of mental capacities (Gray et al., Reference Gray, Gray and Wegner2007). In a landmark study, participants rated a cast of 13 characters (e.g., a normal adult, a child, a dog, a robot) pairwise on one of 18 mental capacities. On each trial, participants were shown pictures of two of the characters, each picture accompanied by a brief verbal description, and asked to indicate on a five-point scale which of the two characters was more likely to have that capacity. These comparative ratings were then aggregated across trials to determine a mean relative rating for each character, and the process was repeated for each capacity. A principal components analysis of correlations between ratings of different capacities across characters suggested a divide between two dimensions of mental life: experience (e.g., pain, desire, and joy) and agency (e.g., self-control, memory, and planning). Of the two components, experience was primary, including most of the capacities (> 60 percent) and accounting for most of the variance in the data (88 percent). Though the two dimensions were strongly correlated (r = 0.90) (Piazza et al., Reference Piazza, Landy and Goodwin2014), most characters scored higher on one dimension than the other, and in some cases the difference was dramatic – for example, the robot and God characters were attributed a lot of agency but relatively little experience, whereas the infant and the dog characters were attributed a lot of experience but relatively little agency. The correlation between agency and experience was almost perfect (r = 0.97), however, when nonnatural and atypical characters (e.g., God, a robot, a dead person, a fetus) were excluded (Piazza et al., Reference Piazza, Landy and Goodwin2014). 
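The analytic pipeline described above – aggregating pairwise comparisons into mean relative ratings and then running a principal components analysis on the correlations between capacities across characters – can be sketched as follows. This is an illustrative reconstruction with random data, not Gray et al.’s dataset or code; the component labels (“experience,” “agency”) come from interpreting the resulting loadings, and all variable names here are my own.

```python
import numpy as np

rng = np.random.default_rng(0)

n_characters, n_capacities = 13, 18
# Mean relative rating of each character on each capacity, assumed to be
# already aggregated from the pairwise comparisons (illustrative random data).
ratings = rng.normal(size=(n_characters, n_capacities))

# Correlations between capacities, computed across characters.
corr = np.corrcoef(ratings, rowvar=False)          # shape (18, 18)

# Principal components analysis = eigendecomposition of the correlation matrix.
eigvals, eigvecs = np.linalg.eigh(corr)            # eigh returns ascending order
order = np.argsort(eigvals)[::-1]                  # largest components first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Loadings of each capacity on the first two components; assign each capacity
# to the component on which it loads more strongly (Gray et al.'s procedure).
loadings = eigvecs[:, :2] * np.sqrt(eigvals[:2])
assignment = np.argmax(np.abs(loadings), axis=1)   # 0 or 1 per capacity

variance_explained = eigvals[:2] / eigvals.sum()
```

With real ratings data, the capacity-to-component assignments and each component’s share of variance would correspond to the experience–agency split and variance figures reported above.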
Consistent with this finding, factor analysis of data from a later study of mind perception with a large set of animal targets revealed a single factor accounting for 29–48 percent of the variance (Bastian, Loughnan, et al., Reference Bastian, Costello, Loughnan and Hodson2012), suggesting that a one-dimensional model of mind perception may be sufficient for some natural, ordinary entities (e.g., wild and domestic animals).
The experience–agency framework, at least as characterized by Gray et al. (Reference Gray, Gray and Wegner2007), has conceptual shortcomings. First, the range of capacities specified for each dimension is relatively narrow. It leaves out a host of capacities that might also feature in the intuitive ontology of the mental, such as perception, reasoning, self-awareness, and empathy (Malle, Reference Malle, Goel, Seifert and Freska2019; Weisman et al., Reference Weisman, Dweck and Markman2017, Reference Weisman, Legare, Smith, Dzokoto, Aulino, Ng, Dulin, Ross-Zehnder, Brahinsky and Luhrmann2021). Such omissions are especially troublesome insofar as some of these capacities (e.g., perception, self-awareness, and empathy) are not obviously more experiential than agentic, or vice versa. Second, some capacities in Gray et al.’s two-dimensional space (thought, morality) are specified at such a high level of generality that it is difficult to determine which psychological functions they encompass (Malle, Reference Malle, Goel, Seifert and Freska2019). Indeed, one of them (personality) is so abstract and multifaceted that it does not seem like a capacity at all.
The experience–agency framework is a powerful tool for mapping the intuitive ontology of the mind, yet Gray et al.’s (Reference Gray, Gray and Wegner2007) empirical findings support it only up to a point. First, the observation that participants rate characters similarly on pain and fear, say, does not show that they think of pain and fear as related at the level of basic ontology; it might mean simply that they think of characters as similar in terms of the possession of these capacities (Weisman et al., Reference Weisman, Dweck and Markman2017). Indeed, though Weisman et al. (Reference Weisman, Dweck and Markman2017) replicated Gray et al.’s (Reference Gray, Gray and Wegner2007) findings using the same design with a larger set of mental capacities, they found that when participants rated the capacities of characters individually rather than pairwise, the ratings did not pattern as predicted by the experience–agency model. For this reason, drawing conclusions about the intuitive ontology of mind from perceived patterns of resemblance among different entities is a risky business. A second issue concerns the method used by Gray and colleagues to partition the set of capacities, associating each capacity with the factor it loaded more strongly on. For the most part, this procedure was unproblematic, since 5 of the 18 capacities (hunger, fear, pain, pleasure, and rage) loaded much more strongly on experience than agency, and 5 capacities (self-control, morality, memory, emotion recognition, and planning) loaded much more strongly on agency than experience. The remaining 8 capacities (desire, personality, consciousness, pride, embarrassment, joy, communication, and thought), however, loaded about equally on both factors, suggesting that many of the capacities surveyed do not fit neatly into either category (Malle, Reference Malle, Goel, Seifert and Freska2019).
That said, empirical support for the experience–agency model is not limited to Gray et al.’s (Reference Gray, Gray and Wegner2007) study, which was replicated by Weisman and colleagues (Reference Weisman, Dweck and Markman2017) using an expanded set of capacities. Another source of evidence for the model comes from studies of how people rate the naturalness of different kinds of mental state ascriptions to group entities, such as corporations (Knobe & Prinz, Reference Knobe and Prinz2008). In one such study, participants were asked to rate the naturalness of sentences ascribing mental states to a fictional entity called the Acme Corporation, some of which ascribed agentic states (e.g., “Acme Corp. has just decided to adopt a new marketing plan”) while others ascribed experiential states (e.g., “Acme Corp. is feeling excruciating pain”). Analysis of the responses showed that ascriptions of agentic states to the corporate group entity were rated more natural than ascriptions of experiential states. In another study, participants were randomly assigned to either the “feeling” condition, in which explicitly experiential states were ascribed (e.g., “Acme Corp. is feeling upset”), or the “no-feeling” condition, in which the ascriptions were not explicitly experiential (e.g., “Acme Corp. is upset about the court’s recent ruling”). Here, naturalness ratings of the “no-feeling” ascriptions were higher than ratings of the “feeling” ascriptions. A similar asymmetry has been observed in the case of ascription of mental states to robots (Huebner, Reference Huebner2010; Sytsma & Machery, Reference Sytsma and Machery2010). In both cases the asymmetry may have arisen because the target of ascription was physically constituted differently than conscious beings typically are, that is, either not in a single biological body (corporations) or not biologically at all (robots). This suggestion – what Phelan et al. (Reference Phelan, Arico and Nichols2013) call the discontinuity hypothesis – dovetails with the fact that participants in Gray and colleagues’ (Reference Gray, Gray and Wegner2007) study seemed to think of the mind of God, an immaterial entity lacking any sort of physical instantiation, as much richer in agentic capacities than experiential ones. The latter finding, however, is somewhat at odds with the literature on anthropomorphism, which suggests that people tend to think of God as rich in both agency and experience (Barrett & Keil, Reference Barrett and Keil1996). In one study, for example, a majority of participants attributed to God a wide range of both agentic and experiential capacities, including emotions (e.g., happiness, sadness, and worry), with only a few capacities (e.g., smell, taste, pain) attributed by a minority of participants (< 30 percent) (Shtulman & Lindeman, Reference Shtulman and Lindeman2016, Study 1).
What’s more, support for the discontinuity hypothesis is limited, especially as it pertains to the attribution of mental capacities to groups. In Knobe and Prinz’s second study, for example, the two types of mental state ascriptions rated by participants differed in grammatical complexity: ascriptions in the “no-feeling” condition included a prepositional phrase, but ascriptions in the “feeling” condition did not. As a result, the effect of the manipulation may have been due to an experimental artifact (Arico, Reference Arico2010; Sytsma & Machery, Reference Sytsma and Machery2009). Further, studies by Phelan and colleagues (Reference Phelan, Arico and Nichols2013) suggest that ascriptions of mental states to groups are typically understood distributively rather than collectively, that is, as ascriptions of mental states to the group’s members, not to the group per se. This opens up the possibility that differences in naturalness ratings between experiential and agentic ascriptions to groups stem from the fact that, depending on context (e.g., the type of group at issue and the role played by the individuals comprising it), attributing agentic states to the members of a group may seem more appropriate than attributing experiential states to them (Phelan et al., Reference Phelan, Arico and Nichols2013). Differences in naturalness ratings between experiential and agentic ascriptions to robots may be explained in a similar fashion, in terms of observers’ tacit assumptions about the robot’s function (Buckwalter & Phelan, Reference Buckwalter and Phelan2013). Given these considerations, the strength of evidence for the experience–agency model derived from studies of how people perceive the mindedness of corporations and artifacts is open to question.
To sum up: Two-dimensional models of mind perception posit either a dichotomous representation of mental traits (warmth vs. competence, communion vs. agency, human nature vs. human uniqueness) or a dichotomous representation of mental capacities (experience vs. agency). Of these models, the experience–agency model has been far and away the most influential in moral psychology. Whether the distinction between experience and agency marks a fundamental divide in the intuitive ontology of the mental, however, is less clear.
9.1.2 Three-Dimensional Models
Recent research on mind perception has pointed in the direction of a three-dimensional, rather than two-dimensional, account of how we attribute mental capacities. Results from studies in which participants rated one or more characters individually on a range of mental capacities – rather than a set of characters pairwise on a single capacity, as in the paradigm used by Gray et al. (Reference Gray, Gray and Wegner2007) – yielded an alternative picture that Weisman et al. (Reference Weisman, Dweck and Markman2017, Reference Weisman, Legare, Smith, Dzokoto, Aulino, Ng, Dulin, Ross-Zehnder, Brahinsky and Luhrmann2021) call the body–heart–mind model. The three dimensions in this model are composed primarily of somatosensory capacities, like hunger, pain, and pleasure (body); social-emotional capacities, like empathy, pride, and guilt (heart); and perceptual and inferential capacities, such as seeing, reasoning, and decision making (mind). An analogous three-dimensional structure was identified in studies by Malle (Reference Malle, Goel, Seifert and Freska2019), in which principal components analysis of participants’ ratings of individual characters revealed a tripartite distinction between affect (e.g., hunger, pleasure, and anger), moral and mental regulation (e.g., empathy, deliberation, and planning), and reality interaction (e.g., perception, communication, and logical reasoning).
These three-dimensional models differ from the experience–agency model in two important ways. First, they span, and are empirically based on, a much wider set of mental capacities. Second, they retain the experience dimension of the experience–agency model but split the agency dimension into two subdimensions, each of which includes experiential elements. In Weisman et al.’s (Reference Weisman, Dweck and Markman2017) model, the capacities grouped under body are all experiential, whereas both heart and mind are characterized by a mix of agentic and experiential capacities. Likewise, the first dimension of Malle’s (Reference Malle, Goel, Seifert and Freska2019) model (affect) is composed of capacities linked to experience, and the second and third dimensions (moral and mental regulation and reality interaction) have both agentic and experiential features. Due to the dominance of the experience–agency model, however, the explanatory power of three-dimensional models of mind perception for research on moral categorization has largely yet to be explored.
To sum up: Existing models of mind perception vary in dimensionality, content, and scope. Two-dimensional (dichotomous) models can be either trait-based or capacity-based, and they tend to include a relatively small number of mental features on each dimension. Three-dimensional (trichotomous) models tend to be capacity-based, and they encompass a larger set of features on each dimension. In Section 9.2 we begin to explore how some of these models have been applied to the study of moral categorization, focusing on the experience–agency model.
9.2 Moral Patiency
According to a standard definition of moral patiency, something is a moral patient just in case it can be morally wronged (Goodwin, Reference Goodwin2015; Piazza et al., Reference Piazza, Landy and Goodwin2014; Sytsma & Machery, Reference Sytsma and Machery2012). Three features of this definition are worth noting. First, to be a moral patient is to have a kind of moral standing, namely, the moral standing associated with the possession of rights. Hence, to be a moral patient is to be an individual whose interests or well-being deserve the sort of consideration that grounds a duty on the part of others (Raz, Reference Raz1984).Footnote 2 Second, being a moral patient is distinct from being a potential target of morally wrong action. This distinction is required by the fact that a morally wrong action directed toward one individual could be morally wrong because it resulted in a wrong being done to a different individual. For example, it might be wrong to cut down a tree in your neighbor’s yard without their permission because doing so would result in a wrong being done to your neighbor, not because of a wrong being done to the tree.Footnote 3 Third, the concept of moral patiency is normative, not descriptive. To be a moral patient is to possess a type of intrinsic value that governs how one ought (and ought not) to be treated by others; whether one is treated in a way that meets that standard is irrelevant. For this reason, the concept of moral patiency should not be conflated with psychological patiency (Goodwin, Reference Goodwin2015).Footnote 4 To be a psychological patient is to possess the mental capacities that distinguish sentient from nonsentient beings, namely, the capacities associated with conscious experience (e.g., pain, pleasure, joy). Unlike the concept of moral patiency, the concept of psychological patiency is descriptive, not normative; there is nothing intrinsically evaluative about it. 
The importance of maintaining the distinction between moral patiency and psychological patiency will become clear later on, when we review research on the various factors affecting the categorization of individuals as moral patients, which clearly transcend psychological patiency. Indeed, as we will see, some of the factors that influence judgments of moral patiency transcend the psychological realm altogether (hence, transcending the domain of mind perception).
Note that, in speaking of the categorization of individuals as moral patients, we are tacitly assuming that moral patiency can be understood as a property that an individual either has or lacks, as opposed to a property that admits of degrees. Whether this assumption is correct from the standpoint of normative theory – that is, from the perspective of theorizing about what qualifies something as a moral patient – is a matter of controversy (DeGrazia, Reference DeGrazia2008). Fortunately, this is not a debate we need to enter, since our focus is on ordinary people’s attribution of moral patiency. The question for us is not: What characteristics determine whether an individual can be morally wronged and whether its interests deserve moral consideration? Our question is this: What characteristics contribute to ordinary people’s perception of an individual as something that can be morally wronged and whose interests morally matter? In addressing the latter question, we can remain neutral in the philosophical debate about whether moral patiency admits of degrees. Neutrality is an option here because our concern is with everyday attributions of moral patiency, not moral patiency itself.Footnote 5
That said, philosophical theorizing about moral patiency (what qualifies something as a moral patient) provides a natural starting point for psychological theorizing about the attribution of moral patiency (what causes something to be seen as a moral patient). What stands out in the philosophical literature on the topic is a pair of diametrically opposed views. The first view, associated with a utilitarian perspective in ethics, is that psychological patiency (i.e., sentience) is necessary and sufficient for moral patiency (Bentham, Reference Bentham1789/1970; Bernstein, Reference Bernstein1998; Singer, Reference Singer, Regan and Singer1989). The second view, associated with a deontological perspective in ethics and endorsed by certain versions of social contract theory, is that psychological agency, or the suite of high-level cognitive capacities required for rational thought and behavior (e.g., deliberative reasoning, self-consciousness, autonomy), is necessary and sufficient for moral patiency (Carruthers, Reference Carruthers1992; Kant, Reference Kant1785/1998, 1797/Reference Kant1996). The contrast between these views could hardly be starker. Regarding the moral value of animals, Bentham wrote: “The question is not, Can they reason? Nor, Can they talk? But, Can they suffer?” (Bentham, Reference Bentham1789/1970, p. 283n). For Kant, by contrast, animals “have only a relative worth, as means, and are therefore called things, whereas rational beings are called persons because their nature … marks them out as an end in itself” (Kant, 1797/Reference Kant1996, p. 79).
Both the utilitarian and deontological perspectives have informed psychological research on the attribution of moral patiency by suggesting ways in which mind perception might contribute to the categorization of individuals as moral patients: first, via the perception of experiential capacities, or psychological patiency; second, via the perception of agentic capacities, or psychological agency. One line of evidence for the relevance of both these dimensions of mind to moral categorization comes from Gray et al.’s (Reference Gray, Gray and Wegner2007) landmark study of mind perception. Participants in this study, in addition to making comparative judgments of characters’ mental capacities, were also asked to make comparative judgments of characters’ moral status, with respect to both an indirect measure of moral agency (“If both characters had caused a person’s death, which one do you think would be more worthy of punishment?”) and an indirect measure of moral patiency (“If you were forced to harm one of these characters, which one would it be more painful for you to harm?”).Footnote 6 Both moral categories were positively correlated with the two dimensions of mind revealed by Gray et al.’s factor analysis, but moral agency was more strongly correlated with agency than experience (r = 0.82 vs. r = 0.22), and moral patiency was more strongly correlated with experience than agency (r = 0.85 vs. r = 0.26). Thus, characters scoring much higher on agency than experience (like God) were attributed more moral agency than patiency, whereas characters scoring lower on agency than experience (like the infant) were attributed more moral patiency than agency (Gray & Wegner, Reference Gray and Wegner2009).
Indirect evidence for the hypothesis that moral patiency is more closely linked to experience than agency comes from an experimental study of folk-psychological explanation (Knobe & Prinz, Reference Knobe and Prinz2008, Study 6). Participants in this study were randomly assigned to one of two conditions in which they read a story involving a fictional character interested in the psychological capacities of fish. In one condition, the character was described as wanting to know how well the fish could remember the location of food sources in their habitat (an agentic capacity); in the other condition, the character wanted to know whether fish could feel pain (an experiential capacity). After reading the story, participants were asked to explain why the character might want to have this information. The pattern of responses differed dramatically between conditions: in the memory condition, typical answers referred to the character’s interest in predicting, explaining, or controlling the behavior of the fish; in the pain condition, the focus of explanation was on the character’s concern about the welfare of the fish. Consistent with the correlational evidence reported by Gray et al. (Reference Gray, Gray and Wegner2007), these findings support the experientialist hypothesis that the attribution of moral patiency is driven mostly by psychological patiency, with psychological agency playing either a minor role or no role at all. Note that this hypothesis implies only that psychological patiency is the most heavily weighted feature in the multidimensional feature space corresponding to the concept of moral patiency. According to a stronger form of experientialism, psychological patiency is the only feature in that space and hence the sole determinant of moral patiency.
Though some experimental studies of moral patiency involving the direct manipulation of perception of a target’s experience and agency have lent support to experientialism, evidence for this view is mixed. On the positive side, in a study using vignettes about the treatment of lobsters, lobsters were rated higher in moral patiency (e.g., more deserving of protection from harm) when described as high in sentience and low in intelligence than when described as low in sentience and high in intelligence; indeed, while moral patiency ratings increased relative to an initial baseline measure (taken prior to the addition of information about lobster psychology) when the lobsters were described as sentient but unintelligent, moral patiency ratings dropped below baseline when the lobsters were described as intelligent but insentient (Jack & Robbins, Reference Jack and Robbins2012, Study 1). Consistent with this result, two studies using a full-factorial design, one involving a story about lobster farming and the other a story about surgical research on monkeys, showed an effect of sentience on ratings of moral patiency but no effect of intelligence (Jack & Robbins, Reference Jack and Robbins2012, Study 2; Sytsma & Machery, Reference Sytsma and Machery2012, Study 1).
On the negative side, studies of moral patiency using an “alien species” paradigm are difficult to square with experientialism. In one such study, moral patiency ratings of a fictional extraterrestrial species called “atlans” were higher when the creatures were described as high in agency than when they were described as low in agency, but moral patiency ratings were no higher when the creatures could feel pleasure and pain than when they lacked those capacities (Sytsma & Machery, Reference Sytsma and Machery2012, Study 2).Footnote 7 In a companion study using a vignette about an individual atlan (rather than atlans in general), there was an effect of both agency and experience on moral patiency, but the effect of agency was greater than that of experience (Sytsma & Machery, Reference Sytsma and Machery2012, Study 4). And, in an unrelated study, moral patiency ratings of a fictional extraterrestrial species called “trablans” were higher in the high agency condition than the low agency condition (Piazza & Loughnan, Reference Piazza and Loughnan2016, Study 1).Footnote 8 This last finding is prima facie at odds with experientialism, insofar as the large size of the effect (d = 0.84) suggests that the attribution of moral patiency is strongly influenced by the perception of psychological agency.Footnote 9
A further challenge to experientialism comes from evidence that attributions of moral patiency are highly sensitive to whether an individual is seen as having a harmful disposition (Khamitov et al., Reference Khamitov, Rotman and Piazza2016; Opotow, Reference Opotow1993; Piazza et al., Reference Piazza, Landy and Goodwin2014). This third factor is a cluster of psychological traits, not a cluster of capacities. As such, it is not accounted for in capacity-based models like the experience–agency model (Gray et al., Reference Gray, Gray and Wegner2007), though it does implicitly figure in trait-based models, as the opposite pole of the warmth dimension of the warmth–competence model (Fiske et al., Reference Fiske, Cuddy, Glick and Xu2002) and the communion dimension of the agency–communion model (Abele & Wojciszke, Reference Abele and Wojciszke2007). In one study, participants rated the moral patiency of an alien species that was variously described, depending on condition, as high or low in intelligence, high or low in sentience, and harmful or harmless (Piazza et al., Reference Piazza, Landy and Goodwin2014, Study 2). Moral patiency was assessed using a five-item measure that included questions about the wrongness of harming the creatures, the extent to which the creatures were entitled to protection from harm, and the extent to which they deserved to be treated with compassion. Main effects were observed for all three dimensions of variation, most dramatically in the case of the harmfulness factor, the effect of which (i.e., decreased moral patiency) dwarfed the effects of both intelligence and sentience (which increased moral patiency). Interestingly, the sentience manipulation affected moral patiency regardless of whether the creature was described as harmful, but there was no effect of intelligence when the creature was described as harmless. This asymmetry could be taken to suggest that, though harmfulness was the most important of the three factors, psychological patiency played a more fundamental role in shaping attributions of moral patiency than psychological agency did. That said, the main finding here – that harmfulness dramatically diminishes moral patiency – is at odds with experientialism, at least insofar as harmfulness is a behavioral trait, not an experiential capacity. (Recall that according to experientialism, the attribution of moral patiency is driven for the most part by the perception of psychological patiency, understood as a cluster of experiential capacities.)
Further support for the relevance of psychological traits to moral patiency can be found in research on dehumanization. In one study, traits associated with human nature (HN), such as emotional responsiveness and openness, were positively correlated with moral patiency (as measured by opposition to mistreatment), but no analogous correlation was found for traits associated with human uniqueness (HU), such as civility and refinement (Bastian et al., Reference Bastian, Laham, Wilson, Haslam and Koval2011, Study 1). In a companion study, fictional characters described as high in HN traits were rated higher in moral patiency than characters described as low in HN traits, while high-HU characters got lower ratings than their low-HU counterparts, even though characters described as high in either type of trait were equally liked by participants (Bastian et al., Reference Bastian, Laham, Wilson, Haslam and Koval2011, Study 2).
The full array of factors affecting the attribution of moral patiency is not limited to mental capacities and traits. For example, perceived physical attractiveness also influences judgments of whether a species of animal deserves to be protected from harm or otherwise shown moral consideration (Gunnthorsdottir, Reference Gunnthorsdottir2001; Klebl et al., Reference Klebl, Luo and Bastian2022; Klebl et al., Reference Klebl, Luo, Tan, Ping Ern and Bastian2021). Perceived similarity to humans, over and above psychological similarity, may also play a role (Bastian, Costello, et al., Reference Bastian, Costello, Loughnan and Hodson2012), though some studies suggest otherwise (Akechi & Hietanen, Reference Akechi and Hietanen2021; Kozachenko & Piazza, Reference Kozachenko and Piazza2021; Opotow, Reference Opotow1993). Whatever else is the case, it seems clear that two-dimensional models of mind perception, whether capacity-based or trait-based, are insufficient for understanding how mind perception contributes to judgments of moral patiency. What is needed are higher-dimensional models incorporating both capacities and traits.
A substantial body of empirical research supports the hypothesis that mind perception influences the attribution of moral patiency. The converse hypothesis – that the attribution of moral patiency to a target directly influences the perception of its mindedness – is more controversial. What is relatively uncontroversial is that categorizing an animal as a food source tends to reduce attributions of moral patiency and attributions of mindedness, a phenomenon that makes sense in light of the general tendency to reduce cognitive dissonance (Bastian, Loughnan, et al., Reference Bastian, Costello, Loughnan and Hodson2012; Loughnan et al., Reference Loughnan, Haslam and Bastian2010; Piazza & Loughnan, Reference Piazza and Loughnan2016). Similarly, describing an animal as vulnerable to harm tends to increase both the extent to which it is seen as a moral patient and the extent to which it is seen as a psychological patient (Jack & Robbins, Reference Jack and Robbins2012, Studies 3 and 4). But there is currently no good evidence that reducing the perceived moral patiency of an animal causes a reduction in its perceived mindedness. Indeed, results from one small study (N = 80) showed that the effect of categorizing an animal as food on perception of its moral patiency was mediated by perception of its capacity to suffer, not the other way around (Bratanova et al., Reference Bratanova, Loughnan and Bastian2011). This finding is hard to square with the hypothesis that categorizing an animal as a moral patient affects how its mind is perceived.Footnote 10
By contrast, there is some evidence that the perceived moral patiency of other types of entity (i.e., nonanimals) affects the perception of their mindedness (Ward et al., Reference Ward, Olsen and Wegner2013). In a series of four studies, participants attributed more experience and agency to three fictional characters – a human in a persistent vegetative state, a robot, and a human corpse – when the character was described as a victim of intentional harm. In each case, a manipulation check confirmed that the harmful action was seen as morally wrong and hence that the victim was seen as a moral patient (Ward et al., Reference Ward, Olsen and Wegner2013, Studies 1–4). One might hesitate to attach too much significance to these results, given that the vignettes used in these studies featured atypical, nonnatural characters. Another concern is that the measure of moral patiency used in these four studies (i.e., the perceived moral wrongness of the action directed at the target) is too indirect. The concern here relates to a key feature of the characterization of moral patiency introduced at the beginning of this section. Moral patiency entails the possibility of being wronged, not just the possibility of being the recipient of a wrong action. Hence, the fact that something is seen as a target of wrongdoing does not show that it is seen as a moral patient. Indeed, it seems plausible to suppose that people regard intentionally harming a robot as wrong for some other reason than that it would result in the robot’s being wronged; for example, it might be seen as wrong because it would result in a wrong being done to the robot’s owner. (The same line of reasoning applies to the other two cases: the patient in a vegetative state and the corpse.) But this concern does not apply to the fifth study, which used a vignette about a normal human character. Here, however, describing the character as a victim of intentional wrongdoing produced the puzzling result – likely the effect of an experimental artifact – that the character was attributed less experience and agency (Ward et al., Reference Ward, Olsen and Wegner2013, Study 5). In short, while attributing moral patiency to a target may indeed directly influence how its mind is perceived, evidence of this influence is limited.
9.3 Moral Agency
A moral agent is an individual that can commit morally wrong actions and be held responsible for those actions, in the sense that it is normatively appropriate to blame or punish them accordingly. This characterization of moral agency is a relatively narrow one, as it covers only actions with a negative valence (immoral behavior). A fuller characterization of moral agency would also include positively valenced actions (moral behavior). Judgments of moral agency in this broader sense appear to be sensitive to different factors and sensitive in different ways to the same factor, depending on the valence of the action (Anderson et al., Reference Anderson, Crockett and Pizarro2020; Robbins & Alvear, Reference Robbins and Alvear2023). A potential advantage of a more focused perspective on moral agency, however, is that it enables us to conceptualize moral agency as the mirror image of moral patiency: Moral patients have rights, and moral agents have a duty to respect those rights.
Moral agency, like moral patiency, is a normative concept. Hence, just as moral patiency should not be conflated with psychological patiency (the possession of experiential mental capacities), moral agency should not be confused with psychological agency (the possession of agentic mental capacities). And like moral patiency, moral agency can be conceptualized either as a property that admits of degrees or as a binary property. The same point applies to moral responsibility: Though an individual might deserve more or less blame, or more or less severe punishment, for a morally wrong action, being worthy of any amount of blame or punishment suffices for moral responsibility (hence, for moral agency) in the categorical sense. Further, moral agency is a relatively stable, context-invariant property of individuals. Hence, even if moral responsibility does admit of degrees, having more (or less) moral responsibility for a particular action does not entail having more (or less) moral agency.
Philosophical accounts of moral responsibility provide a useful jumping-off point for thinking about the role of mind perception in the attribution of moral agency. Standard views of moral responsibility tie moral responsibility to the possession of sophisticated cognitive abilities that appear to be unique to our species, such as the capacity to recognize, grasp, and act on the basis of moral considerations (Arpaly, Reference Arpaly2003; Fischer & Ravizza, Reference Fischer and Ravizza1998; Strawson, Reference Strawson1962). Translating these accounts to the attributional realm entails linking the attribution of moral responsibility – and hence, the categorization of an individual as a moral agent – primarily to the perception of mental capacities on the agentic side of the ledger and only marginally to capacities on the experiential side. As with moral patiency, however, the empirical story is more complicated than this simple hypothesis – call it agentialism – suggests. Note, however, that just as experientialism does not entail that moral patiency is solely determined by psychological patiency, agentialism does not entail that moral agency is solely determined by psychological agency. Agentialism requires only that psychological agency is the most heavily weighted feature in the feature space associated with the concept of moral agency. (Stronger formulations of agentialism are possible, but we will not consider them here.)
Correlational evidence for agentialism comes from Gray et al.’s (Reference Gray, Gray and Wegner2007) study, in which participants made both comparative judgments of characters’ mental capacities and comparative judgments of their moral responsibility for a hypothetical transgression. Factor analysis of the data revealed that moral responsibility was strongly correlated with agentic capacities but only weakly correlated with experiential ones, suggesting that psychological agency contributes more to moral agency than psychological patiency does. A good deal of experimental evidence supports this hypothesis. First, a variety of studies have shown that judgments of blame are sensitive to whether a behavior is intentional and freely chosen (Alicke, Reference Alicke2000; Cushman, Reference Cushman2008; Guglielmo et al., Reference Guglielmo, Monroe and Malle2009; Monroe et al., Reference Monroe, Dillon and Malle2014). Similarly, multiple studies have shown that reduced psychological agency is causally linked to reduced moral responsibility. In one study, for example, a fictional character with a severe learning disability was attributed less moral responsibility for immoral behavior than his cognitively typical counterpart (Gray & Wegner, Reference Gray and Wegner2009, Study 1b). In another study, blame and punishment judgments of a fictional character who had committed a violent crime were mitigated by information that the offender suffered from psychotic delusions due to schizophrenia (de Vel-Palumbo et al., Reference de Vel-Palumbo, Schein, Ferguson, Chang and Bastian2021). Likewise, recent evidence suggests that the mitigating effect of a history of childhood abuse and neglect on judgments of blame for antisocial behavior is mediated by perceived deficits in socioemotional functioning, as measured by symptoms of post-traumatic stress disorder (Robbins & Alvear, Reference Robbins and Alvear2023, Study 3). Both individually and collectively, these findings provide support for the hypothesis that psychological agency is a key determinant of moral responsibility.
There is also some (albeit limited) evidence that psychological patiency influences moral responsibility. In one study, for example, a fictional character was judged less responsible for stealing a car when described as unusually sensitive to pain – suggesting that the contribution of psychological patiency to moral responsibility, unlike the contribution of psychological agency, may sometimes be negative, rather than positive (Gray & Wegner, Reference Gray and Wegner2009, Study 3a). An alternative explanation of this finding, however, is that the highly pain-sensitive character was attributed less responsibility because he was seen as suffering from a mental disorder that impaired his psychological agency.
As noted earlier, however, there is more to the attribution of mindedness than the attribution of mental capacities, given that mind perception in the broad sense also involves the attribution of mental traits. (The distinction between capacities and traits is critical here, just as it was in the context of our earlier discussion of moral patiency, where a similar point was made about the limitations of the experience–agency model.) A full account of how mind perception contributes to the attribution of moral agency needs to go beyond Gray et al.’s (Reference Gray, Gray and Wegner2007) experience–agency model (which focuses exclusively on the representation of mental capacities), just as it does in the case of moral patiency. For example, perception of traits associated with Fiske et al.’s (Reference Fiske, Cuddy, Glick and Xu2002) warmth–competence model appears to affect perception of moral agency. Female victims of domestic abuse are blamed more for their victimization when described as low in warmth (Capezza & Arriaga, Reference Capezza and Arriaga2008), and high-status individuals, who are seen as high in competence and low in warmth, are punished more severely for the same transgression than their low-status counterparts (Fragale et al., Reference Fragale, Rosen, Xu and Merideth2009). In similar fashion, traits associated with the HU dimension of Haslam’s (Reference Haslam2006) model of dehumanization, such as refinement and civility, have been shown to influence the attribution of moral agency. In one study, for example, individuals high in HU traits were judged to be more worthy of blame and punishment than their low-HU counterparts, whereas the presence of HN traits, such as emotional responsiveness and interpersonal warmth, had no effect on these judgments (Bastian et al., Reference Bastian, Laham, Wilson, Haslam and Koval2011, Study 2). In another study, higher scores on a composite measure of dehumanization, incorporating both HU and HN traits, predicted more severe blame and punishment judgments for the perpetrators of various criminal offenses, ranging in seriousness from financial fraud to mass murder (Bastian et al., Reference Bastian, Denson and Haslam2013, Study 2).
A further influence on the attribution of moral responsibility is the perception of moral character (Pizarro & Tannenbaum, Reference Pizarro, Tannenbaum, Mikulincer and Shaver2012). Evidence from multiple studies suggests that individuals of bad moral character are seen as more responsible and more deserving of blame and punishment for immoral actions than individuals with good moral character (Nadler, Reference Nadler2012; Nadler & McDonnell, Reference Nadler and McDonnell2012; Schwartz et al., Reference Schwartz, Djeriouat and Trémolière2022). In one study, participants were randomly assigned to one of four conditions, in each of which they read a story about an accident in which the protagonist lost control while downhill skiing and collided with another skier on the slope, causing their death (Nadler, Reference Nadler2012, Experiment 1). In the “good character” condition, the protagonist was described as hard-working, responsible, reliable, and generous, and in the “bad character” condition, he was described as lazy, irresponsible, unreliable, and selfish. In the “low recklessness” condition, the protagonist was described as confident that he could avoid hitting anyone on the slope, and in the “high recklessness” condition he was described as knowing there was a risk of hitting someone but not caring. Results showed that the protagonist was seen as more responsible, more blameworthy, and deserving of more severe punishment for the killing when described as having a bad moral character, regardless of whether he was acting recklessly. Similar results were obtained from a study using a vignette about a woman whose dogs escaped from her yard and killed a small child. Regardless of whether the protagonist was aware of the risk posed by her dogs, participants in the bad character condition judged her more harshly than participants in the good character condition (Nadler & McDonnell, Reference Nadler and McDonnell2012, Experiment 3).
As we saw in the case of moral patiency, attributions of moral agency are systematically affected by perception of an individual’s mental capacities and traits. Evidence of a causal link in the opposite direction is sparse by comparison, though there is some evidence that perceiving an individual as a moral agent in a given context attenuates the perception of their psychological patiency in that context.Footnote 11 In one study, participants attributed less pain sensitivity to a fictional character engaged in fraudulent business activity when described as playing a leading role in the fraud, rather than a supporting role (Gray & Wegner, Reference Gray and Wegner2009, Study 3c). As predicted, participants attributed more blame to the main perpetrator of the fraud, suggesting that the intended manipulation was successful. Without conducting a mediation analysis, however, one cannot rule out the possibility that the effect of the manipulation on perception of the character’s moral agency in the fraud scenario was mediated by perception of their psychological patiency in that scenario, rather than the effect of the manipulation on the character’s psychological patiency being mediated by perception of their moral agency. It may have been, for example, that the character playing a leading role in the fraud was seen as less sensitive to pain than the character playing a supporting role because of differences in personality (e.g., levels of dominance, confidence, and risk aversion), rather than a difference in moral agency (Arico, Reference Arico2012).
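The mediation logic at issue can be made concrete with a minimal regression-based sketch. The variables, effect sizes, and sample are hypothetical constructions for illustration; this shows the standard indirect-effect calculation, not the analysis of any cited study:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Hypothetical scenario: condition (supporting role = 0, leading role = 1)
# raises perceived moral agency, which in turn lowers attributed pain
# sensitivity; by construction there is no direct condition -> pain path.
condition = rng.integers(0, 2, size=n).astype(float)
agency = 0.8 * condition + rng.normal(size=n)
pain = -0.6 * agency + rng.normal(size=n)

def ols(y, *predictors):
    """Least-squares coefficients, with an intercept prepended."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(agency, condition)[1]               # condition -> mediator
b = ols(pain, agency, condition)[1]         # mediator -> outcome (controlling condition)
c_prime = ols(pain, agency, condition)[2]   # residual direct effect of condition

indirect = a * b  # the mediated (indirect) effect
print(indirect, c_prime)
```

Here the estimated indirect effect is sizable and negative while the direct effect is near zero, the signature pattern of full mediation; distinguishing this from the reverse causal ordering is precisely what the cited study leaves open.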
9.4 Conclusion
The ability to perceive other minds plays a fundamental role in social cognition, and nowhere is its fundamentality more evident than in the study of moral cognition (Gray et al., Reference Gray, Young and Waytz2012). Unsurprisingly, the role of mind perception in moral categorization is multifaceted and complex, and the processes and mechanisms underlying it are far from completely understood. Nonetheless, our overview of research on the topic suggests a few major themes. First, none of the standard models of mind perception – whether two-dimensional or three-dimensional, capacity-based or trait-based – has the resources necessary to capture the full range of phenomena linking attributions of moral status with attributions of mindedness. The reason for this is that none of these models encompasses the full range of mental features (both capacities and traits) that influence the perception of moral patiency and moral agency. Second, attributions of moral patiency and attributions of moral agency are sensitive to mind perception in different ways. For instance, while both agentic and experiential capacities and traits contribute in a positive way to moral patiency, and agentic capacities and traits contribute in a positive way to moral agency, there is some evidence that experiential capacities and traits have the opposite effect on moral agency. Third, there is some evidence that categorizing an entity as a moral patient or moral agent affects how the mind of that entity is perceived (not just the other way around), but evidence of these effects is in relatively short supply.
The empirical literature on moral categorization reviewed in this chapter is rich and interesting. Still, it has limitations, one of which is especially noteworthy: the reliance on indirect measures of moral patiency and moral agency. In studies of moral patiency, for example, perceptions of moral patiency are typically assessed by asking questions like: “To what extent would it be morally wrong to harm X?” rather than questions like: “To what extent does X deserve to be protected from harm?” The distinction between these questions is important, because (as noted in Section 9.2) moral patiency implies the potential to be morally wronged, not just the potential to be the target of a morally wrong action (which might be wrong in virtue of the harm done to some other individual). Thus, participants’ responses to questions about the moral wrongness of harming an individual are only an indirect indicator of the perceived moral patiency of that individual. In studies of moral agency, perceived moral agency is typically measured by asking about the extent to which an individual deserves blame or punishment for an action, or is responsible for the action and its outcome. The issue here is that attributions of moral responsibility for an action are sensitive to factors that need not affect perceptions of moral agency, at least insofar as moral agency – unlike blame or responsibility – is a relatively stable property of individuals, invariant across contexts of action. Hence, attributions of blame and punishment are only an indirect, approximate indicator of perceived moral agency. Future research on moral categorization would benefit from the employment of more direct measures of moral patiency and moral agency in addition to the indirect measures commonly in use.
As with other topics in moral psychology, the study of moral categorization originates with theorizing by philosophers. This is true of research on moral patiency, which is deeply informed by the contrast between utilitarian and deontological perspectives in normative ethics, and research on moral agency, which reflects the influence of philosophical thinking about moral responsibility. What experimental studies in this area reveal is the plurality of factors that figure into attributions of both moral patiency and moral agency, including factors often overlooked by normative theorists, such as character traits (e.g., interpersonal warmth, harmfulness, and intellectual refinement). The normative significance of these factors, however, is unclear. It may well be that intuitive thinking about moral categories of the sort revealed by experimental studies is not a reliable guide to the structure of these categories, in which case it would be risky to use the results of those studies to constrain normative theory (Greene, Reference Greene and Sinnott-Armstrong2008). By contrast, it seems that reflective thinking about moral categories should be informed, at least to some extent, by the patterns of attribution observed in empirical research. Normative theorizing about moral categories in a way that is appropriately sensitive to empirical evidence is (or ought to be) a central task for philosophers working in this area.
Research on moral categorization also has profound implications for the law. Consider the influence of characterological information on the attribution of blame and responsibility and hence on the perception of moral agency (given that such attributions apply only to moral agents). Empirical evidence suggests, for example, that the outcome in a legal proceeding is likely to be worse for a defendant who is perceived by the judge or jury as having a bad moral character in virtue of a prior record of offenses from which an inference to bad character is naturally drawn (Nadler, Reference Nadler2012). Highlighting the negative experiential effects of a crime on its victims will likely have the same effect. Introducing biographical information about a defendant with an extensive history of suffering at the hands of others, by contrast, will tend to have the opposite effect (Robbins & Litton, Reference Robbins and Litton2018), as will information about the defendant’s cognitive limitations (Gray & Wegner, Reference Gray and Wegner2009). Whether these effects make sense in normative terms is an important topic for legal theory, just as it is for moral philosophy (Greene & Cohen, Reference Greene and Cohen2004).
Moral categorization is a central topic of investigation in the moral psychology of artificial intelligence (Bonnefon et al., Reference Bonnefon, Rahwan and Shariff2024; Ladak et al., Reference Ladak, Loughnan and Wilks2023). In general, it appears that mind perception contributes as much to the attribution of moral status to artificial agents as it does in the case of biological agents, and in similar ways. For example, the attribution of moral agency to robots is sensitive to perception of their capacities for intentional action, free choice, and the appreciation of moral considerations for acting, just as it is in the case of humans (Bigman et al., Reference Bigman, Waytz, Alterovitz and Gray2019). The attribution of moral agency to robots also appears to be influenced by perception of their experiential capacities, but differently than it is in the human case, where the effect may sometimes be negative rather than positive. Evidence of this asymmetry comes from a vignette-based study in which a fictional robot was judged more responsible for causing harm in a sacrificial moral dilemma scenario when described as having affective states rather than lacking them, possibly as a result of the affective robot’s being humanized (Nijssen et al., Reference Nijssen, Müller, Bosse and Paulus2023). Explaining this and other human–robot asymmetries in the attribution of moral agency, and the application of moral categories more generally, is an active area of research in moral psychology – and one in which thinking about mind perception will no doubt continue to play an essential role.
The study of moral emotions is thriving, with an upsurge in research on the topic. However, a key question remains to be answered: What makes moral emotions unique? Specifically, it is important to understand the distinct qualities of specific emotions and the degree to which an emotion can be moral. This question is essential because moral emotions are believed to influence both our judgments and our actions. Regarding the relation between emotions and moral judgments, three claims have been made in the literature: 1) emotions are associated with moral judgments; 2) emotions amplify moral judgments; and 3) emotions moralize nonmoral acts (Avramova & Inbar, Reference Avramova and Inbar2013). Evidence to date mainly supports the first claim – in other words, emotions are at the very least associated with moral judgments. Regarding the relation between emotions and action, some researchers have argued that a person’s moral emotions are better predictors of the person’s (moral) action than are other moral phenomena, such as moral reasoning (for reviews, see Haidt, Reference Haidt2001, Reference Haidt, Davidson, Scherer and Goldsmith2003; Teper et al., Reference Teper, Zhong and Inzlicht2015). Given that emotions may play a crucial role in both moral judgment and moral action, it is necessary to understand whether a given emotion is moral and whether moral emotions are distinct from nonmoral emotions.
Unsurprisingly, it is difficult to decipher what makes a moral emotion distinct from a nonmoral emotion, since there are outstanding debates about what is a nonmoral emotion (Barrett & Russell, Reference Barrett and Russell2015; Cowen & Keltner, Reference Cowen and Keltner2017; Ekman, Reference Ekman1999), what the scope of morality is (Ellemers et al., Reference Ellemers, van der Toorn, Paunov and van Leeuwen2019), what a moral judgment is (Malle, Reference Malle2021), and what constitutes a moral action (Teper et al., Reference Teper, Zhong and Inzlicht2015). A careful examination of these issues is beyond the scope of this chapter, but it is nevertheless necessary to keep these issues in mind when evaluating whether and to what degree a given emotion is moral.
10.1 What Makes Moral Emotions Unique?
In determining whether a given emotion is moral, previous definitions have focused on unique elicitors of moral emotions (e.g., certain norm violations), unique consequences of moral emotions (e.g., prosocial behavior), or both. For example, Tangney et al. (Reference Tangney, Stuewig and Mashek2007) have argued that moral emotions respond to violations of norms that are supported by groups and whole societies. On this view, moral emotions are crucial to social functioning because individuals often feel socially shared emotions in reaction to morally relevant events. This definition focuses more on the elicitors of certain moral emotions, with moral action being a byproduct. By contrast, Haidt (Reference Haidt, Davidson, Scherer and Goldsmith2003) defines moral emotions as “those emotions that are linked to the interests or welfare either of society as a whole or at least of persons other than the judge or agent” (p. 853). According to this definition, a prototypical moral emotion has two features. First, prototypical moral emotions have “disinterested elicitors,” meaning that a situation does not need to directly involve or impact an individual to trigger an emotional response. Specifically, Haidt argued that “the more an emotion tends to be triggered by such disinterested elicitors, the more it can be considered a prototypical moral emotion” (p. 854). Second, moral emotions are associated with prosocial tendencies, meaning that moral emotions are likely to motivate actions that will benefit others. Thus, Haidt’s definition focuses on both the elicitors and the outcomes that make a moral emotion unique.
Expanding on these previous definitions of moral emotions, it is important to consider in more depth who benefits from the emotion and what the consequences of specific moral emotions are. Cohen-Chen et al. (Reference Cohen-Chen, Pliskin and Goldenberg2020) have argued that emotions should be distinguished based on “whether they feel good” and/or “whether they do good.” Notably, emotions that make us feel uncomfortable can lead to extremely favorable outcomes. For example, anger can be considered a prototypical moral emotion according to Haidt’s (Reference Haidt, Davidson, Scherer and Goldsmith2003) definition and can have both very negative consequences, in the case of aggression, and very positive socio-moral consequences, ultimately leading to corrective behaviors, negotiation, and reconciliation. Thus, anger can be good for relationships and groups. Cohen-Chen et al.’s (Reference Cohen-Chen, Pliskin and Goldenberg2020) conceptualization was developed to explain emotions in conflict but can be applied here to moral emotions as well. In other words, what ultimately matters is whether certain moral emotions “do good” for individuals, groups, and societies, rather than whether they make us “feel good,” since an emotion need not be a positive feeling or experience to lead to positive outcomes. However, we should broaden the scope of the outcomes of moral emotions we consider, taking into account the impact of moral emotions on our thoughts and perceptions, and how these affect others, in addition to actions and action tendencies. Thus, there needs to be a shift in focus from the elicitors of moral emotions to the social consequences of feeling moral emotions, including both cognitive and behavioral consequences for the self and others.
The current chapter will analyze the different families of moral emotions (i.e., emotions within a family are similar but distinguishable) along the lines of whether they do good for others, both in terms of behavioral responses (e.g., promoting engagement, helping, and approach) and social-cognitive processes (e.g., open-mindedness, flexible thinking, and connectedness). The four main families of moral emotions are the other-condemning emotions (e.g., contempt, anger, and disgust); the self-conscious emotions (e.g., shame and guilt); the other-suffering emotions (e.g., compassion and empathy); and the other-praising emotions (e.g., awe and elevation) (Haidt, Reference Haidt, Davidson, Scherer and Goldsmith2003). I will examine the typical consequences of these specific emotions, not just the situations that typically elicit them (see Table 10.1 for a summary).
| Emotion | Elicitors | Degree criterion met? | Consequences | Degree criterion met? |
|---|---|---|---|---|
| Contempt | Community violations; seeing someone as beneath you, not measuring up and/or being incompetent. | ◌ | Social exclusion and nonnormative collective action. | ◌ |
| Anger | Autonomy violations; blameworthy and/or harmful actions, which are often performed intentionally. | ✔ | Aggression and retaliation but also reparative behavior and normative collective action. | ✔ |
| Disgust | Divinity violations; moral violations that demonstrate bad character and/or are despicable; bodily norm violations. | ◌ | Avoidance and purification, some evidence for nonnormative collective action and aggression. | ◌ |
| Guilt | Self-moral failures, which focus on the action itself and/or prescriptive moral violations. | ✔ | Reparative behavior, social improvement, and collective action but not always when shame is accounted for. | ✔ |
| Shame | Self-moral failures, which focus on the person and/or proscriptive moral violations. | ◌ | Avoidance, denial, and withdrawal, some evidence for social support and other positive consequences when the situation is repairable. | ◌ |
| Empathy | Feeling the same emotion as another and/or understanding what they are feeling. | ◌ | Reconciliation, forgiveness, helping, and humanizing behaviors; increase in positive attitudes and seeing similarities with others, or self-other overlap; decrease in hostile action, aggression, stereotyping, and prejudice. However, we often avoid experiencing empathy and empathy failures are frequent. | ◌ |
| Compassion | Feeling concerned for another person’s suffering. | ◌ | Compassion shares most of empathy’s positive outcomes, such as helping and humanizing behaviors. However, we often experience compassion fade. | ◌ |
| Awe | Things that are vast, transcend previous experiences, or exceed expectancies. | ◌ | Need for accommodation and connection, critical thinking, decrease in selfishness and increase in positive feelings, helping, well-being, environmental concern, and charitable giving; perceiving that time is slowing down; more willingness to associate with others with opposing views. | ✔ |
| Elevation | Witnessing acts of uncommon goodness or moral beauty. | ✔ | Helping behaviors, imitation of positive role models, desire to share more overlap with others, and decrease in prejudice. | ✔ |
10.2 Other-Condemning Emotions
Anger, disgust, and contempt are other-condemning emotions, sometimes referred to as morally condemning emotions (Haidt, Reference Haidt, Davidson, Scherer and Goldsmith2003), or the hostility triad (Izard, Reference Izard1991). We experience these emotions when we think that someone else (or a group) has engaged in some form of (moral) wrongdoing. These emotions have been considered basic or primary emotions (Ekman, Reference Ekman1999), which implies that they are universally experienced and have unique facial expressions. By contrast, others have argued that emotions are socially constructed (Averill, Reference Averill1983; Barrett & Russell, Reference Barrett and Russell2015; Parrott, Reference Parrott2001) and thus that there are no universal or basic emotions. If we take disgust, for example, recent evidence suggests that this emotion is acquired through social learning (Aznar et al., Reference Aznar, Tenenbaum and Russell2021; Rottman et al., 2018) to a greater extent than previous research suggested (Bloom, Reference Bloom2004; Danovitch & Bloom, Reference Danovitch and Bloom2009). As a result, the “basicness” of these morally condemning emotions is questionable.
10.2.1 Contempt
Within the moral realm, research on the CAD triad hypothesis maps the three emotions (contempt, anger, disgust) to three distinct moral violations (community, autonomy, divinity) (Rozin et al., Reference Rozin, Lowery, Imada and Haidt1999). Specifically, it was found that community violations are associated with contempt, autonomy violations are associated with anger, and divinity violations are associated with disgust. However, of the three other-condemning emotions, it is questionable whether contempt is distinct from anger and disgust. For example, some have argued that contempt is a form of disgust (for a review see Fischer & Giner-Sorolla, Reference Fischer and Giner-Sorolla2016). Even in work examining the CAD triad hypothesis (Rozin et al., Reference Rozin, Lowery, Imada and Haidt1999), contempt often overlapped with anger and disgust and was triggered by norm violations in multiple moral domains (Fischer & Giner-Sorolla, Reference Fischer and Giner-Sorolla2016; P. S. Russell et al., Reference Russell and Giner-Sorolla2013). There are also known methodological issues with measuring contempt; for example, the facial expression for contempt is less distinct than that of anger or disgust (J. A. Russell, Reference Russell1991). Additionally, for self-report measures, English speakers do not always understand what the term “contempt” means (Ekman et al., Reference Ekman, O’Sullivan and Matsumoto1991). Relatedly, people less frequently think of contempt as an emotion; thus, it is less accessible (Fehr & Russell, Reference Fehr and Russell1984).
If we first focus on the elicitors of contempt, within hierarchical societies contempt is elicited when an individual sees another individual as beneath them and not even worthy of strong feelings such as anger (Fischer & Giner-Sorolla, Reference Fischer and Giner-Sorolla2016). In more egalitarian societies, contempt is seen as an expression that an individual does not measure up (Haidt, Reference Haidt, Davidson, Scherer and Goldsmith2003). Research has also identified that we experience contempt when we judge someone to be incompetent (Hutcherson & Gross, Reference Hutcherson and Gross2011). As mentioned previously, the CAD hypothesis links contempt with ethics of community, which includes concerns such as caring that a certain hierarchy exists and that everyone has certain roles within society that they must fulfill (Rozin et al., Reference Rozin, Lowery, Imada and Haidt1999). Contempt can be directed at the person as a whole or at their actions (Malle et al., Reference Malle, Voiklis, Kim and Mason2018). In terms of the experience of contempt, this emotion is said to be much cooler than anger and disgust (Izard, Reference Izard and Izard1977; Rozin et al., Reference Rozin, Lowery, Imada and Haidt1999). Some have even questioned whether contempt is a sentiment (i.e., a standing attitude) rather than an experienced state emotion (for reviews see Fischer & Giner-Sorolla, Reference Fischer and Giner-Sorolla2016 and Malle et al., Reference Malle, Voiklis, Kim and Mason2018).
In terms of consequences, contempt can be associated with cognitive changes, in which an individual is treated as having less worth within future interactions (Oatley & Johnson-Laird, Reference Oatley, Johnson-Laird, Martin and Tesser1996). As for behavioral tendencies and actions, evidence has been mixed on whether contempt is associated with avoidance and/or attack tendencies (Malle et al., Reference Malle, Voiklis, Kim and Mason2018). For example, it has been found that contempt can result from unresolved anger and, in both the short and the long term, can lead to social exclusion behaviors (Fischer & Roseman, Reference Fischer and Roseman2007). Also, recent evidence indicates that disgust may be a better predictor than contempt of nonnormative collective action tendencies, such as violent protest (Noon, Reference Noon2019). Disgust has also been shown to be a better predictor of dehumanizing beliefs and action tendencies than contempt (Giner-Sorolla & Russell, Reference Bartoș, Russell and Hegarty2020). In summary, contempt seems to be a less straightforward moral emotion, in terms of both its elicitors and its consequences. Additionally, it may be part of the experience of disgust and anger, or more similar to an emotion like hatred, given its longevity. In short, even though contempt is relevant to morality, it may not be a distinct emotion.
10.2.2 Anger
Next, I turn to the moral nature of anger. As a moral emotion, anger probably has the most long-standing history in the field, besides that of empathy. Mounting evidence demonstrates that anger can often be a moral emotion for two reasons. First, there seem to be common triggers of anger, which are often linked to moral situations and contexts (see Lomas, Reference Lomas2019, and P. S. Russell & Giner-Sorolla, Reference Russell and Giner-Sorolla2013, for a review). Contextual factors can intensify or mitigate anger, depending on the situation at hand. Second, the common belief that anger is a negative emotion that leads to aggression (i.e., that it does bad) is questionable, with growing research refuting this assumption. Specifically, evidence indicates that anger can lead to positive outcomes in some circumstances. Below I outline the evidence for both reasons for designating anger as a typically moral emotion.
Over decades of research, a clear connection has been made between anger and its cognitive elicitors, which are often linked to moral situations. Anger has been linked with the appraisals of goal blockage, other blame, and unfairness (Cova et al., Reference Cova, Deonna and Sander2013; Lazarus, Reference Lazarus1991; Roseman et al., Reference Roseman, Antoniou and Jose1996; Smith & Ellsworth, Reference Smith and Ellsworth1985; Wranik & Scherer, Reference Wranik, Scherer, Potegal, Stemmler and Spielberger2010). In the moral realm, anger is elicited in response to actual or symbolic harm (Rozin et al., Reference Rozin, Lowery, Imada and Haidt1999) and especially intentional harm (Cova et al., Reference Cova, Deonna and Sander2013; P. S. Russell & Giner-Sorolla, Reference Giner-Sorolla and Espinosa2011). Anger has also been associated with attributions of responsibility and blame (Alicke, Reference Alicke2000; Goldberg et al., Reference Goldberg, Lerner and Tetlock1999; Tetlock et al., Reference Tetlock, Visser, Singh, Polifroni, Scott, Elson, Mazzocco and Rescober2007), in which there is a cyclical relationship between anger and these appraisals. Finally, anger can also be reduced if it is felt that the behavior was carried out in the service of a greater good (Darley et al., Reference Darley, Klosson and Zanna1978). Thus, all of these appraisals or elicitors are directly related to evaluations of a moral situation and its consequences.
Focusing on the consequences, anger is an approach-related emotion associated with appetitive motivations, as some positive emotions are (Carver & Harmon-Jones, Reference Carver and Harmon-Jones2009). The social function of anger is attained by “forcing a change in another person’s behavior,” in hopes of achieving a better outcome (Fischer & Roseman, Reference Fischer and Roseman2007, p. 104). Whether or not the behavioral consequence of anger is hostile, it can nevertheless be argued that anger, in general, motivates individuals to approach the cause of their anger. Numerous studies have highlighted aggression, whether verbal or physical, as a common response to feeling angry (Izard, Reference Izard and Izard1977). In many instances, people are motivated to get back at individuals perceived as having wronged them (Haidt, Reference Haidt, Davidson, Scherer and Goldsmith2003; Izard, Reference Izard and Izard1977; Plutchik, Reference Plutchik, Plutchik and Kellerman1980; Shaver et al., Reference Shaver, Schwartz, Kirson and O’Connor1987). Anger encourages the person experiencing the emotion to either punish or rebuke the person who has offended them (Haidt, Reference Haidt, Davidson, Scherer and Goldsmith2003; Nussbaum, Reference Nussbaum2004). But social cohesion or reparation is a more common consequence of anger than previously thought (Averill, Reference Averill1983; Fischer & Roseman, Reference Fischer and Roseman2007). It has been found that anger inspires persons to engage in reparative behaviors, such as talking things over with the transgressor, particularly in the long term (Fischer & Roseman, Reference Fischer and Roseman2007; Weber, Reference Weber2004).
Anger has also been shown to be a key motivator for collective action, particularly normative collective action, such as signing petitions and engaging in peaceful protest (Sasse et al., Reference Sasse, Halmburger and Baumert2020; Tausch et al., Reference Tausch, Becker, Spears, Christ, Saab, Singh and Siddiqui2011).
The reason anger can lead to such different behaviors is that it varies with the current context. That is, anger is a contextually or situationally dependent emotion, as is evident from the mitigating factors, reviewed earlier in this section, that can influence whether anger is experienced and how intensely. Not only does the context impact the experience of anger, but it also impacts how people respond to their anger. For example, the relationship between transgressor and victim influences both the intensity of anger and the resulting actions (Fischer & Roseman, Reference Fischer and Roseman2007; Kuppens et al., Reference Kuppens, Van Mechelen and Meulders2004, Reference Kuppens, Van-Mechelen, Smits, De Boek and Ceulemans2007; Weber, Reference Weber2004). Contempt and disgust, by contrast, are less likely to be elicited by those who are close to us and more likely to elicit rejection consistently, whereas anger is more likely to occur in close relationships and groups and to elicit variable behavioral responses (Fischer & Roseman, Reference Fischer and Roseman2007; Hutcherson & Gross, Reference Hutcherson and Gross2011). Anger seems to play a crucial role in interpersonal relationships as well as in social and group contexts (Cottrell & Neuberg, Reference Cottrell and Neuberg2005), by eliciting approach behaviors that can include hostility; more generally, however, anger can encourage reform or change, especially in the long term.
Another factor that may influence one’s anger, and the associated response, is social accountability: whether individuals feel that their actions will impact others influences their anger (Averill, Reference Averill1983). When persons feel accountable, they are less likely to respond automatically and thoughtlessly to their anger. Indeed, it has been argued that social accountability reduces the impact of anger (Lerner et al., Reference Lerner, Goldberg and Tetlock1998). Persons are thus motivated to respond appropriately and constructively to their anger, because failing to do so can have extremely negative consequences for them and others (Izard, Reference Izard and Izard1977). Evidence surrounding both accountability and the nature of relationships suggests that anger is not just elicited in the moment but can lead individuals to consider how their anger, and their response to it, may impact future relationships with other individuals and groups. Thus, anger is future-oriented and can elicit long-term change, resulting in positive outcomes. In summary, evidence suggests that anger is typically a moral emotion, owing both to its moral elicitors and to its positive consequences, such as reconciliation.
10.2.3 Disgust
Next, I discuss the controversial emotion of disgust. Disgust is an emotion that has captured the attention of many researchers. However, within the literature, there is still debate as to whether disgust is a moral emotion at all, whether it is similar to or different from core disgust, and whether it overlaps with anger and contempt. If we first look at the individual or personal level of disgust, rather than the moral or social realm specifically, we can see that theorists have struggled to capture what elicits disgust, resulting in tautological explanations. For example, appraisals that elicit disgust include “distasteful stimuli” (Ortony et al., Reference Ortony, Clore and Collins1988) and “poisonous ideas” (Lazarus, Reference Lazarus1991). This debate concerning what elicits disgust also extends into the moral realm. One of the key questions within this family is whether disgust is a distinct emotion and how far into the moral realm it extends. There are four main positions regarding what moral disgust is: 1) the general morality/character position, 2) the metaphorical use position, 3) the purity position, and 4) the bodily norm position (P. S. Russell & Giner-Sorolla, Reference Russell and Giner-Sorolla2013). Here I will review the two extreme ends of this debate: the general morality/character position and the bodily norm position. Even though these two positions conflict in scope, they demonstrate that disgust is a person- or object-focused emotion that contrasts with anger, a situational or context-focused emotion. Before covering these two positions, for comparison purposes, I will briefly review the purity and metaphorical use positions. The purity position argues that disgust is elicited by purity or divinity violations, such as cleaning a toilet with a national flag or eating one’s pet dog (Horberg et al., Reference Horberg, Oveis, Keltner and Cohen2009; Rozin et al. Reference Rozin, Lowery, Imada and Haidt1999). 
However, there are very few purity violations that do not also involve bodily norm violations (e.g., sexual behaviors) and/or core disgust elicitors, such as bodily fluids or blood (see P. S. Russell & Giner-Sorolla, Reference Russell and Giner-Sorolla2013 for a review of the issue). Thus, because of this overlap in violations, the bodily norm position may encapsulate the purity position. In contrast, the metaphorical use position argues that when disgust is expressed in the moral realm, individuals are just using the term “disgust” to express their true feeling of anger (Nabi, Reference Nabi2002; Royzman et al., Reference Royzman, Atanasov, Landy, Parks and Gepty2014). Given the importance of examining parallels between the general morality position and the bodily norm position, I will now focus on these two positions.
First, according to the general morality or character position, disgust can be elicited by a range of immoral actions or violations, such as cheating and unfairness. In support of this position, one of the most common definitions of disgust has been proposed by Rozin et al. (Reference Rozin, Haidt, McCauley, Lewis, Haviland-Jones and Barrett1993). They argue that the core function of disgust is to prevent the ingestion of contaminating or offensive objects into the mouth. Extending further, disgust has evolved to include socio-moral elicitors in which disgust is used as a form of social control. At the socio-moral level, disgust is elicited in response to individuals who appear as if they cannot give back to society and/or have deep character flaws. Based on this general morality hypothesis, individuals or groups can elicit disgust when they have done something that is morally wrong or does not fit in with their society.
Supporting this view, Jones and Fitness (Reference Jones and Fitness2008) argue that individuals are physically repulsed by moral transgressors who use deception and/or abuse their power. Therefore, according to this definition, an individual can be deemed as disgusting if they have engaged in despicable behavior. Both accounts, Rozin et al. (Reference Rozin, Haidt, McCauley, Lewis, Haviland-Jones and Barrett1993) and Jones and Fitness (Reference Jones and Fitness2008), make it difficult to distinguish moral disgust from anger by associating disgust broadly with most norm violations and/or deceptive behavior. These definitions are problematic because anger is just as likely to arise in these situations, making it difficult to distinguish anger’s and disgust’s individual effects. More recently, it has been argued that disgust is elicited by someone who has a bad character or has done something that shows bad character. For example, Giner-Sorolla and Chapman (Reference Giner-Sorolla and Chapman2017) found that disgust is elicited by bad character, whereas anger focuses on the event. The researchers demonstrated this across several studies by varying violation types (i.e., indicative of bad character or not) and manipulating relevant factors in an experimental design (i.e., harmful desire and harmful consequences). They found that the desire to cause harm (an indicator of bad character) was predictive of disgust, while harmful consequences were more closely related to anger. Conceptually, the triggers of moral disgust, according to the general morality or bad character position, seem as tautological as the triggers of nonmoral core disgust, since these elicitors just connote that a person or action is really bad.
Additionally, one problem with research in support of this position is that most studies still primarily rely on self-report of emotion terms or facial endorsement, that is, participants responding to whether an emotion expression corresponds to how they are feeling (P. S. Russell & Giner-Sorolla, Reference Russell and Giner-Sorolla2013). This is particularly problematic if researchers are asking people to report their “moral disgust,” as they may be artificially increasing the importance of this term (P. S. Russell et al., Reference Russell, Piazza and Giner-Sorolla2013). Relatedly, it has been found that when the physical sensations or action tendencies of disgust are not measured, anger, rather than disgust, is elicited by divinity violations unrelated to the body or pathogens (Royzman et al., Reference Royzman, Atanasov, Landy, Parks and Gepty2014).
In contrast to the previous arguments, according to the bodily norm position, disgust has a very specific function, which is to govern norms regarding the body, particularly norms about sexual behaviors and eating (e.g., bestiality, incest, and pedophilia). In these contexts, disgust tends to be elicited by a categorical judgment as to whether the behavior is taboo. In such cases, disgust appears to be an unreasoning emotion that gives rise to inflexible thoughts and behaviors, namely avoidance and purification (P. S. Russell & Giner-Sorolla, Reference Russell and Giner-Sorolla2013). People also find it difficult to justify their disgust in these contexts, instead providing tautological reasons, such as: “It’s just disgusting.” By contrast, in response to other socio-moral violations, such as harm and unfairness, disgust appears to heavily overlap with anger and does not appear to have the same detrimental consequences as disgust experienced in reaction to bodily norm violations (such as incest). In terms of consequences, disgust encourages avoidance, purification, and expulsion of objects, other individuals, or groups (see P. S. Russell & Giner-Sorolla, Reference Russell and Giner-Sorolla2013, for a review). Recent evidence also indicates that disgust may be associated with indirect aggression (Tybur et al., Reference Tybur, Molho, Cakmak, Cruz, Singh and Zwicker2020) and nonnormative collective action (Noon, Reference Noon2019).
We should care whether something is truly “morally” disgusting (or whether a different emotion, such as anger, is elicited) because disgust is a “sticky” emotion. For example, Rozin and colleagues have found that disgusting qualities can be transferred to different objects based on the laws of sympathetic magic (Rozin et al., Reference Rozin, Millman and Nemeroff1986; Rozin et al., Reference Rozin, Markwith and Ross1990; Rozin et al., Reference Rozin, Markwith and Nemeroff1992; Rozin & Nemeroff, Reference Rozin and Nemeroff2002). The first law of sympathetic magic holds that “once in contact, always in contact”; therefore, disgusting qualities cannot be eliminated once they have been transferred (e.g., a sweater worn by Hitler or someone with AIDS will remain disgusting) (Rozin et al., Reference Rozin, Millman and Nemeroff1986; Rozin et al., Reference Rozin, Markwith and Nemeroff1992). The second law, the law of similarity, holds that “the image equals the object.” This law can explain why an object that is similar in shape to an inherently disgusting object would also be deemed disgusting (e.g., chocolate that is in the shape of dog poop). These laws of sympathetic magic also imply that the effects of contagion are insensitive to dose (e.g., it doesn’t matter how long the sweater was worn by someone). It has been found that individuals engage in avoidance and purification behaviors when disgusting qualities are transferred to a previously neutral object (e.g., Rozin et al., Reference Rozin, Millman and Nemeroff1986). Additionally, when asked to explain these behaviors, participants admitted that they could not come up with reasons and could not deny that their behaviors were based on irrational thoughts. This evidence suggests that core disgust can have transference or contagion effects. Evidence also suggests that moral contagion effects can occur via disgust (Eskine et al., Reference Eskine, Novreske and Richards2013).
Specifically, Eskine and colleagues found that after direct or indirect contact with someone who engaged in an immoral transgression (e.g., lying or cheating), people experienced more guilt following contact, suggesting a moral transfer effect. This effect was moderated by disgust sensitivity (i.e., an individual difference in the propensity to experience disgust); in other words, those with higher levels of disgust sensitivity were more likely to experience the moral transfer of guilt than those with lower levels of disgust sensitivity. However, the researchers did not measure whether feelings of state disgust were experienced by participants and/or the original transgressor, which is necessary for future research to establish whether moral disgust truly transfers interpersonally, that is, between individuals. This is important since recent evidence suggests that these moral contagion effects may occur because of reputation concerns, not necessarily because of disgust (Kupfer & Giner-Sorolla, Reference Kupfer and Giner-Sorolla2021). Specifically, these authors found that participants avoided morally tainted objects because of concerns about how they would appear to others if the object was on public display, more so than from concerns about coming into contact with the disgusting objects.
Another reason to be concerned with whether something is actually morally disgusting is that disgust has been found to have an automatic influence on moral judgment. For example, Wheatley and Haidt (Reference Wheatley and Haidt2005) elicited unconscious disgust using hypnosis, which made moral judgments more severe. Similarly, Schnall et al. (Reference Schnall, Haidt, Clore and Jordan2008) found that disgust from an outside source, that is, ambient disgust, had the same effect on moral judgments. However, a meta-analysis suggests that these effects may be smaller than previously thought, or even nonexistent (Landy & Goodwin, Reference Landy and Goodwin2015). Indeed, large-scale replication studies have failed to replicate the original effect, when disgust is elicited by taste (Ghelfi et al., Reference Ghelfi, Christopherson, Urry, Lenne, Legate, Fischer, Wagemans, Wiggins, Barrett, Bornstein, de Haan, Guberman, Issa, Kim, Na, O’Brien, Paulk, Peck, Sashihara and Sullivan2020). This ambiguous evidence regarding whether disgust impacts or amplifies our moral judgments casts doubt on the claim that disgust is directly connected to moral concerns. In summary, disgust may sometimes co-occur with moral concerns, but its standing as a moral emotion is open to question.
10.3 Self-Conscious Emotions
As with the other-condemning emotions, researchers have questioned the distinctiveness of the self-conscious emotions. Additionally, there has been mixed evidence in terms of whether the self-conscious emotions are always associated with positive consequences. Emotions that belong to this family of (moral) emotions include shame, regret, guilt, pride, and embarrassment. These emotions are secondary in nature, meaning they are more complex, normally less automatic, and require self-awareness (Haidt, Reference Haidt, Davidson, Scherer and Goldsmith2003; Tangney et al., Reference Tangney, Stuewig and Mashek2007). We typically feel these emotions when we have experienced some kind of self-failure (for example, feeling as if you have not achieved something or failed to act in the way that you should have), which can be moral in nature (Tracy & Robins, Reference Tracy and Robins2006). We feel these emotions when we have either engaged in some moral wrongdoing (e.g., shame and guilt) or when we have acted in a morally superior way (e.g., pride). Extending further, we can feel these self-conscious emotions when reflecting on our own group’s present or past behavior (Branscombe & Doosje, Reference Branscombe and Doosje2004; Lickel et al., Reference Lickel, Steele and Schmader2011).
This section will focus on comparing shame and guilt’s elicitors and consequences. Evidence regarding their distinctiveness is mixed. There is an unresolved debate as to whether shame is always detrimental, and guilt always beneficial, as originally proposed by Tangney and colleagues (Reference Tangney, Stuewig and Mashek2007). I will not discuss pride here because it is elicited when we feel that we have done something good (see Tracy & Robins, Reference Tracy and Robins2007 for a review), which is different from the triggers of shame and guilt. Embarrassment and regret appear to have too much overlap with shame and guilt methodologically and conceptually, often being used as synonymous terms with shame and guilt, respectively (e.g., Lickel et al., Reference Lickel, Schmader, Curtis, Scarnier and Ames2005; Noon, Reference Noon2019). Thus, embarrassment and regret will not be evaluated here either.
One of the most prominent outstanding issues is whether shame and guilt are the same emotion (see Teroni & Deonna, Reference Teroni and Deonna2008 for a review), as Tompkins (Reference Tompkins1963) famously proposed. There are some important similarities between shame and guilt. First, they are often considered to be complex emotions that are uniquely experienced by humans and require some form of self-awareness or reflective thought (Tangney et al., Reference Tangney, Stuewig and Mashek2007). Second, the self is the focus of the eliciting event, and both emotions provide immediate feedback on or punishment for one’s behavior (Tangney et al., Reference Tangney, Stuewig and Mashek2007). In other words, people feel bad about what they have done. Third, they are often triggered by self-failures (Gausel & Leach, Reference Gausel and Leach2011). Finally, these emotions typically develop later and are believed to be secondary emotions, as they are tied to more complex goals and behaviors (Izard, Reference Izard1971; Tangney & Dearing, Reference Tangney and Dearing2002).
By contrast, there are some notable differences between shame and guilt. It has been found that shame relates to proscriptive morality (i.e., what we should not do, avoidance) and guilt relates to prescriptive morality (i.e., what we should do, approach) (Sheikh & Janoff-Bulman, Reference Sheikh and Janoff-Bulman2010). This relationship was found at the trait level, where the authors found a positive correlation between the Behavioural Inhibition System and shame proneness and between the Behavioural Approach System and guilt proneness. Also, at the state level, priming a proscriptive orientation was found to increase shame, and priming a prescriptive orientation was found to increase guilt. Finally, when the authors manipulated the type of violations, proscriptive violations predicted feelings of shame and prescriptive violations predicted feelings of guilt.
People also frequently anticipate that they will feel either shame or guilt. In the extended theory of planned behavior, anticipated regret or guilt and moral norms are additional factors that can explain whether we engage in certain behaviors (Rivis et al., Reference Rivis, Sheeran and Armitage2009). A recent systematic review found that in the context of women’s reactions to breastfeeding, which is often perceived as a moralized issue, women commonly anticipate and experience shame and guilt about the way they choose to feed their baby and about public breastfeeding (P. S. Russell et al., Reference Russell, Smith, Birtel, Hart and Golding2021). Women experience guilt when they feel as if they have not acted in the way that they should have – for example, if they have not reached their feeding goals, or if they feel like a bad mother. This evidence suggests that shame and guilt are focused on different kinds of injunctive norms.
Additionally, shame and guilt show some unique parallels with disgust and anger, respectively, which suggests that shame and guilt are distinct. For instance, shame and disgust are believed to be avoidance emotions, while guilt and anger are approach emotions (Leach, Reference Leach, Woodyat, Worthington, Wenzel and Griffin2017; P. S. Russell & Giner-Sorolla, Reference Russell and Giner-Sorolla2013). It has also been argued that shame and disgust are bodily-focused emotions, while guilt and anger are focused on harm and fairness (Nussbaum, Reference Nussbaum2004). Like the other-condemning emotions, self-conscious emotions can be experienced in the moment as states and can exist as dispositions or traits (i.e., shame or guilt proneness; Cohen et al., Reference Cohen, Wolf, Panter and Insko2011). Recent evidence has also shown that core disgust sensitivity and contamination concerns are related to shame proneness whereas moral disgust sensitivity is related to guilt proneness (Terrizzi Jr. & Shook, Reference Terrizzi and Shook2020). Research has also found that anger and disgust can socially cue guilt and shame, respectively (Giner-Sorolla & Espinosa, Reference Giner-Sorolla and Espinosa2011). Specifically, it was found that after exposure to an angry expression, participants reported feeling more guilt, while after exposure to a disgusted facial expression, participants reported feeling more shame. Due to parallels between disgust and shame, this evidence may suggest that, like disgust, shame is a less typical moral emotion. By contrast, like anger, guilt may be a more typical moral emotion.
Until recently, shame has been positioned as a bad or detrimental emotion in comparison to guilt. In terms of elicitors, some have argued that shame and guilt are triggered by similar types of moral violations, but what differs is the appraisal of the situation or wrongdoing (Tangney & Dearing, Reference Tangney and Dearing2002; Tangney et al., Reference Tangney, Stuewig and Mashek2007). Specifically, these findings indicate that shame is more focused on global negative beliefs about the self (“I am bad”), while guilt is more focused on the action or event (“I did a bad thing”). Additionally, it has been found that shame is associated with internal, stable, and uncontrollable attributions (i.e., lack of ability as the cause of self-failure), while guilt is associated with internal, unstable, and controllable attributions (i.e., lack of effort as the cause of self-failure; Tracy & Robins, Reference Tracy and Robins2006). It has also been argued that shame is triggered by concerns of image or reputation (Sznycer, Reference Sznycer2019). Others have contended that an important distinction between the two emotions is that shame relates to values whereas guilt is associated with norm violations (Teroni & Deonna, Reference Teroni and Deonna2008). These findings do not fully align with the categorical distinction between global self versus action appraisals originally proposed by Tangney but instead suggest more variability in terms of the elicitors of these emotions.
An additional distinction that requires more attention is whether guilt and shame relate to different behaviors or consequences. The general assumption is that shame is an avoidance emotion linked with hiding, denying, and escaping (Tangney & Dearing, Reference Tangney and Dearing2002), while guilt is an approach emotion linked with reparative and confession behaviors (Lickel et al., Reference Lickel, Schmader, Curtis, Scarnier and Ames2005; Tangney et al., Reference Tangney, Stuewig and Mashek2007). Shame proneness (i.e., an individual difference or disposition to feel shame more intensely) is also more closely linked to detrimental outcomes for the self, such as poor self-esteem, depression, and eating disorders (see Tangney et al., Reference Tangney, Stuewig and Mashek2007 for a review). Additionally, shame proneness increases the likelihood of engaging in risky behaviors, while guilt has the opposite effect (Tangney et al., Reference Tangney, Stuewig and Mashek2007). In comparison to shame, guilt has a long-standing history of being tied to compensatory behaviors or apologies (Doosje et al., Reference Doosje, Branscombe, Spears and Manstead1998) and collective action (Becker et al., Reference Becker, Tausch and Wagner2011; Tausch et al., Reference Tausch, Becker, Spears, Christ, Saab, Singh and Siddiqui2011). Guilt has also been found to relate to social improvement (Gausel & Leach, Reference Gausel and Leach2011). However, guilt has also been linked to less functional behaviors, such as self-punishment when someone feels they have done something wrong (Inbar et al., Reference Inbar, Pizarro, Gilovich and Ariely2013). Additionally, some evidence has shown that after controlling for shame, guilt does not always have as strong an association with reparations and apologies as other evidence suggests (Giner-Sorolla et al., Reference Giner-Sorolla, Piazza and Espinosa2011; Iyer et al., Reference Iyer, Schmader and Lickel2007).
Scholars have also suggested that shame may have more diverse relationships with behavior and motivation than previously thought (Gausel & Leach, Reference Gausel and Leach2011). Specifically, in their review, Gausel and Leach (Reference Gausel and Leach2011) found that, in response to moral self-failures, when an individual focuses on specific events or attributions, this appraisal triggers feelings of shame and the need to self-improve. In contrast, when an individual has experienced moral self-failure and is focused on the global self, this appraisal triggers feelings of inferiority and defensive behaviors, such as avoidance. Therefore, according to this view, it is feelings of inferiority, not shame, that result in negative defensive behaviors. Further clarifying the role of guilt and shame in constructive behaviors, a meta-analysis by Leach and Cidam (Reference Leach and Cidam2015) identified that shame is more likely to be associated with positive outcomes when the situation seems reparable, in terms of either cause or consequence, while guilt is associated with positive outcomes regardless of how reparable the situation is.
Related to this point, when moral shame (triggered by one’s actions violating moral norms) and image shame (triggered by a tarnished social image) are distinguished from one another, this distinction provides evidence against the claim that shame always leads to negative behavioral effects (Rees et al., Reference Rees, Allpress and Brown2013). Specifically, in this research, it was found that image shame triggered from an in-group’s historical transgression (e.g., Germans’ role in the Holocaust) was associated with social distance from an unrelated victimized minority group (i.e., foreigners). By contrast, moral shame was found to be associated with support for foreigners. Similarly, it has been found that even longitudinally, moral shame and image shame are predictive of different types of behaviors, with image shame being related to negative behaviors and moral shame being related to positive behaviors (Allpress et al., Reference Allpress, Brown, Giner-Sorolla, Deonna and Teroni2014). In this research, it was also found that guilt was inconsistently related to positive behaviors.
This evidence suggests that shame may not always lead to negative outcomes in response to moral self-failures; thus, the damage to the self and social relations may not always occur. As a result, shame can be a functional moral emotion when the situation seems reparable. Also, like anger, the behavioral responses that are associated with guilt vary, but the conclusion seems to be the same as for anger, namely, that the overall response to guilt can be positive under certain circumstances. Cumulatively, the evidence suggests that when shame is focused on moral norms specifically, rather than one’s image or reputation, it can be considered a moral emotion, and that guilt is normally a moral emotion, due to its elicitors and consequences.
10.4 Positive Emotions
Up to this point, the chapter has focused on the moral status of several negative emotions, examining the similarities and differences between emotions within the moral families. It is now important to examine whether any positive emotions – that is, emotions that typically feel and/or do good – can be moral emotions. Also, it is important to examine whether these emotions can be considered separate constructs. Specifically, I will now turn toward comparing two other-suffering emotions (empathy and compassion) and two other-praising emotions (awe and elevation). Gratitude can also be considered to belong to the latter family but will not be discussed here because it is elicited by the perception that someone else has done something good or beneficial for the individual experiencing gratitude (Haidt, Reference Haidt, Davidson, Scherer and Goldsmith2003). Thus, the self is more involved in the experience of gratitude than with the experience of awe and elevation, which are more focused on the object or another individual.
10.4.1 Empathy and Compassion
Empathy and compassion are generally considered to be other-suffering emotions, meaning they get us to focus on and care for others. Like anger, empathy is an established emotion in the field of moral psychology (Haidt, Reference Haidt, Davidson, Scherer and Goldsmith2003). At face value, many consider empathy to be central to morality. However, a central problem is that there have been far too many definitions of empathy, which makes the construct difficult to study. For example, one review of the prior literature identified 43 different definitions of empathy (Cuff et al., Reference Cuff, Brown, Taylor and Howat2016). Additionally, among these definitions of empathy, there is a large amount of overlap with the constructs of compassion and sympathy.
Specifically, empathy can be defined as feeling the same or similar emotion as another individual or group, or understanding how another person is feeling (Decety & Jackson, Reference Decety and Jackson2004; Eisenberg et al., Reference Eisenberg, Fabes, Spinrad, Eisenberg, Damon and Lerner2006). Empathy is thus typically divided into two components: affective empathy, which is feeling the same emotion as another person, and cognitive empathy, which is understanding what another individual or group is feeling (Cuff et al., Reference Cuff, Brown, Taylor and Howat2016). However, the latter (cognitive empathy) overlaps with the cognitive construct of perspective taking, that is, understanding what another individual or group is thinking, which is not an emotion. Compassion is conceptualized as an emotion we feel when we are concerned for another person’s suffering (Goetz et al., Reference Goetz, Keltner and Simon-Thomas2010). Overlapping with compassion, sympathy is defined as feeling compassionate or concerned for another individual’s or group’s current state (Eisenberg, Reference Eisenberg1988). Therefore, compassion seems to subsume feelings of sympathy. Compassion is believed to be different from empathy in that individuals are not experiencing the same emotion as the target person (Tangney et al., Reference Tangney, Stuewig and Mashek2007). However, empathy and compassion still overlap, in that both involve a concern for, or focus on, what another individual or group is feeling.
There is also overlap in the consequences of compassion and empathy, in that both emotions have been linked to positive social outcomes. Generally, empathy (and also perspective taking) is believed to be essential for social relations, encouraging prosocial behaviors and discouraging hostile action (Batson & Ahmad, Reference Batson and Ahmad2009). Empathy facilitates the humanizing of others; thus, it can be seen to oppose other-condemning emotions, particularly disgust and contempt, which encourage dehumanization (Giner-Sorolla & P. S. Russell, Reference Giner-Sorolla and Russell2019). For example, in intractable conflicts, empathy and compassion have been shown to reduce aggression, increase positive attitudes as well as helping behavior, and increase the desire for reconciliation and forgiveness (Klimecki, Reference Klimecki2019). Empathy can lead to helping behaviors, situational attributions, and seeing more similarities with others, or self-other overlap (Batson & Ahmad, Reference Batson and Ahmad2009). Empathy can also reduce stereotyping, prejudice, and hostile action (Batson & Ahmad, Reference Batson and Ahmad2009).
However, other evidence has found that people may find it difficult to experience empathy, or may even avoid it, because it is physically, emotionally, and cognitively taxing (Cameron et al., Reference Cameron, Hutcherson, Ferguson, Scheffer, Hadjiandreou and Inzlicht2019; Hodges & Klein, Reference Hodges and Klein2001). Additionally, we often experience “empathy failures” when people are dissimilar to us and/or are rivals (Bloom, Reference Bloom2017; Zaki & Cikara, Reference Zaki and Cikara2015).
Compassion has positive effects similar to those of empathy. For example, compassion has been shown to enhance moral expansiveness, that is, including more beings in our moral circle (Crimston et al., Reference Crimston, Blessing, Gilbert and Kirby2022). However, “compassion fade” can occur (i.e., compassion can be reduced or eliminated) when there are multiple victims rather than a single victim (Västfjäll et al., Reference Västfjäll, Slovic, Mayorga and Peters2014). Thus, in conclusion, there is considerable overlap between the other-suffering emotions, and they do not always lead to the best social outcomes, either cognitively or behaviorally. As a result, even though empathy and compassion can be moral emotions, as they often have positive consequences, there are instances when people do not feel these emotions despite typical triggers.
10.4.2 Awe and Elevation
This then leads to the final type of moral emotions, the other-praising emotions. However, it is questionable whether the other-praising emotions of elevation and awe are unique moral emotions or rather fall under the umbrella term of “kama muta” (see Bartoș et al., Reference Bartoș, Russell and Hegarty2020; Zickfeld et al., Reference Zickfeld, Schubert, Seibt and Fiske2017). Kama muta is similar to feeling moved, and the most typical feature of this experience is the heightened sense of communal sharing (Fiske, Reference Fiske2020). It is also described as an emotion that elicits physiological sensations like those of elevation and awe, such as chills and a warm feeling in the chest (Fiske, Reference Fiske2020).
By contrast, some have argued that elevation, awe, and admiration are distinct in terms of what elicits them, how they are experienced, and their consequences (Haidt, Reference Haidt, Davidson, Scherer and Goldsmith2003; Onu et al., Reference Onu, Kessler and Smith2016). Admiration is an other-focused emotion (Onu et al., Reference Onu, Kessler and Smith2016). However, admiration is distinct in that it is believed by some to be elicited when we see someone exceed expectations of skill or talent (Algoe & Haidt, Reference Algoe and Haidt2009; Onu et al., Reference Onu, Kessler and Smith2016). Thus, admiration is often focused on competency rather than morality, and as such, it seems less relevant to morality. It is also questionable whether it is elicited distinctly from the other other-praising emotions of elevation and awe, or whether it is just part of these experiences, as there is overlap in terms of elicitors and consequences. Admiration has been linked with prosocial outcomes and has been shown to reduce prejudice. Specifically, admiration facilitates social change (Sweetman et al., Reference Sweetman, Spears, Livingstone and Manstead2013). Admiration also underlies reductions in both sexual and racial prejudice through intergroup contact (Seger et al., Reference Seger, Banerji, Park, Smith and Mackie2017). Since some have questioned whether admiration is a moral emotion at all, and admiration appears to overlap with awe and elevation, only the latter two emotions will be compared in further detail.
Both awe and elevation are believed to operate as a “hive switch,” encouraging people to be less selfish and more prosocial by broadening attention and regard for others (Haidt, Reference Haidt2012; Pohling & Diessner, Reference Pohling and Diessner2016). In terms of the experience or elicited bodily sensations, they are both associated with feeling moved in some way and described as feeling warm in the chest, tingly, and having goosebumps (Algoe & Haidt, Reference Algoe and Haidt2009). Awe is elicited in response to perceived vastness or by things that transcend previous experiences, or more specifically exceed our expectancies (Gocłowska et al., Reference Gocłowska, Elliot, van Elk, Bulska, Thorstenson and Baas2023). It can be elicited by a range of objects, including nature, landscapes, art, music, and religious experiences. Keltner and Haidt (Reference Keltner and Haidt2003) postulated that the awe experience can be categorized into five different kinds or flavors: threat, beauty, ability, virtue, and supernatural causality. What is striking about the flavors of awe (Keltner & Haidt, Reference Keltner and Haidt2003) is that ability overlaps with admiration’s elicitors and virtue overlaps with elevation’s elicitors (described below), which captures the clear overlap and co-occurrence of these emotions. Thus, there is overlap between awe and elevation, and between awe and admiration. What is also apparent from awe’s elicitors or flavors is that the experience of awe is not always entirely positive, since it is connected to feelings of threat (Chaudhury et al., Reference Chaudhury, Garg and Jiang2021). Finally, awe is rarely elicited by what other people are doing or have done, or by moral concerns; instead, it can be triggered by physical objects (e.g., nature).
By contrast, elevation is elicited by witnessing acts of uncommon goodness or moral beauty, that is, someone acting in an exceptionally moral way (Haidt, 2003). More specifically, elevation is triggered when we see someone else assist another individual who is “poor, sick, or stranded in a difficult situation” (Thomson & Siegel, 2017, p. 629). As a result, it could be argued that elevation is a mixed emotion: even though it is primarily positive, there is a negative undertone, as it involves rising above some negative experience. It also has the potential to trigger self-comparisons that are not always positive, as we may not feel that we are good enough in comparison to the hero who elicits elevation. Prior research has already identified that the experience of elevation is moderated by one’s own moral identity (Aquino et al., 2009). Specifically, it was found that those who are higher in moral identity are more likely to experience elevation intensely. This suggests that when unflattering self-comparisons are possible, those with higher moral identity may be more threatened, and elevation is less likely to be experienced, or may even backfire.
In terms of consequences, awe is characterized by numerous positive outcomes (see Gottlieb et al., 2018, and Keltner & Haidt, 2003, for reviews). First, awe triggers a need to accommodate the circumstances one has witnessed, for example, by eliciting the need to include more beings in our moral or social circle. Second, awe increases critical thinking, promotes consideration of additional perspectives, and can trigger feelings of humility. Third, awe decreases selfishness and triggers the need to connect with others. Fourth, awe increases positive feelings and well-being. Awe has also been shown to increase environmental concern (Yang et al., 2018) and charitable giving (Guan et al., 2019). Awe also triggers the belief that time is slowing down and, as a result, increases one’s willingness to dedicate more time to others (Rudd et al., 2012). Recently, awe has been shown to reduce ideological conviction and increase willingness to associate with others holding opposing views (Stancato & Keltner, 2019). Therefore, even though awe in some instances can be a nonsocial rather than moral emotion based on its elicitors, it can result in numerous positive social outcomes that foster connectedness and social harmony. In terms of doing good, then, awe does seem to be a good candidate for a moral emotion.
Like awe, elevation has been shown to have numerous positive outcomes. Elevation encourages helping behaviors (Schnall & Roper, 2012; Van de Vyver & Abrams, 2015), imitation of positive role models (Diessner et al., 2013), and the desire for greater overlap with others (see Pohling & Diessner, 2016; Thomson & Siegel, 2017; Van de Vyver & Abrams, 2015, for reviews). Haidt has argued that elevation operates in opposition to disgust in social relations (Haidt, 2003; Lai et al., 2014). For example, previous research found that elevation reduces sexual prejudice but not racial prejudice, which the authors argued was explained by disgust being the basis of sexual prejudice (Lai et al., 2014). However, recent research aiming to replicate this effect found that admiration is also effective at reducing sexual prejudice (Bartoș et al., 2020). Interestingly, elevation was found to be positively associated with disgust in one of the studies. This latter result may support the idea that elevation is a mixed emotion and that, to see the true benefit of elevation, negative emotions such as disgust need to be diminished. Future research may focus on elevation and awe as prejudice reduction tools, to gain a better understanding of when, where, and why these emotions lead to positive social outcomes.
In summary, a common feature of awe and elevation is that they are triggered by witnessing something exceptional or unusual. What differentiates these emotions is their focus: Elevation focuses on morality, whereas awe often focuses on natural objects and scenes, such as landscapes. Thus, in terms of elicitors, awe shows less of an obvious connection with morality. However, both emotions are related to feelings of warmth, through their experience (i.e., feeling chills and goosebumps) and the consequences that they typically elicit. It could also be argued that admiration is commonly triggered within both elevation and awe experiences. For example, when we witness someone engage in a selfless act (i.e., a situation that can trigger elevation), it is virtually impossible not to feel admiration as well. The same is true for awe: when viewing an exceptionally beautiful scene or work of art, we also come to admire the space that we are in or the artwork itself. Thus, in this family of emotions, there is a large amount of overlap in terms of both elicitors and consequences. From this one could conclude that these emotions fall under the umbrella term of “kama muta” (i.e., feeling moved) and thus are equally beneficial in terms of promoting positive social outcomes. In short, while both elevation and awe are relevant to morality because of their consequences, the elicitors of awe are not strongly related to morality.
10.5 Conclusion
To conclude, this chapter has extended previous models of moral emotions by highlighting the importance of examining both the cognitive and the behavioral consequences of emotions when determining the degree to which an emotion can be moral. Even though there have been some notable findings regarding the likely consequences of the moral emotions examined in this chapter, it is evident that further research is needed on this topic, shifting focus away from what elicits moral emotions and toward their consequences.
The unique components of four different emotion families have been examined in this chapter: other-condemning, self-conscious, other-suffering, and other-praising (see Table 10.1 for a summary). Within the other-condemning emotion family, contempt overlaps considerably with anger and disgust. Disgust seems to be associated with morality, but according to the qualities of moral emotions proposed here, it is not typically a moral emotion, due to its negative consequences. By contrast, anger is often a moral emotion with regard to both its elicitors and its consequences. Future research should endeavor to focus on the positive consequences of anger and when these types of effects can be cultivated. Among the self-conscious emotions of shame and guilt, neither emotion shows a straightforward path to positive outcomes. Of the two, however, guilt still seems more likely to lead to moral consequences and improved social relations. For empathy and compassion, since there is still so much ambiguity in defining what these emotions even are, it is difficult to determine whether they are moral emotions with positive consequences. Also, as reviewed here, there is growing evidence of the potential negative impact of experiencing empathy or compassion. Finally, elevation and awe show considerable overlap in their consequences, often leading to prosocial outcomes, but they differ in their elicitors: awe is rarely triggered by moral situations, whereas elevation is often elicited by moral situations that invite self-comparisons that can backfire.
In summary, from this analysis of the moral emotions, there appears to be more overlap and ambiguity for the positive emotions than for the negative emotions. It is also evident that anger and guilt are the best candidate moral emotions in terms of their tendency to foster improved social relations, which aligns with previous analyses of moral emotions. In comparison, some have positioned emotions like empathy, compassion, shame, and disgust as essential to morality. However, other evidence suggests that the moral character of these emotions is questionable. Hopefully, this review will encourage further research on both the cognitive and behavioral consequences of moral emotions.
There is general agreement that empathy is a central aspect of our humanity. Indeed, empathy plays a vital role in our interpersonal life, from bonding between parent and child, to enhancing affiliation among conspecifics, to understanding others’ subjective psychological states. Empathy motivates various kinds of prosocial behaviors, such as comforting and helping. It can also, in certain contexts, inhibit interpersonal aggression. Empathy increases trust, rapport, and affinity, and there is a functional relation between empathy and guilt. Empathy can promote collective action by enhancing other-regarding motives and reducing self-regarding concerns, thus fostering cohesiveness and cooperation within human societies. However, contrary to what is commonly assumed, empathy is not always a driver of moral behavior. Here, morality is viewed as a set of biological and cultural adaptations, including values, norms, and practices, that evolved to regulate selfishness and facilitate cooperation (Curry, 2016).
The wealth of empirical findings from the behavioral and social sciences demonstrates a complex relationship between morality and empathy (Decety & Cowell, 2014). Indeed, at times, empathy can interfere with morality by introducing partiality toward an individual, countering the moral principle of justice for all. Empathy is less likely to be felt for groups than for identifiable victims (Västfjäll et al., 2014). It gives higher priority to friends than to strangers. Empathy is parochial, favoring in-group over out-group members (Bruneau et al., 2017). However, empathy can provide the emotional fire and the impetus to relieve a victim’s suffering. It can counter rationalization and derogation (Decety & Cowell, 2015). All of these examples, whether drawn from laboratory experiments or from real-world situations, reveal a complex functional relationship between affect, cognition, empathy, and moral decision making.
Empathy is costly, in that it draws upon attentional and emotional resources, but it is also beneficial in maintaining social relationships and serving the needs of others (DeSteno, 2015). The empathy that we experience as a balance of these costs and benefits is not always under our control; unconscious mechanisms tune its responsiveness. While we may deliberately choose whether or not to feel empathy for a stranger, caring for our kin, close friends, and those we associate with is unavoidable, almost like an impulse (Hodges & Klein, 2001). However, some have argued that being empathetic can also result from motivated choices to prioritize and balance competing goals within specific social contexts (Cameron, 2018).
In this chapter, I propose that empathy is a dynamic interpersonal phenomenon that encompasses three interacting functional components:
(1) Emotional contagion (affect sharing or emotional empathy), which is a quasi-instantaneous way to acquire and share social information. Such transmission of information between individuals is an adaptive evolutionary mechanism for individuals in danger;
(2) Empathic concern (sympathy or compassion), which piggybacks on the caring motivation, a specific biological adaptation that is both narrow in scope and yet highly flexible; and
(3) Perspective taking, the capacity to make inferences about and represent one’s own and others’ intentions, emotions, beliefs, and motives.
I draw on evolutionary theory, psychology, neuroscience, and behavioral economics to demonstrate that emotional contagion is unconsciously socially modulated. Empathic concern, by contrast, is relatively selective with regard to the input to which it responds and particularly sensitive to stimuli that have been important in the evolutionary past. As a corollary, the degree to which we experience empathy is partly constrained by information-processing biases that channel certain kinds of environmental input selected by the ecological pressures tied to our evolutionary history. These limits express themselves in unconscious, rapid, almost automatic tendencies to care more for some people and less for others, or for one person and not for many. Understanding the ultimate causes and proximate mechanisms of empathy allows us to characterize the kind of information that gets prioritized as input and the kinds of behaviors it prompts as output. It also contributes to identifying empathy’s limits and the situational factors that exacerbate empathic failure, which is essential if we want to mitigate our cognitive biases. Together, this knowledge is useful at both a theoretical and a practical level: It provides information about how to reframe situations to activate alternative evolved systems in ways that promote normative moral conduct compatible with our current societal aspirations.
As a first step, I describe the architecture of empathy and how it serves a motivational function to value others’ welfare.
11.1 The Architecture of Empathy
The word “empathy” has been used as an umbrella under which definitions vary enormously. This makes it difficult to determine which psychological function empathy relates to and which role it plays in morality (Batson, 2009). Differentiating conceptualizations is therefore necessary, because they reflect distinct psychological processes that vary widely in their phenomenology, functions, and evolved biological mechanisms. Moreover, inconsistent definitions of empathy have a negative impact on both research and practice, especially in the domains of law, medicine, education, and decision making (Decety, 2020).
Phenomenologically, the notion of empathy reflects an ability to perceive and be sensitive to the emotional states of others, often combined with a motivation to care about their well-being. This definition, although useful in interpersonal communication, remains vague about the underlying psychological mechanisms and their biological instantiation. Progress over the past decades in social neuroscience has greatly contributed to clarifying the functions of empathy and their underlying component processes. This discipline is a resolutely interdisciplinary enterprise, drawing on evolutionary biology, behavioral ecology, neurobiology, psychology, anthropology, sociology, and behavioral economics, and on the vertical integration of multiple levels of analysis, from the molecular to the socio-cultural (Cacioppo & Decety, 2011).
Theoretical and empirical work from social neuroscience converges to characterize empathy as a multidimensional phenomenon reflecting a capacity to share, understand, and respond to others’ emotions. Empathy comprises several evolved functional components that are emotional (sharing affect with another), cognitive (understanding the other’s subjective state), and motivational (feeling concerned for another) (Decety & Jackson, 2004). These components flexibly interact with one another and operate by way of automatic (bottom-up) and controlled (top-down) processes. Yet they can be dissociated, as they rely on partially separable information-processing neural systems in the brain and underlie different psychological functions (Shdo et al., 2018). This model of empathy combines both representational aspects and processes involved in decision making.
11.2 The Adaptive Value of Empathy
To properly understand empathy and its contribution to moral decision making, we must obtain both ultimate and proximate explanations. Ultimate explanations are concerned with the fitness consequences of a trait or behavior – the why question. Proximate explanations address the way in which that functionality is achieved – the how question. Proximate causes are important, but they only tell part of the story. Ultimate explanations go below the surface, focusing on evolutionary functions. Ultimate and proximate explanations are not opposite ends of a continuum, and we should not choose between them (Scott-Phillips et al., 2011); though distinct from one another, they are complementary.
An important point to keep in mind is that adaptations must be understood in terms of survival and reproduction in the historical environments and under the ecological constraints in which they were selected. Many of our cognitive biases are heuristics – simple, approximate, efficient rules or algorithms, learned or hard-coded by evolutionary processes. Our decision biases, errors, and misjudgments are not necessarily flaws. Rather, they are design features with which natural selection has equipped Homo sapiens to make decisions in ways that consistently enhanced our hominid ancestors’ inclusive fitness (Kenrick & Griskevicius, 2013). While these heuristics generally promote utility, they are fallible in predictable ways, and they can misfire in our contemporary socio-ecological context.
The essence of empathy, and its primary form across many species, is the communication of an emotional state from one individual to another. Affective signaling and communication between conspecifics contribute to inclusive fitness by facilitating coordination and cohesion, increasing defense against predators, and bonding individuals to one another within a social group. It is a widespread phenomenon in a great many species (Mendl et al., 2010). Discriminating and communicating emotions to conspecifics (at least on the main dimensions of valence and intensity) allows the facilitation and regulation of social interactions. When emotions are transmitted from one individual to the next by vocal, facial, or chemical channels, information transfer and coordination between group members are accelerated, and decision making is facilitated (Briefer, 2018). This spontaneous transfer of internal states is fundamental for survival and social group cohesion. However, affect sharing does not lead to one single kind of decision making when it comes to moral judgment or conduct (Loewenstein & Small, 2007).
Moreover, our capacity to experience affect, important in guiding our judgments and decisions and driving our behavior, is limited. Many situations do not induce much distress in the observer. Some failures to experience empathy for others in distress or in need may result from not cognitively representing their situation and suffering in a meaningful way (Slovic, 2007). The capacity for perspective taking, which may be unique to our species, can expand the scope of affect sharing. Importantly, attention seems to be a necessary requirement for empathic feelings. One study placed participants in a position of reacting empathically to children in need of help and manipulated whether they could visually attend to a single victim or were distracted by several others (Dickert & Slovic, 2009). Empathy responses were lower and reaction times were longer when the photo of a child was presented with distractor photos. When information about children is processed in a way that fosters vivid representations, affective reactions are stronger than when this information is processed in a detached, abstract, or intangible way.
Behavioral economics studies have shown that people donate much more after reading the story of one victim than a story about many victims (Small et al., 2007). Identified, single victims arouse empathy and personal distress to a greater extent than statistical victims. This effect has been attributed to the failure to bring meaning to abstractly represented large numbers or statistical victims, as compared to identifiable victims, and may explain why disasters that cost a large number of lives seem to evoke less of a helping response than disasters that befall an individual (Fetherstonhaugh et al., 1997; Västfjäll et al., 2014).
Affective information influences decision processes and subsequent costly behavioral responses. For example, Kogut and Ritov (2005) asked participants how much money they would give to help develop a drug that would save the life of one child or of eight children. They found that participants were willing to donate the same amount. But when the single child’s name, age, and picture were shown, donations to the single child dramatically increased, and this effect was mediated by the participants’ reported empathy. Showing the photo of a young child has a powerful impact in evoking strong emotions. On September 2, 2015, the photo of a Syrian child lying face-down on a Turkish beach filled social media and the front pages of newspapers worldwide. This photo of a single child had more impact than statistical reports of hundreds of thousands of deaths of Syrians fleeing the civil war, including on donations to the Red Cross (Slovic et al., 2017). It is as if people who had been unmoved by the rising death toll in Syria suddenly came to care much more after seeing this photograph.
11.3 Proximate Mechanisms of Affect Sharing
Emotional contagion that leads to affect sharing is an important determinant of prosocial behavior. However, it can produce different decisions, depending on intra- and interpersonal factors and on social context. The resulting motivations can even lead to a failure to offer assistance. In highly arousing situations, people who are oversensitive may become upset and distressed. Emotional distress may result in withdrawing from the stressor, decreasing prosocial behavior, or in helping the other merely to reduce one’s own discomfort (Tice et al., 2001). The reduction of personal distress can be a form of emotion regulation, motivating actions that make oneself feel better. Thus, whether emotional contagion elicits an egoistic or an altruistic motivation remains an important question, but the two motivations are difficult to distinguish empirically. This is an ongoing debate in social psychology (see Cialdini et al., 1997 vs. Batson et al., 1981).
The neurobiological mechanisms of emotional contagion in nonhuman animals, and in humans, are not entirely understood, with the exception of the contagion of stress and pain. The former results in the activation of the autonomic nervous system and the hypothalamic-pituitary-adrenocortical axis (Engert et al., 2019). In rodents, perceiving a conspecific in physical distress can facilitate social approach and helping behavior (Langford et al., 2010). Rats help cage mates escape from a transparent restrainer, and the helping rat engages in such prosocial behavior even if it does not gain any social reward from it (Bartal et al., 2011). Blocking emotional contagion with an anxiolytic agent in these rats inhibits their helping behavior, which demonstrates the importance of some level of vicarious distress in prompting the prosocial response (Ben-Ami Bartal et al., 2016). Another series of experiments involved a pool of water in which one rat was made to swim for its life while another rat was in an adjacent cage (Sato et al., 2015). The results showed that rats quickly learned to open the door to rescue their cage mates from the pool of water. Importantly, the rats did not open the door when the cage mate was not in distress. Overall, these results indicate that the decision to open the door to liberate the cage mate was elicited by processing distress cues.
Prairie voles match the anxiety-related behavior and corticosterone response of a stressed cage mate (Burkett et al., 2016). In that study, exposure to a stressed familiar cage mate increased activity in neurons located in the anterior cingulate cortex (ACC) of the observer animal and led to grooming and licking behavior directed toward that conspecific. The ACC contains multisensory neurons that respond both when a rodent experiences pain and when it witnesses another conspecific experiencing pain. Deactivating this region with muscimol microinjections impairs the social transmission of distress and impedes prosocial approach behavior (Carrillo et al., 2019). Infusing an oxytocin receptor antagonist into this region also eliminates the partner-directed prosocial response (Burkett et al., 2016). Oxytocin is a neuropeptide mainly produced in the hypothalamus. It makes social information more salient by connecting brain areas involved in processing social information and by linking those areas to the reward system. Importantly, consoling behavior occurred only between voles who were familiar with each other, not between strangers. This suggests that the behavior is not simply a reaction to aversive cues but is modulated by social cues of familiarity.
In humans too, and in accordance with evolutionary theory, perceived similarity or closeness between people increases the degree to which emotional contagion takes place and leads to prosociality. The perceived overlap between self and other is an important predictor of helping behavior and motivates empathic concern (Cialdini et al., 1997). People display higher levels of prosocial behavior toward others who are similar to them, are members of their group, or share their political attitudes, and they favor one individual in need rather than many; they do so because they experience higher levels of empathic concern under these conditions (Dovidio & Banfield, 2015).
Numerous functional magnetic resonance imaging (fMRI) studies have demonstrated that empathy relies on overlapping processing of personal and vicarious experience, or shared neural representations (Decety & Sommerville, 2003; Lockwood, 2016). In particular, the perception and even the imagination of another person suffering lead to an increase in neuro-hemodynamic activity in a restricted network of brain regions that are also involved in the first-hand experience of pain (Figure 11.1). These regions include the periaqueductal gray (PAG), insula, and ACC. This latter region contains multisensory neurons and belongs to the medial pain system that processes the affective aspects of nociceptive information (Lamm et al., 2011). It is important to note that there is no complete overlap between neural representations engaged in pain processing and those engaged in the vicarious experience of pain (Krishnan et al., 2016). The same is true for vicarious neural representations of pleasure and reward. A meta-analysis of functional neuroimaging studies of rewarding outcomes in social contexts found that both vicarious and personal rewards activate the ventromedial prefrontal cortex (vmPFC) and amygdala, and the latter also engage the nucleus accumbens and regions involved in theory of mind (Morelli et al., 2015). The implication of both shared and nonshared neural representations in vicarious experience is not surprising, given the different sensory inputs during personal and vicarious experience (Lockwood, 2016).

Figure 11.1 Brain circuits associated with different functional components of empathy
11.4 Affect Sharing Is Socially Modulated
The vicarious experience of, and neural response to, others’ joys and sorrows is not automatic. Rather, it is modulated by beliefs, attitudes, prejudices, and group coalitions. Imagining a loved one in physical pain is associated with a greater signal increase in the insula and ACC than imagining a stranger in pain (Cheng et al., 2010). Witnessing a rival’s failure triggers a subjective feeling of pleasure parametrically reflected by neural activity in the reward system (Cikara & Fiske, 2013). Stronger emotional reactions and associated neural responses are elicited when witnessing the pain of someone from one’s own ethnic group than when observing pain in an out-group member (Contreras-Huerta et al., 2013; Xu et al., 2009). This bias in responses to pain expressed by other-race individuals changes over time and is mitigated by familiarity and contact with people of the out-group. One such study recruited Chinese students who had arrived in Australia within the past six months to five years and assessed their level of contact with other ethnic groups across various contexts (Cao et al., 2015). During fMRI scanning, participants were shown videos of own-race/other-race individuals, as well as own-group/other-group individuals, expressing pain. The typical group bias in neural responses to observed pain was evident, whereby neural activation was greater for pain in own-race compared to other-race people. Critically, the response to other-race pain increased significantly with the level of contact participants reported with people of the other ethnic groups.
The perception of another person in distress or pain is also modulated by competitive social contexts. For instance, in a competitive interaction, a competitor’s pain leads to positive emotions in oneself, whereas perceiving the competitor’s joy results in distress (Lanzetta & Englis, 1989). This effect occurs very early during the perception of emotional expression, as demonstrated by a study using event-related potentials (ERPs) and a card game (Yamada et al., 2011). In that experiment, participants played a card game under the belief that they were doing so jointly with another player who sat in an adjoining room and whose smiles and frowns in response to winning or losing could be observed on a computer screen. Depending upon the experimental condition, the other player’s facial expressions conveyed one of two opposing values to the participant. In the empathic condition, her emotional expressions were congruent with the participant’s outcome (win or loss), whereas in the counter-empathic condition, they signaled incongruent outcomes. Results revealed that counter-empathic responses are associated with modulation of early sensory processing (~170 ms after stimulus onset) of emotional cues.
In a neuroeconomics study, participants were engaged in a sequential prisoner’s dilemma game with confederate individuals who were playing the game either fairly or unfairly (Singer et al., Reference Singer, Seymour, O’Doherty, Stephan, Dolan and Frith2006). Following this behavioral manipulation, participants were scanned while watching fair and unfair players in pain. Compared to the observation of fair players, participants’ observation of unfair players in pain led to significantly reduced activation in brain areas coding the affective components of pain. Another study showed that the failures of an in-group member, like a fellow Red Sox fan, are experienced as painful and are associated with increased neural response in the ACC and insula, whereas failures of a rival out-group member, like a Yankees fan, give a sense of pleasure, which is associated with reward-related signal augmentation in the striatum (Cikara et al., Reference Cikara, Botvinick and Fiske2011).
This absence of vicarious experience for rivals’ pain should not be understood as an empathic failure. Rather, this reflects an adaptive response in competitive situations and social coalitions. Humans are spontaneously tribal. The tendency to favor in-group over out-group, especially when resources are scarce, has been observed in children before their second birthday (Jin & Baillargeon, Reference Jin and Baillargeon2017).
Mathematical modeling of social evolution as well as anthropological observations indicate that the intragroup motivation to invest in fellow group members’ welfare coevolved with intergroup competition over valuable resources. An optimal condition under which genetically encoded hyperprosociality can propagate is, paradoxically, when groups are in conflict. In line with cultural group selection theory (Richerson et al., Reference Richerson, Boyd and Henrich2010), it has been proposed that, during the late Pleistocene, groups with higher numbers of prosocial individuals cooperated more effectively and thus outcompeted others (Marean, Reference Marean2015). This synergy between cooperation and competition, which shapes our prosocial preferences, can be observed both in laboratory experiments and in the workplace (Francois et al., Reference Francois, Fujiwara and Van Ypersele2018).
Affect sharing is moderated by attitudes and prejudices toward people. For instance, one fMRI study demonstrated that the vicarious response is intensified or reduced by a priori attitudes toward individuals shown in video clips expressing the same pain intensity (Decety et al., Reference Decety, Echols and Correll2010). Study participants were more sensitive to the facial expressions of pain of individuals who were described as infected with the acquired immunodeficiency syndrome (AIDS) as the result of a blood transfusion (thus clearly victims of a lack of medical foresight) than to the pain of individuals who were described as having contracted AIDS as the result of their illicit drug addiction and the sharing of needles (people often seen as responsible for their behavior). Moreover, controlling for both explicit and implicit AIDS biases, the more participants blamed these individuals, the less subjective pain they attributed to them as compared with healthy controls.
People easily distinguish between in-group members and outsiders. Social identity formation drives people to adopt arbitrary markers to signal their group membership. It can thus be expected that knowing the religious affiliation of someone suffering affects the vicarious response in the observer. One study recruited Christian and atheist participants who were all Han Chinese in Beijing and thus were highly similar in terms of facial features (Huang & Han, Reference Huang and Han2014). Event-related potentials, small voltages generated in neurons in response to specific events or stimuli, were recorded while participants viewed pain and neutral expressions of Chinese faces that were marked (with a symbol on a necklace) as Christian or atheist. The religious/irreligious identifications significantly modulated the ERPs’ amplitudes (200 ms after stimulus onset) to pain expressions, with larger amplitudes when an observer and a target shared religious (or irreligious) beliefs. Similarly, a simple difference in a single-word text label on a hand in pain, indicating the person’s religious affiliation (Hindu, Christian, Jewish, Muslim, Scientologist, or atheist), seems sufficient to modulate neural activity in the observer, in a manner predicted by the observer’s own religion (Vaughn et al., Reference Vaughn, Savjani, Cohen and Eagleman2018). In that study, the brain response was larger when participants viewed a painful event occurring to a hand labeled with their own religion (in-group) than to a hand labeled with a different religion (out-group). Importantly, the size of this bias correlated positively with the magnitude of participants’ dispositional empathy. Group biases have evolved for their adaptive functional roles. They encourage us to be kind to in-group members, who are likely to reciprocate, and at times to be hostile to out-group members, especially when resources are scarce.
However, such biases are also a source of prejudices that conflict with our current social and political environment, and particularly with the principle of justice for all.
Vicarious neural responses to others’ suffering are thus highly flexible and are dependent on sociomoral values (shared beliefs). Moral values exert a powerful motivational force that varies both in direction and intensity, guide the differentiation of just from unjust courses of action, and direct behavior toward desirable outcomes (Higgins, Reference Higgins, Brosch and Sander2015). For instance, people who are sensitive to animal suffering and become vegetarian for ethical reasons show a greater neural response when exposed to photos depicting animals suffering compared to omnivore participants (Filippi et al., Reference Filippi, Riccitelli, Falini, Di Salle, Vuilleumier, Comi and Rocca2010). Notably, vegan and vegetarian participants have greater neural activation while looking at photos of animals suffering than photos of humans suffering.
While affect sharing or emotional empathy is often portrayed as facilitating prosociality, affiliation, rapport, and liking, it does not necessarily promote morality. As discussed earlier in this chapter, it is unconsciously and rapidly modulated by various social factors that are evolutionarily advantageous, such as similarity of many kinds, including kinship, group memberships, and shared political attitudes or religious beliefs. The social context, the nature of the situation, and the characteristics of the person in need not only affect assessments of costs and rewards and the decisions about whether to engage in prosocial behavior but also shape empathic experiences (Dovidio & Banfield, Reference Dovidio, Banfield and Wright2015).
11.5 Empathic Concern
Empathic concern, also known as sympathy or compassion, is interwoven with yet distinct from affect sharing, although the latter can elicit the former. Generalized parental nurturance seems the most likely evolutionary basis of empathic concern. In humans, the motivation for parental care is far more flexible and future-oriented than in any other mammalian species (Batson, Reference Batson and Decety2014; Zahn-Waxler et al., Reference Zahn-Waxler, Schoen, Decety, Roughley and Schramme2018).
At the ultimate level, caring for offspring is a biological necessity. Our survival as a species would be strongly compromised without it. Kin selection is the main force driving the evolution of parental care (Hamilton, Reference Hamilton1964). Both natural and sexual selection have led to the emergence of a motivation state that leads individuals to care for and promote the welfare of offspring. Without sufficiently close genetic relatedness and an appropriate ratio of benefits to costs, caretaking and other cooperative propensities that do not directly increase the helper’s own reproductive success would not have evolved.
Compared to other primates, human offspring are born more prematurely and more dependent, requiring exceptional care. This has been possible because Homo sapiens ancestors practiced cooperative breeding, also known as alloparenting. Caring for individuals other than one’s biological offspring seems to be a universal behavior among humans (Kenkel et al., Reference Kenkel, Perkeybile and Carter2017). In other apes, once youngsters are weaned, they are basically nutritionally independent. But in the case of early hominids, alloparental care and provisioning set the stage for infants to develop in new ways. Alloparental assistance allows mothers to conserve energetic resources, remain safer from predators, and live longer (Hrdy, Reference Hrdy, Decety and Christen2014). This pressure to care for vulnerable offspring gave rise to several adaptations such as powerful responses to distress vocalizations, neotenous traits, and classes of attachment-related behaviors between caregiver and offspring, including empathic concern (Goetz et al., Reference Goetz, Keltner and Simon-Thomas2010). Empathic concern has emerged as the affective component of a caregiving system, selected to raise vulnerable offspring to the age of viability, thus ensuring that genes are more likely to be passed on (Goetz et al., Reference Goetz, Keltner and Simon-Thomas2010). This motivational component of empathy relies on subcortical circuits that originally evolved to support parental caregiving and can be engaged for vulnerable and distressed others more generally (Vekaria et al., Reference Vekaria, O’Connell, Rhoads, Brethel-Haurwitz, Cardinale, Robertson, Walitt, VanMeter and Marsh2020).
At the proximate level, the caring motivation arises from a set of biological mechanisms located in the brainstem, hypothalamus, ventral pallidum, dorsal raphe nucleus, vmPFC, and the bed nucleus of the stria terminalis (Kenkel et al., Reference Kenkel, Perkeybile and Carter2017). The caring motivation triggers the release of oxytocin, which counteracts the effects of stress and encourages us to approach others and tend to their needs, and engages the dopaminergic reward system, which mediates feelings of subjective pleasure when nurturing and helping. That is why it feels good to help and care. Neural activity in the mesolimbic reward circuit predicts donations to orphans depicted in photographs (Genevsky et al., Reference Genevsky, Västfjäll, Slovic and Knutson2013). In one fMRI study, even when subjects were forced to pay a tax to a local food bank, these reward pathways were activated – albeit not as much as when subjects chose to voluntarily donate some of their cash to the food bank (Harbaugh et al., Reference Harbaugh, Mayr and Burghart2007).
Valuing offspring is a highly positive experience in nonhuman animals (Ferris, Reference Ferris, Decety and Christen2014). In humans too, infant cues such as smiling or crying expressions are powerful motivators of parental behavior, activating dopamine-associated brain reward circuits. Increased activation of the mesolimbic reward pathway, including the nucleus accumbens (Strathearn et al., Reference Strathearn, Fonagy, Amico and Montague2009), and higher levels of oxytocin (Gordon et al., Reference Gordon, Zagoory-Sharon, Leckman and Feldman2010) are found in mothers and fathers in response to their infants’ cues.
It has long been known, since Konrad Lorenz’s notion of “Kindchenschema,” that neotenous characteristics, such as babyish faces, a big head, small nose, and big eyes, elicit social approach and caretaking behavior. These infantile physical characteristics, also known as neotenous cues, signal vulnerability and were favored by natural selection to facilitate provision of care. Adults with baby faces are perceived to have childlike traits – to be naïve, weak, warm, and honest. These neotenous cues inspire caretaking, protection, and compassion. These characteristics can also sway criminal sentencing and imprisonment decisions. Johnson and King (Reference Johnson and King2017) conducted an analysis of a random sample of 1,200 men who had been convicted of felony crimes in the Minneapolis-St. Paul metropolitan area in 2009, including their booking photos. The results showed that baby-faced individuals were significantly less likely to be incarcerated, even after controlling for other relevant case characteristics. It is thus not all that surprising that the convicted terrorist, Dzhokhar Tsarnaev, whose actions killed 3 people and injured 260 during the Boston Marathon in 2013, received a striking amount of sympathy (Rosin, Reference Rosin2013). Thus, caution is in order regarding the role of empathy in criminal justice.
The proximate neural mechanisms of empathic concern are partially distinct from the mechanisms of affect sharing. In one fMRI study, participants listened to true biographies describing a range of human suffering, such as children born with congenital diseases, adults struggling with cancer, and experiences of homelessness and other hardships (Ashar et al., Reference Ashar, Andrews-Hanna, Dimidjian and Wager2017). Participants were asked to provide moment-by-moment ratings of empathic concern and emotional distress while listening to these biographies. Empathic concern was associated with neural response in the striatum and vmPFC, whereas emotional distress was associated with neural response in the insula and the somatosensory cortex. Another neuroimaging study reported that individuals with high dispositional empathic concern are more likely to engage in altruistic behavior, and this relationship was mediated by neural activity in the vmPFC and ventral striatum, regions involved in the reward anticipation circuit and the subjective valuation process (FeldmanHall et al., Reference FeldmanHall, Dalgleish, Evans and Mobbs2015).
The neurophysiological circuits for caring first evolved in the context of mother–infant relationships and subsequently became extended to others in groups of closely related individuals. A variety of kin-recognition mechanisms or heuristics have evolved to facilitate the behavioral tendency to care and help (Neyer & Lang, Reference Neyer and Lang2003). Kin recognition is characterized by highly automatic, heuristic cue-based processes, such as familiarity or proximity, that are sometimes fallible. The fact that humans possess additional, more cognitive means of assessing kinship does not rule out the role of these earliest adaptations. The evolution of increasingly complex psychological mechanisms occurs by adding to, rather than replacing, previous mechanisms, and it does so without any guarantee of optimality (Jacob, Reference Jacob1977). Behavioral genetics studies demonstrate that highly related people are more similar to each other on a variety of attitudes, values, and personality characteristics, and such similarities are used as kinship cues (Park et al., Reference Park, Schaller and Van Vugt2008). Thus, one can expect that empathic concern is more readily triggered when cues of similarity between self and other are salient. These cues are not limited to physical appearance and familiarity such as ethnicity, language, and accent; they include many dimensions of human social categorization and social identity, such as values, opinions, attitudes, and personality traits. Of course, this does not mean that empathic concern is solely a product of perceived similarity of the other to the self. Humans can feel empathic concern for a wide range of others in need, even dissimilar others, as long as they value their welfare (Batson et al., Reference Batson, Lishner, Cook and Sawyer2005). Furthermore, the neotenous characteristics that elicit attention, social approach, and caregiving do so regardless of kinship.
Empathic concern is a powerful motivator of costly prosocial behaviors (Batson, Reference Batson, Decety and Ickes2009), especially for members of one’s own social group. People tend to display more empathic concern toward in-group members and are more sensitive to perceived harmful behaviors committed by out-group members. Across cultural contexts (e.g., Americans vs. Arabs), research indicates that parochial empathy is a strong predictor of altruism and passive harm toward out-groups (Bruneau et al., Reference Bruneau, Cikara and Saxe2017). For example, individuals respond with more empathic concern when they perceive interpersonal harm perpetrated by someone from their own university than when the perpetrator is from a different university within the same country (Australia), and this reaction is associated with a neural response in the vmPFC (Molenberghs et al., Reference Molenberghs, Gapp, Wang, Louis and Decety2016). A recent study using a large national sample documented that high levels of dispositional empathic concern were predictive of social polarization (Simas et al., Reference Simas, Clifford and Kirkland2020). The authors also showed that individuals high in empathic concern disposition expressed greater partisan bias in evaluating contentious political events.
Taken together, these findings indicate that empathic concern is a positive emotional state associated with a motivation to care for the welfare of others. However, empathic concern is unconsciously influenced by various signals such as neotenous cues, interpersonal factors, and intergroup contexts, and may in certain situations motivate out-group hostility.
11.6 Perspective Taking
The capacity for perspective taking is the ability to put oneself in the place of someone else while recognizing their point of view, experiences, and beliefs. It is often invoked as a remedy for some of the empathy biases that, as I have discussed, influence moral decision making. In general, perspective taking refers to understanding that another person has a mental state different from the observer’s, a construct that largely overlaps with theory of mind. Being exposed to narrative fiction spontaneously triggers perspective taking. Several studies with children and adults have demonstrated that reading stories fosters an understanding of other people, using implicit perspective taking, and correlates with better empathy and theory of mind (Mar, Reference Mar2018; Mumper & Gerrig, Reference Mumper and Gerrig2017).
Two ways people understand another’s subjective perspective are 1) using situational and dispositional factors to model the other’s perspective and 2) projecting themselves into the other (Ames, Reference Ames2004). Thus, perspective taking as a mental simulation requires executive functions, including attention, working memory, and inhibitory control. The related projection-and-correction account of simulation (Gordon, Reference Gordon, Gilead and Ochsner2021) is comparable to the anchoring and adjustment heuristic proposed by Epley et al. (Reference Epley, Keysar, Van Boven and Gilovich2004). These authors proposed that “individuals adopt others’ perspectives by initially anchoring on their own perspective, and then subsequently, and effortfully accounting for differences between themselves and others until a plausible estimate is reached” (Epley et al., Reference Epley, Keysar, Van Boven and Gilovich2004, p. 328).
There is evidence from cognitive neuroscience in support of the simulation theory, in that understanding what others experience partly relies on our own projections of what we would think and feel in similar situations (Steinbeis, Reference Steinbeis2016). While this process relies on shared neural representations between self and other, the perceiver must also maintain a self–other distinction (Decety & Sommerville, Reference Decety and Sommerville2003). Results from brain imaging and lesion studies in neurological patients converge on a number of regions and circuits implicated in perspective taking. For instance, Ruby and Decety (Reference Ruby and Decety2004) presented participants with short sentences depicting real-life situations that induce social emotions such as guilt, envy, pride, or embarrassment (e.g., someone opens the bathroom door that you have forgotten to lock), as well as emotionally neutral situations. They asked participants to imagine how they would feel in those situations and how their mother would feel in those situations. Regions involved in emotional processing, including the amygdala and the temporal poles, were similarly activated in the conditions that included emotionally laden situations for both self and other perspectives. Importantly, adopting the other’s perspective led to a specific neural response in the temporoparietal junction (TPJ) as well as the vmPFC. The TPJ plays a key role in the sense of agency (Ruby & Decety, Reference Ruby and Decety2001) and in computations in the social domain that require self–other distinction. The right TPJ is activated when participants mentally simulate actions (Ruby & Decety, Reference Ruby and Decety2001) or painful experiences (Jackson et al., Reference Jackson, Brunet, Meltzoff and Decety2006; Lamm et al., Reference Lamm, Batson and Decety2007) from someone else’s perspective, but not when they imagine these situations for themselves.
The TPJ, because of its anatomical characteristics and connectivity, plays a pivotal role in self–other processing. Evidence from functional neuroimaging studies indicates that the TPJ is systematically associated with perspective-taking tasks, theory of mind, and detection of intentional agents in the environment (Carter & Huettel, Reference Carter and Huettel2013; Decety & Lamm, Reference Decety and Lamm2007). More recent work, using repetitive transcranial magnetic stimulation, demonstrates that the TPJ is causally involved in the spontaneous attribution of mental states (Bardi et al., Reference Bardi, Six and Brass2017). Its temporary inhibition disrupts the updating of internally (self) and externally (other) generated representations.
There are two distinct ways in which people can take the perspective of suffering others. One form is thinking about how a suffering other feels, or “imagine-other” perspective taking; the other form is imagining oneself in the suffering other’s shoes, or “imagine-self” perspective taking (Buffone et al., Reference Buffone, Poulin, DeLury, Ministero, Morrisson and Scalco2017). Research in social psychology (e.g., Batson et al., Reference Batson, Lishner, Carpenter, Dulin, Harjusola-Webb, Stocks, Gale, Hassan and Sampat2003) has documented this distinction by showing that the imagine-other perspective evokes empathic concern or compassion, whereas imagine-self perspective taking induces both empathic concern and personal distress (i.e., a self-oriented aversive emotional response). In participants asked to adopt either an imagine-self or an imagine-other perspective while watching people experiencing somatic pain, activity was detected in the neural circuits involved in the first-hand experience of pain (Jackson et al., Reference Jackson, Brunet, Meltzoff and Decety2006; Lamm et al., Reference Lamm, Batson and Decety2007), except in individuals with psychopathy, who have a profound lack of empathy (Decety et al., Reference Decety, Chen, Harenski and Kiehl2013). However, the imagine-self perspective led to higher activity in brain areas involved in the affective response to threat and pain, including the amygdala and ACC. Consistent with this, the imagine-self perspective led to a potentially debilitating physiological state of threat, compared to an imagine-other perspective, during active pursuit of a helping goal (Buffone et al., Reference Buffone, Poulin, DeLury, Ministero, Morrisson and Scalco2017). In addition, this effect was mediated by perceiving the helping task as more demanding, suggesting that imagining self may increase the perceived difficulty of providing help.
Though it may be mentally taxing and energetically costly, perspective taking has several positive consequences for downstream intergroup relations. For instance, adopting the perspective of an out-group member leads to a decrease in the use of explicit and implicit stereotypes for that individual and to more positive evaluations of that group as a whole (Galinsky & Moskowitz, Reference Galinsky and Moskowitz2000). Feelings of empathic concern induced by perspective taking can lead to valuing the welfare of an out-group target. This is what Oliner and Oliner (Reference Oliner and Oliner1988) found from interviewing 436 individuals who were involved in rescuing Jews in Nazi Europe, at great risk to themselves. Their involvement frequently began with concern for a specific individual for whom compassion was felt, often someone known previously. Importantly, while 37 percent were characteristically empathic (centered on the needs of others, with emotions of compassion and sympathy), 52 percent were primarily normocentric (having strong feelings of obligation to a social reference group that imposed normative standards and social values on their behavior). Another 11 percent acted largely from autonomously derived moral principles (Allison, Reference Allison1990).
Perspective taking can boost empathic concern and influence how we value the welfare of a person. Thus, empathic concern can be used to increase how much one values another and to elicit prosocial behavior. While this can be a very good thing, it can also create problems for the moral principle of justice. For example, in one experiment, college students were told about a 10-year-old girl named Sheri Summers who had a fatal disease and was waiting in line for treatment that would relieve her pain (Batson et al., Reference Batson, Klein, Highberger and Shaw1995). Participants learned that they could move her to the front of the waiting list. When simply asked what to do, most participants acknowledged that she had to wait because other, more needy children were ahead of her. But if the participants were first asked to imagine what Sheri felt, they tended to choose to move her up, putting her ahead of children who were presumably more deserving. Here, empathy was more powerful than fairness, leading to a decision that most of us would see as unfair. Empathy triggered by perspective taking can produce myopia in the same way as egoistic self-interest.
The idea that perspective taking boosts empathic concern has recently been challenged by a meta-analysis examining whether individuals who received instructions to imagine the feelings of a distressed person experience more empathic concern than individuals who receive no instructions or who receive instructions to remain objective (McAuliffe et al., Reference McAuliffe, Carter, Berhane, Snihur and McCullough2020). The authors found that empathy was greater when people were told to imagine the feelings of the needy person than when they were told to remain objective and detached. However, and more surprisingly, the study also found that individuals who were deliberately instructed to imagine how a suffering individual is feeling did not experience more empathic concern than subjects who received no instructions at all. Overall, this meta-analysis does not support the view that one can increase empathic concern by imagining what the other person is experiencing. However, the fact that people seem better at suppressing their empathy than they are at amplifying it (Zaki, Reference Zaki2014) suggests that we are walking around with naturally high amounts of empathy already.
11.7 Empathy Cannot Replace Reasoning in Moral Judgment
Empathy is a complex, multifaceted construct that encompasses affect sharing, perspective taking, and a motivated concern for others’ well-being. These functional components often work in concert, yet each is implemented in specific brain circuits. This has important implications for moral reasoning and decision making.
At the most basic level, emotions are attention-getting and supplement the information provided by rational belief and inference. Perspective taking can be used to adopt the subjective viewpoint of others, and this can facilitate the extent to which an observer understands that a victim experiences harm or distress. Likewise, affect sharing in reaction to the plight of another may be foundational for motivating prosocial behaviors and moral condemnation (Patil et al., Reference Patil, Calò, Fornasier, Cushman and Silani2017). Yet affect sharing, elicited by emotional contagion or by perspective taking, may also lead to personal distress, the aversive affect arising in response to others’ suffering, which does not necessarily lead to prosocial behavior and may even cloud our moral judgment.
The aversion to harming others is an integral part of the moral sense, underlying deeply rooted moral intuitions across societies (Haidt & Joseph, Reference Haidt and Joseph2004). Asking individuals to simulate harmful actions such as discharging a gun into someone else is sufficient to generate an aversive response accompanied by autonomic nervous system changes (Cushman et al., Reference Cushman, Gray, Gaffey and Mendes2012). Such a reaction emerges very early in development and is considered a necessary foundation of morality (Decety & Cowell, Reference Decety and Cowell2018). Experiencing an aversive emotional reaction to the anticipation of harming someone plays a critical role in moral judgment. This aversion can partially stem from the bad outcome due to empathic concern for the victim’s suffering, which causes personal distress in the observer or elicits feelings of guilt. Some studies have documented that low dispositional levels of empathic concern reduce harm aversion, which leads to an increased propensity to endorse utilitarian moral judgments in sacrificial-harm dilemmas (Gleichgerrcht & Young, Reference Gleichgerrcht and Young2013).
It is also reasoning that drives moral progress and grounds abstract principles, such as the idea that all humans are worthy of dignity and respect. The My Lai massacre in March 1968 provides a pertinent illustration of the powerful impact of moral principles. It was one of the most horrific incidents of violence committed against unarmed civilians during the Vietnam War. A company of American soldiers brutally killed 500 women, children, and old men in the village of My Lai. US Army officers covered up the carnage for a year before it was reported in the American press, thanks to helicopter pilot Hugh Thompson, sparking a firestorm of international outrage. In this incident, according to Blader and Tyler (Reference Blader, Tyler, Ross and Miller2002, p. 242), “most soldiers involved did not feel an emotional connection to the civilians, whom they regarded as being, or at least aiding, the enemy.” But not all soldiers participated. What, therefore, stopped some soldiers from killing civilians? One important factor was the soldiers’ view that killing civilians was a morally inappropriate behavior in which they should not engage (Blader & Tyler, Reference Blader, Tyler, Ross and Miller2002). Those soldiers who held these abstract moral values about what is just were less likely to engage in killing civilians, irrespective of whether they knew, liked, or empathized with the particular civilians they encountered.
11.8 What We Have Learned
Understanding the ultimate and proximate mechanisms of empathy elucidates the information that is prioritized as input and the behaviors prompted as output. Knowing our cognitive biases and their evolutionary origins is critical if we want to make better moral decisions. Explaining human behavior does not equate to justifying it or defending it. But if we want to improve our society, we need an accurate understanding of human nature rather than a denial of it. Moral decision making guided by empathy alone is not optimal, especially when dealing with large groups or when individuals are engaged in competition. However, empathy can create a strong motivation to act. Empathy and morality are neither systematically opposed to one another, nor inevitably complementary. Empathy alone is powerless in the face of rationalization and denial. Our saving grace is our ability to generalize and to direct our empathy through the use of reason and deliberation, as well as our capacity to cooperate with other people, create coalitions, and organize ourselves around any reliable sign, value, or idea.


