Consider the various social norms that dictate how you behave in your daily life: You refrain from having conversations in a theatre, you dress conservatively in a place of worship, you tip your waiter after good service, and you keep a secret when a friend tells you something confidential. Humans share a set of social and moral beliefs that govern how we behave, from mundane chitchat during a movie to the most consequential behaviors that dictate whether we harm or help others. Norms help create social cohesiveness and an understanding of shared expectations that support and shape identities at both a societal and individual level. For this reason, norms are critically important for determining whether social communities function well and efficiently.
Humans rely on a set of complex, evolved, and learned norms to encourage community members to adopt certain perspectives that can guide and promote prosocial interactions (Cialdini, 2003; Cialdini, Reno, & Kallgren, 1990; Goldstein, Cialdini, & Griskevicius, 2008; Nichols, 2004; Sherif, 1936). These moral norms are so important for social functioning that there appears to be a sort of universal moral grammar, through which certain moral norms are sacredly held. This has been demonstrated in multiple research fields (Bicchieri, 2006), including affective neuroscience (Chang & Smith, 2015), cognitive development (Kohlberg & Hersh, 1977; Van de Vondervoort & Hamlin, 2018), cross-cultural studies (Hauser, 2006; Mikhail, 2007), and work on nonhuman primates (de Waal, 2009). The growing literature suggests that this seemingly elaborate system of natural jurisprudence is relatively stable over time and extends across social groups (Mikhail, 2008; Sripada, 2008).
Here we discuss the importance of social and moral norms, what types of values they convey, and how their existence can alter behavior. We begin by defining social and moral norms: What are they, how do they develop, and how are they sustained? We then go on to discuss the set of moral norms we consider to be foundational for harmonious and successful social living. We propose that a single mechanism—reciprocity—underpins the adherence to, and enforcement of, most moral behaviors. Using a game theoretic perspective, we illustrate how these norms act as a driving force behind flexible moral behavior (Melnikoff & Bailey, 2018), whereby different classes of behavioral patterns can arise depending on which norm is activated (Ajzen, 1991). We review neural evidence that people find it intrinsically rewarding to comply with moral norms, before examining how emotions can enhance reciprocal behaviors and the adherence to moral norms. Finally, we discuss how moral norms are likely learned and sustained through reward and punishment contingencies based on expectations of reciprocity.
1. What are social and moral norms?
Social norms are ubiquitous and endemic to social life. They provide a standard for behavior based on mutual and widely shared psychological attitudes, expectations, and beliefs about how members of society ought to behave (House, 2018). At the broadest level, these norms help to promote harmonious living, in which the concerns of others are taken into account (Ullmann-Margalit, 1978). They prescribe mores (e.g., wear black to a funeral) and sometimes even consequential rules (e.g., while in America, drive on the right side of the road) about what people should and should not do in various social situations (Turiel, 1983). Since deviations from social norms often elicit informal (or even formal) social sanctions, they are a useful explanatory tool for describing many of our everyday social behaviors.
Moral norms can be considered a subset of social norms in that they explicitly govern behaviors that have positive or negative outcomes for both the self and others. For example, social norms, such as “do not chew gum at the table,” typically apply to a wide set of behaviors without requiring that harm be prevented (Turiel, 1983). In contrast, moral norms, such as “behave fairly,” dictate that individuals navigate through the world without harming others (Schein & Gray, 2017). In some cases, moral norms act in opposition to ingrained desires (e.g., biological urges), which are generated to promote survival (Darwin, 1859; Dawkins, 1989). Enhancing self-benefit—for example, increasing one’s wealth, power, and prestige—is one avenue by which biological urges can be expressed. If increasing one’s wealth leads an individual to deviate from morally normative patterns, negative consequences for others may ensue (harm is applied, money stolen, and so forth). The existence of moral norms, which aim to promote the well-being of others and the community at large, can help attenuate these negative consequences by tempering (through either suppression or regulation) these self-enhancing desires.
Accordingly, if a core component of morality is that humans share a set of codes and beliefs that dampen selfish inclinations, it is important to examine what those moral strictures might be. We make the case that there are four fundamental moral norms—fairness, altruism, trust, and cooperation—that play a prominent role in shaping many everyday social interactions. While there are other possible candidate norms that could be included (e.g., norms of respect, justice, harm, and so forth), these four norms are sufficiently general to be applicable to a wide array of moral behavior (e.g., trusting that an individual will not be harmed by others), while also being specific enough to capture the unique behavioral patterns associated with each. Here we argue that these norms of fairness, altruism, trust, and cooperation are all subserved by, and rooted in, a single mechanism—reciprocity—that enables people to make flexible moral decisions across a range of social contexts.
1.1. Reciprocity as a mechanism
Reciprocity has traditionally been operationalized either as an individual belief about the structure of the world, or as a culturally mandated standard of behavior. With regard to the first, reciprocity is often construed within the framework of a “just world” (Lerner, 1980), whereby people believe in a system of social exchanges that reaches a fair equilibrium over time (Gouldner, 1960). Such a belief in universal justice implies that destructive individuals who violate expectations of reciprocity will eventually face consequences for disturbing the equilibrium. On the other hand, from a culturally mandated normative standpoint, reciprocity is widely perceived as a moral “ought” (Eisenberger, Lynch, Aselage, & Rohdieck, 2004; Gouldner, 1960; Tsui & Wang, 2002). This framework argues that reciprocity operates either by responding to negative actions with negative treatment, or by responding to positive actions with positive treatment. One particularly potent example of negative reciprocity is when punishment is levied on those who do not comply with moral norms (Fehr & Fischbacher, 2004; Gintis, 2000).
However, instead of viewing reciprocity as a norm in and of itself, it may be more appropriate to refashion the concept of reciprocity as a mechanism that motivates adherence to a suite of moral norms (Cropanzano & Mitchell, 2005; Leimgruber, 2018). From this framework, reciprocity can powerfully and flexibly drive different behaviors, from rewarding those who help, to punishing those who harm (Dufwenberg & Kirchsteiger, 2004; Gintis, Henrich, Bowles, Boyd, & Fehr, 2008; Gouldner, 1960; Nowak, 2006; Rabin, 1993; Rand & Nowak, 2013). Below, we discuss the roles that contextual factors, emotional experiences, and learning play in influencing how reciprocity supports flexible moral action.
2. The norms that govern moral behavior
In the following section, we use a game theoretic approach to examine fairness, altruism, trust, and cooperation. This is done for two reasons. First, behavioral economic games allow researchers to observe how individuals anticipate, infer, and act on what others do (Von Neumann & Morgenstern, 1945). As each game has a series of discrete rule sets, researchers can control and manipulate the structure and context of any game (Camerer, 2003). To the extent that people’s decisions are exquisitely sensitive to the intricacies and contextual minutiae of the game environments, researchers can observe how different norms and expectations alter social and moral behavior by modifying the games’ rule sets. Second, the strategic interactions and behaviors that fall out of economic games are mathematically expressed on a universal scale: We know with precision how much money an individual is willing to altruistically offer another, how much punishment is conferred upon a perpetrator in the wake of a fairness violation, and how much people care to trust or cooperate with an unfamiliar partner. Together, these two dimensions of economic games provide a powerful testbed for understanding moral norms and their influence on behavior.
2.1. Fairness
It is difficult to imagine how groups of individuals would manage to divide resources in a harmonious way without appealing to a shared standard of fairness (Charness & Rabin, 2002; Fehr & Schmidt, 1999). Although resources can be divided meritoriously (e.g., according to an individual’s effort or contribution), the overarching norm of fairness mandates that, all things considered, resources ought to be divided equitably among community members. Evidence for this fairness norm is abundant. Strangers routinely split resources equitably in the absence of social sanctions (Fehr & Fischbacher, 2003; Roth, Prasnikar, Okuno-Fujiwara, & Zamir, 1991; Zelmer, 2003), notions of fairness are universally appealed to across cultures (Henrich et al., 2005, 2010), human infants are sensitive to (and expect) the equitable distribution of resources (Sommerville & Enright, 2018), and even nonhuman animals (e.g., primates, dogs, and birds) are attentive to unequal outcomes between members of their own species (Brosnan & de Waal, 2014).
How humans resolve fairness transgressions has been a central question in behavioral economics for decades. Economists have traditionally used the Ultimatum Game (Güth, Schmittberger, & Schwarze, 1982) to demonstrate that after experiencing a fairness violation (Bicchieri & Chavez, 2010), people are willing to forgo even large sums of money to punish norm violators (Camerer, 2003). In the Ultimatum Game, two players partake in an economic exchange. One player acts as the Proposer and makes an offer to the other participant, the Responder. The Responder can then either accept or reject the offer. If accepted, the money is split as proposed. If rejected, then neither player receives any money, which effectively punishes the Proposer for offering an unfair split. The most rational decision for the Responder is to accept any offer from the Proposer no matter how small, since some money is better than no money. However, people routinely go against monetary self-interest to reject unfair offers (Fehr & Fischbacher, 2004; Fehr & Gächter, 2002; Henrich et al., 2006; Herrmann, Thöni, & Gächter, 2008; Jordan, Hoffman, Bloom, & Rand, 2016; Sanfey, Rilling, Aronson, Nystrom, & Cohen, 2003), indicating a strong preference for upholding fairness norms.
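The rule set just described can be sketched in a few lines of code. This is a minimal illustration of the game's payoff structure; the rejection threshold below is a hypothetical parameter chosen for the example, not an empirical estimate of where Responders actually draw the line.

```python
def ultimatum_payoffs(endowment, offer, accepted):
    """Return (proposer, responder) payoffs for one Ultimatum Game round."""
    if accepted:
        # The money is split as proposed.
        return (endowment - offer, offer)
    # Rejection leaves both players with nothing, punishing the Proposer.
    return (0, 0)

def threshold_responder(offer, endowment, min_share=0.3):
    """A Responder who rejects offers below a (hypothetical) fairness threshold."""
    return offer >= min_share * endowment

# An unfair offer is rejected at a cost to both players; a fair one is accepted.
print(ultimatum_payoffs(10, 2, threshold_responder(2, 10)))  # → (0, 0)
print(ultimatum_payoffs(10, 5, threshold_responder(5, 10)))  # → (5, 5)
```

The rejection branch makes the norm-enforcement logic concrete: the Responder pays 2 units to impose a cost of 8 on the Proposer, which is irrational under pure monetary self-interest.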
Despite this preference, individuals frequently split resources unequally to benefit themselves, revealing a dueling desire to act in one’s own self-interest (Camerer & Thaler, 1995; Kahneman, Knetsch, & Thaler, 1986). This class of unfairness is termed advantageous inequality (when one receives more than another), which stands in contrast to disadvantageous inequality (when one receives less than another). Such unequal distributions are not ideal for lasting partnerships, since receiving less than another signals a disadvantageous relationship that should potentially be terminated, while receiving more reward might risk undermining future goals because one’s partner could feel exploited (Nishi, Shirado, Rand, & Christakis, 2015). Accordingly, although self-interest may initially lead an individual to prefer advantageous inequality, minimizing both types of unfairness helps individuals and societies stabilize long-term ventures (Piketty, 2017; Tavoni, Dannenberg, Kallis, & Löschel, 2011). The mutual expectation of fair treatment therefore leads individuals on both sides of the dyad to prefer equal payoffs.
As a consequence, concerns about maintaining fairness create incentives for individuals to punish those who violate fairness norms, even if the transgression does not affect one’s own welfare. Indeed, the desire to punish is so strong that even third-party members—who have no clear vested interest in the equal distribution of resources between others—are still willing to incur a cost to ensure that those peddling unfair allocations are punished (Cronk, Chagnon, & Irons, 2000; Fehr & Fischbacher, 2004). This may be due in part to the notion that failing to punish can itself be punishable (Kandori, 1992).
Decisions to punish unfairness—whether as a victim or third-party member—ultimately demonstrate negative reciprocity, whereby the amount of punishment approximately equals the harm caused (Carlsmith, Darley, & Robinson, 2002; Fehr & Gächter, 2000a, 2000b). As a mechanism for enforcing equality, negative reciprocity encourages individuals to offer fair distributions (Azar, Lahav, & Voslinsky, 2015). Furthermore, people who engage in negative reciprocity can procure positive reputational benefits (Gintis, Smith, & Bowles, 2001). For example, individuals who punish are trusted more, and in return, behave in a more trustworthy manner (Jordan et al., 2016). In contrast, the threat of public exposure of unfair behavior (and thus the possibility of accruing a negative reputation) leads individuals to make more fair offers (Bolton & Zwick, 1995; Straub & Murnighan, 1995). These findings illustrate that negative reciprocity, through punishment, helps enforce and maintain norms of fairness, and by extension the overall well-being of social communities (Gürerk, Irlenbusch, & Rockenbach, 2006; Herrmann et al., 2008).
2.2. Altruism
Some accounts of natural selection argue that survival requires self-benefit to be prioritized at all costs. At first blush, acts of altruism—choosing to help another at a cost to the self (de Waal, 2008)—seem to significantly reduce one’s evolutionary fitness (Darwin, 1859). However, an influential concept known as “kin selection” posits that altruistic individuals’ genes propagate because prosocial behaviors aid the survival of genetically related individuals (Trivers, 1971; Wilson, 2000). Accumulating evidence now demonstrates that altruistic behavior is not confined to kin selection strategies, and many species expend valuable resources to help unrelated others (FeldmanHall, Mobbs, et al., 2012; Pitman et al., 2017; Preston & de Waal, 2002; Quervel-Chaumette, Dale, Marshall-Pescini, & Range, 2015; Warneken, Hare, Melis, Hanus, & Tomasello, 2007). Given these findings, it has subsequently been argued that altruism may have evolved for the good of the social community and not just individual genes (Nowak, Tarnita, & Wilson, 2010).
Applying a slight modification to the Ultimatum Game described above elegantly illustrates this point. If the option to reject the offer is removed, the social exchange becomes a Dictator Game (Camerer, 2003) where the Receiver must accept any offer no matter how small. Although the rational decision is for a Dictator (analogous to the Proposer in the Ultimatum Game) to offer the smallest amount of money (since the split is always realized as-is), Dictators routinely go against such monetary self-interest and offer around 28% of their initial endowment (Engel, 2011). Critically, these acts of generosity observed in the lab reflect real-world concerns for altruism (Benz & Meier, 2008; Kosfeld, Heinrichs, Zak, Fischbacher, & Fehr, 2005; Moll et al., 2006). In America, approximately 60% of households give to charity each year (at a rate of about 4% of a household’s income), which totals more than $250 billion a year (Meer, Miller, & Wulfsberg, 2017).
On the surface, charitable giving does not seem to be a self-beneficial act. Money is given to unknown others, oftentimes in distant countries where there is little chance of meeting those who received the donation. In these cases, it is unlikely that one’s altruistic behavior will be directly reciprocated by that specific individual. However, when viewed through the lens of an indirect reciprocity mechanism, the existence of altruistic behaviors has important implications for how we expect humans to behave and be treated in a community (Simpson & Willer, 2008). One example is that individuals hold expectations that people will behave in generous ways (Brañas-Garza, Rodríguez-Lara, & Sánchez, 2017), and violating these expectations may result in punishment by third parties (Fehr & Fischbacher, 2004). Thus, acting selflessly by donating to others provides an advantage to the altruist in that there will be some kind of indirect, downstream benefit (or avoidance of admonishment) from the community at large.
Compellingly, human social groups that act altruistically appear to fare better than those who do not (Ostrom, 2014). Take welfare states for example: Even though many Western societies are large and complex, members are intimately dependent on one another, as there are social expectations that people who are more fortunate will help those who are less fortunate (Wilensky, 1974). This norm of altruism ranges from long-term governmental edicts to fleeting one-on-one relationships (Barr, 2012). If an individual is drowning in a lake or falls off a subway platform, people nearby will even risk their lives to help the distressed individual (Marsh et al., 2014). These acts of altruism are typically performed without the belief that the beneficiary will directly return the favor. Rather, the expectation (even if implicitly held) is that someone else will display a similarly altruistic act if the altruist were later in a situation and needed help (Nowak & Sigmund, 1998, 2005). Indeed, removing the ability to directly reciprocate a generous act can motivate individuals to “pay it forward” by helping another in need (Gray, Ward, & Norton, 2014; Hackel & Zaki, 2018).
Such costly indirect altruism is believed to be a key factor in the evolution of human cooperation (Nowak, 2006), and simulations of Dictator Game behavior reveal that indirect and direct generosity are driven by the anticipation of such uncertain future relationships (Delton, Krasnow, Cosmides, & Tooby, 2011; Zisis, Di Guida, Han, Kirchsteiger, & Lenaerts, 2015). This is believed to be causally influenced in part by feelings of moral obligation, social responsibility (Schwartz, 1977), and the knowledge that others are behaving in generous ways (Bartke, Friedl, Gelhaar, & Reh, 2017). For example, activating norms of altruism induces greater helping, and fluctuations in the environment (e.g., the level of a target’s expressed distress or the number of individuals who can readily help) can either amplify or attenuate altruistic decisions (Cameron & Payne, 2011; Darley & Batson, 1973; FeldmanHall, Dalgleish, Evans, & Mobbs, 2015; Gottlieb & Carver, 1980; Preston, 2013; Preston & de Waal, 2002). Other motivations, such as the desire for social prestige and reputation (Olson, 1965), or the avoidance of social ostracism (Becker, 1974), are also known to influence altruism and can be considered positive or negative reciprocity, respectively.
2.3. Trust
Trust spans a multitude of situations, cultures, and disciplines, as it is an integral feature of relationships between spouses, friends, teachers and students, and governments and civilians (Cottrell, Neuberg, & Li, 2007). Trust facilitates positive social interactions and has been suggested to be one of the foundations of an efficient economy; there is a strong correlation between economic growth and the percentage of citizens who generally trust others (Knack & Keefer, 1997). This is unsurprising given that a significant aspect of any economic transaction is the ability to trust and cooperate with nonrelated others (Arrow, 1974). At the dyadic level, deciding to trust—lending money to a friend or sharing personal information with an acquaintance—allows for the formation of partnerships that can produce mutual advantages to maximize an individual’s social fitness (Trivers, 1971) and overall societal well-being (Fehr & Camerer, 2007). However, decisions to trust are inherently risky because of the unpredictability and uncertainty of partners’ responses during social exchanges (Vives Moya & FeldmanHall, 2018). For example, an untrustworthy individual may fail to repay a loan, or gossip about another’s personal information. Without trust, however, neither markets nor social relations could thrive, as there would be an unwillingness to risk something of value in exchange for a later reward.
As with fairness and altruism, trust can be measured using a simple economic game (Berg, Dickhaut, & McCabe, 1995). A typical Trust Game involves a one-shot social interaction between two players, an Investor and a Trustee. The first player, the Investor, is initially faced with a decision to keep an endowment of money (e.g., $10) or share part of it with the Trustee. If shared, the investment is multiplied (often by a factor of four), and the Trustee faces the difficult decision to repay the trust by sending back up to half of the increased sum, or to violate that trust by keeping all the money, leaving the Investor with nothing. The social dilemma for the Investor is clear: Though it is more profitable to trust if it will be reciprocated, doing so leaves the Investor susceptible to the risk of a breach in trust, and ultimately, the loss of money. This game can be adapted for repeated play, such that social sanctions, communication between players, reputation, and relationships can all be manipulated.
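The mechanics above reduce to simple arithmetic, sketched below with the $10 endowment and fourfold multiplier mentioned in the text; the specific invested and returned amounts are illustrative choices, not empirical means.

```python
def trust_game(endowment, invested, multiplier, returned):
    """One-shot Trust Game payoffs as (investor, trustee)."""
    pot = invested * multiplier          # the Trustee receives the multiplied investment
    assert 0 <= returned <= pot, "the Trustee can only return what was received"
    investor = endowment - invested + returned
    trustee = pot - returned
    return (investor, trustee)

# Trust reciprocated: invest $5, the pot grows to $20, and half is returned.
print(trust_game(10, 5, 4, 10))  # → (15, 10)

# Trust violated: the Trustee keeps the whole pot, and the Investor loses the stake.
print(trust_game(10, 5, 4, 0))   # → (5, 20)
```

The two calls make the dilemma explicit: reciprocated trust beats not investing at all ($15 vs. $10 for the Investor), but a breach of trust leaves the Investor worse off than never having trusted.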
In traditional formulations, the Investor normally entrusts approximately 50% of their endowment to a Trustee, and the Trustee typically returns 50% of the expanded pie (Camerer, 2003). In one-shot games where there is no opportunity for social sanctions or reputation building through repeated play, it is rather surprising that Trustees return so much of the money, especially since many economists would argue that a rational, self-interested person should return nothing. That Trustees do not exhibit this behavioral pattern—even in situations where individuals are playing together only once and doing so anonymously—suggests the existence of (and adherence to) a moral norm of reciprocal trust (Cox, 2004; Dunning, Anderson, Schlösser, Ehlebracht, & Fetchenhauer, 2014; McCabe, Rigdon, & Smith, 2003). Moreover, if Trustees were only motivated by altruistic generosity, then their typical return should map onto the 28% given by Dictators in the Dictator Game (Engel, 2011). Thus, the expectation of direct reciprocity, the critical component of any Trust Game, appears to exist on both sides of the dyad: an individual invests her money because she believes that it will be reciprocated (Ma, Meng, & Shen, 2015), and partners reciprocate the increased monetary sum because there is a strong expectation of reciprocity (Baumgartner, Fischbacher, Feierabend, Lutz, & Fehr, 2009; Chang, Smith, Dufwenberg, & Sanfey, 2011; Fareri, Chang, & Delgado, 2012, 2015).
Of course, the degree to which individuals value norms of trust can vary. Even when the parameters of a task are held constant, there are some individuals who resolutely adhere to reciprocal trust norms and others who deviate from this norm (Baumgartner et al., 2009; Cesarini et al., 2008). There are other cases in which individuals might doggedly reciprocate trust in one situation, but swiftly forgo reciprocal behavior when the situation changes (Melnikoff & Bailey, 2018). For example, if a Trustee knows an Investor made a highly risky decision to trust, the Trustee will reciprocate with more money, illustrating the exquisite sensitivity people have to normative signals (Van Den Bos, van Dijk, Westenberg, Rombouts, & Crone, 2009). Evidence that individuals behave in accordance with, or deviate from, a moral norm depending on the context suggests that adhering to a moral code of trust can be quite malleable.
2.4. Cooperation
Norms can also powerfully influence cooperative behavior (Ostrom, 2014). Behavior in one-shot cooperation problems such as the Prisoner’s Dilemma or Public Goods Game reveals that people typically cooperate, despite understanding that it is in one’s best self-interest to maximize reward by defecting (Andreoni & Miller, 1993; Blake, Rand, Tingley, & Warneken, 2015; Harrington, 1995; Sally, 1995). From a purely economic perspective, this is a puzzling behavior, as it suggests that the social norm of cooperation is more motivating than maximizing favorable outcomes for the self. Even group size (i.e., playing with four individuals as opposed to 100) does not substantially change the rate at which individuals cooperate (Isaac, Walker, & Williams, 1994), indicating that there is a desire to maintain cooperation even in large, anonymous, and complex settings.
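The incentive to defect in a Public Goods Game can be sketched directly from its payoff rule: contributions are pooled, multiplied, and split evenly among all players. The endowment of 10 and multiplier of 1.6 below are conventional illustrative parameters, not values taken from the studies cited above.

```python
def public_goods_payoffs(contributions, multiplier=1.6, endowment=10):
    """Payoffs for one Public Goods Game round: the pooled contributions
    are multiplied and divided equally among all players."""
    pool = sum(contributions) * multiplier
    share = pool / len(contributions)
    return [endowment - c + share for c in contributions]

# Three full contributors and one free rider: the defector earns the most,
# even though universal contribution would maximize the group total.
print(public_goods_payoffs([10, 10, 10, 0]))  # → [12.0, 12.0, 12.0, 22.0]
```

Because each contributed unit returns only multiplier/n (here 0.4) to the contributor, defecting always maximizes individual payoff, which is exactly why the high observed cooperation rates are puzzling from a purely economic perspective.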
Prominent theories suggest that cooperation exists because of a reciprocal tit-for-tat pattern of behavior (Fehr & Fischbacher, 2004; Fehr & Gächter, 2000a, 2000b; Hamilton & Axelrod, 1981). Once there is an initial signal to cooperate, others will cooperate in return. This notion of conditional cooperation is supported by strong empirical evidence: When communication of intentions is allowed between partners, high levels of cooperation follow suit (Bohnet & Frey, 1999; Messick & Brewer, 1983; Ostrom & Walker, 2003; Sally, 1995). In contrast, cooperation languishes when external rules and sanctions are directly and explicitly imposed, compared with systems that allow internal norms to spontaneously develop (Yamagishi, 1988). These cooperative patterns can be manipulated by expectations of either direct or indirect reciprocity. Repeated play, for instance, typically garners greater rates of cooperative behavior (Fudenberg, Rand, & Dreber, 2012; Nowak, Sasaki, Taylor, & Fudenberg, 2004; Rand & Nowak, 2013). In these cases, individuals form a belief that their fellow partners will cooperate if they cooperate, a form of direct reciprocity. Cooperation can also arise out of indirect reciprocal actions, such as when an individual cooperates knowing that other individuals will be privy to this information (Gächter & Fehr, 1999; Mao, Dworkin, Suri, & Watts, 2017).
Such a system allows individuals to enhance their reputation by cooperating more, thereby procuring the downstream benefits that are associated with positive social standing (Pfeiffer, Tran, Krumme, & Rand, 2012).
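The tit-for-tat pattern described above is simple enough to simulate in a repeated Prisoner's Dilemma. The payoff matrix below uses the conventional illustrative values (T=5, R=3, P=1, S=0), which are not drawn from the studies cited; the point is only to show how conditional cooperation sustains mutual cooperation.

```python
# Payoffs as (row player, column player); 'C' = cooperate, 'D' = defect.
PAYOFFS = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
           ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate on the first move, then mirror the partner's last move."""
    return 'C' if not opponent_history else opponent_history[-1]

def play(strategy_a, strategy_b, rounds):
    """Play a repeated Prisoner's Dilemma and return cumulative scores."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)  # each sees the other's history
        pa, pb = PAYOFFS[(a, b)]
        hist_a.append(a); hist_b.append(b)
        score_a += pa; score_b += pb
    return (score_a, score_b)

# Two conditional cooperators lock into mutual cooperation from the first round.
print(play(tit_for_tat, tit_for_tat, 10))  # → (30, 30)
```

An initial cooperative signal is all the strategy needs: once both players mirror each other's last move, cooperation is self-sustaining, whereas a single defection would be reciprocated on the next round.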
As with other norms, patterns of cooperation can vary depending on the setting (Hilbe, Chatterjee, & Nowak, 2018; Ostrom, 2014). Contextual factors—such as whether others around you are cooperating (Fowler & Christakis, 2010; Mao et al., 2017), whether the norm of cooperation has been primed (Capraro, Smyth, Mylona, & Niblo, 2014; Peysakhovich & Rand, 2015), whether resources are abundant (Van Vugt & Samuelson, 1999), whether the temptation to freeride is small (Van Lange, 1992), or whether an individual is a member of a collectivist culture where there are strong norms of reciprocity amongst in-group members (Hofstede, 1980; Leung, 1997)—all positively contribute to an individual ultimately cooperating. Importantly, many of these contextual factors can also shape perceptions of reciprocity. For example, when social cues are available (e.g., discussing strategies with a partner before starting the game), the likelihood of reciprocity rises by as much as 40% (Sally, 1995). In contrast, uncertainty within the environment (e.g., ambiguity about the size of the resource being split, or about how many members are using the resource) reduces an individual’s willingness to cooperate (Budescu, Rapoport, & Suleiman, 1990, 1992; Budescu, Suleiman, & Rapoport, 1995).
3. Norm compliance is rewarding
From a decision-making perspective, an individual who chooses to comply with moral norms demonstrates that the value of norm compliance is greater than the value of selfishly maximizing one’s self-benefit. In this section, we evaluate evidence from the neuroimaging literature that demonstrates how norm compliance and reciprocal behaviors systematically engage the brain’s reward network.
The clearest neural evidence that people value reciprocity comes from studies on trust, cooperation, and fairness. In the domain of trust, neuroimaging experiments utilizing the Trust Game illustrate that the caudate, a region critical for indexing reward, computes information about the intention to reciprocate trusting acts (King-Casas et al., Reference King-Casas, Tomlin, Anen, Camerer, Quartz and Montague2005), and that other regions within the reward network—most notably, the ventral tegmental area (VTA) and ventral striatum—subserve reciprocal exchanges of trust between two players (Krueger et al., Reference Krueger, McCabe, Moll, Kriegeskorte, Zahn, Strenziok and … Grafman2007; Phan, Sripada, Angstadt, & McCabe, Reference Phan, Sripada, Angstadt and McCabe2010). A recent meta-analysis further reveals that these value signals are likely to be linked to aspects of reciprocity rather than to trust itself. When deciding to trust in repeated games (where direct reciprocity has the opportunity to manifest), there is a high likelihood that ventral striatum is recruited, but not in one-shot games, where direct reciprocity is impossible (Bellucci, Chernyak Sergey, Goodyear, Eickhoff Simon, & Krueger, Reference Bellucci, Chernyak Sergey, Goodyear, Eickhoff Simon and Krueger2016). Indeed, reciprocation of trust can be experimentally increased by stimulating the right orbitofrontal cortex, suggesting that reward regions contribute critically to reciprocal actions (Wang, Li, Yin, Li, & Wei, Reference Wang, Li, Yin, Li and Wei2016). The value associated with reciprocal trust also appears to be conditional on social distance. Individuals trust close friends with more money than strangers (even when friends and strangers reciprocate at the same rate), which is associated with greater ventral striatum activity (Fareri et al., Reference Fareri, Chang and Delgado2015).
Thus, not only does reciprocity appear to depend on immediate observations (i.e., did my partner just behave in a way that reciprocated my trust?), but it also seems linked to previously learned expectations (i.e., is my partner generally someone who would reciprocate my trust?).
Studies of reciprocal cooperation demonstrate a similar engagement of reward-processing regions. An early experiment using the Prisoner’s Dilemma observed that mutual cooperation was reported as highly satisfying, and these cooperative decisions were associated with enhanced blood-oxygen-level-dependent activity within the nucleus accumbens, caudate, and orbitofrontal cortex (Rilling et al., Reference Rilling, Gutman, Zeh, Pagnoni, Berns and Kilts2002). Subsequent work contrasting neural responses in cooperative and competitive variants of a coordination game found that mutual cooperation recruits orbitofrontal cortex (Decety, Jackson, Sommerville, Chaminade, & Meltzoff, Reference Decety, Jackson, Sommerville, Chaminade and Meltzoff2004) even when coordination does not increase monetary reward. This suggests that even when monetary benefits to the self are not maximized, the act of cooperating is in itself rewarding.
There is also an abundance of evidence illustrating that reciprocal actions are valued in the wake of a fairness violation. In these cases, however, violating fairness norms characteristically engenders behaviors that are construed as negative reciprocity, such as punishing the perpetrator (Fehr & Fischbacher, Reference Fehr and Fischbacher2004). Nearly two decades of work demonstrates that receiving unfair offers in the Ultimatum Game is associated with increased anterior insula and anterior cingulate cortex activity, regions associated with negative emotional experiences and conflict (Chang & Sanfey, Reference Chang and Sanfey2011; Sanfey et al., Reference Sanfey, Rilling, Aronson, Nystrom and Cohen2003; Xiang, Lohrenz, & Montague, Reference Xiang, Lohrenz and Montague2013). In contrast, receiving fair offers recruits the reward network, including ventral striatum and orbitofrontal cortex (Tabibnia, Satpute, & Lieberman, Reference Tabibnia, Satpute and Lieberman2008). These reward regions also become engaged when punishment is levied on the transgressor, suggesting that people highly value enforcing fairness norms, even when punishment comes with a monetary cost (de Quervain, Fischbacher, Treyer, & Schellhammer, Reference de Quervain, Fischbacher, Treyer and Schellhammer2004; Hu, Strang, & Weber, Reference Hu, Strang and Weber2015; Singer et al., Reference Singer, Seymour, O’Doherty, Stephan, Dolan and Frith2006).
Neural evidence for the value of reciprocity in altruism is less straightforward and less abundant, largely due to the fact that altruism appears to draw upon an indirect reciprocity mechanism. This necessarily requires that any expected returns from norm compliance be abstracted from the altruistic action itself (e.g., in the form of “social capital”). Accordingly, identifying the neural underpinnings of reciprocity in the domain of altruism requires observing how indirect reciprocity manifests over time, or at least between multiple individuals in an iterated task. These features make it relatively difficult to study the neural value of reciprocity within an altruistic context, and thus there is limited work on the topic. However, in the few cases in which researchers have fruitfully examined the blood-oxygen-level-dependent signal underpinning the effects of indirect reciprocity during altruistic social exchanges, evidence dovetails with work on trust, fairness, and cooperation: Altruistic decisions are influenced by indirect reciprocity motivations, a motivation that is subserved, in part, by increased caudate activity (Watanabe et al., Reference Watanabe, Takezawa, Nakawake, Kunimatsu, Yamasue, Nakamura and … Masuda2014). In other words, even an indirect reciprocity mechanism that manifests across multiple individuals behaving altruistically appears to rely on regions that process reward.
In contrast, those who have broken a moral norm (oftentimes to selfishly enhance their own monetary benefit) demonstrate a different pattern of neural activity that does not reliably include reward regions. Several neuroimaging studies across multiple different social domains illustrate that the dorsolateral prefrontal cortex (dlPFC) is recruited when selfishly violating a moral norm (Baumgartner, Knoch, Hotz, Eisenegger, & Fehr, Reference Baumgartner, Knoch, Hotz, Eisenegger and Fehr2011; De Neys, Novitskiy, Geeraerts, Ramautar, & Wagemans, Reference De Neys, Novitskiy, Geeraerts, Ramautar and Wagemans2011; FeldmanHall, Dalgleish, et al., Reference FeldmanHall, Dalgleish, Thompson, Evans, Schweizer and Mobbs2012; Ruff, Ugazio, & Fehr, Reference Ruff, Ugazio and Fehr2013; Yamagishi et al., Reference Yamagishi, Takagishi, Fermin, Kanai, Li and Matsumoto2016). Given the role of the dlPFC in cognitive control (Greene, Nystrom, Engell, Darley, & Cohen, Reference Greene, Nystrom, Engell, Darley and Cohen2004; Mansouri, Tanaka, & Buckley, Reference Mansouri, Tanaka and Buckley2009; Ochsner & Gross, Reference Ochsner and Gross2005), activation of this region during selfish decisions suggests that it may be difficult for individuals to adjudicate between options when a selfish opportunity is sufficiently tempting. Neural activity in dlPFC may therefore reflect the deployment of cognitive control to overcome concern for another’s welfare (Rilling et al., Reference Rilling, Glenn, Jairam, Pagnoni, Goldsmith, Elfenbein and Lilienfeld2007). These neural data paint an emerging picture that cognitive control appears to be required to resolve self-other conflicts that ultimately favor the self.
4. Emotions facilitate reciprocal behavior
Although emotion has historically been regarded as an irrational and dangerous threat to our moral calculus (Plato, 1955), the last few decades have fruitfully illustrated how emotion can play a special role in the establishment of response-dependent values and norm compliance (D’Arms & Jacobson, Reference D’Arms and Jacobson1994; Phelps, Lempert, & Sokol-Hessner, Reference Phelps, Lempert and Sokol-Hessner2014). Take, for instance, a situation where you contemplate cheating on your spouse. You might feel a pang of disapproval or shame upon considering such behavior. These moral emotions moderate moral standards (is it wrong if you are in an unhappy marriage?), and by extension, moral behavior (do you decide to have the affair?). In essence, the link between norm compliance and moral behavior is thought to be influenced by moral emotions (Tangney, Stuewig, & Mashek, Reference Tangney, Stuewig and Mashek2007), insofar as emotional experiences can sustain one’s own compliance with moral norms and motivate enforcement of norm compliance in others (Dunning, Fetchenhauer, & Schlösser, Reference Dunning, Fetchenhauer and Schlösser2012; Fehr & Gächter, Reference Fehr and Gächter2002).
4.1. Self-directed emotions
Guilt and shame are emotions that are explicitly linked to promoting the interests of society rather than one’s own interests (Pizarro, Reference Pizarro2000). These moral emotions emerge early in childhood (Vaish, Reference Vaish2018), and are negative evaluations of one’s own morally transgressive behavior (Eisenberg, Reference Eisenberg2000). Guilt appears to be a particularly salient motivator of reparative behavior, as it encourages people to make amends for violating moral norms, and can thus enhance how positively the transgressing person is perceived (Stearns & Parrott, Reference Stearns and Parrott2012). Guilt-proneness consistently correlates with measures of perspective-taking and is inversely related to antisocial and criminal behavior (Tangney et al., Reference Tangney, Stuewig and Mashek2007). Aligning with these findings, several neuroimaging studies have found that when describing moral transgressions, feelings of guilt are associated with neural activity in a network that corresponds with thinking about other people (Basile et al., Reference Basile, Mancini, Macaluso, Caltagirone, Frackowiak and Bozzali2011; Shin et al., Reference Shin, Dougherty, Orr, Pitman, Lasko, Macklin and … Rauch2000; Takahashi et al., Reference Takahashi, Yahata, Koeda, Matsuda, Asai and Okubo2004), which may reflect that a key function of guilt is to promote perspective-taking. In these cases, it is likely that individuals are thinking about their partner’s expectations, and thus guilt seems to exert the greatest influence on reciprocal moral actions. As guilt is associated with breaches of moral norms and social standards, the existence of guilt (or even the anticipation of guilt) is a potent motivator for upholding moral norms (Battigalli & Dufwenberg, Reference Battigalli and Dufwenberg2007; Chang et al., Reference Chang, Smith, Dufwenberg and Sanfey2011).
Although emotions such as guilt encourage people to avoid breaking norms, other emotions motivate people to actively comply with norms. For example, some theories propose that empathy sensitizes people to value altruism (Batson et al., Reference Batson, Batson, Slingsby, Harrell, Peekna and Todd1991; Preston, Reference Preston2013; Zaki, Reference Zaki2014). To the extent that the interplay between norms enables flexible moral action, it may therefore be the case that empathy’s primary contribution is the promotion of altruism above other norms (Rumble, Van Lange, & Parks, Reference Rumble, Van Lange and Parks2009; Zaki & Mitchell, Reference Zaki and Mitchell2011), which can be amplified by warm glow motives (Andreoni, Reference Andreoni1990; Ashar, Andrews-Hanna, Dimidjian, & Wager, Reference Ashar, Andrews-Hanna, Dimidjian and Wager2017; FeldmanHall et al., Reference FeldmanHall, Dalgleish, Evans and Mobbs2015). Indeed, recent work on extraordinary altruists demonstrates that these individuals maintain atypically high concern for the welfare of distant others (Vekaria, Brethel-Haurwitz, Cardinale, Stoycos, & Marsh, Reference Vekaria, Brethel-Haurwitz, Cardinale, Stoycos and Marsh2017), a finding that is mirrored by experimental inductions of empathy in normative populations (Klimecki, Mayer, Jusyte, Scheeff, & Schönenberg, Reference Klimecki, Mayer, Jusyte, Scheeff and Schönenberg2016). In addition to warm glow motives, other positive emotions (such as happiness) can also actively facilitate prosocial behaviors through a reward reinforcement mechanism (Aknin, Van de Vondervoort, & Hamlin, Reference Aknin, Van de Vondervoort and Hamlin2018).
4.2. Other-directed emotions
Negative emotions such as anger and disgust arise from being treated unfairly, and are believed to motivate punishment (Pillutla & Murnighan, Reference Pillutla and Murnighan1996; Srivastava, Espinoza, & Fedorikhin, Reference Srivastava, Espinoza and Fedorikhin2009; Van’t Wout, Kahn, Sanfey, & Aleman, Reference Van’t Wout, Kahn, Sanfey and Aleman2006). Recent work reveals that the act of punishing can alleviate the onslaught of these negative emotional experiences (Hétu, Luo, D’Ardenne, Lohrenz, & Montague, Reference Hétu, Luo, D’Ardenne, Lohrenz and Montague2017). Unsurprisingly, watching people break moral norms that target other individuals can also give rise to a similar set of moral emotions, including righteous anger, indignation, contempt, and disgust (Dubreuil, Reference Dubreuil2010; Moll et al., Reference Moll, de Oliveira-Souza, Eslinger, Bramati, Mourão-Miranda, Andreiuolo and Pessoa2002; Rozin, Lowery, Imada, & Haidt, Reference Rozin, Lowery, Imada and Haidt1999).
Contempt (the moral denunciation of others) is often expressed in response to the violation of communal codes, and is therefore a negative social evaluation of others (Tangney et al., Reference Tangney, Stuewig and Mashek2007). Contempt is most often expressed by those not directly harmed by the violation, and thus deals with norm compliance from a third-party perspective. Bystanders observing an injustice can express contempt to ostracize the agent causing harm. For example, people feel contempt towards those who violate social hierarchy norms (Rozin et al., Reference Rozin, Lowery, Imada and Haidt1999). When presented with an angry, contemptuous face criticizing a norm violation, individuals report greater feelings of guilt (Giner-Sorolla & Espinosa, Reference Giner-Sorolla and Espinosa2011), which can affect rates of future norm compliance. In essence, these third-party emotions are used for social policing, with the aim to minimize morally offensive behavior (Tangney, Miller, Flicker, & Barlow, Reference Tangney, Miller, Flicker and Barlow1996).
4.3. Atypical emotion processing
Emotion’s critical role in guiding norm compliance is even more evident when considering populations whose processing of emotions is atypical. Individuals who fail to generate an emotional arousal response before approving harmful, immoral actions illustrate how a lack of anticipatory emotional response results in behavior that is insensitive to moral norms and future consequences (Blair, Reference Blair1996; Harenski, Harenski, Shane, & Kiehl, Reference Harenski, Harenski, Shane and Kiehl2010; Moretto, Làdavas, Mattioli, & Di Pellegrino, Reference Moretto, Làdavas, Mattioli and Di Pellegrino2010; Rilling et al., Reference Rilling, Glenn, Jairam, Pagnoni, Goldsmith, Elfenbein and Lilienfeld2007; Shamay-Tsoory, Harari, Aharon-Peretz, & Levkovitz, Reference Shamay-Tsoory, Harari, Aharon-Peretz and Levkovitz2010). For example, lesions to the medial frontal cortex typically lead to blunted emotional responding (Bechara, Damasio, & Damasio, Reference Bechara, Damasio and Damasio2000), and accumulating evidence indicates that this region is critical for evaluating emotional states and integrating this information within the context of current goal states, such as adhering to relevant social norms (Forbes & Grafman, Reference Forbes and Grafman2010). In other words, the medial frontal cortex likely processes internal emotional signals alongside cues about social norms to help guide successful moral behavior.
Individuals diagnosed with psychopathy and conduct disorder also provide a compelling case for the intimate link between disrupted emotional responses and patterns of aberrant moral behaviors. For example, when watching others in pain, adult psychopaths, adolescents who exhibit psychopathic traits, and adolescents diagnosed with conduct disorder all show attenuated engagement of brain regions known to respond to another’s pain (Decety, Chen, Harenski, & Kiehl, Reference Decety, Chen, Harenski and Kiehl2013; Decety, Michalska, Akitsuki, & Lahey, Reference Decety, Michalska, Akitsuki and Lahey2009; Marsh et al., Reference Marsh, Finger, Fowler, Adalio, Jurkowitz, Schechter and … Blair2013). For psychopaths, these failures in appreciating the emotional aspects of a victim’s suffering have been explicitly linked to both abnormal (i.e., immoral) judgments (Young, Koenigs, Kruepke, & Newman, Reference Young, Koenigs, Kruepke and Newman2012) and an insensitivity to norms of generosity (Koenigs, Kruepke, & Newman, Reference Koenigs, Kruepke and Newman2010). There is emerging research, however, that suggests these aberrant moral behaviors may also be a product of failures in processing value (Baskin-Sommers, Stuppy-Sullivan, & Buckholtz, Reference Baskin-Sommers, Stuppy-Sullivan and Buckholtz2016; Hosking et al., Reference Hosking, Kastman, Dorfman, Samanez-Larkin, Baskin-Sommers, Kiehl and … Buckholtz2017; Mitchell et al., Reference Mitchell, Fine, Richell, Newman, Lumsden, Blair and Blair2006). Individuals with higher levels of psychopathy cooperate less and exhibit reduced activity in orbitofrontal cortex when cooperating (Rilling et al., Reference Rilling, Glenn, Jairam, Pagnoni, Goldsmith, Elfenbein and Lilienfeld2007), hinting at a causal role of reward in motivating cooperative behavior.
However, given the intimate link between emotion and reward (Adolphs, Reference Adolphs2002; Berridge & Robinson, Reference Berridge and Robinson2003; Knutson, Adams, Fong, & Hommer, Reference Knutson, Adams, Fong and Hommer2001; Kringelbach, Reference Kringelbach2005; Murray, Reference Murray2007; O’Doherty, Kringelbach, Rolls, Hornak, & Andrews, Reference O’Doherty, Kringelbach, Rolls, Hornak and Andrews2001; Phelps & LeDoux, Reference Phelps and LeDoux2005), it is likely that perturbed representations of value manifest because of failures in generating an emotional response (Bechara, Reference Bechara2004; Bechara, Damasio, Damasio, & Anderson, Reference Bechara, Damasio, Damasio and Anderson1994; Bechara, Damasio, Tranel, & Damasio, Reference Bechara, Damasio, Tranel and Damasio1997), which can subsequently result in immoral behavior.
5. Learning moral norms through reciprocity
Moral norms develop and are transmitted through social interactions and relationships (Ho, MacGlashan, Littman, & Cushman, Reference Ho, MacGlashan, Littman and Cushman2017). The frequency with which these norms are attended to and adhered to suggests that they are instilled at an early age (House, Reference House2018). Children as young as three years old can distinguish between legal and social violations (Smetana, Reference Smetana1983). Recent developmental research further reveals that children begin to obey norms after an authority figure illustrates they should be followed (Hardecker & Tomasello, Reference Hardecker and Tomasello2017; Schmidt, Butler, Heinz, & Tomasello, Reference Schmidt, Butler, Heinz and Tomasello2016). Such vicarious learning appears early in the developmental trajectory, and can help facilitate the distinction between social and moral norms in young children (e.g., wearing pajamas to school versus hitting another; Turiel, Reference Turiel1983). As a child’s moral calculus develops further, they begin to consider contextual factors, such as intent, provocation, and duty when evaluating which moral norms might be appropriate for the situation (Engelmann, Herrmann, & Tomasello, Reference Engelmann, Herrmann and Tomasello2017).
Once learned, moral norms seem to be sustained through reward and punishment contingencies that are based on expectations of reciprocity (Göckeritz, Schmidt, & Tomasello, Reference Göckeritz, Schmidt and Tomasello2014; Hardecker, Schmidt, & Tomasello, Reference Hardecker, Schmidt and Tomasello2017; Leimgruber, Reference Leimgruber2018). These expectations can be expressed both directly (e.g., monetary benefit) and indirectly (e.g., gaining social capital; Hackel & Zaki, Reference Hackel and Zaki2018). For example, breaking certain social norms, such as wearing the wrong outfit to school, can evoke scorn and mockery from peers, and if the transgression is particularly egregious, it may even induce social rejection. Accordingly, the feedback received from others acts as a reinforcement mechanism that can dictate the adherence to (or deviance from) moral norms (Aknin et al., Reference Aknin, Van de Vondervoort and Hamlin2018). Over the last few years, researchers have begun to successfully apply reinforcement learning frameworks to explain how social learning unfolds. For example, prediction errors—when an actual outcome deviates from an expected outcome—allow individuals to update their expectations about the social world to align with reality (Behrens, Hunt, Woolrich, & Rushworth, Reference Behrens, Hunt, Woolrich and Rushworth2008; FeldmanHall, Otto, & Phelps, Reference FeldmanHall, Otto and Phelps2018; Joiner, Piva, Turrin, & Chang, Reference Joiner, Piva, Turrin and Chang2017; Klucharev, Hytönen, Rijpkema, Smidts, & Fernández, Reference Klucharev, Hytönen, Rijpkema, Smidts and Fernández2009; Montague & Lohrenz, Reference Montague and Lohrenz2007).
These prediction errors, which are largely generated by the midbrain dopaminergic system and the structures it innervates (Haber & Knutson, Reference Haber and Knutson2010; Ruff & Fehr, Reference Ruff and Fehr2014; Schultz, Dayan, & Montague, Reference Schultz, Dayan and Montague1997), may drive moral learning by encoding norm violations.
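The prediction-error computation invoked here can be written as a simple Rescorla-Wagner-style update, in which an expectation about a partner (e.g., their probability of reciprocating) moves toward each observed outcome in proportion to the error. The following is a minimal sketch, assuming an arbitrary learning rate (alpha = 0.2), binary outcomes, and a neutral prior; none of these values come from the studies cited.

```python
# Sketch of prediction-error learning about a partner's norm compliance.
# Update rule: delta = outcome - expectation; expectation += alpha * delta

def update(expectation, outcome, alpha=0.2):
    """One Rescorla-Wagner update; returns (new expectation, prediction error)."""
    delta = outcome - expectation      # prediction error (PE)
    return expectation + alpha * delta, delta

# A partner who reciprocates (outcome = 1) on most rounds but violates
# trust on round 3 (outcome = 0): the violation produces a large negative
# prediction error that sharply revises the learned impression.
v = 0.5                                # neutral prior expectation
for outcome in [1, 1, 0, 1]:
    v, delta = update(v, outcome)
    print(f"outcome={outcome}  PE={delta:+.3f}  expectation={v:.3f}")
```

In this toy trace, the norm violation yields a prediction error roughly as large as the first two positive errors combined, illustrating why violations are described below as especially diagnostic for impression formation.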
In one of the first studies illustrating that norm violations generate prediction errors, researchers found that subjects in a Trust Game transferred less money to partners who violated trust (King-Casas et al., Reference King-Casas, Tomlin, Anen, Camerer, Quartz and Montague2005). This behavior was underpinned by prediction error signals in the caudate, such that the magnitude of neural activity in response to a partner’s reciprocation (or lack thereof) tracked decisions to trust a partner with more money on the next round. Though the prediction error signal was initially observed after subjects saw feedback about whether a partner upheld a trust norm, it began to shift backward in time as subjects learned more about a partner’s trustworthiness, suggesting that subjects were developing a stable impression of their partner’s moral traits (i.e., their willingness to reciprocate). Subsequent work further decoupled monetary reward from learning about moral traits (e.g., generosity), revealing that activity in a key learning hub—the ventral striatum—indexes dissociable prediction errors when learning about money and stable moral characteristics such as generosity (Hackel, Doll, & Amodio, Reference Hackel, Doll and Amodio2015). Moreover, prediction errors associated with learning about another’s generosity correlated with activity in a network of brain regions implicated in impression updating (including ventrolateral prefrontal cortex and right temporoparietal junction), illustrating that people find norm violations especially diagnostic in helping to form a stable impression of another’s personality (Mende-Siedlecki, Baron, & Todorov, Reference Mende-Siedlecki, Baron and Todorov2013).
However, an individual’s ability to glean information about their social world (and to subsequently adaptively update their behavior) may depend on the social context and the relevant moral norm. To probe whether prediction errors are contextually modulated, researchers have dynamically manipulated moral expectations using the Ultimatum Game. When led to believe that unfair offers are ubiquitous, subjects were less willing to punish partners who break fairness norms (Sanfey, Reference Sanfey2009), which provides compelling evidence that people adjust their behaviors according to the prevailing norms of a specific social environment. These context-sensitive decisions to punish were supported by prediction errors in canonical learning regions such as ventral striatum, substantia nigra, and VTA (Hétu et al., Reference Hétu, Luo, D’Ardenne, Lohrenz and Montague2017; Xiang et al., Reference Xiang, Lohrenz and Montague2013). The notion that stable impressions about another’s moral traits are dependent on moral expectations is also supported by memory research. In a study from our own lab examining how decision-making is influenced by episodic memory, we observed that people adaptively play with past partners if accurate impressions of the partner’s norm compliance have been fully encoded by rich episodic memories (Murty, FeldmanHall, Hunter, Phelps, & Davachi, Reference Murty, FeldmanHall, Hunter, Phelps and Davachi2016). Together, these results suggest that people use learned impressions of others’ moral traits to guide adaptive decision-making.
The tight coupling between norms, moral learning, and adaptive decision-making demonstrates that people use knowledge of norm violations to form impressions of others’ moral traits. Direct experience of another person’s failure to comply with norms produces prediction errors, and these errors drive fast and flexible learning about others’ moral traits, such as generosity and trustworthiness (Hackel et al., Reference Hackel, Doll and Amodio2015; King-Casas et al., Reference King-Casas, Tomlin, Anen, Camerer, Quartz and Montague2005). Once moral impressions stabilize, learning regions cease to track deviations from expected normative behavior (Delgado, Frank, & Phelps, Reference Delgado, Frank and Phelps2005). As norm violations provide diagnostic information about others’ traits (Mende-Siedlecki et al., Reference Mende-Siedlecki, Baron and Todorov2013), stable impressions can guide optimal choices by enabling people to affiliate with those who are likely to be rewarding social partners and to avoid those who are likely to be unrewarding (Murty et al., Reference Murty, FeldmanHall, Hunter, Phelps and Davachi2016). In fact, these moral impressions can weigh so heavily on social decisions that people choose not to cooperate with a stranger if they know that the stranger is friends with a norm violator (Martinez, Mack, Gelman, & Preston, Reference Martinez, Mack, Gelman and Preston2016).
6. Integration with other theories
While we posit that moral decision-making is largely motivated by four fundamental norms, other prominent theories have argued that a number of additional norms are critical for successful socialization (Moral Foundations Theory; Haidt, Reference Haidt2007), or that all moral behaviors can be reduced to a single motivation—the desire to reduce harm (Theory of Dyadic Morality; Schein & Gray, Reference Schein and Gray2017). Here, we have tried to strike a balance between parsimony and explanatory power. For example, Moral Foundations Theory may place undue weight on certain norms (e.g., purity) that are less represented in many everyday moral quandaries. On the other hand, the Theory of Dyadic Morality is parsimonious by its very nature. Although we would agree that many moral situations can be perceived through a lens of harm, such an account can be overly restrictive when trying to explain the wide range of findings in psychology, economics, and neuroscience.
By allowing the findings from psychology and neuroeconomics to guide us, we have highlighted reciprocity as a common mechanism that motivates adherence to a discrete suite of moral norms. The idea that reciprocity provides a unifying principle for social behavior is not new (Berg et al., Reference Berg, Dickhaut and McCabe1995; Bolton & Ockenfels, Reference Bolton and Ockenfels2000; de Waal & Luttrell, Reference de Waal and Luttrell1988; Fehr & Gächter, Reference Fehr and Gächter2000b; Gouldner, Reference Gouldner1960). Early models such as Social Exchange Theory suggested that reciprocity is a universally held principle (Gouldner, Reference Gouldner1960), and that high-quality relationships can emerge and flourish through reciprocal actions (Cropanzano & Mitchell, Reference Cropanzano and Mitchell2005; Thibaut & Kelley, Reference Thibaut and Kelley1959). We build on this work, examining how this one simple mechanism can explain people’s adherence to a set of specific moral norms, and how these moral norms collectively provide an overarching framework for understanding moral behavior across a variety of domains.
7. Conclusions
Moral norms facilitate harmonious interpersonal exchanges by providing people with a set of common expectations. Here we highlight four norms—fairness, altruism, trust, and cooperation—that we believe to be the most foundational for successful social living. By discussing the ways in which these norms can shape behavior, we offer an account for the proximate psychological mechanisms motivating moral norm compliance: Reciprocity. People comply with moral norms because they have the direct or indirect expectation that others will also adhere to these norms, and because they believe that norm violations may have negative repercussions for the future well-being of both specific individuals and entire societies. Activation in the brain’s reward network supports active adherence to these moral norms, suggesting that people find value in complying with norms and engaging in reciprocal behaviors with others. In addition, we examine how aversive moral emotions such as contempt and guilt facilitate norm enforcement by devaluing selfish, norm-violating actions. Finally, we review evidence that learning about norm violators depends on a network of brain regions that encode reward and violations of expected reward, suggesting that people learn about others’ social value through a reinforcement learning mechanism.
The degree to which humans act fairly, help, trust, and cooperate is often viewed as a puzzle across an array of disciplines. Some of the deepest thinkers in human history, including Adam Smith, Jean-Jacques Rousseau, and Charles Darwin, have attempted to provide accounts of how social norms dictate appropriate behaviors in nearly every aspect of human life, from the trivial (e.g., wearing the correct attire to a wedding) to the deeply consequential (e.g., sentencing a criminal to death). However, few accounts have successfully reconciled two seemingly contradictory features of norm compliance: Although social norms are pervasive and often perceived as inflexible in nature, the degree to which an individual adheres to these norms produces malleable and context-specific behaviors. Emerging research in moral psychology and neuroscience elucidates how norms are supported by the simple cognitive mechanism of reciprocity. Reciprocal behavior is stable enough to support interpersonal exchanges between strangers, yet flexible enough to accommodate adaptive behavior across a range of social environments. We provide a unifying framework for understanding how a wide variety of putatively unrelated moral behaviors—helping a homeless person, getting angry at a fraudster, asking a stranger at the library to look after your computer while you take a call—are supported by expectations of reciprocity and the associated neural encoding of reward.
Conflicts of Interest:
None.
Acknowledgment:
We gratefully acknowledge helpful comments and assistance from members of the FeldmanHall Lab.