The key to the evolution of morality was in Robert Frank’s book “Passions Within Reason” all along. How did we miss that?

There are literally hundreds, maybe even thousands, of papers with the same introduction. Human cooperation. Enduring conundrum. Selection. Model. Prisoners’ dilemma. Conditions. There are so many of them that it has become really hard to find new words to say the same thing over and over again. I know, because I wrote some of them myself. And yet, in spite of this endless supply of models, the question seems to remain unanswered (because each new paper still starts with the same introduction) and the conundrum endures.

So why is it that all of these papers, with their elegant theory and well-done simulations, still do not quench our thirst for answers? In our review paper “The evolution of morality and the role of commitment”, Aslıhan Akdeniz and I argue that the reason is that these papers look at the wrong game, and, as a consequence, at the wrong mechanisms.

The standard approach is to have individuals play prisoners’ dilemmas, and the most popular mechanisms for the evolution of cooperation fall under the broad categories of population structure / kin selection, and repetition. For behavior in games like the ultimatum game or the trust game, it is then assumed that we bring the altruism and reciprocity that evolved for playing prisoners’ dilemmas (repeated or one-shot) to these other games too.

In our paper, we give two indications that this approach does not work.

The first is that, in order to be an explanation for human cooperation, models should not only explain why we cooperate as much as we do, but also why other species do not. And while both of these mechanisms matter, neither of them gives a good reason why we humans would be special in our social behavior, or in the extent of our cooperation. Population structures of hunter-gatherers are not really different from those of other primates, and the repeated nature of our interactions is also shared with many other species.

The second reason is that human behavior in these other games (like trust games or ultimatum games) does not really match what one would expect if we bring cooperative or reciprocal attitudes that evolved for playing prisoners’ dilemmas to those other games. A large share of our paper is devoted to linking the existing, prisoners’ dilemma-based theory as well as possible to behavior in those games, and the result is not encouraging.

Our answer to the question why humans are different from other species is that humans are playing genuinely different games. (Well, our answer; really it is Robert Frank’s answer, but more about that later.) We argue that this has to do with humans being a social, technological species. That means that the way we make a living requires that we plan ahead, and that we do so together. That gives us a bunch of psychological traits that make us different. We can delay gratification. That is needed to be able to make investments that pay off in the future. We have language and theory of mind. Those are needed to be able to coordinate our complex joint ventures. And we care. About getting our fair share of the fruits of our collective enterprises, but also about others. And the reason that we care is not population structure, or repetition. It is commitment.

Games like the ultimatum game or the trust game have a time component. That makes them different from the standard prisoners’ dilemma, in which both players simultaneously choose between cooperation and defection. This time component makes it easier for commitment problems to arise. Think for instance of the trust game. In this game, one player can hand over money to a second player. The money is then doubled, and the second player can choose to send some of the increased amount of money back. Or not.

Such a game could represent a situation in which the first player has a resource, but no opportunity to put it to use, while the second does not have the resource, but would know how to use it if it were made available. Now the second player has a commitment problem. If she can convince the first player that she will return the investment, plus a little extra, then the first player is best off investing the money. But for a selfish second player, that is a hard case to make. A second player who cares about not robbing the first of her money, on the other hand, and who actually prefers to reward her trust, might end up getting the first player to invest – to their mutual benefit.

The mechanism of commitment always combines something that is not fitness maximizing ex post (once the investment is made, the second player would end up with more money if she did not send anything back) with it nonetheless being fitness maximizing ex ante (because otherwise the first player would not invest). The key to the benefits of this “rationally irrational” behavior therefore lies in the effect it has on the behavior of others.
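The ex post / ex ante logic can be sketched with a toy calculation. The concrete numbers (an endowment of 10, a returned amount of 12) are illustrative assumptions; the post only specifies that the invested money is doubled.

```python
# A minimal sketch of the trust game and the commitment logic, assuming
# an endowment of 10 and a doubled investment, as described in the post.
# The committed trustee's return of 12 is an illustrative choice.

ENDOWMENT = 10   # the first player's (investor's) resource
MULTIPLIER = 2   # the invested money is doubled

def payoffs(invest, returned):
    """(investor, trustee) payoffs. If the investor keeps her endowment,
    the trustee gets nothing; if she invests, the pot is doubled and the
    trustee decides how much of it to send back."""
    if not invest:
        return (ENDOWMENT, 0)
    pot = ENDOWMENT * MULTIPLIER
    return (returned, pot - returned)

no_trust  = payoffs(False, 0)    # (10, 0)
exploited = payoffs(True, 0)     # (0, 20): a selfish trustee keeps it all
trusted   = payoffs(True, 12)    # (12, 8): a committed trustee rewards trust

# A rational investor who anticipates a selfish trustee does not invest:
assert no_trust[0] > exploited[0]

# With a credibly committed trustee, investing benefits both players:
assert trusted[0] > no_trust[0]   # investor: 12 > 10
assert trusted[1] > no_trust[1]   # trustee:   8 > 0

# The commitment point: ex post, returning money is not payoff maximizing
# (the trustee keeps 8 rather than 20), but ex ante it is, because only a
# credible commitment gets the investor to invest at all.
assert trusted[1] < exploited[1]
```

Nothing in the sketch hinges on the particular numbers: any returned amount strictly between the endowment and the doubled pot makes investing profitable for both players while still being ex post costly for the trustee.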

The irony is that this mechanism has been around as an explanation for human deviations from selfishness since 1988, when Robert Frank published his book “Passions Within Reason: The Strategic Role of the Emotions” – well before most of the literature on the evolution of cooperation. Why did we miss it? I don’t know. I read the book, and I found it pretty convincing, but that did not stop me from writing papers with prisoners’ dilemmas either. Maybe it is the allure of model simplicity. Maybe it is a combination of vanity and the limited scope for technically advanced models that commitment offers. Maybe we are just a little stuck in a tradition. But either way, it does not seem that the problem will ever be solved by more prisoners’ dilemmas. So unless we also consider different games, and start taking commitment seriously, the conundrum will endure. Indefinitely.

Matthijs van Veelen discusses the evolution of morality and the role of commitment in this talk for the CWESS 2021 Conference.

Image Credits: “NYC – National Museum of the American Indian – Whale Hunt Painted Screen” by wallyg is licensed with CC BY-NC-ND 2.0. To view a copy of this license, please click here.

Author biography

Matthijs van Veelen is Professor of Evolution and Behaviour at CREED, the experimental economics lab of the University of Amsterdam. The paper in Evolutionary Human Sciences is joint with Aslıhan Akdeniz, who is a PhD student at the same lab. Because the content refers to some of Matthijs’ previous work, the blog post is written from his perspective.

Original publication:

Aslıhan Akdeniz and Matthijs van Veelen

The evolution of morality and the role of commitment

Evolutionary Human Sciences, [22 July 2021, DOI: 10.1017/ehs.2021.36]
