The evolution of direct reciprocity under uncertainty can explain human generosity in one-shot encounters.
Andrew W. Delton*, Max M. Krasnow*, Leda Cosmides & John Tooby
*Indicates joint first authorship
Proceedings of the National Academy of Sciences, 2011.
Abstract: Are humans too generous? The discovery that subjects choose to incur costs to allocate benefits to others in anonymous, one-shot economic games has posed an unsolved challenge to models of economic and evolutionary rationality. Using agent-based simulations, we show that such generosity is the necessary byproduct of selection on decision systems for regulating dyadic reciprocity under conditions of uncertainty. In deciding whether to engage in dyadic reciprocity, these systems must balance (i) the costs of mistaking a one-shot interaction for a repeated interaction (hence, risking a single chance of being exploited) with (ii) the far greater costs of mistaking a repeated interaction for a one-shot interaction (thereby precluding benefits from multiple future cooperative interactions). This asymmetry builds organisms naturally selected to cooperate, even when exposed to cues that they are in one-shot interactions.
How uncertainty creates generosity
The scientific problem
An unsolved puzzle has confronted evolutionary biologists, economists, game theorists, and other behavioral scientists for the past 15 years: Humans appear to be surprisingly generous. That is, their allocations to others in anonymous one-shot interactions seem to violate fundamental and otherwise well-established theories in evolutionary biology, game theory, and economics.
Specifically, people will often help strangers when there is no foreseeable gain in doing so, for example when there is little likelihood of ever running into the recipient of the aid again. People leave tips when traveling through cities far from home, or stop on the interstate to help someone whose car has broken down. Indeed, when psychologists and economists began using laboratory games to study theories of preferences and cooperation, they found to their surprise that observed experimental behavior diverged sharply from the predictions of traditional economic models of rationality and self-interest. Careful experimental tests of human cooperation revealed a distinctive and unexpected pattern: When placed in anonymous, one-shot experimental economic settings and informed they will never interact with their game partner again, subjects are typically willing to donate money to their partners that they could have kept for themselves without penalty, and without anyone else knowing what they did. These experiments have involved a number of games, such as dictator games (where one party is given money and allowed to keep it all, or give any fraction of it to another) and one-shot prisoner's dilemmas. These findings—incurring costs to allocate benefits to others in anonymous, one-shot games—have been replicated in a broad array of cultures.
Not only did these observations violate standard theories drawn from economics, but they also violated the predictions of widely accepted models of fitness maximization in evolutionary biology—models that (in the absence of kinship) similarly predict selfishness. Natural selection is relentlessly utilitarian, and is expected to replace designs that unnecessarily give up resources without return with those that retain those resources for enhanced reproduction or kin-directed investment.
However socially or morally welcome these findings may be, their apparent violation of theories fundamental to biology, economics, and game theory constitutes a nontrivial scientific problem.
It is important to recognize that these experiments were designed to rule out both kin selection and reciprocity (exchange) as explanations for this generosity, and are widely seen as doing so. In reciprocity, individuals deliver benefits to their partners at some cost to themselves, but are compensated by reciprocal benefits delivered back to them. Reciprocity depends on repeated interactions. Evolutionary game theory (and common sense) indicates that if the interaction is one-shot, the response favored by natural selection is to defect (take benefits if given, but deliver none).
Therefore, experiments that are explicitly one-shot, as well as anonymous, seemed to eliminate reciprocity as an explanation.
Impact on recent theories about the evolution of human cooperation
By ruling out the two most common pathways for the evolution of prosocial acts, these data have therefore had a major impact on recent thinking about the evolution of cooperation in humans. In attempting to explain this excess generosity, a cascade of mutually inconsistent economic, evolutionary, cultural, and psychological explanations has been advanced, many relying on various forms of genetic or cultural group selection. Nevertheless, many evolutionarily sophisticated scholars in this area consider these new proposals problematic because they depend fragilely on unwieldy technical machinery and large sets of unsupported assumptions.
A sizeable cohort of evolutionary biologists is suspicious of the argument, advanced by various economists and anthropologists, that group selection has played a dominating role in human evolution, creating psychological adaptations for prosociality designed to benefit the group. Nevertheless, this argument has gained traction among behavioral scientists because the excess-generosity effect seemed otherwise difficult to explain. It is safe to say, however, that none of these approaches has won general acceptance and that the problem remains unresolved.
Are humans too generous? A new approach
In this paper, we report a solution to this problem, established through agent-based simulations that explore the evolutionary dynamics of cooperation. (The major results are established analytically, as well.) The key is considering a necessary, but previously neglected step in cooperative decision-making: the decision-theoretic step of discriminating between one-shot and repeated interactions. Well-validated theoretical results show that the strategy of donating to others as part of long-term, beneficial, repeated interactions can evolve, but—when considered in isolation—the strategy of donating in one-shot interactions cannot. Thus, many have concluded that traditional theories of social evolution predict that humans should cooperate in repeated, but not in one-shot, interactions.
However, to behave differently in one-shot versus repeated interactions necessarily requires the capacity to distinguish the two situations. These simulations explore the impact on cooperative decision-making strategies of evolving in conditions where this discrimination cannot be performed perfectly—the normal case. They show that such generosity is the necessary byproduct of selection on decision systems that, in the course of deciding whether to engage in reciprocity, must balance (1) the costs of mistaking a one-shot for a repeated interaction (hence risking a single chance of being exploited) with (2) the far greater costs of mistaking a repeated for a one-shot interaction (thereby precluding benefits from multiple future cooperative interactions). Decision strategies cannot simultaneously minimize both types of errors, but must necessarily buy reductions in one kind of error with increases in the other. If the two errors inflict costs of different magnitudes, selection will favor buying a reduction in the more expensive error with increases in the cheaper type. Thus, organisms such as humans will be selected to cooperate “excessively” even when exposed to cues predicting that an interaction is one-shot. Indeed, our analysis shows that organisms can be selected to cooperate even in the extreme case where they have a rationally computed, explicit belief that they are in a one-shot interaction. This analysis suggests that humans are unlikely to be selfish Machiavellians (or altruists solely because of cultural penalties), but that generosity in the absence of any apparent potential gain (that the individual foresees) is a real feature of human nature.
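The selection dynamic described above can be illustrated with a minimal evolutionary simulation. To be clear, this is a sketch of the general argument, not the paper's actual model: agents receive a noisy cue about whether an interaction will repeat, cooperate when the cue exceeds an evolvable threshold, and reproduce in proportion to payoff. All parameter values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N, K, GENS = 100, 500, 200        # agents, interactions per agent, generations
P_REPEAT = 0.5                    # base rate of repeated interactions
GAIN_REPEAT = 10.0                # net benefit of cooperating when it repeats
COST_ONESHOT = 1.0                # cost of being exploited when it is one-shot
MU_ONE, MU_REP, SIGMA = 0.0, 1.0, 0.5   # cue distributions (imperfect signal)

thresholds = rng.uniform(0.0, 1.0, N)   # evolvable trait: cooperate if cue > t

for _ in range(GENS):
    is_rep = rng.random((N, K)) < P_REPEAT
    cues = np.where(is_rep, MU_REP, MU_ONE) + rng.normal(0.0, SIGMA, (N, K))
    coop = cues > thresholds[:, None]
    payoff = np.where(coop & is_rep, GAIN_REPEAT,
                      np.where(coop & ~is_rep, -COST_ONESHOT, 0.0)).sum(axis=1)
    # truncation selection: the better-paid half reproduces, with small mutation
    parents = thresholds[np.argsort(payoff)[::-1][:N // 2]]
    thresholds = np.repeat(parents, 2) + rng.normal(0.0, 0.02, N)

# A classifier maximizing belief accuracy alone would put the threshold at the
# cue midpoint (0.5). Because the two errors have asymmetric costs, selection
# drives the threshold well below that midpoint: agents cooperate even on cues
# indicating a likely one-shot interaction.
print(f"mean evolved threshold: {thresholds.mean():.2f}")
```

Because missing a repeated interaction forfeits ten times the cost of one exploitation in this setup, the evolved threshold settles far below the accuracy-maximizing midpoint, which is the error asymmetry at work.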
How do these results impact the ongoing scientific debate about the origins of cooperation and generosity?
Because these results straightforwardly explain the observations of excess generosity, there is no longer a residual set of unexplained cooperative phenomena to uniquely motivate the more cumbersome new theoretical proposals, such as group selection, cultural group selection, and so on. More concretely, the results show how “rational” designs—designs that form the most accurate belief possible and then choose the highest-paying option given that belief—consistently achieve lower payoffs than less “rational” but more generous alternatives. This result is surprising in light of widely held theories in economics and biology, as well as in the broader behavioral and cognitive science community. Yet, once this issue is acknowledged and its impact incorporated into the evolution of mechanisms that implement reciprocity, it is hard to see how this dynamic could be prevented from operating.
These results have significant implications for our understanding of human cooperation. Traditional economic models of human nature, and models of how selection is expected to have acted on our ancestors, both imply that humans ought to be completely Machiavellian and exploitive. The widespread evidence that, to some degree, humans are not simply exploitive has led to several explanatory approaches. Some credit culture, socialization, and the enforcement of social norms as the primary explanation for prosocial behavior toward nonrelatives. Others argue that intergroup competition and group selection favored the evolution of ingroup prosociality.
These explanations are not mutually exclusive, and they could all have operated ancestrally. Still, there is overwhelming evidence that social exchange (reciprocity) is a major feature of human social life, and that its centrality has selected for an array of psychological adaptations to implement it (e.g., Cosmides & Tooby, 2015). These analyses demonstrate that the evolution of reciprocity among our forager ancestors inevitably carried with it a halo of generosity as a byproduct. Hence, the widespread conclusion that reciprocity cannot explain the experimental findings of excess generosity is erroneous: The studies reported in this paper show that selection based solely on the personal gains in trade made possible by reciprocal exchange nonetheless almost inevitably creates organisms that cooperate in one-shot encounters. Accordingly, more complicated and indirect models (though internally consistent) are unnecessary for understanding the patterns of human cooperation they were designed to explain.
Finally, the selection dynamics outlined here explain analogous problems in the evolution of cooperation that have been considered equally puzzling. For example, in game theory, any set of repeated interactions with a known termination point is expected to elicit the unraveling of cooperation, leading to an always-defect response. Uncertainty about exactly how long repeated encounters will continue should select for more cooperation than traditional analyses predict. Similarly, the expected amount of altruism or generosity in anonymous situations is zero, given that exploitive acts cannot be traced back to the perpetrator. But uncertainty in discriminating anonymous from nonanonymous situations should lead to analogous selection for at least some level of generosity (or limitation on exploitation) as a byproduct of the evolution of reciprocity.
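The termination-point argument can be made concrete with a small calculation. The sketch below uses illustrative numbers that are not from the paper: if an agent believes a relationship ends at round T, but that estimate is noisy (here, normally distributed around T), a substantial probability remains that the relationship continues past T, which is enough to keep cooperation profitable at the supposedly final round.

```python
from math import erf, sqrt

def survival(k, T=10.0, sigma=2.0):
    """P(termination round > k) when the true endpoint is Normal(T, sigma)."""
    return 0.5 * (1.0 - erf((k - T) / (sigma * sqrt(2.0))))

# Probability of at least one more round, given the interaction reached the
# believed final round T = 10: P(end > T) / P(end > T - 1).
q = survival(10.0) / survival(9.0)

# With a total benefit B from continued reciprocity and a one-time exploitation
# cost c (illustrative values), cooperating at the believed final round still
# pays whenever q > c / (B + c).
B, c = 10.0, 1.0
print(f"continuation probability at the believed last round: {q:.2f}")
print("cooperate at round T?", q > c / (B + c))
```

Because the continuation probability stays well above the break-even point, backward-induction unraveling never gets started, which is the sense in which horizon uncertainty selects for excess cooperation.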
In short, human generosity, far from being a thin veneer of cultural conditioning atop a Machiavellian core, may be a bedrock feature of human nature.
For economists, it is important to recognize that the explanation advanced here does not reduce to simpler, more general economic principles. Intuition and economic theory both suggest that economic behavior follows economic belief: If an investment will have zero, or even negative, return, the mind should prompt us not to invest. Surprisingly, however, evolution favors a mind that does not follow the logic of economic maximization when cooperating, but rather one designed to be generous even when rational assessment indicates that one’s generosity will most likely never be repaid.
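The divergence between economic belief and favored behavior follows from a short expected-value calculation. The numbers below are illustrative assumptions, not the paper's parameters: B is the total future benefit from sustained reciprocity if the interaction turns out to repeat, and c is the one-time cost of being exploited if it is genuinely one-shot.

```python
B = 10.0   # total net benefit if the interaction repeats and reciprocity continues
c = 1.0    # cost of being exploited once if the interaction is one-shot

# Cooperating has positive expected value when p*B - (1 - p)*c > 0,
# i.e. when the probability of repetition p exceeds c / (B + c).
p_star = c / (B + c)
print(f"break-even probability of repetition: {p_star:.3f}")  # 0.091

# Even an agent whose explicit, well-calibrated belief is "only a 20% chance
# this interaction repeats" (a belief that it is probably one-shot) should cooperate:
p = 0.20
ev_cooperate = p * B - (1 - p) * c
print(f"EV of cooperating at p={p}: {ev_cooperate:+.2f}")  # +1.20
```

Whenever the repeat benefit dwarfs the one-shot cost, the break-even probability falls far below one half, so the payoff-maximizing act and the most accurate belief point in opposite directions.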
See also a Science letter by Delton, Krasnow, Cosmides, & Tooby on alternative approaches to the evolution of cooperation.