
Cultural adaptation is maximised when intelligent individuals rarely think for themselves

Published online by Cambridge University Press: 10 August 2020

Elena Miu*
Affiliation:
School of Human Evolution and Social Change and Institute of Human Origins, Arizona State University, Tempe, AZ 85287, USA
Thomas J. H. Morgan
Affiliation:
School of Human Evolution and Social Change and Institute of Human Origins, Arizona State University, Tempe, AZ 85287, USA
*Corresponding author. E-mail: elena.miu@gmail.com

Abstract

Humans are remarkable in their reliance on cultural inheritance and in the ecological success it has produced. Nonetheless, we lack a thorough understanding of how the cognitive underpinnings of cultural transmission affect cultural adaptation across diverse tasks. Here, we use an agent-based simulation to investigate how different learning mechanisms (both social and asocial) interact with task structure to affect cultural adaptation. Specifically, we compare learning through refinement, recombination or both, in tasks of different difficulty, with learners of different asocial intelligence. We find that for simple tasks all learning mechanisms are roughly equivalent. For hard tasks, however, performance is maximised when populations consist of highly intelligent individuals who nonetheless rarely innovate and instead recombine existing information. Our results thus show that cumulative cultural adaptation relies on the combination of individual intelligence and ‘blind’ population-level processes, although the former may be rarely used. The counterintuitive requirement that individuals be highly intelligent, but rarely use this intelligence, may help resolve the debate over the role of individual intelligence in cultural adaptation.

Information

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
Copyright © The Author(s), 2020

Figure 1. Base payoffs (black) and received payoffs given choices made at previous steps (blue) for 10 options across steps 1–3, when (a) σ = 5 (‘easy’) and (b) σ = 1 (‘moderate’). Yellow dots indicate the option chosen at each step. Received payoffs for step 2 (blue) are calculated assuming that option 7 was chosen at step 1, and received payoffs for step 3 are calculated assuming that option 7 was chosen at step 1 and option 6 at step 2. No blue line is shown for step 1 because, at the first step, base payoffs are identical to received payoffs. When σ is lower and the task is harder, the penalties are much harsher (b).
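The penalty structure described in this caption can be sketched in a few lines of Python. The exact functional form is not reproduced on this page, so the Gaussian discount below (each earlier choice penalises a later option in proportion to how far apart the two option indices are, with width σ) is an illustrative assumption that matches the qualitative pattern: base and received payoffs coincide at step 1, and smaller σ means harsher penalties.

```python
import math

def received_payoff(base, choices, step, option, sigma):
    """Received payoff of `option` at `step`, given `choices` made at
    earlier steps. ASSUMED form: the base payoff is discounted by one
    Gaussian factor per earlier choice, so options far from previous
    choices are punished, and much more harshly when sigma is small.

    base    -- base[s][o] is the base payoff of option o at step s
    choices -- sequence of options chosen at steps 0 .. step-1
    """
    penalty = 1.0
    for prev in choices[:step]:
        penalty *= math.exp(-((option - prev) ** 2) / (2 * sigma ** 2))
    return base[step][option] * penalty
```

With no earlier choices the loop body never runs and the received payoff equals the base payoff, reproducing the absence of a blue line for step 1; with σ = 0.1 the discount all but eliminates any option that strays far from previous choices.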

Figure 2. A graphical representation of how manipulating task difficulty affects the optimal solution (on a reduced version with five steps of five options each, for illustration). Coloured dots correspond to options, with the colour signifying the step, and the size of each coloured dot corresponds to the base payoff. Chosen options are coloured black, with the size of the black dots corresponding to the received payoffs. In easy tasks, steps do not restrict each other and agents can freely explore a wide range of options. In hard tasks, previous choices constrain future ones, and solutions follow much narrower paths and typically receive lower overall payoffs.
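The reduced five-step, five-option task is small enough to search exhaustively. Reusing the assumed `received_payoff` sketch above (and noting that summing received payoffs into a total is likewise an assumption), a brute-force search makes the narrowing effect concrete:

```python
from itertools import product

def total_payoff(solution, base, sigma):
    """Total received payoff of a full solution (assumed: summed over steps)."""
    return sum(received_payoff(base, solution, s, o, sigma)
               for s, o in enumerate(solution))

def optimal_solution(base, sigma, n_steps=5, n_options=5):
    """Exhaustive search over all option sequences; feasible only on the
    reduced illustration (5**5 = 3125 candidates)."""
    return max(product(range(n_options), repeat=n_steps),
               key=lambda sol: total_payoff(sol, base, sigma))
```

Comparing `optimal_solution(base, 5)` with `optimal_solution(base, 0.1)` on the same base payoffs shows the pattern in the figure: because every Gaussian factor shrinks as σ falls, the small-σ optimum strings together nearby options (a narrow path) and typically earns a lower total than the large-σ optimum.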

Table 1. Parameter summary

Figure 3. Payoffs for one run of the simulation for (a) the recombination inheritance mechanism, (b) the refinement mechanism and (c) the combined mechanism, with populations of 500 agents each, np = 2, σ = 0.1, over 100 generations, and the same payoff matrix. Every point indicates the payoff for one individual, and the black line follows the population mean. In this example, near-sighted learners converge quickly on a suboptimal solution when using the refinement and combined mechanisms, but increase in performance over time and converge on a high-payoff solution with recombination. Mid-sighted learners behave similarly. Long-sighted learners converge on a good solution with recombination only, but converge on a suboptimal solution when using the combined mechanism, and never reach convergence with refinement alone.
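The three inheritance mechanisms compared in this figure can be sketched as simple operators on solutions (lists of one option per step). The operators below, and the innovation rate, are illustrative assumptions rather than the authors' exact implementation; in particular, the foresight that separates near-, mid- and long-sighted learners (how many steps ahead an innovating agent evaluates) is omitted for brevity.

```python
import random

def refine(model, n_options=10, p_innovate=0.1):
    """Refinement (assumed): copy a single cultural model's solution,
    occasionally replacing a step's option with an innovated one."""
    return [random.randrange(n_options) if random.random() < p_innovate else o
            for o in model]

def recombine(model_a, model_b):
    """Recombination (assumed): at each step take the option from one of
    two cultural models at random -- a 'blind' population-level mix that
    requires no individual insight (cf. np = 2 in the caption)."""
    return [random.choice(pair) for pair in zip(model_a, model_b)]

def combined(model_a, model_b, **kwargs):
    """Combined mechanism (assumed): recombine two models, then refine."""
    return refine(recombine(model_a, model_b), **kwargs)
```

Under this reading, the abstract's headline result corresponds to a regime where `p_innovate` is low but, when innovation does occur, it is highly intelligent (long-sighted).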

Figure 4. (a) Average normalised payoff for each learning algorithm relative to the average payoff of 10,000 individual long-sighted learners, where the colour indicates the mechanism. (b) The effect of task difficulty: normalised payoff for each mechanism relative to the average payoff of 10,000 individual long-sighted learners, where the shape indicates the condition and the colour indicates the learning algorithm's foresight, across three standard deviation values (0.1 for difficult tasks, 1 for medium and 5 for easy). The points plot the values in the final generation (after 60 generations), averaged over 500 repeated simulations and over all other parameter combinations, with bars indicating two standard errors.

Figure 5. Payoffs across task difficulty relative to the performance of average individual learners in the first generation within each learning algorithm. The shape indicates the mechanism and the colour indicates the learning algorithm's foresight, across three standard deviation values. The points plot the values in the final generation (after 60 generations), averaged over 500 repeated simulations and over all other parameter combinations, with bars indicating two standard errors.

Figure 6. (a) Distribution of repertoire similarity at the end of 60 generations, averaged over all agents in each of 200 simulation repeats, and averaged over all parameter combinations, for near-, mid- and long-sighted learners and the three mechanisms. (b) Mixability at the end of 60 generations relative to the random baseline for near-, mid- and long-sighted learners, averaged over 200 runs and over all other parameter values (σ, na and np), for the recombination and combined mechanisms. (c) Distribution of repertoire uniformity (measured as the mean difference between consecutive options) for simulations in the three mechanisms, for hard and easy tasks, after 60 generations and 200 runs. A high difference translates into lower uniformity: recombination maintains more varied repertoires, and more so when the task is hard.
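Of the three measures in this caption, only repertoire uniformity comes with an explicit definition (the mean difference between consecutive options), so only that one is sketched here:

```python
def repertoire_uniformity(solution):
    """Mean absolute difference between options chosen at consecutive
    steps. A higher value means a less uniform, more varied repertoire."""
    diffs = [abs(a - b) for a, b in zip(solution, solution[1:])]
    return sum(diffs) / len(diffs)
```

For example, `repertoire_uniformity([7, 6, 6, 5])` returns 2/3, while a maximally uniform repertoire such as `[6, 6, 6, 6]` returns 0.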

Supplementary material: PDF

Miu and Morgan supplementary material

Download Miu and Morgan supplementary material (PDF)
PDF 2.6 MB