
Evidence for and against a simple interpretation of the less-is-more effect

Published online by Cambridge University Press:  01 January 2023

Michael D. Lee*
Affiliation:
Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, 92697-5100

Abstract

The less-is-more effect predicts that people can be more accurate making paired-comparison decisions when they have less knowledge, in the sense that they do not recognize all of the items in the decision domain. The traditional theoretical explanation is that decisions based on recognizing one alternative but not the other can be more accurate than decisions based on partial knowledge of both alternatives. I present new data that directly test for the less-is-more effect, coming from a task in which participants judge which of two cities is larger and indicate whether they recognize each city. A group-level analysis of these data provides evidence in favor of the less-is-more effect: there is strong evidence people make decisions consistent with recognition, and that these decisions are more accurate than those based on knowledge. An individual-level analysis of the same data, however, provides evidence inconsistent with a simple interpretation of the less-is-more effect: there is no evidence for an inverse-U-shaped relationship between accuracy and recognition, and especially no evidence that individuals who recognize a moderate number of cities outperform individuals who recognize many cities. I suggest a reconciliation of these contrasting findings, based on the systematic change of the accuracy of recognition-based decisions with the underlying recognition rate. In particular, the data show that people who recognize almost none or almost all cities make more accurate decisions by applying the recognition heuristic, when compared to the accuracy achieved by people with intermediate recognition rates. The implications of these findings for precisely defining and understanding the less-is-more effect are discussed, as are the constraints our data potentially place on models of the learning and decision-making processes involved.

Keywords: recognition heuristic, less-is-more effect

Information

Type
Research Article
Creative Commons
The authors license this article under the terms of the Creative Commons Attribution 3.0 License.
Copyright
Copyright © The Authors [2015] This is an Open Access article, distributed under the terms of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.

Figure 1: The intuition underlying the less-is-more effect. The left panel shows the number of decisions out of a total of 45 made by guessing, recognition, and partial knowledge as the number of recognized alternatives increases from 0 to 10. The right panel shows the number of correct decisions made by guessing, recognition, and partial knowledge, assuming accuracy rates of 0.5, 0.8, and 0.6 respectively. The total number of correct decisions is shown by the solid line, and peaks when 6 out of 10 alternatives are recognized.
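The arithmetic behind the intuition in Figure 1 can be sketched directly. With N = 10 alternatives and n of them recognized, there are C(10 − n, 2) pairs decided by guessing, n(10 − n) by recognition, and C(n, 2) by partial knowledge. A minimal Python sketch, using only the accuracy rates stated in the caption (0.5, 0.8, and 0.6), reproduces the peak at 6 recognized alternatives:

```python
from math import comb

N = 10           # total alternatives
A_GUESS = 0.5    # accuracy when neither alternative is recognized
A_RECOG = 0.8    # accuracy when exactly one alternative is recognized
A_KNOW = 0.6     # accuracy when both alternatives are recognized

def expected_correct(n):
    """Expected correct decisions over all comb(N, 2) = 45 pairs
    when n of the N alternatives are recognized."""
    guess = comb(N - n, 2)   # neither alternative recognized
    recog = n * (N - n)      # exactly one alternative recognized
    know = comb(n, 2)        # both alternatives recognized
    return A_GUESS * guess + A_RECOG * recog + A_KNOW * know

best = max(range(N + 1), key=expected_correct)
print(best, expected_correct(best))  # → 6 31.2
```

Expected accuracy rises from 22.5 correct at n = 0 to 31.2 at n = 6, then falls to 27 at n = 10, which is the inverse-U shape the less-is-more effect predicts.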


Figure 2: The main panel shows the number and accuracy of the four possible classes of decisions in judging which of two cities has the larger population. The problems are divided according to whether both cities were recognized (“both recognized”), one city was recognized and was chosen (“choose recognized”), one city was recognized but the unrecognized one was chosen (“choose unrecognized”), or neither city was recognized (“neither recognized”). The overall height of each bar corresponds to the proportion of all decisions that belonged to that class. The darker and lighter areas within bars indicate how many of these decisions were correct and incorrect, respectively. The label above each bar gives the overall percentage of correct decisions so that, for example, the accuracy of decisions when neither city is recognized is close to 50%, consistent with guessing. The arrow indicates how many times more often the “choose recognized” rather than “choose unrecognized” class occurred, and so measures how many times more often decisions followed the recognition heuristic. The four sub-panels show the same information, for each country separately.


Figure 3: The main panel shows the number of cities recognized and accuracy for each participant as a small circular marker. The larger markers connected by the line show the trend in the relationship between recognition and accuracy. The four panels below show the same information for each country separately.


Figure 4: The pattern of change in the accuracy of recognition (squares), knowledge (circles), and guessing (dashed line), in choosing the larger city, for individuals with different recognition rates. The bars show the distribution of individuals over the recognition rates.


Figure 5: The left-hand panels show the prior predictive (broken lines) and posterior predictive (solid lines) distributions for the constant (top), linear (middle), and quadratic (bottom) models of the relationship between accuracy and recognition rate. The right-hand panels show the prior (broken line) and posterior (solid histogram) distribution for the parameters corresponding to the constant (top), linear (middle), and quadratic (bottom) terms that allow for the estimation of Bayes factors between the models using the Savage-Dickey density ratio method.
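The Savage-Dickey density ratio method referenced in this and the following captions computes a Bayes factor for a point null hypothesis by comparing the prior and posterior density of a parameter at the null value. A minimal sketch under hypothetical conjugate-normal assumptions (the data and prior below are illustrative, not from the paper):

```python
import math
from statistics import NormalDist

# Hypothetical observations; N(0, 1) prior on the mean, known noise sd = 1.
data = [0.10, -0.20, 0.15, 0.05, -0.10]
n, sigma = len(data), 1.0
prior = NormalDist(0.0, 1.0)

# Conjugate-normal posterior for the mean.
precision = 1.0 / prior.stdev**2 + n / sigma**2
post_var = 1.0 / precision
post_mean = post_var * sum(data) / sigma**2
posterior = NormalDist(post_mean, math.sqrt(post_var))

# Savage-Dickey: the Bayes factor in favour of the null (mean = 0) is the
# ratio of posterior to prior density evaluated at the null value.
bf01 = posterior.pdf(0.0) / prior.pdf(0.0)
print(bf01)  # > 1 here, so these data favour the point null
```

The same ratio, with densities estimated from MCMC samples rather than computed analytically, is what the right-hand panels of Figures 5–8 depict.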


Figure 6: The left-hand panels show the prior predictive (broken lines) and posterior predictive (solid lines) distributions for the modified constant (top), linear (middle), and quadratic (bottom) models of the relationship between accuracy and recognition rate. The right-hand panels show the prior (broken line) and posterior (solid histogram) distribution for the parameters corresponding to these modified constant (top), linear (middle), and quadratic (bottom) terms that allow for the estimation of Bayes factors between the models using the Savage-Dickey density ratio method.


Figure 7: The left-hand panels show the prior predictive (broken lines) and posterior predictive (solid lines) distributions for the constant (top), linear (middle), and quadratic (bottom) models of the relationship between the accuracy of recognition-based decisions and recognition rate. The right-hand panels show the prior (broken line) and posterior (solid histogram) distribution for the parameters corresponding to these constant (top), linear (middle), and quadratic (bottom) terms that allow for the estimation of Bayes factors between the models using the Savage-Dickey density ratio method. The error bars in the left-hand panels show one standard error for the binomial proportions.


Figure 8: The left-hand panels show the prior predictive (broken lines) and posterior predictive (solid lines) distributions for the constant (top), linear (middle), and quadratic (bottom) models of the relationship between the accuracy of knowledge-based decisions and recognition rate. The right-hand panels show the prior (broken line) and posterior (solid histogram) distribution for the parameters corresponding to these constant (top), linear (middle), and quadratic (bottom) terms that allow for the estimation of Bayes factors between the models using the Savage-Dickey density ratio method. The error bars in the left-hand panels show one standard error for the binomial proportions.


Figure 9: The left-hand panels show the prior predictive (broken lines) and posterior predictive (solid lines) distributions for the constant (top) and linear (bottom) models of the relationship between the accuracy of guessing-based decisions and recognition rate. The right-hand panels show the prior (broken line) and posterior (solid histogram) distribution for the parameters corresponding to these constant (top) and linear (bottom) terms that allow for the estimation of Bayes factors between the models using the Savage-Dickey density ratio method. The error bars in the left-hand panels show one standard error for the binomial proportions.

Supplementary material

Lee supplementary material (File, 499.5 KB)