
Why recognition is rational: Optimality results on single-variable decision rules

Published online by Cambridge University Press:  01 January 2023

Clintin P. Davis-Stober* (University of Missouri)
Jason Dana (University of Pennsylvania)
David V. Budescu (Fordham University)

* Address: Clintin P. Davis-Stober, University of Missouri at Columbia, Department of Psychological Sciences, Columbia, MO, 65211. Email: stoberc@missouri.edu.

Abstract

The Recognition Heuristic (Gigerenzer & Goldstein, 1996; Goldstein & Gigerenzer, 2002) makes the counter-intuitive prediction that a decision maker utilizing less information may do as well as, or outperform, an idealized decision maker utilizing more information. We lay a theoretical foundation for the use of single-variable heuristics such as the Recognition Heuristic as an optimal decision strategy within a linear modeling framework. We identify conditions under which over-weighting a single predictor is a mini-max strategy, with respect to a measure of statistical lack of fit we call “risk”, among a class of a priori chosen weighting schemes based on decision heuristics. These strategies, in turn, outperform standard multiple regression as long as the amount of available data is limited. We also show that, under related conditions, weighting only one variable and ignoring all others produces the same risk as ignoring that single variable and weighting all others. This approach has the advantage of generalizing beyond the original environment of the Recognition Heuristic: to situations with more than two choice options, to binary or continuous representations of recognition, and to other single-variable heuristics. We analyze the structure of data used in some prior recognition tasks and find that it satisfies the sufficient conditions for optimality in our results. Rather than being a poor or merely adequate substitute for a compensatory model, the Recognition Heuristic closely approximates an optimal strategy when a decision maker has finite data about the world.
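The abstract's central claim — that a fixed, a priori weight on a single cue can beat estimated multiple regression when training data are scarce — can be illustrated with a small Monte Carlo sketch. The setup below (equicorrelated cues with equal true weights, a population-derived weight on the first cue, and the specific values of n, p, ρ, and R²) is an illustrative assumption in the spirit of the paper's conditions, not the authors' actual derivation or simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_train=10, n_test=1000, p=6, rho=0.3, r2=0.3, reps=500):
    """Compare out-of-sample MSE of estimated OLS vs. a fixed single-cue rule.

    Equicorrelated cues with equal true weights; parameter values are
    illustrative assumptions, not the paper's exact conditions.
    """
    cov = np.full((p, p), rho) + (1.0 - rho) * np.eye(p)
    L = np.linalg.cholesky(cov)
    beta = np.ones(p)
    signal = beta @ cov @ beta                  # variance explained by the cues
    sigma = np.sqrt(signal * (1.0 - r2) / r2)   # noise scaled so population R^2 = r2

    # "Recognition-style" rule: rely on cue 1 alone, with its population
    # regression weight Cov(y, x1) / Var(x1) -- fixed a priori, never estimated.
    b_single = np.zeros(p)
    b_single[0] = 1.0 + (p - 1) * rho

    mse_ols = mse_single = 0.0
    for _ in range(reps):
        Xtr = rng.standard_normal((n_train, p)) @ L.T
        Xte = rng.standard_normal((n_test, p)) @ L.T
        ytr = Xtr @ beta + sigma * rng.standard_normal(n_train)
        yte = Xte @ beta + sigma * rng.standard_normal(n_test)
        b_ols, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)  # estimated from scarce data
        mse_ols += np.mean((yte - Xte @ b_ols) ** 2)
        mse_single += np.mean((yte - Xte @ b_single) ** 2)
    return mse_ols / reps, mse_single / reps

mse_ols, mse_single = simulate()
print(f"OLS test MSE: {mse_ols:.1f}, single-cue test MSE: {mse_single:.1f}")
```

With only 10 training observations for 6 predictors, the estimation variance of OLS dominates, and the biased but variance-free single-cue rule yields lower expected prediction error; as n_train grows, OLS eventually overtakes it, consistent with the abstract's "limited data" qualification.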

Information

Type
Research Article
Creative Commons
The authors license this article under the terms of the Creative Commons Attribution 3.0 License.
Copyright
Copyright © The Authors [2010]. This is an Open Access article, distributed under the terms of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.

Figure 1: This figure displays the ratio as a function of r and p under Condition 1 with ρ = .3.


Figure 2: This figure displays the maximal risk as a function of sample size for four weighting schemes: a_R (solely recognition), a_K (solely knowledge), a_M (mini-max weighting), and OLS (ordinary least squares). These values are displayed under Condition 1, where ρ = .25, r = .6, p = 6, and σ² = 2. The left-hand graph displays these values assuming R² = .3; the center and right-hand graphs display these values for R² = .4 and R² = .5, respectively.


Figure 3: This figure displays the maximal risk as a function of sample size for four weighting schemes: a_R (solely recognition), a_K (solely knowledge), a_M (mini-max weighting), and OLS (ordinary least squares). These values are displayed under Condition 2, where ρ = 0, r = .3, p = 3, and σ² = 2. The left-hand graph displays these values assuming R² = .3; the center and right-hand graphs display these values for R² = .4 and R² = .5, respectively.


Table 1: Predictor Cues for the “German Cities” Study (Gigerenzer et al., 1999; Goldstein, 1997).


Table 2: Inter-correlation Matrix of Predictor Cues for the “German Cities” Study (Goldstein, 1997). The variables are labeled as in Table 1. Values denoted with * are significant at p < .05, and values denoted with + at p < .01, where p denotes the standard p-value.