
Belief bias and representation in assessing the Bayesian rationality of others

Published online by Cambridge University Press:  01 January 2023

Richard B. Anderson*
Affiliation:
Department of Psychology, Bowling Green State University, Bowling Green, Ohio, 43403
Laura Marie Leventhal
Affiliation:
Computer Science Department, The University of Findlay
Don C. Zhang
Affiliation:
Department of Psychology, Louisiana State University
Daniel Fasko Jr.
Affiliation:
School of Educational Foundations, Leadership, and Policy, Bowling Green State University
Zachariah Basehore
Affiliation:
Department of Psychology, Bowling Green State University
Christopher Gamsby
Affiliation:
Department of Psychology, Bowling Green State University
Jared Branch
Affiliation:
Department of Psychology, Bowling Green State University
Timothy Patrick
Affiliation:
Department of Psychology, Bowling Green State University

Abstract

People often assess the reasonableness of another person’s judgments. When doing so, the evaluator should set aside knowledge that was not available to the evaluatee, and ask whether the evaluatee made a reasonable decision given the information at hand. But under what circumstances does the evaluator actually set such information aside? On the one hand, an evaluator who fails to set aside prior information that was unavailable to the evaluatee exhibits belief bias. On the other hand, when Bayesian inference is called for, the evaluator should generally incorporate prior knowledge about the relevant probabilities. The present research integrated these two perspectives in two experiments. Participants were asked to take the perspective of a fictitious evaluatee and to evaluate the reasonableness of the evaluatee’s decision. The participant was privy to information that the fictitious evaluatee did not have: specifically, whether the evaluatee’s judgment was factually correct. In both experiments, participants’ assessments of the evaluatee’s reasonableness were biased by the factuality of the conclusion. We also found (Experiment 2) that the format of information presentation influenced the degree to which participants’ reasonableness ratings were responsive to the evaluatee’s Bayesian rationality: responsivity was greater when the information was presented in an icon-based, graphical, natural-frequency format than when it was presented in either a numerical natural-frequency format or a probability format. We interpret the effects of format as suggesting that graphical presentation can help organize information into nested sets, which in turn enhances Bayesian rationality.
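The nested-sets idea in the abstract’s final sentence can be made concrete with a small sketch. All numbers below are hypothetical, chosen only for illustration (they are not the study’s actual stimuli): expressed as natural frequencies, a diagnostic inference reduces to counting members of nested subsets rather than applying probability algebra.

```python
# Illustrative natural-frequency framing of a diagnostic problem.
# Every number below is a hypothetical stand-in, not the experiment's stimuli.
population = 1000            # imagine 1,000 pregnancies
twins = 50                   # base rate: 50 of the 1,000 are twin pregnancies
singles = population - twins # 950 singleton pregnancies

# Suppose the test reads "twins" for 70% of twin pregnancies and,
# falsely, for 30% of singleton pregnancies:
twins_positive = 35          # 70% of 50
singles_positive = 285       # 30% of 950

# Nested-set reasoning: of everyone whose test reads "twins", what
# fraction actually carries twins?  Counting subset members answers it.
posterior = twins_positive / (twins_positive + singles_positive)
print(round(posterior, 3))   # -> 0.109
```

With these counts, a “twins” test result is correct only about 11% of the time, so the Bayes-consistent diagnosis is “single fetus” despite the positive test.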

Information

Type
Research Article
Creative Commons
The authors license this article under the terms of the Creative Commons Attribution 3.0 License.
Copyright
Copyright © The Authors 2019. This is an Open Access article, distributed under the terms of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.

Figure 1: Illustration of the procedure for a single trial. In this example, the diagnosis is counterfactual because it does not match the pregnancy’s actual status. Additionally, the “single fetus” diagnosis is Bayes-consistent because it accords with the stated base rates (which are so extreme that they overwhelm the test’s diagnostic accuracy). We combined the results of the yes/no judgment and the certainty rating to compute a reasonableness rating. The independent variables (IVs) are the Bayes-consistency and the factuality of the diagnosis. Note that the information about actual status is irrelevant to the judgment of reasonableness. In this example, the probability that the test result is correct, given the evidence, is 0.11.
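The caption’s closing figure (a posterior of 0.11) can be reproduced in spirit with Bayes’ theorem. The base rate and test accuracies below are hypothetical stand-ins, since the trial’s actual numbers are not reproduced on this page; they are chosen only so that the arithmetic lands near 0.11.

```python
# Hedged sketch of the Bayes computation behind "the probability that the
# test result is correct, given the evidence".  The specific parameter
# values are assumptions for illustration, not the experiment's stimuli.
def posterior_correct(base_rate, sensitivity, false_positive_rate):
    """P(condition | positive test) via Bayes' theorem."""
    hit = sensitivity * base_rate
    false_alarm = false_positive_rate * (1 - base_rate)
    return hit / (hit + false_alarm)

# An extreme base rate (5% twins) overwhelms a moderately accurate test:
p = posterior_correct(base_rate=0.05, sensitivity=0.70, false_positive_rate=0.30)
print(round(p, 2))  # -> 0.11: the Bayes-consistent call is "single fetus"
```

The design choice here is deliberate: because the prior is so lopsided, even a test that usually detects twins yields a posterior far below 0.5, which is exactly the tension between factuality and Bayes-consistency that the trial manipulates.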


Table 1: Stimulus-Set Structure for Experiment 1


Figure 2: Experiment 1. Reasonableness ratings as a function of the Bayes-consistency and factuality of a physician’s conclusion. N = 98.


Figure 3: Effect-indices computed for each participant in Experiment 1.


Figure 4: Example of information presented to participants in the probability condition, in Experiment 2.


Figure 5: Example of information presented to participants in the graphical natural-frequency and numerical natural-frequency conditions in Experiment 2.


Figure 6: Effects of factuality, Bayes-consistency (consistent or inconsistent), and information format (probability, numeric natural-frequency, or graphical natural-frequency) on participants’ assessments of reasonableness. To facilitate graphical comparison of the data patterns for binary versus quantitative (−4 to +4) scale data, each binary response is scored as +4 or −4.


Table 2: Analyses of variance for quantitative and binary assessments of reasonableness, across several measures


Table 3: Experiment 2. Supplemental analyses assessing the interaction between format and Bayes-consistency, with only two levels of format in each analysis


Figure 7: Quantitative ratings of reasonableness as functions of attentiveness score, factuality, Bayes-consistency (consistent or inconsistent), and information format (probability, numeric natural-frequency, or graphical natural-frequency).


Figure 8: Effect-indices as a function of attentiveness score (0 through 4) and presentation format (graphical frequency, numerical frequency, probability), computed for each participant in Experiment 2.

Supplementary material

Anderson et al. supplementary material 1 (File, 17.5 KB)
Anderson et al. supplementary material 2 (File, 30.5 KB)