Using conventional framing to offset bias against algorithmic errors

Published online by Cambridge University Press: 16 April 2025

Hamza Tariq*
Department of Psychology, University of Waterloo, Waterloo, ON, Canada
Jonathan A. Fugelsang
Department of Psychology, University of Waterloo, Waterloo, ON, Canada
Derek J. Koehler
Department of Psychology, University of Waterloo, Waterloo, ON, Canada

*Corresponding author: Hamza Tariq; Email: h33tariq@uwaterloo.ca

Abstract

Prior research has shown that people judge algorithmic errors more harshly than identical mistakes made by humans—a bias known as algorithm aversion. We explored this phenomenon across two studies (N = 1199), focusing on the often-overlooked role of conventionality in comparisons of human and algorithmic errors by introducing a simple conventionality intervention. Our findings revealed significant algorithm aversion when participants were informed that the decisions described in the experimental scenarios were conventionally made by humans. However, when participants were told that the same decisions were conventionally made by algorithms, the bias was significantly reduced—or even completely offset. This intervention had a particularly strong influence on participants’ recommendations of which decision-maker should be used in the future—even revealing a bias against human error makers when algorithms were framed as the conventional choice. These results suggest that the existing status quo plays an important role in shaping people’s judgments of mistakes in human–algorithm comparisons.

Information

Type
Empirical Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of Society for Judgment and Decision Making and European Association for Decision Making

Figure 1 College admissions scenario as presented to participants. Note: This includes all scenario parts of Study 1 in sequence, as seen by the participants, barring the comprehension questions. Colored text within square brackets shows variations made per condition (blue = human convention and red = algorithm convention).

Table 1 All measures presented to participants following the error

Figure 2 Combined severity and level of concern ratings for admissions scenario by error maker and convention. Note: Study 1: When the admissions algorithms were framed as the convention (red bars), algorithm mistakes were judged more severely than human errors, but less so than when human admissions officers were presented as the convention (blue bars). Error bars represent standard error throughout this paper.

Figure 3 Preference to retain admissions review method after it made an error. Note: Study 1: When admissions algorithms were framed as the convention (red bars), participants preferred retaining the algorithm more than the human after identical mistakes. When human admissions officers were framed as the convention (blue bars), participants preferred retaining the human officer over the algorithm.

Figure 4 Proportion of recommendations for error makers under different conventions in the admissions scenario. Note: Study 1: When admissions algorithms were framed as the convention (red bars), participants recommended both the humans and the algorithms at similar rates after identical mistakes. But when human admissions officers were framed as the convention (blue bars), participants recommended humans more than algorithms.

Figure 5 Sound speakers scenario as presented to participants. Note: This includes all scenario parts of Study 2 in sequence, as seen by the participants, barring the comprehension questions. Colored text within square brackets shows variations made per condition (blue = human convention, red = algorithm convention).

Figure 6 Combined severity and level of concern ratings for speakers scenario by error maker and convention. Note: Study 2: As in Study 1, when the sound quality algorithms were framed as the convention (red bars), algorithm mistakes were judged more severely than human errors, but less so than when human analysts were presented as the convention (blue bars).

Figure 7 Preference to retain sound testing method after it made an error. Note: Study 2: When sound quality algorithms were framed as the convention (red bars), participants preferred retaining the algorithm over the human after identical mistakes. When human analysts were framed as the convention (blue bars), participants preferred retaining the human over the algorithm.

Figure 8 Proportion of recommendations for error makers under different conventions in the speakers scenario. Note: Study 2: When sound quality algorithms were framed as the convention (red bars), participants recommended the algorithm more than the human for future use after identical mistakes. When human analysts were framed as the convention (blue bars), participants recommended humans more than algorithms.