
Misplaced Trust and Distrust: How Not to Engage with Medical Artificial Intelligence

Published online by Cambridge University Press:  20 October 2022

Georg Starke
Affiliation:
College of Humanities, EPFL, 1015 Lausanne, Switzerland
Marcello Ienca*
Affiliation:
College of Humanities, EPFL, 1015 Lausanne, Switzerland
*
*Corresponding author. Email: marcello.ienca@epfl.ch

Abstract

Artificial intelligence (AI) plays a rapidly growing role in clinical care. Many of these systems, for instance deep learning-based applications using multilayered artificial neural networks, exhibit epistemic opacity in the sense that they preclude comprehensive human understanding. Consequently, voices from industry, policymakers, and research have suggested trust as an attitude for engaging with clinical AI systems. Yet in the philosophical and ethical literature on medical AI, the notion of trust remains fiercely debated. Trust skeptics hold that talk of trust in nonhuman agents constitutes a category error and worry that the concept may be misused for ethics washing. Proponents of trust have responded to these worries from various angles, disentangling different concepts and aspects of trust in AI, potentially organized in layers or dimensions. Given the substantial disagreement across these accounts of trust and the serious worries about ethics washing, we adopt a different strategy here. Instead of aiming for a positive definition of the elements and nature of trust in AI, we proceed ex negativo, that is, we look at cases where trust or distrust is misplaced. Comparing such cases with the trust extended in doctor–patient relationships, we systematize them and propose a taxonomy of both misplaced trust and misplaced distrust. By inverting the perspective and focusing on negative examples, we develop an account that provides useful ethical constraints for decisions in clinical as well as regulatory contexts and that highlights how we should not engage with medical AI.

Information

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2022. Published by Cambridge University Press

Figure 1. Two kinds of AI trustworthiness.


Figure 2. Taxonomy of misplaced trust and distrust. The lighter boxes with dark font denote a morally adequate placement of, respectively, trust and distrust. In contrast, the darker boxes with white font denote misplaced forms of trust and distrust.