
AI-Inclusivity in Healthcare: Motivating an Institutional Epistemic Trust Perspective

Published online by Cambridge University Press:  29 April 2024

Kritika Maheshwari*
Affiliation:
Ethics and Philosophy of Technology Section, Department of Values, Technology and Innovation, Delft University of Technology, Delft, The Netherlands
Christoph Jedan
Affiliation:
Ethics and Comparative Philosophy of Religion, Department of Christianity and the History of Ideas, Faculty of Religion, Culture and Society, University of Groningen, Groningen, The Netherlands
Imke Christiaans
Affiliation:
Department of Genetics, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
Mariëlle van Gijn
Affiliation:
Department of Genetics, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
Els Maeckelberghe
Affiliation:
Bioethics and Research Ethics, Faculty of Medical Sciences, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
Mirjam Plantinga
Affiliation:
Department of Genetics and Data Science Center in Health, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
*Corresponding author: Kritika Maheshwari; Emails: k.maheshwari@tudelft.nl; Kritika136@gmail.com

Abstract

This paper motivates institutional epistemic trust as an important ethical consideration informing the responsible development and implementation of artificial intelligence (AI) technologies (or AI-inclusivity) in healthcare. Drawing on recent literature on epistemic trust and public trust in science, we start by examining the conditions under which we can have institutional epistemic trust in AI-inclusive healthcare systems and their members as providers of medical information and advice. In particular, we argue that institutional epistemic trust in AI-inclusive healthcare depends, in part, on the reliability of AI-inclusive medical practices and programs; on knowledge and understanding of these practices among the different stakeholders involved; on their effect on the epistemic and communicative duties and burdens of medical professionals; and, finally, on their interaction and alignment with the public’s ethical values and interests, as well as with the background sociopolitical conditions within which AI-inclusive healthcare systems are embedded. To assess the applicability of these conditions, we explore a recent proposal for AI-inclusivity within the Dutch Newborn Screening Program. In doing so, we illustrate the importance, scope, and potential challenges of fostering and maintaining institutional epistemic trust in a context where generating, assessing, and providing reliable and timely screening results for genetic risk is a high priority. Finally, to motivate the general relevance of our discussion and case study, we close with suggestions for strategies, interventions, and measures for AI-inclusivity in healthcare more widely.

Information

Type
Research Article
Creative Commons
CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press
Figure 1. Five conditions of institutional epistemic trust in AI-inclusive healthcare.