
Broadening Ethical and Social Perspectives for AI in Healthcare

Published online by Cambridge University Press:  05 February 2026

Shotaro Kinoshita
Affiliation:
School of Medicine, Hills Joint Research Laboratory for Future Preventive Medicine and Wellness, Keio University, Japan; Graduate School of Interdisciplinary Information Studies, The University of Tokyo, Japan
Shuo Wang
Affiliation:
School of Social Sciences, Tsinghua University, China
Hiromi Yokoyama
Affiliation:
Center for Data-Driven Discovery (CD3), Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU, WPI), The University of Tokyo, Japan
Taishiro Kishimoto*
Affiliation:
School of Medicine, Center for Promotion of Interdisciplinary Research in Medicine and Life Science, Keio University, Japan
*
Corresponding author: Taishiro Kishimoto; Email: tkishimoto@keio.jp

Type: Letter to the Editor
Copyright
© The Author(s), 2026. Published by Cambridge University Press on behalf of American Society of Law, Medicine & Ethics

We read with great interest the paper by Aucouturier and colleagues, published in the Journal of Law, Medicine & Ethics, which discusses the development of a training module on AI ethics, including checklists and case studies.1 The authors included in their checklist the following items: “Role of AI systems in the project,” “Explainability and reproducibility,” “Data,” “Bias and fairness,” “Cybersecurity and biosecurity,” “Human oversight and accountability,” “Beneficence, non-maleficence, and human autonomy,” and “Socioeconomic and environmental impact,” in accordance with the EU HLEG ALTAI approach.

Although the medical field has seen growing debate on AI ethics, and various guidelines and checklists have been created, many of the issues raised in this paper align with the long-standing debate on AI in general. For example, a study by Fjeld et al. that analyzed in detail 36 AI guidelines worldwide as of 2019 identified eight common themes: “Accountability,” “Fairness and non-discrimination,” “Human control of technology,” “Safety and security,” “Transparency and explainability,” “Privacy,” “Professional responsibility,” and “Promotion of human values.”2 When centering the discussion in the context of medicine and bioethics, it is important to emphasize how patients and research subjects may be disadvantaged; such safeguards are particularly important in the medical field, which deals with life and health. Additionally, since some argue that there are now so many different guidelines in the field of AI research that it is difficult to choose which to follow, we have reached the stage where the appropriateness of each evaluation axis should itself be examined.3

The checklist method also has its limitations, as Aucouturier et al. note. Beyond those the authors mention, there is the question of how checklist items that cannot be answered with a simple “yes” or “no” should be compared and judged. In research ethics committees, which must reach a conclusion, it becomes necessary to weigh and compare the degree of ethical compliance across such items. For issues of this kind, one attempt has been made to quantitatively evaluate and compare the public’s ethical concerns about AI technology based on the eight general AI ethics themes identified by Fjeld et al.4 Moving forward, it will be necessary to deepen the discussion on how such checklists should be utilized.

Declaration of interests

TK has received grants from Sumitomo Pharma and Otsuka Pharma; royalties or licences from Sumitomo Pharma and FRONTEO; consulting fees from TechDoctor and FRONTEO; speaker’s honoraria from Sumitomo Pharma, Boehringer Ingelheim, Takeda, Astellas, Meiji Seika, and Janssen; and stock from i2medical and TechDoctor. SK, SW, and HY declare no competing interests.

References

1. Aucouturier, Etienne, and Grinbaum, Alexei, “Training Bioethics Professionals in AI Ethics: A Framework,” Journal of Law, Medicine & Ethics 53, no. 1 (2025): 176–183, https://doi.org/10.1017/JME.2025.57.
2. Fjeld, Jessica, et al., “Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI,” Berkman Klein Center Research Publication No. 2020-1 (Berkman Klein Center for Internet & Society at Harvard University, January 15, 2020), https://doi.org/10.2139/SSRN.3518482.
3. Zhong, Jingyu, et al., “The Endorsement of General and Artificial Intelligence Reporting Guidelines in Radiological Journals: A Meta-Research Study,” BMC Medical Research Methodology 23, no. 1 (2023): 292, https://doi.org/10.1186/S12874-023-02117-X.
4. Ikkatai, Yuko, et al., “Octagon Measurement: Public Attitudes toward AI Ethics,” International Journal of Human–Computer Interaction 38, no. 17 (2022): 1589–1606, https://doi.org/10.1080/10447318.2021.2009669.