
The Need for Prospective Integrity Standards for the Use of Generative AI in Research

Published online by Cambridge University Press:  27 March 2025

Kayte Spector-Bagdady*
Affiliation:
University of Michigan, Ann Arbor, MI, United States

Abstract

The federal government has a long history of trying to strike the right balance between supporting scientific and medical research and protecting the public and other researchers from potential harms. To date, this balance has generally been calibrated differently across contexts, including clinical care, human subjects research, and research integrity. New challenges continue to confront this disparate model of regulation, including novel Generative Artificial Intelligence (GenAI) tools. Because GenAI may increase unintentional fabrication, falsification, and plagiarism, and because both these errors and the intent behind them are difficult to establish in retrospect, this article argues that we should instead move toward a system that sets accepted community standards for the use of GenAI in research as prospective requirements.

Information

Type
Symposium Articles
Creative Commons
Creative Commons License - CC BY-NC
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial licence (http://creativecommons.org/licenses/by-nc/4.0), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original article is properly cited. The written permission of Cambridge University Press must be obtained prior to any commercial use.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of the American Society of Law, Medicine & Ethics