
Trust, validation, and accountability: implementing uses of generative artificial intelligence into Health Technology Assessment practice

Published online by Cambridge University Press:  05 March 2026

Rachael Fleurence*
Affiliation:
Apodeixis Strategies LLC, USA
Jagpreet Chhatwal
Affiliation:
Center for Health Technology Assessment, Mass General Brigham, Harvard Medical School, USA
*
Corresponding author: Rachael Fleurence; Email: rachael@apodeixisstrategies.com

Extract

The 2025 HTAi Global Policy Forum (GPF) report offers a timely and thoughtful synthesis of the opportunities and challenges associated with the use of artificial intelligence (AI), and particularly generative AI (GenAI), in Health Technology Assessment (HTA) (1). Its emphasis on trust, human agency, and risk-based approaches reflects both the maturity of the discussion within the HTA community and a shared recognition that technical capability alone is insufficient for responsible adoption. The report succeeds in articulating a common set of principles and a broadly aligned vision across HTA bodies, life sciences, and other interest holders.

Information

Type
Dialogue
Creative Commons
Creative Commons License - CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2026. Published by Cambridge University Press