
Advent of artificial intelligence in patient information leaflets: a stakeholders’ survey

Published online by Cambridge University Press:  07 November 2025

Dimitrios Spinos
Affiliation:
Ear, Nose and Throat Department, Gloucestershire Hospitals NHS Foundation Trust, Gloucester, UK
Bethan McLeish
Affiliation:
Ear, Nose and Throat Department, Gloucestershire Hospitals NHS Foundation Trust, Gloucester, UK
Kate Moss
Affiliation:
Ear, Nose and Throat Department, Gloucestershire Hospitals NHS Foundation Trust, Gloucester, UK
Okechukwu Okonkwo
Affiliation:
Ear, Nose and Throat Department, Gloucestershire Hospitals NHS Foundation Trust, Gloucester, UK
Corresponding author: Bethan Kate McLeish; Email: b.mcleish@nhs.net

Abstract

Objective

Artificial intelligence (AI) has the potential to revolutionise medical communication. Our aim was to investigate whether AI can be used to adapt patient information leaflets and compare their acceptability with human-generated patient information leaflets.

Methods

ChatGPT was instructed to refine four ENT-related patient information leaflets originally written by clinicians. Pairs of human-generated and AI-adapted patient information leaflets were distributed to patients alongside a questionnaire asking them to rate presentation, explanation of the condition, ease of understanding, guidance on when to seek medical attention, and overall preference. Readability was evaluated using the Flesch Reading Ease Score and the Flesch–Kincaid Grade Level.
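The readability measures named above are simple closed-form formulas over sentence, word, and syllable counts. The sketch below is illustrative only, not the authors' tooling: the coefficients are the standard published Flesch (1948) and Kincaid et al. (1975) constants, while the syllable counter is a naive vowel-group heuristic of our own assumption (real readability software uses dictionary-based syllabification).

```python
import re

def count_syllables(word: str) -> int:
    """Naive vowel-group syllable estimate (illustrative heuristic only)."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    # Crudely drop a silent final 'e' (e.g. "leaflet" is unaffected, "nose" is)
    if word.endswith("e") and not word.endswith(("le", "ee")) and n > 1:
        n -= 1
    return max(n, 1)

def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    # Higher scores indicate easier text (90+ ~ very easy, <30 ~ very difficult)
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    # Approximate US school grade level needed to understand the text
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def score_text(text: str) -> tuple[float, float]:
    """Return (reading ease, grade level) for a passage of prose."""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    tokens = re.findall(r"[A-Za-z']+", text)
    words = max(len(tokens), 1)
    syllables = sum(count_syllables(t) for t in tokens)
    return (flesch_reading_ease(words, sentences, syllables),
            flesch_kincaid_grade(words, sentences, syllables))
```

Because both scores depend on average sentence length and syllables per word, an AI rewrite that lengthens sentences or prefers polysyllabic vocabulary will lower the Reading Ease score and raise the Grade Level, which is consistent with the slight readability reduction reported below.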

Results

Of 111 responses, 39.6 per cent expressed no overall preference between the AI-adapted and human-generated patient information leaflets, 27.9 per cent preferred the AI-adapted leaflet, and 32.4 per cent preferred the human-generated leaflet. There was a slight reduction in the readability of the AI-adapted patient information leaflets.

Conclusion

Artificial intelligence- and human-generated patient information leaflets were broadly comparable in their acceptability to patients. However, clinician oversight is essential to safeguard the quality and readability of AI-produced materials.

Information

Type
Main Article
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of J.L.O. (1984) LIMITED.


Footnotes

Bethan Kate McLeish takes responsibility for the integrity of the content of the paper. Presented at the British Rhinological Society Annual Meeting and Juniors Day, 8th and 9th May 2024, Cardiff, UK.

Supplementary material: Spinos et al. supplementary material (File, 16.6 KB)