Published online by Cambridge University Press: 07 November 2025
Artificial intelligence (AI) has the potential to revolutionise medical communication. Our aim was to investigate whether AI can be used to adapt patient information leaflets and compare their acceptability with human-generated patient information leaflets.
ChatGPT was instructed to refine four ENT-related patient information leaflets originally written by clinicians. Pairs of human-generated and AI-adapted patient information leaflets were distributed to patients alongside a questionnaire asking them to assess presentation, explanation of the condition, ease of understanding, clarity on when to seek medical attention, and overall preference. Readability was evaluated using the Flesch Reading Ease score and the Flesch–Kincaid Grade Level.
Of 111 responses, 39.6 per cent expressed no overall preference between the AI-adapted and human-generated patient information leaflets, 27.9 per cent preferred the AI-adapted leaflet and 32.4 per cent preferred the human-generated leaflet. The AI-adapted patient information leaflets showed a slight reduction in readability.
Artificial intelligence- and human-generated patient information leaflets were broadly comparable in their acceptability to patients. However, clinician oversight is essential to safeguard the quality and readability of AI-produced materials.
Bethan Kate McLeish takes responsibility for the integrity of the content of the paper.

Presented at the British Rhinological Society Annual Meeting and Juniors Day, 8th and 9th May 2024, Cardiff, UK.