
“You’re right to be skeptical!”: The Role of Legal Information Professionals in Assessing Generative AI Outputs

Published online by Cambridge University Press:  10 June 2025

Abstract

Generative AI tools, such as ChatGPT, have demonstrated impressive capabilities in summarisation and content generation. However, they are infamously prone to hallucination, fabricating plausible information and presenting it as fact. In the context of legal research, this poses significant risks. This paper, written by Sally McLaren and Lily Rowe, examines how widely available AI applications respond to fabricated case citations and assesses their ability to identify false cases, the nature of their summaries, and any commonalities in their outputs. Using a non-existent citation, we analysed responses from multiple AI models, evaluating accuracy, detail, structure and the inclusion of references. Results revealed that while some models flagged our case as fictitious, others generated convincing but erroneous legal content, occasionally citing real cases or legislation. The experiment underscores concerns about AI's credibility in legal research and highlights the role of legal information professionals in mitigating these risks through user education and AI literacy training. Practical engagement with these tools is crucial to understanding the user experience. Our findings serve as a foundation for improving AI literacy in legal research.

Type
Main Features
Copyright
© The Author(s), 2025. Published by British and Irish Association of Law Librarians

References

Endnotes

1 Mata v Avianca, No. 22-CV-1461 (PKC), 2023 WL 4114965 (S.D.N.Y. June 22, 2023). Opinion and Order of Sanctions <https://law.justia.com/cases/federal/district-courts/new-york/nysdce/1:2022cv01461/575368/54/> accessed 4 February 2025.

2 Quoted in para 11, p.6.

3 See, for example, Murphy, J (2023) ‘AI in the legal sector: an overview for information professionals’ 23(3), pp 150-153; Bennett, G (2023) ‘Is ChatGPT any good at legal research: and should we be wary or supportive of it?’ 23(4), pp 219-224; Magrath, P (2024) ‘How technology supports open justice and transparency’ 24(3), pp 165-169.

5 ‘Table 1: Question-and-response experiment: legal research and generative AI’ <www.innertemplelibrary.org.uk/wp-content/uploads/2025/02/Table-1.pdf> accessed 7 February 2025.

6 Dylan Brown, ‘AI adoption soars across UK legal sector’ (LexisNexis, 25 September 2024) <www.lexisnexis.co.uk/research-and-reports/generative-ai-survey-h2-2024.html> accessed 7 February 2025.

7 The Bar Council, ‘Considerations when using ChatGPT and generative artificial intelligence software based on large language models’ <www.barcouncilethics.co.uk/documents/considerations-when-using-chatgpt-and-generative-ai-software-based-on-large-language-models/> accessed 7 February 2025.

8 The Law Society, ‘Generative AI: the essentials’ <www.lawsociety.org.uk/topics/ai-and-lawtech/generative-ai-the-essentials> accessed 7 February 2025.