
Using generative artificial intelligence in corporate narrative reporting: Understanding risks within the lens of a “reporting chain” approach

Published online by Cambridge University Press:  11 December 2025

Iris H-Y Chiu*
Affiliation:
Faculty of Laws, UCL, London, UK

Abstract

This article discusses the rise in the use of generative artificial intelligence (Gen AI) in the production of mandatory corporate reporting, particularly narrative and ESG (Environmental, Social and Governance) reports. The capabilities of Gen AI can potentially deliver many benefits, but firms are exposed to legal and regulatory risks in connection with Gen AI adoption. This article discusses how firms may address these risks, but argues that such risks should not be appreciated only at the firm level: they also operate at a broader industry, market and systemic level. When viewed through the lens of the reporting chain, which is the universe of recipient entities that use such corporate reporting, including numerous financial intermediary entities, regulators and finfluencers, these risks take on new implications that require regulatory and supervisory efforts for their oversight and mitigation. The article makes specific proposals for securities and financial regulators in particular, set against the broader context of the more general, cross-cutting nature of AI systems regulation and governance.

Information

Type
Research Article
Creative Commons
Creative Commons License - CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press.