
Risk, Reasonableness and Residual Harm under the EU AI Act: A Conceptual Framework for Proportional Ex-Ante Controls

Published online by Cambridge University Press:  20 January 2026

Fabian Teichmann*
Affiliation:
International Relations, The London School of Economics and Political Science, London, UK

Abstract

The EU Artificial Intelligence Act (AI Act) establishes a novel risk-based regulatory model for AI systems, categorising uses into four tiers: unacceptable (prohibited), high-risk (tightly regulated), limited-risk (transparency obligations), and minimal-risk (largely unregulated). This article develops a rigorous conceptual framework to analyse the Act’s logic of risk, reasonableness, and residual harm. It explains how the principles of precaution and proportionality shape the AI Act’s ex ante controls, requiring providers to anticipate reasonably foreseeable misuse and apply measures that reflect the state of the art. We propose criteria for calibrating key requirements (data governance, transparency, human oversight, robustness, and cybersecurity) to the severity and uncertainty of risks, drawing on risk-regulation theory (e.g., Baldwin and Black’s responsive regulation and Sunstein’s cost-benefit rationality). The analysis also situates the EU approach within a comparative context, noting alignments and divergences with US and OECD AI frameworks – for example, the EU’s precautionary bans on biometric mass surveillance contrast with the US reliance on voluntary risk management guidelines. Specific high-impact use cases (biometric identification in public spaces, AI in critical infrastructure) illustrate how risk severity triggers stricter controls. The article concludes by discussing policy implications for implementation, including the role of harmonised standards and presumptions of conformity, the interface with parallel cybersecurity regimes (NIS2, DORA) as “risk multipliers,” and the need for further guidance and delegated acts to ensure that the AI Act’s proportional safeguards remain effective in the face of technological change.

Information

Type
Articles
Creative Commons
Creative Commons License - CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2026. Published by Cambridge University Press