
Adaptive Regulation

Published online by Cambridge University Press:  24 March 2026

Thibault Schrepel*
Affiliation:
Faculty of Law, Vrije Universiteit Amsterdam, Amsterdam, Netherlands CodeX – The Stanford Center for Legal Informatics, Stanford Law School, Stanford University, Stanford, CA, USA Cornell Tech, Cornell University, New York, NY, USA

Abstract

What is adaptive regulation? Why does it matter? How can it be measured, and how can regulation be made more adaptive? I answer each of these four questions.

Information

Type
Articles
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (https://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided that no alterations are made and the original article is properly cited. The written permission of Cambridge University Press or the rights holder(s) must be obtained prior to any commercial use and/or adaptation of the article.
Copyright
© The Author(s), 2026. Published by Cambridge University Press

I. Introduction

When addressing then-President George W. Bush at the 2006 White House Correspondents’ Association Dinner, Stephen Colbert noted jokingly that “[t]he greatest thing about this man is he’s steady. He knows where he stands. He believes the same thing Wednesday that he believed on Monday, no matter what happened Tuesday. Events can change; this man’s beliefs never will.”Footnote 1 Transposed to the realm of digital regulation, Colbert’s quip illuminates the fundamental limitation of “future-proof” regulation, i.e., rules drafted on Monday that purport to remain optimal on Wednesday, no matter what happens Tuesday.Footnote 2

This aspiration for regulatory stability stems from legitimate anxieties. The velocity of technological change in digital markets far exceeds the glacial pace of traditional legislative processes. Because regulations risk becoming obsolete before implementation, policymakers have responded by crafting deliberately expansive definitions intended to capture future technological paradigms.Footnote 3 The European Union’s approach to artificial intelligence exemplifies this strategy as it eschews defining AI under narrow technical specifications like “deep neural networks” in favour of broader conceptualisations meant to encompass computational models yet to be invented. The theory holds that such linguistic elasticity enables law to transcend temporal boundaries, shaping rather than chasing technological evolution.

One might conclude that adaptive regulation is unnecessary, that sufficiently broad principles and definitions already provide the requisite flexibility. If lawmakers draft rules using broad principles and avoid technical specifications, the objection runs, adaptation becomes superfluous: capacious language naturally accommodates unforeseen developments through interpretation.Footnote 4 Where regulation speaks in general terms, adaptive mechanisms add procedural complexity without functional benefit.

The AI Act’s legislative history refutes this claim. The Regulation’s original draft, circulated in April 2021, organised regulatory obligations around a risk-based taxonomy.Footnote 5 AI systems were assumed to be designed for a specific purpose: healthcare, education, law enforcement and so forth. Risk levels attached to each category, with corresponding compliance requirements calibrated to the anticipated harms within that domain. The framework deliberately eschewed narrow technical specifications in favour of functional categories designed to remain stable across technological iterations. High-risk applications would face stringent oversight. Low-risk applications would receive proportionate treatment.

The release of ChatGPT in November 2022 exposed a structural flaw in this design. Foundation models cannot be classified by designated purpose. A single model traverses risk categories depending on deployment context. The same system performs medical diagnosis (high-risk), generates creative content (low-risk), and produces software code (risk varies by application). The original taxonomy became incoherent. No degree of interpretive flexibility could reconcile purpose-based classification with general-purpose capability.

The European Parliament responded by introducing Chapter V, which regulates general-purpose AI models through capability-based thresholds rather than application-specific risk assessment.Footnote 6 Models exceeding defined computational benchmarks trigger regulatory obligations regardless of intended use. This represented not evolutionary interpretation of existing principles but architectural replacement. The foundational logic of the AI Act required redesign.Footnote 7

Timing matters here. This adaptation succeeded only because the AI Act remained unadopted when ChatGPT emerged. Had the Regulation been adopted before November 2022, its legal architecture would have ossified before foundation models became commercially significant.

The episode demonstrates a structural limit of principle-based regulation in complex technological domains. Broad language accommodates unforeseen variations within a stable conceptual framework. It cannot accommodate frameworks whose foundational assumptions are invalidated by technological development. Adaptive mechanisms address precisely this category of regulatory failure: not the need to update technical details within a sound framework, but the need to revise frameworks whose organising principles no longer correspond to technological reality.

The AI Act episode illustrates the insurmountable limitations, both practical and theoretical, that afflict future-proof regulatory design. At the practical level, regulations sufficiently capacious to encompass unknown future states necessarily sacrifice precision in scoping and enforcement.Footnote 8 Yet this sacrifice purchases no guarantee of continued relevance. More fundamentally, the future-proof paradigm rests on the dubious assumption that regulatory responses optimal for present technological configurations will remain appropriate for future ones. While certain normative principles (the prohibition of slavery, for instance) rightly claim temporal universality, the translation of this permanence to technical regulation of digital markets proves far more problematic. When regulation concerns itself with digital objects, the salience of intervening technological developments (i.e., Colbert’s “Tuesdays”) cannot be assumed away.

The neoclassical economic framework underlying much contemporary regulation compounds these difficulties. Built upon assumptions of predictable equilibrium states and fungible production factors, the “future-proof” framework worked adequately for the industrial economy of the twentieth century. But in today’s knowledge economy, neoclassical methods fail to capture the complex, evolutionary nature of digital markets.Footnote 9 So unless regulatory theory makes a “complexity leap,” it risks becoming increasingly disconnected from the realities it seeks to govern.Footnote 10

This article proposes a paradigmatic shift from “future-proof” to “future-responsive” regulation (i.e., adaptive regulation).Footnote 11 The distinction is critical. Whereas future-proof regulation seeks resistance to change (analogous to waterproof materials resisting water), adaptive regulation embraces adjustability to evolving conditions.Footnote 12 While making regulation adaptive might superficially appear to achieve the goal of future-proofing, the two rest on fundamentally different regulatory philosophies. When an adaptive regulation is modified in response to Tuesdays, what persists is not the same regulation enduring through time but, in substance, a different regulation altogether. The future-proof ideal of regulatory permanence thus gives way to a more modest aspiration: creating regulatory frameworks capable of orderly transformation.

The promise of adaptive regulation, however, introduces its own pathologies. If regulations can be modified at will to accommodate technological developments or, in fact, any other event, the legal certainty essential to the rule of law is threatened.Footnote 13 A regulatory system capable of unconstrained morphing risks enabling arbitrary enforcement. The critical question thus becomes not whether regulation should be adaptive, but how to structure adaptation mechanisms in a way that preserves legal certainty and diminishes the spectre of arbitrariness.

To address this question, the present article undertakes a systematic examination of adaptive mechanisms within recent European Digital Acts.Footnote 14 Title II first documents the various adaptive instruments embedded in these Acts. It offers a detailed analysis of their scope, triggers and procedural safeguards. Drawing on complexity science and institutional design literature, it proceeds to evaluate these mechanisms against normative criteria for effective adaptive regulation. The analysis reveals that while EU lawmakers have incorporated numerous adaptive elements (from delegated acts to periodic review clauses), these mechanisms remain wedded to neoclassical assumptions about predictable technological evolution. They also fail to embrace the genuine uncertainty that characterises complex digital ecosystems. In response, Title III proposes a framework for designing regulatory instruments that balance stability with responsiveness for governing rapidly evolving digital markets.

II. An empirical look at EU regulatory adaptiveness

In what ways has the European Union integrated adaptive mechanisms into its recent regulatory endeavors? To answer this question, I first introduce a framework for measuring regulatory adaptiveness across eight major EU digital laws passed between 2022 and 2024 (Section 1). I then analyse the results to identify which adaptive features recur across regulations and which are largely absent (Section 2).

1. Analytical framework

What follows introduces the study’s empirical foundation. It defines the scope of analysis (Section a), presents the analytical framework for assessing adaptive mechanisms (Section b) and provides a first overview of the results (Section c).

a. Scope

This study covers all eight European Acts regulating the digital economy (i.e., the “European Digital Acts”) and adopted within the past five years: the Data Governance Act (May 2022, also called “DGA”),Footnote 15 the Digital Markets Act (September 2022, also called “DMA”),Footnote 16 the Digital Services Act (October 2022, also called “DSA”),Footnote 17 the Digital Operational Resilience Act (December 2022, also called “DORA”),Footnote 18 the Chips Act (September 2023),Footnote 19 the Data Act (December 2023),Footnote 20 the Artificial Intelligence Act (June 2024, also called “AI Act”),Footnote 21 and the Cyber Resilience Act (October 2024, also called “CRA”).Footnote 22

These regulations were selected for three reasons. First, they collectively form the backbone of the EU’s digital regulatory framework. Second, they span a wide range of regulatory objectives, from competition (DMA) and content moderation (DSA) to infrastructure (DORA, Chips Act), data governance (DGA, Data Act), and technological safety (AI Act, CRA). Third, all were adopted within a three-year window (2022–2024), which allows for a meaningful comparative analysis of how adaptiveness has (or has not…) been built into the EU’s most recent legislative efforts. These Acts, in other words, have been chosen for their shared ambition to regulate an evolving technological landscape, precisely where adaptive regulation matters most.

b. Methodology

To gauge the adaptiveness of a regulation, I rely on a structured framework assessing whether, and how, the European Digital Acts embed mechanisms to evolve over time. The framework rests on four dimensions aligned with the life cycle of legal adaptation. First, the capacity to observe regulatory performance in practice. Second, the presence of triggers that determine when observations should prompt change. Third, once revision is warranted, the availability of clear procedural and substantive routes for enacting it. Fourth, institutional arrangements that foster reflection, learning, and course correction. What follows is a breakdown of this framework.

i. Monitoring infrastructure

Data collection obligation: Are regulators under an obligation (i.e., beyond a mere possibility) to monitor the real-world effects of the regulation? This question goes to the heart of regulatory adaptiveness. Monitoring is a precondition to adaptation. Without systematic data collection, regulators cannot detect unintended consequences, gaps or emerging challenges. In the absence of such an obligation, updates risk being arbitrary or blind to actual market dynamics.

Real-time monitoring: Are regulated entities required to feed data streams to regulators so they can track the regulation’s effects in real time (e.g., using APIs)? This criterion tests whether regulators are equipped to move from periodic review to continuous oversight. Real-time inputs enable quicker detection of regulatory gaps and ensure that interventions are timely. Without such mechanisms, even the best intentions to adapt may arrive too late to matter.

Machine-readable format: Does the regulation require regulated entities to structure the data they send to regulators in a way that enables automated processing? The potential for adaptation depends not only on the availability of data, but also on its usability. Structured, machine-readable data allows for automated analysis. It enables regulators to evaluate outcomes and update rules efficiently. Without it, adaptation remains a manual, resource-intensive process.
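To make the criterion concrete, the following minimal sketch (in Python, with entirely hypothetical field names that no Act prescribes) shows what distinguishes a machine-readable report from a prose one: the regulator can validate and aggregate it automatically.

```python
import json

# Hypothetical, illustrative report schema: the field names are invented for
# this sketch and are not prescribed by any EU Digital Act.
REQUIRED_FIELDS = {"entity_id", "reporting_period", "obligation", "metric", "value"}

def parse_report(raw: str) -> dict:
    """Parse one machine-readable report and verify it can be processed automatically."""
    report = json.loads(raw)
    missing = REQUIRED_FIELDS - report.keys()
    if missing:
        raise ValueError(f"not machine-processable, missing fields: {sorted(missing)}")
    return report

example = json.dumps({
    "entity_id": "platform-001",
    "reporting_period": "2025-Q1",
    "obligation": "content moderation",
    "metric": "removal_decisions",
    "value": 18342,
})

print(parse_report(example)["value"])  # 18342 -- ready for aggregation, no manual reading
```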

ii. Triggering logic

Indicators defined: Are there metrics that guide when to adapt the regulation (e.g., exogenous events, thresholds, complaints, Commission reports, etc.)? This question probes whether the law offers a signal function. Defined triggers anchor revisions in observable conditions, rather than political timing or administrative convenience. They ensure that regulatory updates are evidence-based.

Update frequency: Is there a predefined review timeline (e.g., every 3 years)? Built-in review cycles institutionalise regulatory learning. A predefined timeline not only signals that change is expected, but it also creates procedural expectations within the administration and the market. In turn, this predictability strengthens legitimacy and compliance.

Discretion: Is the regulation subject to a mandatory review, or is it only optional (e.g., does it use language like “shall assess” or “may consider”)? Whether revision is framed as an obligation or a possibility greatly affects regulatory responsiveness. Mandatory review clauses impose a duty to reassess; discretionary ones merely authorise it and may leave adaptive processes underused or politically delayed.
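Taken together, these three criteria can be read as a simple decision rule. The sketch below illustrates it with hypothetical indicator names, thresholds and a three-year cadence chosen for the example only:

```python
from datetime import date

# Illustrative only: the indicator names, thresholds and review interval below
# are hypothetical and not drawn from any specific EU Digital Act.
REVIEW_INTERVAL_YEARS = 3              # predefined review timeline ("every 3 years")
INDICATOR_THRESHOLDS = {
    "substantiated_complaints": 500,   # exogenous signal
    "market_share_shift_pct": 10.0,    # observed structural change
}

def review_due(last_review: date, today: date, indicators: dict) -> bool:
    """Mandatory trigger: the review clock has elapsed or an indicator breaches its threshold."""
    clock_elapsed = (today - last_review).days >= REVIEW_INTERVAL_YEARS * 365
    threshold_breached = any(
        indicators.get(name, 0) >= limit for name, limit in INDICATOR_THRESHOLDS.items()
    )
    return clock_elapsed or threshold_breached

# An indicator breach forces reassessment before the periodic clock runs out.
print(review_due(date(2024, 1, 1), date(2025, 6, 1), {"substantiated_complaints": 620}))  # True
```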

iii. Adaptation

Mechanism type: What type of mechanism governs how regulatory changes are made? Some Acts require going through the full legislative process to make any change. Others allow the Commission to update parts of the regulation through delegated or implementing acts. Legislative changes tend to be slower, but they make it possible to revise the regulation more fundamentally. Secondary instruments, by contrast, are faster and more flexible, but typically limited to technical or procedural updates.

Secondary instruments: In which areas is the Commission empowered to use secondary instruments to adapt the regulation without resorting to full legislative amendments? Secondary instruments allow the Commission to adapt specific aspects of the regulation more swiftly and flexibly than full legislative procedures. They make timely updates more feasible, although they are limited to changing non-essential elements and/or simply setting up conditions for uniform implementation across the EU.Footnote 23

Scope of modification: What parts of the regulation can be modified: only annexes, or also substantive provisions and core definitions? If only technical annexes can be revised, then the core legal architecture remains frozen. Genuine adaptiveness requires the capacity to update substantive rules and core definitions, especially in dynamic fields like AI and digital infrastructure. But changing the core aspects of regulation can also have systemic, unintended consequences that call for strict procedures.

Type of modification: What types of modifications are permitted: can the regulation be made more stringent, more lenient, or both? Adaptation is a two-way street. A framework that only permits tightening risks over-regulation; one that only allows relaxation may lead to capture. Enabling both directions allows regulation to remain proportional to actual risk.

iv. Institutional learning

Learning institutions: Does the regulation create or mandate a body responsible for proposing adaptations? Adaptive regulation is more likely to materialise when a specific institution is charged with monitoring and proposing change. Without such a learning actor with specific (often technical) expertise, updates may depend on ad hoc political will rather than structured institutional feedback.

Actors involved: Who has the authority to initiate revisions: only the Commission, or also Member States and independent agencies? The diversity of initiators influences both the speed and inclusiveness of adaptation. Broadening the set of actors who can trigger revisions increases the chance that emerging problems are surfaced early and acted upon.

Stakeholder input: Is stakeholder engagement mandated during reviews or updates? Including stakeholders in the revision process anchors regulation in operational reality. It reduces blind spots, enhances legitimacy and increases the quality of updates by injecting on-the-ground knowledge into the legal process. But it also comes with a risk of regulatory capture that must be addressed by institutional design.

Impact assessment: Does the regulation require an impact assessment before changes can be made? Updating regulation without anticipating its consequences may do more harm than good. Mandated impact assessments act as a filter for ill-considered changes and strengthen the empirical foundation of legal evolution.

c. Overview

The table in Appendix 1 provides a snapshot of how each of the eight EU Digital Acts performs across the 14 adaptivity criteria, along with the relevant legislative references.
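For readers who prefer to see the framework operationally, the following sketch encodes the 14 criteria as a simple scoring grid, with each criterion reduced to a yes/no flag (a deliberate simplification of the richer qualitative categories above). The values for any given Act would be filled in from the Appendix 1 grid; none are supplied here.

```python
from dataclasses import dataclass, fields

# The criterion names mirror the framework above; reducing each to a yes/no
# flag is a deliberate simplification of the qualitative categories.
@dataclass
class AdaptivenessProfile:
    # i. Monitoring infrastructure
    data_collection_obligation: bool
    real_time_monitoring: bool
    machine_readable_format: bool
    # ii. Triggering logic
    indicators_defined: bool
    predefined_update_frequency: bool
    mandatory_review: bool
    # iii. Adaptation
    mechanism_beyond_legislation: bool
    secondary_instruments_empowered: bool
    core_provisions_modifiable: bool
    bidirectional_modification: bool
    # iv. Institutional learning
    learning_institution: bool
    multiple_initiating_actors: bool
    stakeholder_input_mandated: bool
    impact_assessment_required: bool

    def score(self) -> int:
        """Count of the 14 criteria satisfied -- a crude, comparable snapshot per Act."""
        return sum(getattr(self, f.name) for f in fields(self))
```

An analyst would instantiate one profile per Act from the Appendix 1 grid and compare scores, bearing in mind that a flat count weights all criteria equally.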

2. Findings

The results reveal systematic patterns across the regulatory corpus. I first distinguish between common features that enable adaptation and those that constrain it (Section a). Then, I highlight notable outliers, i.e., regulations that demonstrate exceptional innovation in specific adaptive features or surprising rigidity in areas where flexibility is generally observed (Section b).

a. Patterns

i. Adaptive traits

Data collection obligation. A vast majority of EU Digital Acts impose formal obligations on regulators to monitor (some of) the real-world effects of the rules they enforce. The Digital Markets Act requires the Commission to monitor gatekeepers’ effective compliance with their obligations under Articles 5, 6 and 7, as well as relevant decisions under Articles 8, 18, 24, 25 and 29 (Article 26.1).Footnote 24 The DSA goes further by establishing a system of multi-layered reporting. Digital Services Coordinators must publish annual activity reports (Article 55),Footnote 25 and the European Board for Digital Services must issue a yearly risk mitigation assessment for very large platforms and search engines (Article 35).Footnote 26 The Commission must also evaluate the broader societal impact of the DSA every five years, including on SMEs and fundamental rights (Article 91),Footnote 27 and report on crisis response measures where applicable (Articles 36.7 and 36.11).Footnote 28

The Digital Operational Resilience Act (DORA) similarly mandates annual reporting on ICT-related incidents (Article 22.2)Footnote 29 and requires the Oversight Forum to assess results of critical third-party provider supervision (Article 32.2).Footnote 30 Additionally, a joint report is to be submitted every five years by the European Supervisory Authorities on third-country ICT risks and their systemic impact (Article 44.2).Footnote 31 The Chips Act obliges the Commission to regularly inform the European Semiconductor Board of the outcomes of strategic mapping (Article 19.2),Footnote 32 to maintain and update a list of early warning indicators (Article 19.3),Footnote 33 and to monitor disruptions to the semiconductor value chain (Article 20.1 and 20.7).Footnote 34

Lastly, while the AI Act is more limited in scope, it requires the Commission to publish annual reports on the use of real-time biometric identification systems by law enforcement (Article 5.7)Footnote 35 and mandates national authorities to report annually on sandbox activities (Article 57.16).Footnote 36 The Advisory Forum must also issue an annual report on its own activities (Article 67.10).Footnote 37 These recurring information flows enable regulators to evaluate implementation and to ground future adaptations in empirical observation.

Update frequency. One of the more structurally promising traits across the EU Digital Acts is the prevalence of predefined review timelines which scaffold regulatory adaptiveness. Nearly all eight Acts mandate evaluation cycles typically spaced every three to five years. This provides predictability, creates bureaucratic discipline, and introduces feedback loops essential for learning.

The Chips Act mandates quadrennial reviews beginning in 2026 (Article 40),Footnote 38 while the Cyber Resilience Act sets a similar rhythm, albeit starting only in 2030 (Article 70.1).Footnote 39 Even the Data Act, though more modest in ambition, requires a one-off review by 2028 covering not just implementation but market impact and competitiveness dimensions (Articles 49.1 and 49.2).Footnote 40

Some other Acts, like the Digital Services Act, opt for a layered review architecture. It requires the Commission to assess specific provisions, such as Article 33, within three years (Article 91.1),Footnote 41 conduct an initial overall evaluation by early 2027 (Article 91.2),Footnote 42 and then repeat full assessments every five years thereafter (Article 91.3).Footnote 43 Others, such as the Digital Markets Act, supplement a triennial review (Article 53.1)Footnote 44 with an annual implementation report (Article 35.1),Footnote 45 ensuring both short- and long-cycle feedback. The AI Act pushes the frontier even further by integrating multiple reviews tailored to distinct regulatory components. Annex III (on high-risk AI systems) and Article 5 (on prohibited AI systems) must be assessed annually (Article 112.1),Footnote 46 while broader components (including governance, transparency, and enforcement) follow a four-year cadence (Articles 112.2, 112.3, 112.5, and 112.13).Footnote 47 Such a multi-track review mechanism is rare and, arguably, unprecedented in EU digital regulation.

Taken together, these provisions show that update frequency is not treated as an afterthought. Rather, it is embedded as a constitutional feature of the new digital rulebooks. They give legal expression to the need for a regulation that learns over time, rather than ossifies.

Discretion. A synoptic reading of the EU Digital Acts reveals a regulatory grammar in which the Commission is endowed with broad discretion to use secondary instruments in some instances, yet is required to issue delegated and implementing acts in others. In the Digital Markets Act, for instance, the Commission may adopt delegated acts to adjust gatekeeper obligations (Article 12.1 and 12.3),Footnote 48 or implementing acts to clarify procedural aspects (Article 46.1).Footnote 49 These include the forms and content for notifications and reports, audit methodologies for consumer profiling, rules for market investigations, hearing rights, cooperation between authorities and even deadline calculations. Similarly, under the Data Act, the Commission may define common specifications for interoperability (Article 33.5) or support open standards through guidelines (Article 33.11).Footnote 50

But this discretion is not unfettered. Numerous provisions across the AI Act and the Cyber Resilience Act contain mandatory instructions where the Commission shall adopt implementing acts to define transparency obligations (AI Act, Article 60.1),Footnote 51 or to establish simplified documentation templates for SMEs (CRA, Article 33.5).Footnote 52 The Data Governance Act offers a comparable mix, as it requires implementing acts on logos for registered data intermediation service providers (Article 11.9)Footnote 53 and an EU-wide consent form for data altruism (Article 25.4)Footnote 54 while allowing discretion elsewhere. What emerges is a fine-grained typology of delegation. Some secondary instruments are hardwired into the legislation as required steps; others remain discretionary levers available to the Commission. This design allows EU digital regulation to layer predictable institutional oversight with flexible executive calibration. That structure blends rule-bound responsiveness with agility.

Scope of modification. A comparative reading of the EU Digital Acts suggests a broad perimeter of regulatory malleability. Nearly all regulations grant latitude (explicit or implicit) for future reengineering, including modification of core definitions, substantive provisions and technical annexes. This juridical plasticity is a latent strength of the EU Digital acquis.

In formal terms, six of the eight EU Digital Acts (DGA, DMA, DSA, DORA, Chips Act, Data Act) exhibit no material constraint on what may be modified, subject only to political will and procedural compliance. The governing review articles (e.g., Article 35 DGA;Footnote 55 Article 53 DMA;Footnote 56 Article 91 DSA;Footnote 57 Article 58 DORA;Footnote 58 Article 40 Chips Act;Footnote 59 Article 49 Data ActFootnote 60 ) operate as open mandates, without restricting the scope of legislative amendments. Notably, the DMA explicitly anticipates adjustments to core platform service classifications and behavioral obligations (Article 19),Footnote 61 which signals comfort with revisiting foundational categories. The Data Act, although containing a list of review priorities (Article 49.1),Footnote 62 frames them as particular rather than exclusive, thereby maintaining an open horizon for adaptation including on definitions and institutional roles.

The AI Act presents a more nuanced configuration. While its architecture delegates substantial authority to the Commission to amend annexes and procedural regimes (e.g., conformity assessments, documentation standards), the core definitional framework, such as the notion of “AI system” in Article 3, remains insulated from delegated modification. This raises a potential rigidity concern, as technical advances may outpace the slower rhythm of legislative amendment. Still, Article 112.10 opens a backchannel, as it allows the Commission to propose legislative revisions on any part of the Regulation, implicitly including core definitions.Footnote 63

In sum, the EU Digital Acts largely endorse a model of adaptive completeness, whereby even foundational elements are structurally open to revision. This breadth of scope is a key enabler of adaptive regulation. It gives institutions room to revisit the very framing concepts upon which they rest.

ii. Anti-adaptive traits

Real-time monitoring. The continuous transmission of compliance data via APIs or equivalent systems remains a notable absentee from most EU Digital Acts. The Digital Services Act is the lone exception, imposing real-time-like duties on very large online platforms and search engines through Article 40.7, which grants regulators API-level access to ensure continuous oversight.Footnote 64 Elsewhere, the commitment to immediacy is more aspirational than operational. For instance, the Cyber Resilience Act, the AI Act, and the Digital Operational Resilience Act adopt structured but lagged notification frameworks. Critical incidents must be reported within 24 to 72 hours, but not as they occur (DORA, Article 19.4).Footnote 65 The Data Governance Act and the Chips Act require swift, yet not instantaneous reporting by competent authorities or the Commission itself (Article 11.10 DGA;Footnote 66 Article 12.5 Chips ActFootnote 67 ). Meanwhile, the Data Act and the Digital Markets Act remain conspicuously disengaged from real-time oversight, as they offer no structured mechanisms for continuous monitoring whatsoever. These piecemeal efforts reflect a broader pattern: the lack of real-time responsiveness undercuts the ambition for a truly adaptive regulatory model. If regulation is to become a living system, its sensory apparatus must be wired to detect changes as they happen, not after they have had systemic consequences.

Stakeholder input. A review of the EU Digital Acts reveals a pattern of modest stakeholder involvement, largely confined to downstream stages of regulatory production. In most regulations, stakeholder input is formally acknowledged, but typically through indirect or advisory mechanisms, with little in the way of structured co-decision or institutionalised deliberation.

The Digital Services Act, Digital Operational Resilience Act, and Digital Markets Act confine themselves to tepid invocations of the Better Regulation principles, which assert that stakeholders may be consulted “whenever broader expertise is needed.”Footnote 68 That formulation invites bureaucratic circumvention. Even the Data Act, which is relatively dense in implementing acts, mentions formal consultation in only a few targeted provisions (e.g., Articles 33.7, 35.6, 36.8),Footnote 69 and with limited procedural depth. Here, consultation provisions are either discretionary or tokenistic.

The Data Governance Act and Chips Act offer intermediate configurations, as they combine ad hoc obligations to consult experts or sectoral actors with the establishment of institutional footholds for stakeholders such as the European Data Innovation Board’s stakeholder subgroup (Article 29.2.c DGA)Footnote 70 or the representative organizations invited to assess semiconductor supply disruptions (Article 20.7 Chips Act).Footnote 71 But these venues fall short of meaningful co-regulatory structures.

Perhaps most advanced, the AI Act displays a relatively elaborate consultative apparatus. Articles 15, 40, 56 and 62 cumulatively orchestrate multistakeholder interactions in the shaping of benchmarks, standardisation, and codes of practice.Footnote 72 Similarly, the Cyber Resilience Act goes furthest in formalising consultation as it mandates regular stakeholder sessions (Article 9.2),Footnote 73 targeted engagements during the preparation of implementing acts (Articles 8.1, 8.2, 27.4, 30.6),Footnote 74 and tailored support to SMEs and open-source developers (Article 26.3).Footnote 75 This level of stakeholder formalism is the exception rather than the rule. It remains limited in scope and unevenly embedded across the EU’s digital regulatory corpus.

Overall, stakeholder engagement in the EU Digital Acts is more gestural than generative. It reflects a regulatory monologue with participatory punctuation, rather than a structured dialogue between regulators and regulated entities. EU institutional design is deeply centralised, with the Commission operating as both gatekeeper and gate-opener of participatory processes.

Machine-readable format. A curious paradox haunts the EU Digital Acts: the regulatory regimes governing data-rich, machine-mediated ecosystems are themselves strikingly resistant to machinic parsing. With the notable exception of the Digital Services Act, machine-readable obligations remain sparse or conspicuously absent. The DSA breaks rank by requiring that very large platforms and search engines maintain an ad repository with multicriteria queries and API access (Article 39.1),Footnote 76 and that intermediary services submit structured moderation reports and user restriction decisions (Articles 15.1, 24.5, 42.1).Footnote 77 This is regulatory information designed not merely to be read, but to be processed. And yet, it is an isolated island of computability in a largely analogue sea.

Elsewhere, the commitments to machine readability are less a strategy than an afterthought. The Data Governance Act gestures weakly toward automation: single information points may use automated means (Article 8.1)Footnote 78 and shall, “where appropriate,” transmit reuse requests by such means (Article 8.2).Footnote 79 There is a whiff of structure in the application forms for data altruism entities (Article 19.4)Footnote 80 and a formal nod in the requirement for a machine-readable consent form (Article 25.4),Footnote 81 but these fragments serve more as user-facing enhancements than as tools for regulatory computation. DORA and the AI Act similarly dabble in templates and declarations (DORA Articles 11.11;Footnote 82 AI Act Article 47.1).Footnote 83 They stop short of mandating machine-readable reporting to oversight bodies. Most striking, perhaps, is the silence of the Chips Act and Data Act, two instruments that one might reasonably expect to be showcases of data structuration. Their absence from this terrain speaks volumes. In Europe, the future of data governance remains readable, but not yet computable.

Indicators defined. One of the more elusive elements in the architecture of the EU Digital Acts is the use of predefined indicators to trigger regulatory adaptation. With the exception of the Chips Act and the AI Act, most instruments remain agnostic on this front. Provisions requiring reviews, such as Articles 35 of the DGA,Footnote 84 53 of the DMA,Footnote 85 91 of the DSA,Footnote 86 and 49 of the Data Act,Footnote 87 typically instruct the Commission to evaluate impact “where appropriate,” with no reference to structured metrics or thresholds. Even where domains are data-intensive by nature (such as platform markets or data governance), adaptive triggers are either entirely discretionary or simply omitted. This reliance on general impact assessments, absent concrete benchmarks, severely limits the operationalisation of feedback loops and undermines the ambition of adaptiveness.

Two exceptions confirm the rule. The Chips Act provides a structured framework through Annex II, which details a dashboard of performance metrics (such as SME participation, infrastructure access and investment flows) which must be reported annually (Article 12.4).Footnote 88 Likewise, the AI Act deploys a multi-tiered approach, with Article 112.11 mandating the AI Office to develop a risk-based methodology to inform future adaptations.Footnote 89 Notably, the AI Act also incorporates exogenous variables (e.g., changes in the information society) as grounds for review (Article 112.10).Footnote 90 These examples suggest that when adaptiveness is taken seriously, it is anchored in quantifiable signals, not vague gestures toward impact. Yet in the broader corpus of EU Digital Acts, the technocratic capacity to “listen” remains more aspirational than systematic.

Actors involved. The gate to regulatory adaptation across the EU Digital Acts remains firmly guarded by a single actor: the European Commission. From the Data Governance Act to the Cyber Resilience Act, no regulation confers formal revision initiative powers on Member States, national agencies, or independent authorities. While consultative frameworks are widespread (e.g., the European Data Innovation Board in the DGA, the Semiconductor Board in the Chips Act, or national experts designated under Regulation (EU) No 182/2011),Footnote 91 their role is advisory. These bodies may assist, advise and opine, but the final prerogative to propose legislative revision or adopt delegated acts is indeed monopolised by the Commission. Even in domains where national specificities abound, subsidiarity finds no procedural reflection.

This architecture reflects a structural asymmetry. Centralisation may promote coherence, but it imposes high reliance on the expertise, bandwidth, and political will of a single supranational institution. No regulation introduces a “multichannel” mechanism allowing bottom-up regulatory feedback from national authorities or decentralised bodies with sectoral expertise. At most, Member States can raise objections on specific implementing acts (as in the Data Act) but cannot trigger full-scale reviews. In this sense, the EU Digital Acts instantiate what one might call a “monologic adaptiveness”: input flows in, but only the Commission speaks. If adaptive regulation aims to function as a living system, this tight bottleneck at the point of initiative remains its primary organ failure.

Learning institutions. One of the striking structural blind spots across the EU Digital Acts lies in the near-total absence of dedicated bodies responsible for proposing adaptations. While many Acts establish new agencies (Digital Services Coordinators, Data Innovation Boards, Notifying Authorities, Oversight Forums…), these entities lack the mandate to turn accumulated knowledge into policy change. The result is a structural disconnect: implementation is distributed, but adaptation is not. No institutional relay connects operational experience to legislative evolution. The result is a brittle setup: a single actor, the Commission, must sense, learn, and act, alone. That is not a learning system. That is a bottleneck.

b. Notable outliers

i. Exceptionally adaptive traits

Learning institutions in the AI Act. The Artificial Intelligence Act departs from the general learning inertia of its counterparts. It creates a triad of institutions (the AI Office, the European Artificial Intelligence Board, and the Advisory Forum) designed to feed enforcement experience directly into the process of regulatory refinement. The AI Board, in particular, is vested with the power to issue recommendations on delegated acts, contribute to the evaluation and review of the Regulation (Article 66.e.ii),Footnote 92 and propose amendments to substantive provisions, including Annex III and Article 5 (Article 66.e.vii).Footnote 93 This is not mere consultation; it is an institutionalised channel for updating the core regulatory framework based on observed practice.

The Advisory Forum complements this architecture by injecting technical expertise and producing written contributions and opinions to support both the Board and the Commission (Article 67.8).Footnote 94 Meanwhile, the AI Office plays a crucial role in monitoring compliance, especially with respect to general-purpose AI models (Article 89.1).Footnote 95 Together, these bodies form a procedural ecosystem of regulatory learning, where oversight and revision are formally linked, and where the regulation is not only implemented but also continuously interrogated. This design moves beyond static rulemaking and signals an institutional commitment to adaptiveness that is structurally rare across the EU digital acquis.

Stakeholder engagement in the AI Act. The AI Act stands out as the most comprehensive EU Digital Act in terms of stakeholder engagement. It not only mandates consultation but embeds participatory mechanisms across multiple regulatory functions. The Commission is required to cooperate with stakeholders when developing benchmarks and methodologies for assessing accuracy and robustness (Article 15.2).Footnote 96 Standardisation requests must also be preceded by stakeholder consultation (Article 40.2),Footnote 97 which ensures that technical norms reflect a wide array of expert inputs.

The AI Office, a central node in the Act’s governance structure, may involve providers of general-purpose AI models and national authorities in co-drafting Union-level codes of practice. In addition, it may draw on civil society, industry, academia, downstream providers, and independent experts for broader support (Article 56.3).Footnote 98 At the Member State level, obligations go beyond symbolic gestures. Public authorities must actively facilitate stakeholder access to regulatory sandboxes, deliver targeted training and awareness-raising initiatives, and establish structured communication channels to promote engagement with standard-setting processes (Article 62.1).Footnote 99 Altogether, the AI Act weaves stakeholder engagement into both the formulation and implementation of regulatory norms. It turns stakeholder engagement from a rhetorical accessory into a structural feature.

Machine-readable formats in the DSA. The Digital Services Act carves out a singular position among the EU Digital Acts by embedding machine-readability as a core feature of its transparency infrastructure. Unlike other regulations, which often omit the issue altogether, the DSA mandates specific technical affordances to facilitate automated scrutiny. Providers of very large online platforms and search engines must maintain a “searchable and reliable” repository, accessible via APIs and enabling multicriteria queries (Article 39.1).Footnote 100 This is a notable step toward computational accountability.

Further, intermediary service providers must report user restriction decisions in a machine-readable database submitted to the Commission (Article 24.5),Footnote 101 and publish biannual content moderation reports in machine-readable format (Article 15.1 and Article 42.1).Footnote 102 This set of obligations elevates the DSA beyond manual, prose-based compliance logic and moves toward an ecosystem where regulatory oversight is capable of automation and real-time querying. In short, the DSA operationalises transparency not just as a legal requirement, but as a format-compatible data layer, amenable to computational enforcement.
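The practical payoff of such obligations is the ability to interrogate compliance data programmatically. The sketch below is purely illustrative (the records and field names are invented, not taken from any DSA template) of the kind of multicriteria query that structured transparency data makes possible:

```python
# Illustrative sketch of a multicriteria query over machine-readable moderation
# data; the records and field names are invented, not taken from any DSA template.
decisions = [
    {"platform": "P1", "ground": "illegal_content", "automated": True,  "month": "2025-01"},
    {"platform": "P1", "ground": "terms_violation", "automated": False, "month": "2025-01"},
    {"platform": "P2", "ground": "illegal_content", "automated": True,  "month": "2025-02"},
]

def query(records, **criteria):
    """Return records matching every supplied criterion -- automated scrutiny, no prose parsing."""
    return [r for r in records if all(r.get(k) == v for k, v in criteria.items())]

print(len(query(decisions, ground="illegal_content", automated=True)))  # 2
```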

Indicators for assessing the regulation in the Chips Act and the AI Act. Among the EU Digital Acts, only the Chips Act and the AI Act offer a fully fleshed-out system of indicators for assessing regulatory performance, a notable outlier in an otherwise vague landscape. The Chips Act includes a detailed Annex II that enumerates specific metrics for success, including the number of participating entities, tools developed, levels of private co-investment, infrastructure access rates, training outcomes, the activation of competence centers, and the volume of venture capital received by start-ups and SMEs along the semiconductor value chain (Annex II).Footnote 103 These indicators are not simply illustrative. The Chips Joint Undertaking is mandated to report on them annually, which anchors metrics firmly in the institutional machinery of evaluation (Article 12.4).Footnote 104

The AI Act is equally, if not more, granular. It defines clear input, process, and outcome metrics to guide regulatory adaptation. That includes Commission reports (Articles 112.1–112.7),Footnote 105 adequacy of national enforcement resources, sanctioning patterns, market entry by SMEs, and the state of standardisation initiatives (Article 112.4).Footnote 106 It goes further by incorporating exogenous triggers into the review logic, i.e., technological breakthroughs, evolving risks to health, safety or fundamental rights, and shifts in the information society (Article 112.10).Footnote 107 The AI Office, in parallel, is charged with building a risk-based evaluation framework to guide updates to the Regulation’s annexes and substantive provisions (Article 112.11).Footnote 108 These twin efforts instantiate an adaptive logic grounded not in generic impact language, but in traceable indicators that transform the assessment of these acts from discretion to method.

ii. Exceptionally non-adaptive traits

Data collection in the Data Governance Act. The paradox of the Data Governance Act lies in its silence on structured data reporting. Despite its central ambition to foster trust in data sharing and promote the re-use of public sector data by laying down a framework for intermediaries and data altruism organizations, the regulation does not impose systematic data collection obligations on stakeholders or national agencies. There is no requirement for machine-readable transparency reporting, no metrics to assess compliance with data altruism standards, no standardised templates to monitor the activities of data intermediation services. The Act aspires to coordinate a European data space, yet it does so without wiring the system for feedback, which leaves its adaptive capacity dependent on anecdote rather than evidence.

General revision in the Chips Act and Cyber Resilience Act. Both the Chips Act and the Cyber Resilience Act institutionalise delegated and implementing acts as operational levers of regulatory adaptation (Article 37 Chips Act;Footnote 109 Article 61 CRA).Footnote 110 Yet neither regulation empowers the Commission, in their respective review clauses, to submit amendment proposals to the European Parliament and the Council (Article 40 Chips Act;Footnote 111 Article 70 CRA).Footnote 112 This omission is not trivial. It deprives the review exercise of its natural endpoint (i.e., legislative recalibration) and confines adaptation to secondary instruments alone. The result is a formalised stasis cloaked in procedural activity.

Scope of modification in the Cyber Resilience Act. The Cyber Resilience Act confines the Commission’s adaptive authority to the terrain of secondary instruments, which leaves the architecture of core definitions untouched. Delegated acts may recalibrate the scope of the Regulation by modifying Annexes III and IV to include or remove product categories (Articles 7.3 and 8.2)Footnote 113 and adjust certification requirements, conformity procedures, and enforcement modalities. Technical standards may also be updated to follow technological evolution. Yet nowhere does the text empower the Commission to revise foundational legal concepts. This asymmetry reflects a selective plasticity. The CRA is adaptive where the terrain is operational, inert where it is more fundamental.

III. Principles for making regulation more adaptive

The shortcomings of adaptive mechanisms in the EU Digital Acts stem less from faulty execution than from the conceptual lens through which they are designed. Remedying this demands two steps. First, show why complexity science provides a superior theoretical base for adaptive regulation (Section 1). Second, distil this theory into operational design principles capable of producing genuinely future-responsive instruments (Section 2).

1. Why complexity science

Title II revealed a paradox at the heart of EU Digital Acts. While lawmakers have incorporated adaptive mechanisms, these instruments remain anchored in neoclassical assumptions about predictable technological trajectories and equilibrium states. The AI Act exemplifies this tension. Despite its rather sophisticated institutional architecture, its adaptive mechanisms are designed for a world where AI evolves predictably and incrementally, not the world we actually inhabit, where new AI capabilities emerge unexpectedly. Regulatory architecture presupposes linear evolution, discretionary triggers and centralised adaptation pathways. It assumes technology moves in linear arcs toward equilibrium, and that feedback can wait for the next reporting deadline. The Acts collect data, but mostly in lagged, annual or multi-year batches. They invite stakeholders, but sporadically and on the regulator’s terms. They allow modification, but only through channels controlled by a single institutional gatekeeper. The machinery to learn exists; the mandate and wiring to act on that learning, much less so.

These design choices are increasingly untenable in markets characterised by increasing returns,Footnote 114 path dependenciesFootnote 115 and emergent properties.Footnote 116 If adaptive regulation is to transcend its current limitations, it requires a methodological foundation capable of embracing genuine uncertainty while maintaining legal predictability. Complexity science offers precisely such a foundation.

For readers unfamiliar with the field, complexity science refers to the study of systems composed of many interconnected elements whose interactions generate collective patterns that cannot be fully deduced from their individual parts.Footnote 117 In such systems, ranging from ecosystems and financial markets to online platforms, small changes can produce large, sometimes abrupt effects;Footnote 118 feedback loops can amplify or dampen dynamics;Footnote 119 and the system’s overall behavior emerges from continuous adaptation rather than settling into a fixed equilibrium.Footnote 120
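To illustrate one such dynamic, the following sketch simulates a stylised increasing-returns process (a Polya-urn-style model, offered purely as an illustration rather than a description of any actual market): two identical platforms compete for users, and early random draws, not intrinsic quality, determine the long-run split.

```python
import random

# A stylised increasing-returns model (Polya-urn-style), offered purely as an
# illustration: each new user joins a platform with probability proportional to
# its current user base, so adoption attracts adoption.
def simulate(steps: int = 10_000, seed: int = 1) -> list[int]:
    random.seed(seed)
    users = [1, 1]                     # two identical platforms start level
    for _ in range(steps):
        total = users[0] + users[1]
        pick = 0 if random.random() < users[0] / total else 1
        users[pick] += 1               # positive feedback loop
    return users

# Identical rules and starting conditions, different random histories:
# the final split is path-dependent and cannot be read off the rules alone.
for seed in (1, 2, 3):
    print(simulate(seed=seed))
```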

The case for relying on complexity science to build adaptive regulation rests on three propositions. First, digital markets are complex adaptive systems, and complexity science is, by definition, the discipline devoted to understanding such systems. The European Digital Acts regulate ecosystems where multiple agents interact, adapt and co-evolve in response to changing conditions, i.e., the phenomena that complexity science has studied for decades across scientific fields.Footnote 121 When the Digital Markets Act addresses platform ecosystems, or when the AI Act governs machine learning systems, regulators confront not static market structures but dynamic systems characterised by non-linear relationships and emergent behaviors.Footnote 122 The conceptual apparatus of neoclassical economics and Newtonian science, with its emphasis on equilibrium and linearity, proves inadequate for capturing these dynamics.Footnote 123 Complexity science, by contrast, offers a theoretical framework for understanding how systems evolve. Applying a science developed precisely to understand adaptive systems ensures conceptual alignment between the object of regulation and the framework used to regulate it.

Second, complexity science provides not abstract theory, but empirically grounded insights derived from decades of rigorous research. Since the establishment of the Santa Fe Institute in 1984, complexity scientists have developed robust methodologies for studying adaptive systems across disciplines, spanning physics,Footnote 124 biology,Footnote 125 economicsFootnote 126 and computer science.Footnote 127 Research in complexity science has led to the creation and fostering of scientific instruments, including agent-based modeling,Footnote 128 network analysisFootnote 129 and evolutionary game theory.Footnote 130 They have yielded reproducible findings about how complex systems such as ant colonies,Footnote 131 neural networksFootnote 132 and financial marketsFootnote 133 behave.

Third, and perhaps most compelling, legal scholars have already shown that legal systems can learn from complexity insights. The work of Ruhl and Katz on legal complexity (macro),Footnote 134 as well as legal scholarship in fields such as environmental regulation,Footnote 135 financial oversight,Footnote 136 administrative lawFootnote 137 and public health governance (micro),Footnote 138 has operationalised complexity concepts for legal purposes.

Yet, this is not to suggest that complexity science offers a panacea. Complex systems are, by definition, difficult to predict and control.Footnote 139 Several limitations warrant explicit acknowledgment. First, complexity science excels at identifying patterns and dynamics but offers limited guidance on normative choices. When feedback loops amplify market concentration, complexity science can explain the mechanism but cannot determine whether the resulting market structure is socially desirable. These value judgements remain irreducibly political and require democratic deliberation rather than scientific resolution. Second, if complexity merely becomes another way of saying “it is complicated,” it adds nothing to regulatory design. Third, and this bears repeating, the dynamics of complex systems, shaped by emergence, scaling effects, chaos and feedback loops, can disorient regulated entities if the legal order seeks to emulate the restless dynamics of living systems. Mechanisms that track system dynamics can produce a landscape where regulatory requirements are in constant flux, which would undermine business planning and chill innovation. The cure for regulatory inadaptability must not become regulatory volatility.

Paradoxically, these very limitations are what make complexity science worth the trouble. Complexity starts from an acknowledgment of irreducible uncertainty and seeks to design governance frameworks that remain effective under such conditions. In an era where technological change routinely outpaces regulatory response, where digital markets exhibit increasing returns and winner-take-all dynamics, where biotechnology yields unforeseen breakthroughs, and where some artificial intelligence systems evolve beyond their creators’ foresight, regulatory frameworks grounded in assumptions of predictability (i.e., future-proof systems premised on the enduring validity of their methods and objectives) are not inadequate, they are dangerous. Complexity science offers a more honest and ultimately more effective foundation for adaptive regulation in genuinely complex domains.Footnote 140 My task, now, is to provide a comprehensive framework that translates complexity insights into concrete, predictable regulatory design principles. That is the purpose of what follows next.

2. Principles for future-responsive regulation

European courts have long recognised that regulation must stay proportionate to its aims and that lighter alternatives take precedence whenever they can achieve the same outcome.Footnote 141 In domains marked by uncertainty, the case law has increasingly emphasised that proportionality review cannot be confined to the evidence available at the moment of adoption; courts assess whether the measure remains justified in light of current and evolving information.Footnote 142 What emerges is a process-oriented form of proportionality review. Taken together, these strands support a simple proposition. In fast-changing digital markets, designing regulation as adaptive is a structural way of complying with proportionality throughout the regulatory life cycle, not only at the moment of enactment.

Building on complexity insights, I articulate concrete principles for designing regulatory instruments capable of orderly transformation.Footnote 143 These principles address the central challenge identified in Title II, i.e., how to structure adaptation mechanisms that preserve legal certainty while enabling necessary evolution in rapidly changing digital markets. They also echo the sequencing of this paper’s empirical inquiry.

a. Blueprint for adaptive regulation

i. Principle #1: modular regulatory architecture

The first principle concerns the scope of adaptation. Complex adaptive systems maintain resilience through modular components that can evolve independently without destabilising the whole.Footnote 144 Transposing this insight to regulation, future-responsive frameworks would separate essential from non-essential elements, and would thus create distinct layers subject to different revision mechanisms.

Each legislative act should explicitly identify which provisions are essential and which are not. This distinction clarifies the Commission’s scope for adaptation through delegated and implementing acts; it prevents legal uncertainty and modifications that exceed the legislative framework.Footnote 145

Essential elements (e.g., core definitions, fundamental rights protections…) remain stable under this principle, as they are modifiable only through legislative procedures. Non-essential elements (e.g., technical standards, operational procedures…) are delegated to faster revision mechanisms such as implementing acts or technical specifications.Footnote 146 This layered approach enables incremental adaptation without wholesale reform, and allows regulation to respond to technological change while maintaining legal predictability.Footnote 147 This mirrors how biological systems achieve robustness through hierarchical organization, where fast-changing peripheral elements adapt while core functions remain stable.Footnote 148

The AI Act’s architecture already gestures toward such modularity, though incompletely. It delegates substantial authority to amend annexes and procedural regimes, but core definitional frameworks remain frozen. A modular approach permits controlled evolution even of fundamental concepts, subject to heightened procedural safeguards rather than complete insulation from change.

With the scope of adaptability set, the question shifts from what can change to how we know when change is needed. This is where Principle #2 enters. If modularity provides the skeleton of an adaptive regime, sensing is its nervous system.

ii. Principle #2: distributed sensing

Adaptive systems require continuous feedback from their environment to remain responsive.Footnote 149 Current EU digital regulation suffers from what might be termed “sensory poverty”: a reliance on periodic reviews and prose-based reporting that capture only fractional glimpses of market dynamics. Future-responsive regulation demands comprehensive monitoring infrastructure across three dimensions.Footnote 150

First, mandatory monitoring obligations require regulators to collect implementation data systematically across all major regulatory functions, not merely in response to specific concerns.Footnote 151 Second, machine-readable formats must structure reporting so regulators can process data automatically.Footnote 152 Third, real-time data pipelines must be established where feasible, particularly in critical, fast-moving markets, where delayed feedback risks regulatory obsolescence.Footnote 153

The Digital Services Act’s requirement for API-based monitoring of very large online platforms provides a (partial) template. Machine-readable reporting should become the default across all digital regulations, as it turns oversight from manual assessment into computational pattern recognition. This shift from episodic to continuous monitoring will also enable detection of phase transitions (moments when events trigger systemic shifts) before they cascade through the regulatory domain.Footnote 154 Fundamentally, machine-readable reporting serves a crucial proportionality function. Requiring firms to generate data that regulators cannot meaningfully process drifts into a disproportionate administrative burden, detached from regulatory purpose. The inverse is also true. Once authorities build the infrastructure to exploit standardised data, compliance costs convert into institutional learning. Proportionality is then restored; means and ends realign. In this light, machine-readable data analysis is not a procedural embellishment but a constitutional baseline.
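To fix ideas, the following sketch illustrates what a standardised, machine-readable compliance report might look like and how a regulator could ingest it automatically. It is a minimal sketch under stated assumptions: the ComplianceReport structure, its field names and the figures are hypothetical and are not drawn from any EU Digital Act.

```python
# Illustrative sketch only: a hypothetical machine-readable compliance report.
# Field names and figures are assumptions, not requirements of any EU Digital Act.
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class ComplianceReport:
    regulation: str          # e.g., "DSA", "DMA", "AI Act"
    provider_id: str         # identifier of the reporting entity
    period_start: date
    period_end: date
    incidents_reported: int  # harm or security incidents during the period
    compliance_cost_eur: float
    active_users_eu: int

def to_machine_readable(report: ComplianceReport) -> str:
    """Serialise the report so regulators can process it automatically."""
    payload = asdict(report)
    payload["period_start"] = report.period_start.isoformat()
    payload["period_end"] = report.period_end.isoformat()
    return json.dumps(payload)

# A regulator receiving thousands of such reports can parse them
# programmatically, turning oversight into computational pattern recognition.
example = ComplianceReport("DSA", "platform-042", date(2025, 1, 1),
                           date(2025, 3, 31), 12, 1_250_000.0, 48_000_000)
print(to_machine_readable(example))
```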

Now, one word of caution. These monitoring capacities, however advanced, cannot overcome a structural limitation that confronts any monitoring system, i.e., the difficulty of detecting foregone economic activity. Regulatory frameworks can track compliance costs, identified harms, etc. They cannot observe counterfactual innovation that regulation prevented from emerging. Companies never founded, products never developed, and business models never attempted leave no trace for regulators to monitor.Footnote 155

Addressing this limitation requires indirect measurement strategies. Comparative analysis across jurisdictions with varying regulatory stringency can illuminate innovation differentials attributable to regulatory choices. Surveys of entrepreneurs and investors about abandoned projects due to regulatory barriers provide signals about foregone opportunities, though such evidence remains inherently incomplete. Regulatory sandbox programs that temporarily exempt participants from specified requirements generate controlled experiments revealing innovation that baseline rules might suppress. These approaches mitigate rather than solve the counterfactual problem. The asymmetry between observable compliance costs and invisible opportunity costs persists. It biases adaptive systems toward detecting regulatory excess while missing regulatory harm in the form of stifled innovation.

Yet imperfect monitoring proves superior to its absence. Title II documented that most EU Digital Acts lack systematic data collection obligations, machine-readable reporting requirements, or real-time monitoring infrastructure. The regulatory frameworks analysed operate with minimal observational capacity of any kind. Distributed sensing mechanisms, even with their structural limitations in detecting foregone opportunities, represent a substantial improvement over regulatory systems that proceed without empirical feedback. The choice is not between perfect observation and imperfect observation, but between structured monitoring that captures some regulatory effects and ad hoc assessment that captures too few of them.

With the sensors switched on, the question shifts from detecting change to acting on it. This is where Principle #3 enters. If sensing is the nervous system of an adaptive regime, triggering mechanisms are its reflexes.

iii. Principle #3: pluralistic triggering mechanisms

The Commission’s current monopoly on adaptation initiatives creates a single point of failure in regulatory evolution.Footnote 156 Complex systems achieve resilience through redundancy: multiple pathways to the same function prevent a single point of failure from collapsing the entire system.Footnote 157 Future-responsive regulation should therefore establish multiple channels for triggering reviews. Institutional learning is indeed stimulus-dependent. Different triggers generate different cognitive responses, whether epistemic, bargaining-based, hierarchical or reflexive.Footnote 158 These responses are not functionally equivalent. Each embeds its own distortions. A single triggering mechanism therefore hardwires one mode of learning into the system and suppresses alternative adaptive pathways.

Under this principle, Member States, national agencies and sectoral bodies possess formal powers to initiate regulatory reviews based on observed implementation challenges.Footnote 159 This does not require that multiple actors reach consensus for every adaptation, but rather that each can independently trigger review processes when specific indicators are met. The AI Act’s multi-tiered institutional architecture, with its AI Board empowered to propose amendments, offers a partial model, though one still requiring expansion to include national-level triggers. For example, a qualified minority of Member States representing at least 35% of the EU population could petition for review when they detect implementation failures. The mechanism would set a clear threshold and create a disciplined channel through which signals from the periphery travel back to the centre.
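A minimal sketch of such a petition threshold follows. The population figures are placeholders (official Eurostat data would be used in practice), and the 35% threshold is the illustrative value discussed above.

```python
# Hypothetical sketch: does a review petition meet the illustrative threshold
# (Member States jointly representing at least 35% of the EU population)?
# Population figures are placeholders, not official statistics.
EU_TOTAL_POPULATION = 449_000_000  # assumed placeholder figure

def petition_triggers_review(petitioning_states: dict[str, int],
                             population_share_threshold: float = 0.35) -> bool:
    """Return True if the petitioning Member States jointly represent
    at least the required share of the EU population."""
    joint_population = sum(petitioning_states.values())
    return joint_population / EU_TOTAL_POPULATION >= population_share_threshold

# Example: three petitioning states and their (placeholder) populations.
petition = {"DE": 84_000_000, "FR": 68_000_000, "IT": 59_000_000}
print(petition_triggers_review(petition))  # True: roughly 47% of the EU population
```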

Beyond diversifying the actors who can initiate change, triggering mechanisms must operate through three distinct modalities. First, indicator-based triggers should be anchored in predefined metrics rather than discretionary assessments.Footnote 160 Quantifiable thresholds (e.g., market concentration ratios exceeding specified levels, compliance costs surpassing projected baselines, harm incidents reaching critical frequencies, or innovation rates declining below historical averages) must automatically mandate review. The Chips Act’s Annex II performance metrics exemplify this approach, though their application remains limited to reporting rather than triggering adaptation.Footnote 161 Similarly, the AI Act would gain from a rule that obliges high-risk system providers to keep API endpoints open to national enforcers so that performance metrics, incident rates and distributional results become directly observable. The DMA would log innovation indicators to detect when the flow of new entrants in a digital sector deviates from its historical range. The EU Digital Acts would also record compliance costs borne by SMEs and ease the burden once these costs climb above a defined share of revenue.

Second, systematic review cycles must embed mandatory policy iteration protocols that go beyond mere reporting.Footnote 162 Each regulation must incorporate tiered review periods, perhaps annual for technical annexes, triennial for operational provisions and quinquennial for core frameworks, that require not just assessment but concrete recommendations for change.Footnote 163 These reviews are structured as policy iteration exercises, where regulators must either propose specific adaptations based on accumulated evidence or provide reasoned justification for maintaining the status quo.

Third, emergency triggers must enable rapid response to unforeseen developments without waiting for scheduled reviews or indicator thresholds.Footnote 164 When new technologies fundamentally alter market dynamics or when regulatory failures create immediate risks, streamlined procedures must allow expedited adaptation while maintaining appropriate safeguards. This calls for exogenous event clauses that mandate reassessment following pre-defined emergency situations.
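To make the first modality concrete, the sketch below pairs predefined metrics with thresholds whose breach would automatically mandate review. The metric names and threshold values are illustrative assumptions, not figures taken from the EU Digital Acts.

```python
# Illustrative sketch of indicator-based triggers: predefined metrics whose
# breach automatically mandates a review. Metrics and thresholds are assumed.
from dataclasses import dataclass

@dataclass
class Trigger:
    metric: str        # e.g., "market_concentration_hhi"
    threshold: float
    direction: str     # "above" or "below"

TRIGGERS = [
    Trigger("market_concentration_hhi", 2500, "above"),    # concentration ratio
    Trigger("sme_compliance_cost_share", 0.02, "above"),    # share of revenue
    Trigger("harm_incidents_per_quarter", 50, "above"),
    Trigger("new_entrants_per_year", 10, "below"),          # innovation indicator
]

def mandated_reviews(observed: dict[str, float]) -> list[str]:
    """Return the metrics whose observed values breach their thresholds,
    each of which would automatically mandate a review."""
    breached = []
    for t in TRIGGERS:
        value = observed.get(t.metric)
        if value is None:
            continue
        if (t.direction == "above" and value > t.threshold) or \
           (t.direction == "below" and value < t.threshold):
            breached.append(t.metric)
    return breached

print(mandated_reviews({"market_concentration_hhi": 3100,
                        "new_entrants_per_year": 6}))
# ['market_concentration_hhi', 'new_entrants_per_year']
```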

This multi-modal, multi-actor approach ensures that adaptation can occur through routine learning, responsive adjustment, or crisis intervention as circumstances demand. It creates what complexity scientists term “distributed control,” where no single agent commands the system, yet coherent behaviour emerges from the interaction of multiple decision-makers responding to local information.Footnote 165 It transforms the regulatory system from one dependent on a single decision-maker into a distributed network capable of detecting and responding to signals from multiple sources. It prevents both regulatory sclerosis and capture. Importantly, the transparency of these trigger mechanisms resolves the apparent tension between adaptiveness and legal certainty. The EU principle of legal certainty does not demand regulatory stasis. As the Court of Justice has consistently held, it requires that “EU rules enable those concerned to know precisely the extent of the obligations imposed on them.”Footnote 166 Transparent triggering mechanisms satisfy this requirement not by freezing rules in time, but by making the conditions and processes of change themselves predictable. Regulated entities can anticipate when reviews will occur, what indicators will prompt adaptation, and through what procedures changes will be implemented.Footnote 167

Once a trigger is pulled, the question shifts from whether to adapt to how adaptation is remembered and refined over time. This is where Principle #4 enters. If triggers are the reflexes of an adaptive regime, institutional memory is its long-term intelligence.

iv. Principle #4: networked institutional memory

Adaptation without learning merely replaces one static configuration with another.Footnote 168 Future-responsive regulation demands institutional structures capable of recursive evaluation within individual regulations (a hallmark of complex adaptive systems, which modify their behaviour based on accumulated experience, creating what complexity scientists call “path-dependent” institutional learningFootnote 169) and of coordination across the regulatory corpus. This requires a two-tier institutional architecture that addresses both vertical learning and horizontal coherence.

At the first tier, dedicated learning bodies must bridge operational experience with policy evolution within each regulatory domain. Following the AI Act model, each major digital regulation must establish an expert body with explicit mandates to synthesise implementation data and maintain institutional memory. These bodies serve as the institutional memory of their respective regulations. They document failures and successes to inform future adaptations. Take the DMA. A “Digital Markets Implementation Board” would consolidate quarterly compliance reports, maintain public machine-readable logs of all enforcement actions including violation patterns and sanctions, conduct annual evaluations of whether gatekeeper obligations achieve intended effects, and issue binding recommendations for regulatory adaptations when evidence warrants.
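As an illustration of how such a board could exploit machine-readable enforcement logs, the sketch below counts breaches per obligation to surface recurring violation patterns. The log format and field names are hypothetical, not drawn from the DMA.

```python
# Hypothetical sketch: consolidating machine-readable enforcement logs to
# surface recurring violation patterns, as an implementation board might do.
from collections import Counter

enforcement_log = [  # each entry describes one enforcement action (illustrative)
    {"gatekeeper": "A", "obligation": "self-preferencing ban", "sanction_eur": 5_000_000},
    {"gatekeeper": "B", "obligation": "data portability", "sanction_eur": 2_000_000},
    {"gatekeeper": "A", "obligation": "self-preferencing ban", "sanction_eur": 8_000_000},
]

def violation_patterns(log: list[dict]) -> Counter:
    """Count breaches per obligation, revealing where rules bite hardest and
    where adaptation (tightening or relaxation) may be warranted."""
    return Counter(entry["obligation"] for entry in log)

print(violation_patterns(enforcement_log))
# Counter({'self-preferencing ban': 2, 'data portability': 1})
```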

At the second tier, an inter-regulatory coordination body must ensure systemic coherence as individual regulations evolve.Footnote 170 Digital markets operate as interconnected systems where changes in one domain cascade across others. They exhibit “tight coupling,” where perturbations in one component rapidly propagate through interdependencies.Footnote 171 When the DMA adapts its gatekeeper obligations, for instance, corresponding adjustments in the DSA’s platform responsibilities might be necessary to maintain alignment. Yet current EU Digital Acts evolve in isolation, risking fragmentation and conflicting requirements. A “European Coordination Board” bringing together the AI Office, the European Board for Digital Services, the DORA Oversight Forum, and other expert bodies would track spillovers across regimes. It would identify friction points early and ensure that knowledge moves across Acts. Every three years, the Board would run a meta-review and publish a “state of digital regulation” report assessing whether the EU Digital Acts are evolving on a coherent path or starting to pull apart.

Under this principle, this coordination body does not possess direct regulatory powers but monitors spillover effects and proposes harmonising adjustments. It serves as the connective tissue between domain-specific learning bodies, thus facilitating knowledge transfer and preventing regulatory silos. When patterns observed in one domain have implications for others, such as emerging business models that cut across regulatory boundaries, this body ensures coordinated response rather than fragmented reaction.

Critically, both tiers must be empowered to convene joint reviews when cross-regulatory issues arise. They must also document every regulatory adaptation in public, machine-readable logs that record triggering conditions, stakeholder inputs, impact assessments, and decisional rationales.Footnote 172 This record builds accountability and learning. It allows pattern recognition across cycles of adaptation and secures legal certainty through transparency.
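A minimal sketch of what a single entry in such an adaptation log might contain follows. The fields mirror the elements listed above; their names, and the example values, are hypothetical.

```python
# Illustrative sketch of one public, machine-readable adaptation log entry.
# Field names and values are assumptions for illustration only.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AdaptationRecord:
    regulation: str                  # e.g., "AI Act"
    provision_amended: str           # e.g., "Annex III"
    trigger: str                     # indicator breach, scheduled review, or emergency
    stakeholder_inputs: list[str] = field(default_factory=list)
    impact_assessment_ref: str = ""
    rationale: str = ""

record = AdaptationRecord(
    regulation="AI Act",
    provision_amended="Annex III",
    trigger="scheduled annual review of technical annexes",
    stakeholder_inputs=["industry consultation", "national regulator reports"],
    impact_assessment_ref="IA-2027-014",
    rationale="incident data showed the listed use case no longer poses high risk",
)
print(json.dumps(asdict(record), indent=2))
```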

This principle gains salience as EU ex ante instruments move deeper into terrain that used to fall under ex post enforcement. Under Article 102 TFEU, for example, institutional memory grows case by case. Judgments generate evidence about market effects, remedies, and enforcement errors. Ex ante regimes like the Digital Markets Act short-circuit this learning cycle. By intervening before harm materialises, and by diverting conduct away from traditional enforcement, they produce fewer decisions and a thinner factual record. The result is a structural memory deficit. Ex ante rules suppress the jurisprudence that would normally replenish the historical record of regulatory performance. Systematic documentation and evaluation are essential substitutes for the learning that ex post enforcement once generated.

b. Implementation

These principles collectively enable “adaptive capacity,” i.e., the ability of a system to maintain core functions while evolving in response to changing environments.Footnote 173 By embedding variation and retention mechanisms directly into regulatory architecture, future-responsive regulation transforms a static rule system into a complex adaptive system capable of co-evolving with the realities it governs.Footnote 174

These principles do not constitute a complete blueprint but rather foundational elements for constructing future-responsive regulation. They recognise that in complex, rapidly evolving markets, the aspiration for permanent rules must yield to modest ambitions of orderly transformation. Yet their implementation confronts objections at two levels. One is conceptual, about the appropriateness of (even adaptive) regulation in domains of genuine uncertainty. The other is operational; it pertains to compliance costs for regulated entities. Both warrant examination.

The case for adaptive regulation presupposes that regulation itself proves necessary. This assumption requires interrogation.Footnote 175 In domains characterised by rapid technological change and genuine uncertainty about welfare effects, premature regulation, however adaptive in design, risks stifling beneficial innovation whose contours cannot yet be anticipated. The AI Act illustrates this tension. Adopted in 2024, the Regulation codified regulatory responses to general-purpose AI models barely two years after their commercial emergence. A public choice perspective suggests caution about framing adaptive regulation as universally superior to regulatory restraint. That said, adaptive mechanisms are specifically designed to compensate for foundational design choices made under conditions of limited empirical evidence. This capacity for correction distinguishes adaptive from static regulation and, in a sense, limits the negative effects of (too) early regulation.

As applied to the EU Digital Acts, this article takes their existence as a given. This methodological choice reflects political realism rather than normative agnosticism. European policymakers have been regulating digital markets and show no sign of slowing down. Given that regulation will occur, my analysis focuses on examining how regulatory frameworks can be designed to manage this existing reality. The choice is between adaptive regulation and static regulation, between frameworks capable of learning from implementation experience and frameworks that ossify around initial design choices made under conditions of limited information. Where regulators exercise forbearance, the principles articulated here become unnecessary. Where they do not, these principles offer guidance for optimising regulation.

The second objection concerns operational feasibility, and in particular the compliance burden that adaptive regulation places on small firms. The argument holds that adaptive regulation requires companies to monitor legal changes and recalibrate internal compliance systems. These administrative tasks impose greater relative costs on entities lacking dedicated legal departments or sophisticated compliance infrastructure.

The objection carries weight but founders on closer inspection as it misconstrues the relevant comparison. Static regulation that ossifies into irrelevance imposes its own costs. Firms operating under obsolete rules face uncertainty about enforcement priorities, competitive disadvantages relative to non-compliant actors, and potential liability when regulators belatedly update frameworks through enforcement rather than transparent revision. A pharmaceutical company adhering to outdated safety protocols, for instance, may find itself simultaneously over-investing in unnecessary procedures and under-investing in newly understood risks. The costs of regulatory drift, though less visible than compliance expenditures, are real. The choice is not between adaptive regulation and costless stasis. It is between the costs of orderly revision and the costs of accumulated obsolescence.

Concerns about disproportionate impact on small firms can also be addressed through regulatory design rather than abandoned as insurmountable. Exemptions for enterprises below specified size or user thresholds reduce the burden without sacrificing adaptive capacity. The AI Act establishes reduced documentation requirements for small and medium enterprises and provides regulatory sandbox access to facilitate compliance. Similar provisions appear across the EU Digital Acts corpus. Simplified reporting templates, extended transition periods and technical assistance programs targeted at smaller entities can further mitigate adjustment costs. These mechanisms require conscious incorporation into adaptive frameworks, but they pose no conceptual obstacle.

A final consideration undermines the objection entirely. Static regulation tends toward irreversible stringency. Once enacted, relaxation of the provisions (if proven necessary) requires the same legislative effort as initial adoption, and political economy dynamics favour retention over repeal. Adaptive regulation, by contrast, permits adjustments in two directions. Revision mechanisms tighten obligations where evidence reveals harm. They also relax requirements where implementation demonstrates rules prove unnecessary or counterproductive. This benefit disproportionately favours small firms, precisely because static compliance costs fall heaviest on them.

All in all, these objections circumscribe but do not invalidate the case for adaptive regulation. A more fundamental objection remains: implementing the principles articulated here requires not just new regulatory instruments but a wholesale shift in regulatory philosophy, from physics to gardening, from engineering static outcomes to cultivating dynamic processes.

IV. Conclusion

This paper set out to define adaptive regulation, to explain why it matters, to measure it across eight recent EU Digital Acts, and to propose design principles to make it work in practice. The empirical analysis revealed that adaptive mechanisms are present, but they are incomplete and still tethered to neoclassical assumptions. Building on complexity science, I distilled four principles for future-responsive regulation that, together, form an integrated system. Modular architecture defines what can change, distributed sensing detects when change is needed, pluralistic triggering determines who can initiate it, and networked institutional memory ensures that each adaptation compounds rather than resets.

All of this points to one conclusion: adaptive regulation is not an academic indulgence. It is the only way to steer between two cliffs. On one side lies static law, rules that age badly, drift out of sync with their subject, and end up ineffective, counterproductive, or even dangerous. On the other side lies the temptation to forgo regulation altogether, on the theory that if static law is harmful, no law must be better. That is the logic behind the calls in the United States to keep AI regulation off the books for a decade.Footnote 176 Both extremes are bad bets. Static law locks you into Monday’s answers for Tuesday’s questions. No law leaves you with no answers at all. Adaptive regulation keeps the middle ground open. It builds the capacity to intervene if and when necessary, because Tuesdays happen.

Supplementary material

The supplementary material for this article can be found at https://doi.org/10.1017/err.2026.10087

Funding Statement

Open access funding provided by Vrije Universiteit Amsterdam.

References

1 S Colbert, “Stephen Colbert at the 2006 White House Correspondents’ Association Dinner [Video]” (2006) YouTube. https://www.youtube.com/watch?v=IJ-a2KeyCAY.

2 S Ranchordás and M Van’t Schip, “Future-Proofing Legislation for the Digital Age” in Time, Law, and Change: An Interdisciplinary Study (2020) (define future-proof regulation as regulation that is “resilient to novel events”). PI Colomo, “Future-Proof Regulation against the Test of Time: The Evolution of European Telecommunications Regulation” (2022) 42 (4) Oxford Journal of Legal Studies (exploring the history of future-proof regulation in the EU).

3 A Ojanen, “Technology Neutrality as a Way to Future-Proof Regulation: The Case of the Artificial Intelligence Act” (2025) European Journal of Risk Regulation 1–16; BJ Koops, Should ICT Regulation Be Technology-Neutral? (2006).

4 NN Taleb, Antifragile: Things That Gain from Disorder (Random House 2012) 24 (arguing that principle-based regulation proves more robust than rule-based regulation, which is fragile).

5 European Commission, Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) COM(2021) 206 final https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206 [https://perma.cc/ZPZ2-AMP9].

6 European Union, “Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending certain Union legislative acts (Artificial Intelligence Act [AI Act])” (2024) Official Journal of the European Union, L 2024/1689, Chapter V (General-purpose AI models) https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689 [https://perma.cc/UV5F-J4TN].

7 Y Yousefi, M Billi and A Rotolo, “Agentic AI: An EU AI Act Paradigm Shift?” (2025) Available at SSRN 5731424 (agentic AI compounds the instability of intended-purpose classification. When functional autonomy permits systems to exceed their predefined scope, the coherence of the AI Act’s risk architecture may again be called into question).

8 Future-proof regulation typically relies on “open-ended norms, goal-based regulation, framework legislation and principle-based rules”, see S Ranchordás and M Van’t Schip, “Future-Proofing Legislation for the Digital Age” in Time, Law, and Change: An Interdisciplinary Study (2020); also A Chander, “Future-Proofing Law” (2017) UC Davis Law Review (explaining how future-proofing law often means “encompass[ing] all future technological developments peremptorily”).

9 WW Powell and K Snellman, “The Knowledge Economy” in Annual Review of Sociology (Vol. 30) (2004); C Perez, “The Double Bubble at the Turn of the Century: Technological Roots and Structural Implications” (2009) 33 (4) Cambridge Journal of Economics 779–805; DW Allen, C Berg and J Potts, Institutional Acceleration: The Consequences of Technological Change in a Digital Economy (Cambridge University Press, 2025); D Colander, RPF Holt and JB Rosser, “The Changing Face of Mainstream Economics” in Review of Political Economy (Vol. 16, Issue 4) (2004).

10 In response to the increasing complexity of regulatory matters, the European Commission has engaged in even more “future-proofing”; see European Commission. REFIT – making EU law simpler, more efficient and future-proof. Retrieved August 15, 2025, from https://commission.europa.eu/law/law-making-process/evaluating-and-improving-existing-laws/refit-making-eu-law-simpler-more-efficient-and-future-proof_en [https://perma.cc/K4B4-8MY4]. The EU’s Better Regulation Toolbox includes future-proof as a core principle. Also, from 2021 to 2024, the Fit for Future Platform (a high-level expert group comprising stakeholders and representatives from all EU Member States) supported the European Commission for the purpose, see European Commission. Fit for Future Platform (F4F) 2021–2024. https://commission.europa.eu/law/law-making-process/evaluating-and-improving-existing-laws/refit-making-eu-law-simpler-more-efficient-and-future-proof/fit-future-platform-f4f-2021-2024_en [https://perma.cc/T4PJ-UQ2P].

11 While this paper focuses on making regulation future-responsive, others have explored how to make broader public management more responsive and dynamic, see UCL Institute for Innovation and Public Purpose. (2023). IIPP to establish Public Sector Capabilities Index. https://www.ucl.ac.uk/bartlett/news/2023/may/iipp-establish-public-sector-capabilities-index [https://perma.cc/R9C3-Z7AS].

12 Generally, future-proofing is “the process of anticipating the future”, see BD Rich, The Principles of Future-Proofing (2017) [Website]. Retrieved 15 August 2025, from https://principlesoffutureproofing.com/ [https://perma.cc/BJ66-6TVY]. Brian Rich, an architect, conceived of future-proofing as creating buildings capable of maintaining their structural integrity regardless of unanticipated future developments, see Kelley, P. (2014, November 4). ‘Future proofing’: Present protections against challenges to come. UW News. Retrieved 15 August 2025, from http://www.washington.edu/news/2014/11/04/future-proofing-present-protections-against-challenges-to-come/ [https://perma.cc/TYJ7-CPRX].

13 J Raitio, The Principle of Legal Certainty in EC Law (Vol. 64). (Springer Science & Business Media, 2003); E Paunio, “Beyond Predictability – Reflections on Legal Certainty and the Discourse Theory of Law in the EU Legal Order” (2009) 10 (11) German Law Journal.

14 As defined in 2.1.1.

15 European Union. (2022). Regulation (EU) 2022/868 of the European Parliament and of the Council of 30 May 2022 on European data governance and amending Regulation (EU) 2018/1724 (Data Governance Act [DGA]). Official Journal of the European Union, L 152, 1–44. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32022R0868 [https://perma.cc/J4SV-GXMR].

16 European Union. (2022). Regulation (EU) 2022/1925 of the European Parliament and of the Council of 14 September 2022 on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828 (Digital Markets Act [DMA]). Official Journal of the European Union, L 265, 1–66. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32022R1925 [https://perma.cc/Q56Q-NRGL].

17 European Union. (2022). Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and amending Directive 2000/31/EC (Digital Services Act [DSA]). Official Journal of the European Union, L 277, 1–102. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32022R2065 [https://perma.cc/Y4FA-LHNG].

18 European Union. (2022). Regulation (EU) 2022/2554 of the European Parliament and of the Council of 14 December 2022 on digital operational resilience for the financial sector and amending Regulations (EC) No 1060/2009, (EU) No 648/2012, (EU) No 600/2014, (EU) No 909/2014 and (EU) 2016/1011 (Digital Operational Resilience Act [DORA]). Official Journal of the European Union, L 333, 1–102. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32022R2554 [https://perma.cc/MY3X-RGWX].

19 European Union. (2023). Regulation (EU) 2023/1781 of the European Parliament and of the Council of 13 September 2023 establishing a framework of measures for strengthening Europe’s semiconductor ecosystem and amending Regulations (EU) 2021/694, (EU) 2021/241 and (EU) 2021/1058 (Chips Act). Official Journal of the European Union, L 229, 1–55. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32023R1781 [https://perma.cc/6NBQ-LGY7].

20 European Union. (2023). Regulation (EU) 2023/2854 of the European Parliament and of the Council of 13 December 2023 on harmonised rules on fair access to and use of data (Data Act). Official Journal of the European Union, L 2023/2854, 1–90. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32023R2854 [https://perma.cc/2FNZ-FMMT].

21 European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending certain Union legislative acts (Artificial Intelligence Act [AI Act]). Official Journal of the European Union, L 2024/1689, 1–251. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689 [https://perma.cc/UV5F-J4TN].

22 European Union. (2024). Regulation (EU) 2024/2847 of the European Parliament and of the Council of 23 October 2024 on horizontal cybersecurity requirements for products with digital elements and amending Regulation (EU) 2019/1020 (Cyber Resilience Act [CRA]). Official Journal of the European Union, L 2024/2847, 1–62. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R2847 [https://perma.cc/9MWA-5U32].

23 Delegated acts (European Union. (2012). Treaty on the Functioning of the European Union (Consolidated version), Art. 290, Oct. 26, 2012, 2012 O.J. (C 326) 171.) allow the Commission to supplement or amend non-essential elements of a legislative act. This includes technical updates or adjustments, but not the core (essential) parts of the legislation. Implementing acts (European Union. (2012). Treaty on the Functioning of the European Union (Consolidated version), Art. 291, Oct. 26, 2012, 2012 O.J. (C 326) 171.), by contrast, empower the Commission to set uniform conditions for implementing legally binding EU acts, but do not permit any amendments to the act itself, even to non-essential elements. They are typically used to detail how the law should be applied in practice.

24 DMA, 2022, Art. 26(1).

25 DSA, 2022, Art. 55.

26 DSA, 2022, Art. 35.

27 DSA, 2022, Art. 91.

28 DSA, 2022, Arts. 36(7) & 36(11).

29 DORA, 2022, Art. 22(2).

30 DORA, 2022, Art. 32(2).

31 DORA, 2022, Art. 44(2).

32 Chips Act, 2023, Art. 19(2).

33 Chips Act, 2023, Art. 19(3).

34 Chips Act, 2023, Arts. 20(1) & 20(7).

35 AI Act, 2024, Art. 5(7).

36 AI Act, 2024, Art. 57(16).

37 AI Act, 2024, Art. 67(10).

38 Chips Act, 2023, Art. 40.

39 Chips Act, 2023, Art. 70(1).

40 Data Act, 2023, Arts. 49(1) & 49(2).

41 DSA, 2022, Art. 91(1).

42 Ibid., Art. 91(2).

43 Ibid., Art. 91(3).

44 DMA, 2022, Art. 53(1).

45 Ibid., Art. 35(1).

46 AI Act, 2024, Art. 112(1).

47 Ibid., Arts. 112(1), 112(3), 112(5) & 112(13).

48 DMA, 2022, Arts. 12(1) & 12(3).

49 Ibid., Art. 46(1).

50 Data Act, 2023, Art. 33(11).

51 AI Act, 2024, Art. 60(1).

52 CRA, 2024, Art. 33(5).

53 DGA, 2022, Art. 11(9).

54 Ibid., Art. 25(4).

55 DGA, 2022, Art. 35.

56 Ibid., Art. 53.

57 Ibid., Art. 91.

58 DORA, 2022, Art. 58.

59 Chips Act, 2023, Art. 40.

60 Data Act, 2023, Art. 49.

61 DMA, 2022, Art. 19.

62 Data Act, 2023, Art. 49(1).

63 AI Act, 2024, Art. 112(10).

64 DSA, 2022, Art. 40(7).

65 DORA, 2022, Art. 19(4).

66 DGA, 2022, Art. 11(10).

67 Chips Act, 2023, Art. 12(5).

68 European Parliament, Council of the European Union, & European Commission. (2016). Interinstitutional Agreement on Better Law-Making (para. 28). Official Journal of the European Union, L 123, 1–14. https://data.europa.eu/eli/agree_interinstit/2016/512/oj [https://perma.cc/7DCN-PCTC].

69 Data Act, 2023, Art. 33(7), 35(6) & 36(8).

70 DGA, 2022, Art. 29(2c).

71 Chips Act, 2023, Art. 20(7).

72 AI Act, 2024, Arts. 15, 40, 56, & 62.

73 CRA, 2024, Art. 9(2).

74 CRA, 2024, Arts. 8(1), 8(2), 27(4) & 30(6).

75 Ibid., Art. 26(3).

76 DSA, 2022, Art. 39(1).

77 Ibid., Arts. 15(1), 24(5) & 42(1).

78 DGA, 2022, Art. 8(1).

79 Ibid., Art. 8(2).

80 Ibid., Art. 19(4).

81 Ibid., Art. 25(4).

82 DORA, 2022, Art. 11(11).

83 AI Act, 2024, Art. 47(1).

84 DGA, 2022, Art. 35.

85 DMA, 2022, Art. 53.

86 DSA, 2022, Art. 91.

87 Data Act, 2023, Art. 49.

88 Chips Act, 2023, Art. 12(4).

89 AI Act, 2024, Art. 112(11).

90 Ibid., Art. 112(10).

91 European Parliament, & Council of the European Union. (2011, February 16). Regulation (EU) No 182/2011 of the European Parliament and of the Council laying down the rules and general principles concerning mechanisms for control by Member States of the Commission’s exercise of implementing powers. Official Journal of the European Union, L 55, 13–18. http://data.europa.eu/eli/reg/2011/182/oj [https://perma.cc/L2BG-AFND].

92 AI Act, 2024, Art. 66(e.ii).

93 Ibid., Art. 66(e.vii).

94 Ibid., Art. 67(8).

95 Ibid., Art. 89(1).

96 Ibid., Art. 15(2).

97 Ibid., Art. 40(2).

98 Ibid., Art. 56(3).

99 Ibid., Art. 62(1).

100 DSA, 2022, Art. 39(1).

101 Ibid., Art. 24(5).

102 Ibid., Art. 42(1).

103 Chips Act, 2023, Annex II.

104 Ibid., Art. 12(4).

105 AI Act, 2024, Arts. 112(1)–112(7).

106 Ibid., Art. 112(4).

107 Ibid., Art. 112(10).

108 Ibid., Art. 112(11).

109 Chips Act, 2023, Art. 37.

110 CRA, 2024, Art. 61.

111 Chips Act, 2023, Art. 40.

112 CRA, 2024, Art. 70.

113 CRA, 2024, Arts. 7(3) & 8(2).

114 WB Arthur, Increasing Returns and Path Dependence in the Economy. (University of Michigan Press, 1994).

115 SJ Liebowitz and SE Margolis, “Path Dependence, Lock-In, and History” (1995) 11 (1) Journal of Law, Economics, and Organization.

116 JF Mercure, H Pollitt, AM Bassi, JE Viñuales and NR Edwards, “Modelling Complex Systems of Heterogeneous Agents to better Design Sustainability Transitions Policy” (2016) Global Environmental Change 37 (critiques equilibrium-based models and emphasises agent heterogeneity, feedback loops and path-dependence in policymaking); DT Hornstein, “Complexity Theory, Adaptation, and Administrative Law” (2005) 54 (4) Duke Law Journal (provides a synthesis of complexity features and their relevance to administrative law).

117 Generally, see the research coming out of the Santa Fe Institute. Also, M Mitchell, Complexity: A Guided Tour (Oxford University Press, 2009); MM Waldrop, Complexity: The Emerging Science at the Edge of Order and Chaos (Simon and Schuster, 1993).

118 JD Farmer, Making Sense of Chaos: A Better Economics for a Better World (Yale University Press, 2024).

119 JD Sterman, “Learning in and about Complex Systems” (1994) 10 (2–3) System Dynamics Review 291–330.

120 B Zimmerman, “A Complexity Science Primer” (2009) NAPCRG Resources (defines complex adaptive systems and their features such as emergence, self-organization, and feedback loops); MA Ahmad, G Baryannis and R Hill, “Defining Complex Adaptive Systems: An Algorithmic Approach” (2024) 12 (2) Systems (offers a concise definition and attributes of complex adaptive systems.)

121 See Santa Fe Institute. Home | Santa Fe Institute. https://www.santafe.edu/ [https://perma.cc/TAW7-23LM].

122 N Kolt, M Shur-Ofry and R Cohen, “Lessons from Complex Systems Science for AI Governance” (2025) 6 (8) Patterns 101341 (matches CAS characteristics to AI and digital governance; highlights the mismatch between AI unpredictability and static regulation).

123 T Schrepel, “The Evolution of Economies, Technologies, and other Institutions: Exploring W. Brian Arthur’s Insights” (2024) Journal of Institutional Economics 20.

124 CH Bennett, “How to Define Complexity in Physics, and Why” in Complexity, Entropy and the Physics of Information: The Proceedings of the Workshop on Complexity, Entropy, and the Physics of Information Held May-June, 1989 in Santa Fe, New Mexico (2018).

125 C Adami, C Ofria and TC Collier, “Evolution of Biological Complexity” (2000) 97 (9) Proceedings of the National Academy of Sciences of the United States of America.

126 WB Arthur, “Foundations of Complexity Economics” (2021) 3 (2) Nature Reviews Physics 136–45.

127 JH Holland, “Complex Adaptive Systems” (1992) 121 (1) Daedalus 17–30.

128 N Gilbert, Agent-Based Models (Sage Publications, 2019).

129 SP Borgatti, A Mehra, DJ Brass and G Labianca, “Network Analysis in the Social Sciences” in Science (Vol. 323, Issue 5916) (2009).

130 JW Weibull, Evolutionary Game Theory (MIT Press, 1997)

131 C Anderson and DW McShea, “Individual Versus Social Complexity, with Particular Reference to Ant Colonies” (2001) 76 (2) Biological reviews 211–37; M Moses, T Flanagan, K Letendre and M Fricke, “Ant Colonies as a Model of Human Computation” in Handbook of Human Computation (2013).

132 ATC Goh, “Back-Propagation Neural Networks for Modeling Complex Systems” (1995) 9 (3) Artificial Intelligence in Engineering.

133 J Doyne Farmer, M Gallegati, C Hommes, A Kirman, P Ormerod, S Cincotti, A Sanchez and D Helbing, “A Complex Systems Approach to Constructing Better Models for Managing Financial Markets and the Economy” (2012) 214 (1) European Physical Journal: Special Topics.

134 JB Ruhl and DM Katz, “Measuring, Monitoring, and Managing Legal Complexity” in Iowa Law Review (Vol. 101, Issue 1) (2015) (outlines the application of complexity science to legal systems, including measurement and management tools); P Vivo, D Katz and JB Ruhl, “CompLex: Legal Systems through the Lens of Complexity Science” (2024) Artificial Intelligence and Law. Advance online publication (applies complexity science concepts to model legal systems). Also conceiving the law as a complex adaptive system, see S Lierman, “Law as a Complex Adaptive System: The Importance of Convergence in a Multi-Layered Legal Order” (2014) 21 (4) Maastricht Journal of European and Comparative Law; SP Kurdyumov, “Evolution and Self-Organization Laws in Complex Systems” (1990) 134 Advances in Theoretical Physics; J Ruhl, “Law’s Complexity: Primer” (2007) 24 (4) Georgia State University Law Review; D Bourcier and P Mazzega, “Toward Measures of Complexity in Legal Systems” (2007) Proceedings of the International Conference on Artificial Intelligence and Law.

135 JB Ruhl and J Salzman, “Climate Change, Dead Zones, and Massive Problems in the Administrative State: A Guide for Whittling Away” (2010) 98 (1) California Law Review; CR Allen, JJ Fontaine, KL Pope and AS Garmestani, “Adaptive Management for a Turbulent Future” (2011) 92 (5) Journal of Environmental Management.

136 D Awrey, “Complexity, Innovation, and the Regulation of Modern Financial Markets” (2012) 2 (1) Harvard Business Law Review; K Pistor, “A Legal Theory of Finance” (2013) 41 (2) Journal of Comparative Economics.

137 DT Hornstein, “Complexity Theory, Adaptation, and Administrative Law” (2005) 54 (4) Duke Law Journal.

138 WE Wagner, “Administrative Law, Filter Failure, and Information Capture” (2010) 59 (7) Duke Law Journal.

139 J Braithwaite, K Churruca, JC Long, LA Ellis and J Herkes, “When Complexity Science Meets Implementation Science: A Theoretical and Empirical Analysis of Systems Change” (2018) 16 (1) BMC Medicine (emphasises unpredictability in complex adaptive systems and discusses practical challenges).

140 B Cosens, JB Ruhl, N Soininen, L Gunderson, A Belinskij, T Blenckner, AE Camacho, BC Chaffin, RK Craig, H Doremus, R Glicksman, AS Heiskanen, R Larson and J Similä, “Governing Complexity: Integrating Science, Governance, and Law to Manage Accelerating Change in the Globalized Commons” in Proceedings of the National Academy of Sciences of the United States of America (Vol. 118, Issue 36) (2021) (links complexity science insights to governance and legal frameworks, especially in managing evolving systems).

141 See Court of Justice of the European Union. (1990). Fedesa and Others, Case C-331/88, para. 13; Court of Justice of the European Union. (2016). Philip Morris Brands and Others, Case C-547/14, para. 165.

142 Court of Justice of the European Union. (2015). Scotch Whisky Association and Others, Case C-333/14, para. 65 (“the review of proportionality of a national measure, such as that at issue in the main proceedings, is not to be confined to examining only information, evidence or other material available to the national legislature when it adopted that measure.”).

143 The adaptive character of regulation does not preclude the anticipation of future problems. Some problems warrant anticipatory intervention rather than ex post correction. The point is that anticipation alone is insufficient. Once a forward-looking rule is enacted, it must be continuously monitored to assess whether it is effective, what unintended effects it generates, and how it should be adapted over time. The principles developed in Section III.2 aim to achieve this objective.

144 HA Simon, “The Architecture of Complexity” (1962) 106 (6) Proceedings of the American Philosophical Society 467–82 (foundational case that hierarchical, modular systems are more robust and evolvable); CY Baldwin and KB Clark, Design Rules, Volume 1: The Power of Modularity (MIT Press, 2000). (shows how modularity enables incremental change without destabilising the whole); H Kitano, “Biological Robustness” (2004) 5 Nature Reviews Genetics 826–37 (explains how modularity and redundancy underpin robustness in complex systems); J Clune, JB Mouret and H Lipson, “The Evolutionary Origins of Modularity” (2013) 280 (1755) Proceedings of the Royal Society B: Biological Sciences (contends that modularity leads to faster evolvability in complex systems).

145 HCH Hofmann, F Coman-Kund and Z Xhaferri, “Assessing the Post-Lisbon European Union’s System of Delegated Powers at Fifteen” (2024) European Journal of Risk Regulation 3 (discussing the lack of a clear distinction between delegated and implementing acts).

146 P Craig, EU Administrative Law (3rd ed.) (Oxford University Press, 2021) (details the content of delegated acts in EU law); M Chamon, EU Agencies: Legal and Political Limits to the Transformation of the EU Administration (Oxford University Press, 2016) (on delegation, implementing acts, and scope of executive adaptation).

147 MC Dorf and JL Cohen, “The Domain of Reflexive Law” (2003) 103 (2) Columbia Law Review (describes “reflexive law” as standards updated continuously with bottom-up data, useful to justify separable, updateable layers).

148 HA Simon, “The Architecture of Complexity” (1962) 106 (6) Proceedings of the American Philosophical Society 467–482; GAJM Jagers Op Akkerhuis, “Analysing Hierarchy in the Organization of Biological and Physical Systems” in Biological Reviews (Vol. 83, Issue 1) (2008).

149 K Yeung, “Algorithmic Regulation: A Critical Interrogation” (2018) 12 (4) Regulation and Governance. (defines data-driven, often real-time monitoring and its governance implications).

150 LS Bennear and JB Wiener (2019, February 12). Adaptive regulation: Instrument choice for policy learning over time (Draft working paper) (on the need for collecting real time data to make regulation adaptive).

151 C Coglianese, J Nash and T Olmstead, “Performance-Based Regulation: Prospects and Limitations in Health, Safety, and Environmental Protection” (2003) 55 (4) Administrative Law Review 705–29 (exploring why indicator-based and threshold triggers can work, and their limits).

152 M Janssen, Y Charalabidis and A Zuiderwijk, “Benefits, Adoption Barriers and Myths of Open Data and Open Government” (2012) 29 (4) Information Systems Management. (on why machine-readable, API-friendly formats matter for oversight and reuse); A Fung, M Graham and D Weil, Full Disclosure: The Perils and Promise of Transparency (Cambridge University Press, 2007) (on how to design effective transparency that improves accountability and learning).

153 HCH Hofmann, DA Zetzsche and F Pflücke, “The Changing Nature of “Regulation by Information”: Towards Real-Time Regulation?” (2022) 28 (4–6) European Law Journal 179 (assessing current automated reporting practices in the financial sector).

154 M Scheffer, J Bascompte, WA Brock, V Brovkin, SR Carpenter, V Dakos, H Held, EH van Nes, M Rietkerk and G Sugihara, “Early-Warning Signals for Critical Transitions” in Nature (Vol. 461, Issue 7260) (2009). (empirical basis for monitoring to detect phase transitions and tipping points).

155 F Bastiat, “What Is Seen and What Is Not Seen” in Selected Essays on Political Economy (G B de Huszar, Ed.; S Cain, Trans.; pp. 1–50) (1964). D Van Nostrand Company (originally published 1850). Bastiat observed that the effects of policy which “are seen” receive disproportionate attention compared to those which are “not seen.”

156 E Ostrom, “Polycentric Systems for Coping with Collective Action and Global Environmental Change” (2010) 20 (4) Global Environmental Change. (polycentric, redundant triggers improve resilience and learning); C Folke, T Hahn, P Olsson and J Norberg, “Adaptive Governance of Social-Ecological Systems” in Annual Review of Environment and Resources (Vol. 30) (2005). (blueprints for adaptive governance)

157 M Randles, D Lamb, E Odat and A Taleb-Bendiab “Distributed Redundancy and Robustness in Complex Systems” (2011) 77 (2) Journal of Computer and System Sciences 293–304; S Naeem, “Society for Conservation Biology Species Redundancy and Ecosystem Reliability” (1998) 12 (1) Conservation Biology.

158 CA Dunlop and CM Radaelli, “The Lessons of Policy Learning: Types, Triggers, Hindrances and Pathologies” (2018) 46 (2) Policy & Politics 255–72.

159 B Eberlein and E Grande, “Beyond Delegation: Transnational Regulatory Regimes and the EU Regulatory State” (2005) 12 (1) Journal of European Public Policy. (evidence on regulatory networks and cross-domain coordination).

160 C Coglianese, J Nash and T Olmstead, “Performance-Based Regulation: Prospects and Limitations in Health, Safety, and Environmental Protection” (2003) 55 (4) Administrative Law Review 705–29; N Gunningham, P Grabosky and D Sinclair, Smart Regulation: Designing Environmental Policy (Oxford University Press, 1998). (advocates complementary instrument mixes); F Mormann, “Beyond Algorithms: Toward a Normative Theory of Automated Regulation” (2021) 62 (1) Boston College Law Review. (on output-based benchmarks that automatically adjust rates).

161 This shows that adaptive regulation can move beyond general principles. On the subject, see J Black, “Forms and Paradoxes of Principles-Based Regulation” (2008) 3 (4) Capital Markets Law Journal (making a point that regulation that rely on outcome is more adaptable).

162 C Walters, Adaptive Management of Renewable Resources (Princeton University Press, 1986); CS Holling, Adaptive Environmental Assessment and Management (Wiley, 1978). (explore the use of structured, iterative assessment for adaptive management).

163 BL Zaki and CM Radaelli, “Measuring Policy Learning: Challenges and Good Practices” (2024) 7 (1–2) Perspectives on Public Management and Governance 37–46 (warning that long cycles risk conflating learning with stochastic change or political recalibration).

164 C Perrow, Normal Accidents: Living with High-Risk Technologies (Updated ed.). (Princeton University Press, 1999). (explains that tight coupling and cascading failures require cross-system coordination); C Folke, T Hahn, P Olsson and J Norberg, “Adaptive Governance of Social-Ecological Systems” in Annual Review of Environment and Resources (Vol. 30) (2005).

165 G Antonelli, “Interconnected Dynamic Systems: An Overview on Distributed Control” (2013) 33 (1) IEEE Control Systems.

166 Case C-158/06, ROM-projecten, EU:C:2007:370, para 25.

167 A simple illustration comes from Amsterdam’s traffic management system. Certain intersections use sensors that detect waiting cyclists or vehicles and adjust signal timing accordingly. The traffic light turns green within seconds when sufficient load accumulates, see Merel Sterk, Sensing Cycling: Monitoring Cycling Traffic by Means of Sensor Technology (Master Thesis, Delft University of Technology 2020) https://yufeiyuan.eu/wp-content/uploads/2020/06/2020-06-2998.pdf [https://perma.cc/Z6XB-2GH7]; Adam Marsal, Everybody Counts: How Cyclists in the Netherlands Are Monitored (We Love Cycling, 31 December 2024) https://www.welovecycling.com/wide/2024/12/31/everybody-counts-how-cyclists-in-the-netherlands-are-monitored/ [https://perma.cc/ACC4-KX6U]. This represents rudimentary adaptive regulation. Rules (signal timing) adjust automatically based on observed conditions (traffic load). Yet the system operates without generating legal uncertainty. Accident rates at adaptive intersections do not exceed those at fixed-timing signals. Traffic flow improves. Most importantly, many cyclists and drivers remain unaware the system adapts to their presence. Their ignorance of the adaptive mechanism does not produce confusion about how to behave. The rules remain predictable from the user perspective even as the underlying regulatory response varies. That the transition occurs through adaptive logic rather than fixed intervals proves irrelevant to compliance. The analogy suggests that adaptive regulation can preserve legal certainty even when adaptation remains invisible in its operation.

168 C Argyris and DA Schön, Organizational Learning: A Theory of Action Perspective (Addison-Wesley, 1978) (provides the core theory of institutional learning); G Antonelli, “Interconnected Dynamic Systems: An Overview on Distributed Control” (2013) 33 (1) IEEE Control Systems (how institutions learn and retain policies over time); B Levitt and JG March, “Organizational Learning” (1988) 14 Annual Review of Sociology (on policy learning and paradigm adjustment).

169 A Kay, “A Critique of the Use of Path Dependency in Policy Studies” in Public Administration (Vol. 83, Issue 3) (2005).

170 D Kingsford Smith, “A Harder Nut to Crack? Responsive Regulation in the Financial Services Sector” (2011) 44 (3) UBC Law Review. (on cooperation between institutions); G Martinico, “Asymmetry and Complex Adaptive (Legal) Systems: The Case of the European Union” (2014) 21 (2) Maastricht Journal of European and Comparative Law. (conceiving EU law as a complex adaptive system).

171 G Martinico, “Asymmetry and Complex Adaptive (Legal) Systems: The Case of the European Union” (2014) 21 (2) Maastricht Journal of European and Comparative Law; AG Haldane and RM May, “Systemic Risk in Banking Ecosystems” In Nature (Vol. 469, Issue 7330) (2011) (shows network spillovers and systemic cascades across interconnected domains); SV Buldyrev, R Parshani, G Paul, HE Stanley and S Havlin, “Catastrophic Cascade of Failures in Interdependent Networks” (2010) 464 (7291) Nature. (formal model of cross-network cascades).

172 A Fung, M Graham and D Weil, Full Disclosure: The Perils and Promise of Transparency (Cambridge University Press, 2007); M Janssen, Y Charalabidis and A Zuiderwijk, “Benefits, Adoption Barriers and Myths of Open Data and Open Government” (2012) 29 (4) Information Systems Management. Also, C Scott, “Reflexive Governance, Regulation and Meta-Regulation: Control or Learning?” in O De Schutter and J Lenoble (eds), Reflexive Governance (2010) (emphasising that reflexive regulation places substantial reliance on the regulatees).

173 CR Allen and CS Holling, “Novelty, Adaptive Capacity, and Resilience” (2010) 15 (3) Ecology and Society.

174 BA Cherry and JM Bauer, (2004, September). Adaptive regulation: Contours of a policy model for the Internet economy. In 15th Biennial ITS Conference, Berlin (pp. 4–7) (similarly, arguing that telecom requires adaptive regulation due to the complexity of the field).

175 GM Dickinson, “Law Proofing the Future” (2026) 63 (1) Harvard Journal on Legislation (as critics of technology-specific legislation observe, adaptive capacity must avoid premature rule-particularisation).

176 A Ramkumar, “Senate Votes to Remove Ban on State AI Laws from GOP Megabill” (2025, July 1) The Wall Street Journal. https://www.wsj.com/politics/republican-megabill-ai-state-law-ban-b26bc0ee [https://perma.cc/3GE8-8PSM]; A Ramkumar, “How a Bold Plan to Ban State AI Laws Fell Apart – and Divided Trumpworld” (2025, July 2) The Wall Street Journal. https://www.wsj.com/politics/policy/how-a-bold-plan-to-ban-state-ai-laws-fell-apartand-divided-trumpworld-96bce19d [https://perma.cc/4MKU-5VZV].
