I. Introduction
When addressing then-President George W. Bush at the 2006 White House Correspondents’ Association Dinner, Stephen Colbert noted jokingly that “[t]he greatest thing about this man is he’s steady. He knows where he stands. He believes the same thing Wednesday that he believed on Monday, no matter what happened Tuesday. Events can change; this man’s beliefs never will.”Footnote 1 Transposed to the realm of digital regulation, Colbert’s quip illuminates the fundamental limitation of “future-proof” regulation, i.e., rules drafted on Monday that purport to remain optimal on Wednesday, no matter what happens Tuesday.Footnote 2
This aspiration for regulatory stability stems from legitimate anxieties. The velocity of technological change in digital markets far exceeds the glacial pace of traditional legislative processes. Because regulations risk becoming obsolete before implementation, policymakers have responded by crafting deliberately expansive definitions intended to capture future technological paradigms.Footnote 3 The European Union’s approach to artificial intelligence exemplifies this strategy: it eschews defining AI through narrow technical specifications like “deep neural networks” in favour of broader conceptualisations meant to encompass computational models yet to be invented. The theory holds that such linguistic elasticity enables law to transcend temporal boundaries, shaping rather than chasing technological evolution.
One might object that adaptive regulation is unnecessary, since sufficiently broad principles and definitions already provide the requisite flexibility. If lawmakers draft rules using broad principles and avoid technical specifications, the objection runs, adaptation becomes superfluous: capacious language naturally accommodates unforeseen developments through interpretation.Footnote 4 Where regulation speaks in general terms, adaptive mechanisms add procedural complexity without functional benefit.
The AI Act’s legislative history refutes this claim. The Regulation’s original draft, circulated in April 2021, organised regulatory obligations around a risk-based taxonomy.Footnote 5 AI systems were assumed to be designed for a specific purpose: healthcare, education, law enforcement and so forth. Risk levels attached to each category, with corresponding compliance requirements calibrated to the anticipated harms within that domain. The framework deliberately eschewed narrow technical specifications in favour of functional categories designed to remain stable across technological iterations. High-risk applications would face stringent oversight. Low-risk applications would receive proportionate treatment.
The release of ChatGPT in November 2022 exposed a structural flaw in this design. Foundation models cannot be classified by designated purpose. A single model traverses risk categories depending on deployment context. The same system performs medical diagnosis (high-risk), generates creative content (low-risk), and produces software code (risk varies by application). The original taxonomy became incoherent. No degree of interpretive flexibility could reconcile purpose-based classification with general-purpose capability.
The European Parliament responded by introducing Chapter V, which regulates general-purpose AI models through capability-based thresholds rather than application-specific risk assessment.Footnote 6 Models exceeding defined computational benchmarks trigger regulatory obligations regardless of intended use. This represented not evolutionary interpretation of existing principles but architectural replacement. The foundational logic of the AI Act required redesign.Footnote 7
Timing matters here. This adaptation succeeded only because the AI Act remained unadopted when ChatGPT emerged. Had the Regulation been adopted before November 2022, its legal architecture would have ossified before foundation models became commercially significant.
The episode demonstrates a structural limit of principle-based regulation in complex technological domains. Broad language accommodates unforeseen variations within a stable conceptual framework. It cannot accommodate frameworks whose foundational assumptions are invalidated by technological development. Adaptive mechanisms address precisely this category of regulatory failure: not the need to update technical details within a sound framework, but the need to revise frameworks whose organising principles no longer correspond to technological reality.
The AI Act episode illustrates the insurmountable limitations, both practical and theoretical, that afflict future-proof regulatory design. At the practical level, regulations sufficiently capacious to encompass unknown future states necessarily sacrifice precision in scoping and enforcement.Footnote 8 Yet this sacrifice purchases no guarantee of continued relevance. More fundamentally, the future-proof paradigm rests on the dubious assumption that regulatory responses optimal for present technological configurations will remain appropriate for future ones. While certain normative principles (the prohibition of slavery, for instance) rightly claim temporal universality, the translation of this permanence to technical regulation of digital markets proves far more problematic. When regulation concerns itself with digital objects, the salience of intervening technological developments (i.e., Colbert’s “Tuesdays”) cannot be assumed away.
The neoclassical economic framework underlying much contemporary regulation compounds these difficulties. Built upon assumptions of predictable equilibrium states and fungible production factors, the “future-proof” framework worked adequately for the industrial economy of the twentieth century. But in today’s knowledge economy, neoclassical methods fail to capture the complex, evolutionary nature of digital markets.Footnote 9 So unless regulatory theory makes a “complexity leap,” it risks becoming increasingly disconnected from the realities it seeks to govern.Footnote 10
This article proposes a paradigmatic shift from “future-proof” to “future-responsive” regulation (i.e., adaptive regulation).Footnote 11 The distinction is critical. Whereas future-proof regulation seeks resistance to change (analogous to waterproof materials resisting water), adaptive regulation embraces adjustability to evolving conditions.Footnote 12 Making regulation adaptive might superficially appear to achieve the goal of future-proofing, but this conflates fundamentally different regulatory philosophies. When an adaptive regulation is modified in response to Tuesdays, what persists is not the same regulation enduring through time but, in substance, a different regulation altogether. The future-proof ideal of regulatory permanence thus gives way to a more modest aspiration: creating regulatory frameworks capable of orderly transformation.
The promise of adaptive regulation, however, introduces its own pathologies. If regulations can be modified at will to accommodate technological developments or, in fact, any other event, the legal certainty essential to the rule of law is threatened.Footnote 13 A regulatory system capable of unconstrained morphing risks enabling arbitrary enforcement. The critical question thus becomes not whether regulation should be adaptive, but how to structure adaptation mechanisms in a way that preserves legal certainty and diminishes the spectre of arbitrariness.
To address this question, the present article undertakes a systematic examination of adaptive mechanisms within recent European Digital Acts.Footnote 14 Title II first documents the various adaptive instruments embedded in these Acts. It offers a detailed analysis of their scope, triggers and procedural safeguards. Drawing on complexity science and institutional design literature, it proceeds to evaluate these mechanisms against normative criteria for effective adaptive regulation. The analysis reveals that while EU lawmakers have incorporated numerous adaptive elements (from delegated acts to periodic review clauses), these mechanisms remain wedded to neoclassical assumptions about predictable technological evolution. They also fail to embrace the genuine uncertainty that characterises complex digital ecosystems. In response, Title III proposes a framework for designing regulatory instruments that balance stability with responsiveness for governing rapidly evolving digital markets.
II. An empirical look at EU regulatory adaptiveness
In what ways has the European Union integrated adaptive mechanisms into its recent regulatory endeavours? To answer this question, I first introduce a framework for measuring regulatory adaptiveness across eight major EU digital laws passed between 2022 and 2024 (Section 1). I then analyse the results to identify which adaptive features recur across regulations and which are largely absent (Section 2).
1. Analytical framework
What follows introduces the study’s empirical foundation. It defines the scope of analysis (Section a), sets out the methodology for assessing adaptive mechanisms (Section b), and provides a first overview of the results (Section c).
a. Scope
This study covers all eight European Acts regulating the digital economy (i.e., the “European Digital Acts”) and adopted within the past five years: the Data Governance Act (May 2022, also called “DGA”),Footnote 15 the Digital Markets Act (September 2022, also called “DMA”),Footnote 16 the Digital Services Act (October 2022, also called “DSA”),Footnote 17 the Digital Operational Resilience Act (December 2022, also called “DORA”),Footnote 18 the Chips Act (September 2023),Footnote 19 the Data Act (December 2023),Footnote 20 the Artificial Intelligence Act (June 2024, also called “AI Act”),Footnote 21 and the Cyber Resilience Act (October 2024, also called “CRA”).Footnote 22
These regulations were selected for three reasons. First, they collectively form the backbone of the EU’s digital regulatory framework. Second, they span a wide range of regulatory objectives, from competition (DMA) and content moderation (DSA) to infrastructure (DORA, Chips Act), data governance (DGA, Data Act), and technological safety (AI Act, CRA). Third, all were adopted within a three-year window (2022–2024), which allows for a meaningful comparative analysis of how adaptiveness has (or has not) been built into the EU’s most recent legislative efforts. These Acts, in other words, have been chosen for their shared ambition to regulate an evolving technological landscape, precisely where adaptive regulation matters most.
b. Methodology
To gauge the adaptiveness of a regulation, I rely on a structured framework assessing whether, and how, the European Digital Acts embed mechanisms to evolve over time. The framework rests on four dimensions aligned with the life cycle of legal adaptation. First, the capacity to observe regulatory performance in practice. Second, the presence of triggers that determine when observations should prompt change. Third, once revision is warranted, the availability of clear procedural and substantive routes for enacting it. Fourth, institutional arrangements that foster reflection, learning, and course correction. What follows is a breakdown of this framework.
i. Monitoring infrastructure
Data collection obligation: Are regulators under an obligation (i.e., beyond a mere possibility) to monitor the real-world effects of the regulation? This question goes to the heart of regulatory adaptiveness. Monitoring is a precondition to adaptation. Without systematic data collection, regulators cannot detect unintended consequences, gaps or emerging challenges. In the absence of such an obligation, updates risk being arbitrary or blind to actual market dynamics.
Real-time monitoring: Are regulated entities required to feed data streams to regulators so they can track the regulation’s effects in real time (e.g., using APIs)? This criterion tests whether regulators are equipped to move from periodic review to continuous oversight. Real-time inputs enable quicker detection of regulatory gaps and ensure that interventions are timely. Without such mechanisms, even the best intentions to adapt may arrive too late to matter.
Machine-readable format: Does the regulation require regulated entities to structure the data they send to regulators in a way that enables automated processing? The potential for adaptation depends not only on the availability of data, but also on its usability. Structured, machine-readable data allows for automated analysis. It enables regulators to evaluate outcomes and update rules efficiently. Without it, adaptation remains a manual, resource-intensive process.
ii. Triggering logic
Indicators defined: Are there metrics that guide when to adapt the regulation (e.g., exogenous events, thresholds, complaints, Commission reports, etc.)? This question probes whether the law offers a signal function. Defined triggers anchor revisions in observable conditions, rather than political timing or administrative convenience. They make regulatory updates evidence-based.
Update frequency: Is there a predefined review timeline (e.g., every 3 years)? Built-in review cycles institutionalise regulatory learning. A predefined timeline not only signals that change is expected, but it also creates procedural expectations within the administration and the market. In turn, this predictability strengthens legitimacy and compliance.
Discretion: Is the regulation subject to a mandatory review, or is it only optional (e.g., does it use language like “shall assess” or “may consider”)? Whether revision is framed as an obligation or a possibility greatly affects regulatory responsiveness. Mandatory review clauses impose a duty to reassess; discretionary ones merely authorise it and may leave adaptive processes underused or politically delayed.
iii. Adaptation
Mechanism type: What type of mechanism governs how regulatory changes are made? Some Acts require going through the full legislative process to make any change. Others allow the Commission to update parts of the regulation through delegated or implementing acts. Legislative changes tend to be slower, but they make it possible to revise the regulation more fundamentally. Secondary instruments, by contrast, are faster and more flexible, but typically limited to technical or procedural updates.
Secondary instruments: In which areas is the Commission empowered to use secondary instruments to adapt the regulation without resorting to full legislative amendments? Secondary instruments allow the Commission to adapt specific aspects of the regulation more swiftly and flexibly than full legislative procedures. They make timely updates more feasible, although they are limited to changing non-essential elements and/or simply setting up conditions for uniform implementation across the EU.Footnote 23
Scope of modification: What parts of the regulation can be modified: only annexes, or also substantive provisions and core definitions? If only technical annexes can be revised, then the core legal architecture remains frozen. Genuine adaptiveness requires the capacity to update substantive rules and core definitions, especially in dynamic fields like AI and digital infrastructure. But changing the core aspects of regulation can also have systemic, unintended consequences that call for strict procedures.
Type of modification: What types of modifications are permitted: can the regulation be made more stringent, more lenient, or both? Adaptation is a two-way street. A framework that only permits tightening risks over-regulation; one that only allows relaxation may lead to capture. Enabling both directions allows regulation to remain proportional to actual risk.
iv. Institutional learning
Learning institutions: Does the regulation create or mandate a body responsible for proposing adaptations? Adaptive regulation is more likely to materialise when a specific institution is charged with monitoring and proposing change. Without such a learning actor with specific (often technical) expertise, updates may depend on ad hoc political will rather than structured institutional feedback.
Actors involved: Who has the authority to initiate revisions: only the Commission, or also Member States and independent agencies? The diversity of initiators influences both the speed and inclusiveness of adaptation. Broadening the set of actors who can trigger revisions increases the chance that emerging problems are surfaced early and acted upon.
Stakeholder input: Is stakeholder engagement mandated during reviews or updates? Including stakeholders in the revision process anchors regulation in operational reality. It reduces blind spots, enhances legitimacy and increases the quality of updates by injecting on-the-ground knowledge into the legal process. But it also comes with a risk of regulatory capture that must be addressed by institutional design.
Impact assessment: Does the regulation require an impact assessment before changes can be made? Updating regulation without anticipating its consequences may do more harm than good. Mandated impact assessments act as a filter for ill-considered changes and strengthen the empirical foundation of legal evolution.
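The four-dimension, fourteen-criterion framework above can be expressed as a simple scoring structure. The following Python sketch is an illustration only: the criterion names paraphrase the text, and the binary (present/absent) scoring is this sketch’s own simplification, not the coding protocol behind Appendix 1.

```python
# Hypothetical encoding of the four-dimension adaptiveness framework.
# Binary scores (1 = mechanism present, 0 = absent) are a simplification
# introduced here for illustration.

FRAMEWORK = {
    "monitoring": [
        "data_collection_obligation", "real_time_monitoring",
        "machine_readable_format",
    ],
    "triggering": [
        "indicators_defined", "update_frequency", "mandatory_review",
    ],
    "adaptation": [
        "mechanism_type", "secondary_instruments",
        "scope_of_modification", "type_of_modification",
    ],
    "learning": [
        "learning_institutions", "actors_involved",
        "stakeholder_input", "impact_assessment",
    ],
}

def adaptiveness_profile(scores):
    """Average the per-criterion scores within each of the four dimensions."""
    return {
        dimension: sum(scores.get(c, 0) for c in criteria) / len(criteria)
        for dimension, criteria in FRAMEWORK.items()
    }

# Example: an Act with a data collection obligation but no real-time
# monitoring or machine-readable mandate scores 1/3 on "monitoring".
profile = adaptiveness_profile({"data_collection_obligation": 1})
```

The fourteen criteria sum across the four lists; a per-Act profile of this kind underlies the comparative reading in the sections that follow.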
c. Overview
The table in Appendix 1 provides a snapshot of how each of the eight EU Digital Acts performs across the 14 adaptivity criteria, along with the relevant legislative references.
2. Findings
The results reveal systematic patterns across the regulatory corpus. I first distinguish between common features that enable adaptation and those that constrain it (Section a). Then, I highlight notable outliers, i.e., regulations that demonstrate exceptional innovation in specific adaptive features or surprising rigidity in areas where flexibility is generally observed (Section b).
a. Patterns
i. Adaptive traits
Data collection obligation. A vast majority of EU Digital Acts impose formal obligations on regulators to monitor (some of) the real-world effects of the rules they enforce. The Digital Markets Act requires the Commission to monitor gatekeepers’ effective compliance with their obligations under Articles 5, 6 and 7, as well as relevant decisions under Articles 8, 18, 24, 25 and 29 (Article 26.1).Footnote 24 The DSA goes further by establishing a system of multi-layered reporting. Digital Services Coordinators must publish annual activity reports (Article 55),Footnote 25 and the European Board for Digital Services must issue a yearly risk mitigation assessment for very large platforms and search engines (Article 35).Footnote 26 The Commission must also evaluate the broader societal impact of the DSA every five years, including on SMEs and fundamental rights (Article 91),Footnote 27 and report on crisis response measures where applicable (Articles 36.7 and 36.11).Footnote 28
The Digital Operational Resilience Act (DORA) similarly mandates annual reporting on ICT-related incidents (Article 22.2)Footnote 29 and requires the Oversight Forum to assess results of critical third-party provider supervision (Article 32.2).Footnote 30 Additionally, a joint report is to be submitted every five years by the European Supervisory Authorities on third-country ICT risks and their systemic impact (Article 44.2).Footnote 31 The Chips Act obliges the Commission to regularly inform the European Semiconductor Board of the outcomes of strategic mapping (Article 19.2),Footnote 32 to maintain and update a list of early warning indicators (Article 19.3),Footnote 33 and to monitor disruptions to the semiconductor value chain (Article 20.1 and 20.7).Footnote 34
Lastly, while the AI Act is more limited in scope, it requires the Commission to publish annual reports on the use of real-time biometric identification systems by law enforcement (Article 5.7)Footnote 35 and mandates national authorities to report annually on sandbox activities (Article 57.16).Footnote 36 The Advisory Forum must also issue an annual report on its own activities (Article 67.10).Footnote 37 These recurring information flows enable regulators to evaluate implementation and to ground future adaptations in empirical observation.
Update frequency. One of the more structurally promising traits across the EU Digital Acts is the prevalence of predefined review timelines, which scaffold regulatory adaptiveness. Nearly all eight Acts mandate evaluation cycles, typically spaced every three to five years. This provides predictability, creates bureaucratic discipline, and introduces feedback loops essential for learning.
The Chips Act mandates quadrennial reviews beginning in 2026 (Article 40),Footnote 38 while the Cyber Resilience Act sets a similar rhythm, albeit starting only in 2030 (Article 70.1).Footnote 39 Even the Data Act, though more modest in ambition, requires a one-off review by 2028 covering not just implementation but market impact and competitiveness dimensions (Articles 49.1 and 49.2).Footnote 40
Other Acts, like the Digital Services Act, opt for a layered review architecture. The DSA requires the Commission to assess specific provisions, such as Article 33, within three years (Article 91.1),Footnote 41 conduct an initial overall evaluation by early 2027 (Article 91.2),Footnote 42 and then repeat full assessments every five years thereafter (Article 91.3).Footnote 43 Others, such as the Digital Markets Act, supplement a triennial review (Article 53.1)Footnote 44 with an annual implementation report (Article 35.1),Footnote 45 ensuring both short- and long-cycle feedback. The AI Act pushes the frontier even further by integrating multiple reviews tailored to distinct regulatory components. Annex III (on high-risk AI systems) and Article 5 (on prohibited AI systems) must be assessed annually (Article 112.1),Footnote 46 while broader components (including governance, transparency, and enforcement) follow a four-year cadence (Articles 112.2, 112.3, 112.5, and 112.13).Footnote 47 Such a multi-track review mechanism is rare and, arguably, unprecedented in EU digital regulation.
Taken together, these provisions show that update frequency is not treated as an afterthought. Rather, it is embedded as a constitutional feature of the new digital rulebooks. They give legal expression to the need for a regulation that learns over time, rather than ossifies.
Discretion. A synoptic reading of the EU Digital Acts reveals a regulatory grammar where the Commission is endowed with broad discretion to use secondary instruments, yet it must issue delegated and implementing acts in other instances. In the Digital Markets Act, for instance, the Commission may adopt delegated acts to adjust gatekeeper obligations (Article 12.1 and 12.3),Footnote 48 or implementing acts to clarify procedural aspects (Article 46.1).Footnote 49 These include the forms and content for notifications and reports, audit methodologies for consumer profiling, rules for market investigations, hearing rights, cooperation between authorities and even deadline calculations. Similarly, under the Data Act, the Commission may define common specifications for interoperability (Article 33.5) or support open standards through guidelines (Article 33.11).Footnote 50
But this discretion is not unfettered. Numerous provisions across the AI Act and the Cyber Resilience Act contain mandatory instructions where the Commission shall adopt implementing acts to define transparency obligations (AI Act, Article 60.1),Footnote 51 or to establish simplified documentation templates for SMEs (CRA, Article 33.5).Footnote 52 The Data Governance Act offers a comparable mix, as it requires implementing acts on logos for registered data intermediation service providers (Article 11.9)Footnote 53 and an EU-wide consent form for data altruism (Article 25.4)Footnote 54 while allowing discretion elsewhere. What emerges is a fine-grained typology of delegation. Some secondary instruments are hardwired into the legislation as required steps, others remain discretionary levers available to the Commission. This design allows EU digital regulation to layer predictable institutional oversight with flexible executive calibration. That structure blends rule-bound responsiveness with agility.
Scope of modification. A comparative reading of the EU Digital Acts suggests a broad perimeter of regulatory malleability. Nearly all regulations grant latitude (explicit or implicit) for future reengineering, including modification of core definitions, substantive provisions and technical annexes. This juridical plasticity is a latent strength of the EU Digital acquis.
In formal terms, six of the eight EU Digital Acts (DGA, DMA, DSA, DORA, Chips Act, Data Act) exhibit no material constraint on what may be modified, subject only to political will and procedural compliance. The governing review articles (e.g., Article 35 DGA;Footnote 55 Article 53 DMA;Footnote 56 Article 91 DSA;Footnote 57 Article 58 DORA;Footnote 58 Article 40 Chips Act;Footnote 59 Article 49 Data ActFootnote 60) operate as open mandates, without restricting the scope of legislative amendments. Notably, the DMA explicitly anticipates adjustments to core platform service classifications and behavioural obligations (Article 19),Footnote 61 which signals comfort with revisiting foundational categories. The Data Act, although containing a list of review priorities (Article 49.1),Footnote 62 frames them as particular rather than exclusive, thereby maintaining an open horizon for adaptation including on definitions and institutional roles.
The AI Act presents a more nuanced configuration. While its architecture delegates substantial authority to the Commission to amend annexes and procedural regimes (e.g., conformity assessments, documentation standards), the core definitional framework such as the notion of “AI system” in Article 3 remains insulated from delegated modification. This raises a potential rigidity concern: technical advances may produce semantic drift faster than the legislative process can respond. Still, Article 112.10 opens a backchannel, as it allows the Commission to propose legislative revisions on any part of the Regulation, implicitly including core definitions.Footnote 63
In sum, the EU Digital Acts largely endorse a model of adaptive completeness, whereby even foundational elements are structurally open to revision. This breadth of scope is a key enabler of adaptive regulation. It gives institutions room to revisit the very framing concepts upon which they rest.
ii. Anti-adaptive traits
Real-time monitoring. The continuous transmission of compliance data via APIs or equivalent systems remains a notable absentee from most EU Digital Acts. The Digital Services Act is the lone exception, imposing real-time-like duties on very large online platforms and search engines through Article 40.7, which grants regulators API-level access to ensure continuous oversight.Footnote 64 Elsewhere, the commitment to immediacy is more aspirational than operational. For instance, the Cyber Resilience Act, the AI Act, and the Digital Operational Resilience Act adopt structured but lagged notification frameworks. Critical incidents must be reported within 24 to 72 hours, but not as they occur (DORA, Article 19.4).Footnote 65 The Data Governance Act and the Chips Act require swift, yet not instantaneous reporting by competent authorities or the Commission itself (Article 11.10 DGA;Footnote 66 Article 12.5 Chips ActFootnote 67). Meanwhile, the Data Act and the Digital Markets Act remain conspicuously disengaged from real-time oversight, as they offer no structured mechanisms for continuous monitoring whatsoever. These piecemeal efforts reflect a broader pattern: the lack of real-time responsiveness undercuts the ambition for a truly adaptive regulatory model. If regulation is to become a living system, its sensory apparatus must be wired to detect changes as they happen, not after they have had systemic consequences.
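The gap between lagged notification and continuous oversight can be made concrete. A minimal sketch, assuming a 72-hour notification window of the kind DORA permits and a hypothetical five-minute API polling cadence; the timestamps and variable names are illustrative, not drawn from any Act.

```python
from datetime import datetime, timedelta

# Contrast between lagged incident notification (24-72h windows, as under
# DORA Article 19.4) and continuous API-level access (as under DSA
# Article 40.7). Timestamps and the polling cadence are invented.

NOTIFICATION_WINDOW = timedelta(hours=72)
POLLING_INTERVAL = timedelta(minutes=5)  # hypothetical API polling cadence

def regulator_blind_spot(occurred, visible_to_regulator):
    """How long the incident stayed invisible to the overseer."""
    return visible_to_regulator - occurred

incident = datetime(2024, 3, 1, 9, 0)

# Lagged model: a notification 48 hours later is fully compliant...
lagged_visibility = incident + timedelta(hours=48)
assert regulator_blind_spot(incident, lagged_visibility) <= NOTIFICATION_WINDOW

# ...whereas the streaming model bounds the blind spot by the polling interval.
streamed_visibility = incident + POLLING_INTERVAL
```

The design point is that a notification regime measures compliance against a deadline, while an API regime measures it against a cadence: only the latter shrinks the regulator's blind spot to minutes.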
Stakeholder input. A review of the EU Digital Acts reveals a pattern of modest stakeholder involvement, largely confined to downstream stages of regulatory production. In most regulations, stakeholder input is formally acknowledged, but typically through indirect or advisory mechanisms, with little in the way of structured co-decision or institutionalised deliberation.
The Digital Services Act, Digital Operational Resilience Act, and Digital Markets Act confine themselves to tepid invocations of the Better Regulation principles, which assert that stakeholders may be consulted “whenever broader expertise is needed.”Footnote 68 That formulation invites bureaucratic circumvention. Even the Data Act, which is relatively dense in implementing acts, mentions formal consultation in only a few targeted provisions (e.g., Articles 33.7, 35.6, 36.8),Footnote 69 and with limited procedural depth. Here, consultation provisions are either discretionary or tokenistic.
The Data Governance Act and Chips Act offer intermediate configurations, as they combine ad hoc obligations to consult experts or sectoral actors with the establishment of institutional footholds for stakeholders such as the European Data Innovation Board’s stakeholder subgroup (Article 29.2.c DGA)Footnote 70 or the representative organisations invited to assess semiconductor supply disruptions (Article 20.7 Chips Act).Footnote 71 But these venues fall short of meaningful co-regulatory structures.
Perhaps most advanced, the AI Act displays a relatively elaborate consultative apparatus. Articles 15, 40, 56 and 62 cumulatively orchestrate multistakeholder interactions in the shaping of benchmarks, standardisation, and codes of practice.Footnote 72 Similarly, the Cyber Resilience Act goes furthest in formalising consultation as it mandates regular stakeholder sessions (Article 9.2),Footnote 73 targeted engagements during the preparation of implementing acts (Articles 8.1, 8.2, 27.4, 30.6),Footnote 74 and tailored support to SMEs and open-source developers (Article 26.3).Footnote 75 This level of stakeholder formalism is the exception rather than the rule. It remains limited in scope and unevenly embedded across the EU’s digital regulatory corpus.
Overall, stakeholder engagement in the EU Digital Acts is more gestural than generative. It reflects a regulatory monologue with participatory punctuation, rather than a structured dialogue between regulators and regulated entities. EU institutional design is deeply centralised, with the Commission operating as both gatekeeper and gate-opener of participatory processes.
Machine-readable format. A curious paradox haunts the EU Digital Acts: the regulatory regimes governing data-rich, machine-mediated ecosystems are themselves strikingly resistant to machinic parsing. With the notable exception of the Digital Services Act, machine-readable obligations remain sparse or conspicuously absent. The DSA breaks rank by requiring that very large platforms and search engines maintain an ad repository with multicriteria queries and API access (Article 39.1),Footnote 76 and that intermediary services submit structured moderation reports and user restriction decisions (Articles 15.1, 24.5, 42.1).Footnote 77 This is regulatory information designed not merely to be read, but to be processed. And yet, it is an isolated island of computability in a largely analogue sea.
Elsewhere, the commitments to machine readability are less a strategy than an afterthought. The Data Governance Act gestures weakly toward automation: single information points may use automated means (Article 8.1)Footnote 78 and shall, “where appropriate,” transmit reuse requests by such means (Article 8.2).Footnote 79 There is a whiff of structure in the application forms for data altruism entities (Article 19.4)Footnote 80 and a formal nod in the requirement for a machine-readable consent form (Article 25.4),Footnote 81 but these fragments serve more as user-facing enhancements than as tools for regulatory computation. DORA and the AI Act similarly dabble in templates and declarations (DORA Article 11.11;Footnote 82 AI Act Article 47.1).Footnote 83 They stop short of mandating machine-readable reporting to oversight bodies. Most striking, perhaps, is the silence of the Chips Act and Data Act, two instruments that one might reasonably expect to be showcases of data structuration. Their absence from this terrain speaks volumes. In Europe, the future of data governance remains readable, but not yet computable.
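The readable/computable distinction can be illustrated with a toy structured report. The field names below are invented for illustration: the DSA mandates machine-readable reporting but prescribes no such schema.

```python
import json

# A toy machine-readable moderation report. Field names are hypothetical;
# the point is that structured submissions can be aggregated automatically,
# whereas narrative (PDF-style) reports cannot.

report = {
    "platform": "example-vlop",
    "period": "2024-Q1",
    "removals": {"illegal_content": 1204, "terms_violation": 8731},
}

def total_removals(reports):
    """Regulatory computation: totalling across all submissions in one pass."""
    return sum(sum(r["removals"].values()) for r in reports)

serialized = json.dumps(report)  # transmissible, parseable, queryable
```

A regulator receiving thousands of such files can compute sector-wide figures in seconds; the same information buried in prose requires manual extraction before any analysis can begin.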
Indicators defined. One of the more elusive elements in the architecture of the EU Digital Acts is the use of predefined indicators to trigger regulatory adaptation. With the exception of the Chips Act and the AI Act, most instruments remain agnostic on this front. Provisions requiring reviews, such as Articles 35 of the DGA,Footnote 84 53 of the DMA,Footnote 85 91 of the DSA,Footnote 86 and 49 of the Data Act,Footnote 87 typically instruct the Commission to evaluate impact “where appropriate,” with no reference to structured metrics or thresholds. Even where domains are data-intensive by nature (such as platform markets or data governance), adaptive triggers are either entirely discretionary or simply omitted. This reliance on general impact assessments, absent concrete benchmarks, severely limits the operationalisation of feedback loops and undermines the ambition of adaptiveness.
Two exceptions confirm the rule. The Chips Act provides a structured framework through Annex II, which details a dashboard of performance metrics (such as SME participation, infrastructure access and investment flows) that must be reported annually (Article 12.4).Footnote 88 Likewise, the AI Act deploys a multi-tiered approach, with Article 112.11 mandating the AI Office to develop a risk-based methodology to inform future adaptations.Footnote 89 Notably, the AI Act also incorporates exogenous variables (e.g., changes in the information society) as grounds for review (Article 112.10).Footnote 90 These examples suggest that when adaptiveness is taken seriously, it is anchored in quantifiable signals, not vague gestures toward impact. Yet in the broader corpus of EU Digital Acts, the technocratic capacity to “listen” remains more aspirational than systematic.
Actors involved. The gate to regulatory adaptation across the EU Digital Acts remains firmly guarded by a single actor: the European Commission. From the Data Governance Act to the Cyber Resilience Act, no regulation confers formal revision initiative powers to Member States, national agencies, or independent authorities. While consultative frameworks are widespread (e.g., the European Data Innovation Board in the DGA, the Semiconductor Board in the Chips Act, or national experts designated under Regulation (EU) No 182/2011),Footnote 91 their role is advisory. These bodies may assist, advise and opine, but the final prerogative to propose legislative revision or adopt delegated acts is indeed monopolised by the Commission. Even in domains where national specificities abound, subsidiarity finds no procedural reflection.
This architecture reflects a structural asymmetry. Centralisation may promote coherence, but it imposes high reliance on the expertise, bandwidth, and political will of a single supranational institution. No regulation introduces a “multichannel” mechanism allowing bottom-up regulatory feedback from national authorities or decentralised bodies with sectoral expertise. At most, Member States can raise objections on specific implementing acts (as in the Data Act) but cannot trigger full-scale reviews. In this sense, the EU Digital Acts instantiate what one might call a “monologic adaptiveness”: input flows in, but only the Commission speaks. If adaptive regulation aims to function as a living system, this tight bottleneck at the point of initiative remains its primary organ failure.
Learning institutions. One of the striking structural blind spots across the EU Digital Acts lies in the near-total absence of dedicated bodies responsible for proposing adaptations. While many Acts establish new agencies (Digital Services Coordinators, Data Innovation Boards, Notifying Authorities, Oversight Forums…), these entities lack the mandate to turn accumulated knowledge into policy change. The result is a structural disconnect: implementation is distributed, but adaptation is not. No institutional relay connects operational experience to legislative evolution. What remains is a brittle setup: a single actor, the Commission, must sense, learn, and act, alone. That is not a learning system. That is a bottleneck.
b. Notable outliers
i. Exceptionally adaptive traits
Learning institutions in the AI Act. The Artificial Intelligence Act departs from the general learning inertia of its counterparts. It creates a triad of institutions (the AI Office, the European Artificial Intelligence Board, and the Advisory Forum) designed to feed enforcement experience directly into the process of regulatory refinement. The AI Board, in particular, is vested with the power to issue recommendations on delegated acts, contribute to the evaluation and review of the Regulation (Article 66.e.ii),Footnote 92 and propose amendments to substantive provisions, including Annex III and Article 5 (Article 66.e.vii).Footnote 93 This is not mere consultation; it is an institutionalised channel for updating the core regulatory framework based on observed practice.
The Advisory Forum complements this architecture by injecting technical expertise and producing written contributions and opinions to support both the Board and the Commission (Article 67.8).Footnote 94 Meanwhile, the AI Office plays a crucial role in monitoring compliance, especially with respect to general-purpose AI models (Article 89.1).Footnote 95 Together, these bodies form a procedural ecosystem of regulatory learning, where oversight and revision are formally linked, and where the regulation is not only implemented but also continuously interrogated. This design moves beyond static rulemaking and signals an institutional commitment to adaptiveness that is structurally rare across the EU digital acquis.
Stakeholder engagement in the AI Act. The AI Act stands out as the most comprehensive EU Digital Act in terms of stakeholder engagement. It not only mandates consultation but embeds participatory mechanisms across multiple regulatory functions. The Commission is required to cooperate with stakeholders when developing benchmarks and methodologies for assessing accuracy and robustness (Article 15.2).Footnote 96 Standardisation requests must also be preceded by stakeholder consultation (Article 40.2),Footnote 97 which ensures that technical norms reflect a wide array of expert inputs.
The AI Office, a central node in the Act’s governance structure, may involve providers of general-purpose AI models and national authorities in co-drafting Union-level codes of practice. In addition, it may draw on civil society, industry, academia, downstream providers, and independent experts for broader support (Article 56.3).Footnote 98 At the Member State level, obligations go beyond symbolic gestures. Public authorities must actively facilitate stakeholder access to regulatory sandboxes, deliver targeted training and awareness-raising initiatives, and establish structured communication channels to promote engagement with standard-setting processes (Article 62.1).Footnote 99 Altogether, the AI Act weaves stakeholder engagement into both the formulation and implementation of regulatory norms. It turns stakeholder engagement from a rhetorical accessory into a structural feature.
Machine-readable formats in the DSA. The Digital Services Act carves out a singular position among the EU Digital Acts by embedding machine-readability as a core feature of its transparency infrastructure. Unlike other regulations, which often omit the issue altogether, the DSA mandates specific technical affordances to facilitate automated scrutiny. Providers of very large online platforms and search engines must maintain a “searchable and reliable” repository, accessible via APIs and enabling multicriteria queries (Article 39.1).Footnote 100 This is a notable step toward computational accountability.
Further, intermediary service providers must report user restriction decisions in a machine-readable database submitted to the Commission (Article 24.5),Footnote 101 and publish biannual content moderation reports in machine-readable format (Article 15.1 and Article 42.1).Footnote 102 This set of obligations elevates the DSA beyond manual, prose-based compliance logic and moves toward an ecosystem where regulatory oversight is capable of automation and real-time querying. In short, the DSA operationalises transparency not just as a legal requirement, but as a format-compatible data layer, amenable to computational enforcement.
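To make this “format-compatible data layer” concrete, the following is a minimal, purely illustrative sketch of what machine-readable moderation records and a multicriteria query in the spirit of Articles 15.1, 24.5 and 39.1 might look like. The field names and values are assumptions for illustration, not the official DSA schema.

```python
# Hypothetical sketch of DSA-style machine-readable transparency data:
# structured moderation records filterable via multicriteria queries.
# Field names are illustrative assumptions, not the official schema.
from dataclasses import dataclass

@dataclass
class ModerationRecord:
    platform: str
    decision: str    # e.g. "removal", "demotion", "suspension"
    ground: str      # e.g. "illegal_content", "terms_of_service"
    automated: bool  # was the decision fully automated?
    date: str        # ISO 8601 date

def query(records, **criteria):
    """Multicriteria filter: keep records matching every criterion."""
    return [r for r in records
            if all(getattr(r, k) == v for k, v in criteria.items())]

records = [
    ModerationRecord("PlatformA", "removal", "illegal_content", True, "2024-03-01"),
    ModerationRecord("PlatformA", "demotion", "terms_of_service", False, "2024-03-02"),
    ModerationRecord("PlatformB", "removal", "illegal_content", False, "2024-03-03"),
]

# e.g. all fully automated removals, across platforms
automated_removals = query(records, decision="removal", automated=True)
print(len(automated_removals))  # 1
```

The point of the sketch is that once decisions are recorded as structured data rather than prose, oversight questions (“how many removals were fully automated?”) become trivial computations rather than manual reviews.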
Indicators for assessing the regulation in the Chips Act and the AI Act. Among the EU Digital Acts, only the Chips Act and the AI Act offer a fully fleshed-out system of indicators for assessing regulatory performance, a notable outlier in an otherwise vague landscape. The Chips Act includes a detailed Annex II that enumerates specific metrics for success, including the number of participating entities, tools developed, levels of private co-investment, infrastructure access rates, training outcomes, the activation of competence centres, and the volume of venture capital received by start-ups and SMEs along the semiconductor value chain (Annex II).Footnote 103 These indicators are not simply illustrative. The Chips Joint Undertaking is mandated to report on them annually, which anchors metrics firmly in the institutional machinery of evaluation (Article 12.4).Footnote 104
The AI Act is equally, if not more, granular. It defines clear input, process, and outcome metrics to guide regulatory adaptation. That includes Commission reports (Articles 112.1–112.7),Footnote 105 adequacy of national enforcement resources, sanctioning patterns, market entry by SMEs, and the state of standardisation initiatives (Article 112.4).Footnote 106 It goes further by incorporating exogenous triggers into the review logic, i.e., technological breakthroughs, evolving risks to health, safety or fundamental rights, and shifts in the information society (Article 112.10).Footnote 107 The AI Office, in parallel, is charged with building a risk-based evaluation framework to guide updates to the Regulation’s annexes and substantive provisions (Article 112.11).Footnote 108 These twin efforts instantiate an adaptive logic grounded not in generic impact language, but in traceable indicators that transform the assessment of these acts from discretion to method.
ii. Exceptionally non-adaptive traits
Data collection in the Data Governance Act. The paradox of the Data Governance Act lies in its silence on structured data reporting. Despite its central ambition to foster trust in data sharing and promote the re-use of public sector data by laying down a framework for intermediaries and data altruism organisations, the regulation does not impose systematic data collection obligations on stakeholders or national agencies. There is no requirement for machine-readable transparency reporting, no metrics to assess compliance with data altruism standards, no standardised templates to monitor the activities of data intermediation services. The Act aspires to coordinate a European data space, yet it does so without wiring the system for feedback, which leaves its adaptive capacity dependent on anecdote rather than evidence.
General revision in the Chips Act and Cyber Resilience Act. Both the Chips Act and the Cyber Resilience Act institutionalise delegated and implementing acts as operational levers of regulatory adaptation (Article 37 Chips Act;Footnote 109 Article 61 CRA).Footnote 110 Yet neither regulation empowers the Commission, in their respective review clauses, to submit amendment proposals to the European Parliament and the Council (Article 40 Chips Act;Footnote 111 Article 70 CRA).Footnote 112 This omission is not trivial. It deprives the review exercise of its natural endpoint (i.e., legislative recalibration) and confines adaptation to secondary instruments alone. The result is a formalised stasis cloaked in procedural activity.
Scope of modification in the Cyber Resilience Act. The Cyber Resilience Act confines the Commission’s adaptive authority to the terrain of secondary instruments, which leaves the architecture of core definitions untouched. Delegated acts may recalibrate the scope of the Regulation by modifying Annexes III and IV to include or remove product categories (Articles 7.3 and 8.2)Footnote 113 and adjust certification requirements, conformity procedures, and enforcement modalities. Technical standards may also be updated to follow technological evolution. Yet nowhere does the text empower the Commission to revise foundational legal concepts. This asymmetry reflects a selective plasticity. The CRA is adaptive where the terrain is operational, inert where it is more fundamental.
III. Principles for making regulation more adaptive
The shortcomings of adaptive mechanisms in the EU Digital Acts stem less from faulty execution than from the conceptual lens through which they are designed. Remedying this demands two steps. First, show why complexity science provides a superior theoretical base for adaptive regulation (Section 1). Second, distil this theory into operational design principles capable of producing genuinely future-responsive instruments (Section 2).
1. Why complexity science
Title II revealed a paradox at the heart of the EU Digital Acts. While lawmakers have incorporated adaptive mechanisms, these instruments remain anchored in neoclassical assumptions about predictable technological trajectories and equilibrium states. The AI Act exemplifies this tension. Despite its rather sophisticated institutional architecture, its adaptive mechanisms are designed for a world where AI evolves predictably and incrementally, not the world we actually inhabit, where new AI capabilities emerge unexpectedly. The regulatory architecture presupposes linear evolution, discretionary triggers and centralised adaptation pathways. It assumes that technology moves in steady arcs toward equilibrium, and that feedback can wait for the next reporting deadline. The Acts collect data, but mostly in lagged, annual or multi-year batches. They invite stakeholders, but sporadically and on the regulator’s terms. They allow modification, but only through channels controlled by a single institutional gatekeeper. The machinery to learn exists; the mandate and wiring to act on that learning, much less so.
These design choices are increasingly untenable in markets characterised by increasing returns,Footnote 114 path dependenciesFootnote 115 and emergent properties.Footnote 116 If adaptive regulation is to transcend its current limitations, it requires a methodological foundation capable of embracing genuine uncertainty while maintaining legal predictability. Complexity science offers precisely such a foundation.
For readers unfamiliar with the field, complexity science refers to the study of systems composed of many interconnected elements whose interactions generate collective patterns that cannot be fully deduced from their individual parts.Footnote 117 In such systems, ranging from ecosystems and financial markets to online platforms, small changes can produce large, sometimes abrupt effects;Footnote 118 feedback loops can amplify or dampen dynamics;Footnote 119 and the system’s overall behaviour emerges from continuous adaptation rather than settling into a fixed equilibrium.Footnote 120
The case for relying on complexity science to build adaptive regulation rests on three propositions. First, digital markets are complex adaptive systems, and complexity science is, by definition, the discipline devoted to understanding such systems. The European Digital Acts regulate ecosystems where multiple agents interact, adapt and co-evolve in response to changing conditions, i.e., the phenomena that complexity science has studied for decades across scientific fields.Footnote 121 When the Digital Markets Act addresses platform ecosystems, or when the AI Act governs machine learning systems, regulators confront not static market structures but dynamic systems characterised by non-linear relationships and emergent behaviours.Footnote 122 The conceptual apparatus of neoclassical economics and Newtonian science, with its emphasis on equilibrium and linearity, proves inadequate for capturing these dynamics.Footnote 123 Complexity science, by contrast, offers a theoretical framework for understanding how systems evolve. Applying a science developed precisely to understand adaptive systems ensures conceptual alignment between the object of regulation and the framework used to regulate it.
Second, complexity science provides not abstract theory, but empirically grounded insights derived from decades of rigorous research. Since the establishment of the Santa Fe Institute in 1984, complexity scientists have developed robust methodologies for studying adaptive systems across disciplines, spanning physics,Footnote 124 biology,Footnote 125 economicsFootnote 126 and computer science.Footnote 127 Research in complexity science has led to the creation and fostering of scientific instruments, including agent-based modelling,Footnote 128 network analysisFootnote 129 and evolutionary game theory.Footnote 130 These instruments have yielded reproducible findings about how complex systems such as ant colonies,Footnote 131 neural networksFootnote 132 and financial marketsFootnote 133 behave.
Third, and perhaps most compelling, legal scholars have already shown that legal systems can learn from complexity insights. The work of Ruhl and Katz on legal complexity (macro),Footnote 134 as well as legal scholarship in fields such as environmental regulation,Footnote 135 financial oversight,Footnote 136 administrative lawFootnote 137 and public health governance (micro),Footnote 138 has operationalised complexity concepts for legal purposes.
Yet, this is not to suggest that complexity science offers a panacea. Complex systems are, by definition, difficult to predict and control.Footnote 139 Several limitations warrant explicit acknowledgment. First, complexity science excels at identifying patterns and dynamics but offers limited guidance on normative choices. When feedback loops amplify market concentration, complexity science can explain the mechanism but cannot determine whether the resulting market structure is socially desirable. These value judgements remain irreducibly political and require democratic deliberation rather than scientific resolution. Second, if complexity merely becomes another way of saying “it is complicated,” it adds nothing to regulatory design. Third, and this bears repeating, the dynamics of complex systems, shaped by emergence, scaling effects, chaos and feedback loops, can disorient regulated entities if the legal order seeks to emulate the restless dynamics of living systems. Mechanisms that track system dynamics can produce a landscape where regulatory requirements are in constant flux, which would undermine business planning and chill innovation. The cure for regulatory inadaptability must not become regulatory volatility.
Paradoxically, these very limitations are what make complexity science worth the trouble. Complexity starts from an acknowledgment of irreducible uncertainty and seeks to design governance frameworks that remain effective under such conditions. In an era where technological change routinely outpaces regulatory response, where digital markets exhibit increasing returns and winner-take-all dynamics, where biotechnology yields unforeseen breakthroughs, and where some artificial intelligence systems evolve beyond their creators’ foresight, regulatory frameworks grounded in assumptions of predictability (i.e., future-proof systems premised on the enduring validity of their methods and objectives) are not inadequate, they are dangerous. Complexity science offers a more honest and ultimately more effective foundation for adaptive regulation in genuinely complex domains.Footnote 140 My task, now, is to provide a comprehensive framework that translates complexity insights into concrete, predictable regulatory design principles. That is the purpose of what follows next.
2. Principles for future-responsive regulation
European courts have long recognised that regulation must stay proportionate to its aims and that lighter alternatives take precedence whenever they can achieve the same outcome.Footnote 141 In domains marked by uncertainty, the case law has increasingly emphasised that proportionality review cannot be confined to the evidence available at the moment of adoption; courts assess whether the measure remains justified in light of current and evolving information.Footnote 142 What emerges is a process-oriented form of proportionality review. Taken together, these strands support a simple proposition. In fast-changing digital markets, designing regulation as adaptive is a structural way of complying with proportionality throughout the regulatory life cycle, not only at the moment of enactment.
Building on complexity insights, I articulate concrete principles for designing regulatory instruments capable of orderly transformation.Footnote 143 These principles address the central challenge identified in Title II, i.e., how to structure adaptation mechanisms that preserve legal certainty while enabling necessary evolution in rapidly changing digital markets. They also echo the sequencing of this paper’s empirical inquiry.
a. Blueprint for adaptive regulation
i. Principle #1: modular regulatory architecture
The first principle concerns the scope of adaptation. Complex adaptive systems maintain resilience through modular components that can evolve independently without destabilising the whole.Footnote 144 Transposing this insight to regulation, future-responsive frameworks would separate essential from non-essential elements, and would thus create distinct layers subject to different revision mechanisms.
Each legislative act should explicitly identify which provisions are essential and which are not. This distinction clarifies the Commission’s scope for adaptation through delegated and implementing acts; it prevents legal uncertainty and modifications that exceed the legislative framework.Footnote 145
Essential elements (e.g., core definitions, fundamental rights protections…) remain stable under this principle, as they are modifiable only through legislative procedures. Non-essential elements (e.g., technical standards, operational procedures…) are delegated to faster revision mechanisms such as implementing acts or technical specifications.Footnote 146 This layered approach enables incremental adaptation without wholesale reform, and allows regulation to respond to technological change while maintaining legal predictability.Footnote 147 This mirrors how biological systems achieve robustness through hierarchical organisation, where fast-changing peripheral elements adapt while core functions remain stable.Footnote 148
The AI Act’s architecture already gestures toward such modularity, though incompletely. It delegates substantial authority to amend annexes and procedural regimes, but core definitional frameworks remain frozen. A modular approach permits controlled evolution even of fundamental concepts, subject to heightened procedural safeguards rather than complete insulation from change.
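The layering logic of Principle #1 can be sketched schematically. The snippet below is an illustrative model only: the layer names, example provisions and revision routes are assumptions chosen to mirror the essential/non-essential distinction described above, not features of any existing Act.

```python
# Illustrative sketch of Principle #1: each regulatory layer maps to its
# own revision mechanism. Layer names, provisions and routes are
# assumptions for illustration.
REVISION_ROUTES = {
    "essential": "ordinary legislative procedure",
    "operational": "delegated act",
    "technical": "implementing act / technical specification",
}

provisions = {
    "core definitions": "essential",
    "fundamental rights safeguards": "essential",
    "conformity procedures": "operational",
    "reporting templates": "technical",
}

def revision_route(provision):
    """Return the (slower or faster) revision channel for a provision."""
    return REVISION_ROUTES[provisions[provision]]

print(revision_route("core definitions"))     # ordinary legislative procedure
print(revision_route("reporting templates"))  # implementing act / technical specification
```

The design point is simply that the mapping itself is explicit: every provision declares, ex ante, which channel governs its evolution, so adaptation never proceeds through an ambiguous route.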
With the scope of adaptability set, the question shifts from what can change to how we know when change is needed. This is where Principle #2 enters. If modularity provides the skeleton of an adaptive regime, sensing is its nervous system.
ii. Principle #2: distributed sensing
Adaptive systems require continuous feedback from their environment to remain responsive.Footnote 149 Current EU digital regulation suffers from what might be termed “sensory poverty”: reliance on periodic reviews and prose-based reporting that capture only fractional glimpses of market dynamics. Future-responsive regulation demands comprehensive monitoring infrastructure across three dimensions.Footnote 150
First, mandatory monitoring obligations require regulators to collect implementation data systematically across all major regulatory functions, not merely in response to specific concerns.Footnote 151 Second, machine-readable formats must structure reporting so regulators can process data automatically.Footnote 152 Third, real-time data pipelines must be established where feasible, particularly in critical, fast-moving markets, where delayed feedback risks regulatory obsolescence.Footnote 153
The Digital Services Act’s requirement for API-based monitoring of very large online platforms provides a (partial) template. Machine-readable reporting should become the default across all digital regulations, as it turns oversight from manual assessment into computational pattern recognition. This shift from episodic to continuous monitoring will also enable detection of phase transitions (moments when events trigger systemic shifts) before they cascade through the regulatory domain.Footnote 154 Fundamentally, machine-readable reporting has a crucial proportionality function. Requiring firms to generate data that regulators cannot meaningfully process drifts into disproportionate administrative load, detached from regulatory purpose. The inverse is also true. Once authorities build the infrastructure to exploit standardised data, compliance costs convert into institutional learning. Proportionality is then restored; means and ends realign. In this light, machine-readable data analysis is not a procedural embellishment but a constitutional baseline.
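The “phase transition” detection that continuous reporting would enable can be illustrated with a deliberately simple sketch: flag the moment a monitored metric breaks out of its recent historical range. The window size, threshold factor and the incident series are illustrative assumptions, not a proposed regulatory standard.

```python
# Minimal sketch of phase-transition detection on a continuously
# reported metric: flag observations that exceed a multiple of the
# recent rolling baseline. Window and factor are illustrative choices.
def detect_shift(series, window=4, factor=2.0):
    """Flag indices where a value exceeds `factor` times the mean of
    the preceding `window` observations."""
    flags = []
    for i in range(window, len(series)):
        baseline = sum(series[i - window:i]) / window
        if baseline > 0 and series[i] > factor * baseline:
            flags.append(i)
    return flags

# e.g. monthly counts of reported harm incidents (hypothetical data)
incidents = [10, 12, 11, 9, 10, 11, 30, 12]
print(detect_shift(incidents))  # [6]
```

With annual prose reports, the spike at index 6 would surface months after the fact; with a structured data pipeline, even this naive detector surfaces it in the same reporting cycle.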
Now, one word of caution. These monitoring capacities, however advanced, cannot overcome a structural limitation that confronts any monitoring system, i.e., the difficulty of detecting foregone economic activity. Regulatory frameworks can track compliance costs, identified harms, etc. They cannot observe counterfactual innovation that regulation prevented from emerging. Companies never founded, products never developed, and business models never attempted leave no trace for regulators to monitor.Footnote 155
Addressing this limitation requires indirect measurement strategies. Comparative analysis across jurisdictions with varying regulatory stringency can illuminate innovation differentials attributable to regulatory choices. Surveys of entrepreneurs and investors about abandoned projects due to regulatory barriers provide signals about foregone opportunities, though such evidence remains inherently incomplete. Regulatory sandbox programs that temporarily exempt participants from specified requirements generate controlled experiments revealing innovation that baseline rules might suppress. These approaches mitigate rather than solve the counterfactual problem. The asymmetry between observable compliance costs and invisible opportunity costs persists. It biases adaptive systems toward detecting regulatory excess while missing regulatory harm in the form of stifled innovation.
Yet imperfect monitoring proves superior to its absence. Title II documented that most EU Digital Acts lack systematic data collection obligations, machine-readable reporting requirements, or real-time monitoring infrastructure. The regulatory frameworks analysed operate with minimal observational capacity of any kind. Distributed sensing mechanisms, even with their structural limitations in detecting foregone opportunities, represent a substantial improvement over regulatory systems that proceed without empirical feedback. The choice is not between perfect observation and imperfect observation, but between structured monitoring that captures some regulatory effects and ad hoc assessment that captures too few of them.
With the sensors switched on, the question shifts from detecting change to acting on it. This is where Principle #3 enters. If sensing is the nervous system of an adaptive regime, triggering mechanisms are its reflexes.
iii. Principle #3: pluralistic triggering mechanisms
The Commission’s current monopoly on adaptation initiatives creates a single point of failure in regulatory evolution.Footnote 156 Complex systems achieve resilience through redundancy: multiple pathways to the same function prevent single points of failure from collapsing the entire system.Footnote 157 Future-responsive regulation should therefore establish multiple channels for triggering reviews. Institutional learning is indeed stimulus dependent. Different triggers generate different cognitive responses, whether epistemic, bargaining-based, hierarchical or reflexive.Footnote 158 These responses are not functionally equivalent. Each embeds its own distortions. A single triggering mechanism therefore hardwires one mode of learning into the system and suppresses alternative adaptive pathways.
Under this principle, Member States, national agencies and sectoral bodies possess formal powers to initiate regulatory reviews based on observed implementation challenges.Footnote 159 This does not require that multiple actors reach consensus for every adaptation, but rather that each can independently trigger review processes when specific indicators are met. The AI Act’s multi-tiered institutional architecture, with its AI Board empowered to propose amendments, offers a partial model, though one still requiring expansion to include national-level triggers. For example, a qualified minority of Member States representing at least 35% of the EU population could petition for review when they detect implementation failures. The mechanism would set a clear threshold and create a disciplined channel through which signals from the periphery travel back to the centre.
Beyond diversifying the actors who can initiate change, triggering mechanisms must operate through three distinct modalities. First, indicator-based triggers should be anchored in predefined metrics rather than discretionary assessments.Footnote 160 Quantifiable thresholds (e.g., market concentration ratios exceeding specified levels, compliance costs surpassing projected baselines, harm incidents reaching critical frequencies, or innovation rates declining below historical averages) must automatically mandate review. The Chips Act’s Annex II performance metrics exemplify this approach, though their application remains limited to reporting rather than triggering adaptation.Footnote 161 Similarly, the AI Act would gain from a rule that obliges high-risk system providers to keep API endpoints open to national enforcers so that performance metrics, incident rates and distributional results become directly observable. The DMA would log innovation indicators to detect when the flow of new entrants in a digital sector deviates from its historical range. The EU Digital Acts would also record compliance costs borne by SMEs and ease the burden once these costs climb above a defined share of revenue.
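The logic of indicator-based triggers can be made concrete with a short sketch. The metric names and threshold values below are hypothetical assumptions chosen to echo the examples in the text (concentration ratios, SME compliance-cost shares, harm frequencies, entry rates); actual thresholds would be set in the legislative instrument itself.

```python
# Hedged sketch of indicator-based triggers: predefined, quantifiable
# thresholds that automatically mandate review when crossed. All metric
# names and threshold values are illustrative assumptions.
THRESHOLDS = {
    "market_concentration_hhi": lambda v: v > 2500,   # concentration ratio
    "sme_compliance_cost_share": lambda v: v > 0.02,  # share of revenue
    "harm_incidents_per_month": lambda v: v > 50,     # critical frequency
    "new_entrants_vs_baseline": lambda v: v < 0.5,    # innovation decline
}

def triggered_reviews(metrics):
    """Return the indicators whose thresholds mandate a review."""
    return [name for name, crossed in THRESHOLDS.items()
            if name in metrics and crossed(metrics[name])]

observed = {
    "market_concentration_hhi": 3100,
    "sme_compliance_cost_share": 0.015,
    "new_entrants_vs_baseline": 0.4,
}
print(triggered_reviews(observed))
# ['market_concentration_hhi', 'new_entrants_vs_baseline']
```

What matters is not the particular numbers but the structure: review is mandated by the data crossing a published threshold, not by a discretionary judgement that conditions “where appropriate” have been met.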
Second, systematic review cycles must embed mandatory policy iteration protocols that go beyond mere reporting.Footnote 162 Each regulation must incorporate tiered review periods, perhaps annual for technical annexes, triennial for operational provisions and quinquennial for core frameworks, that require not just assessment but concrete recommendations for change.Footnote 163 These reviews are structured as policy iteration exercises, where regulators must either propose specific adaptations based on accumulated evidence or provide reasoned justification for maintaining the status quo.
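The tiered cadence just described (annual, triennial, quinquennial) can be expressed as a trivial schedule. The layer names are taken from the text; the modular-arithmetic scheduling itself is an illustrative assumption about how such cycles might be operationalised.

```python
# Sketch of tiered review cycles: annual for technical annexes,
# triennial for operational provisions, quinquennial for core
# frameworks, as suggested in the text.
REVIEW_PERIODS = {
    "technical annexes": 1,
    "operational provisions": 3,
    "core framework": 5,
}

def reviews_due(years_since_entry_into_force):
    """Which regulatory layers fall due for review in a given year."""
    return [layer for layer, period in REVIEW_PERIODS.items()
            if years_since_entry_into_force % period == 0]

print(reviews_due(3))  # ['technical annexes', 'operational provisions']
print(reviews_due(5))  # ['technical annexes', 'core framework']
```

One design consequence is visible immediately: in year 15 all three cycles coincide, which is precisely the kind of alignment moment at which a full policy-iteration exercise, rather than mere reporting, would be required.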
Third, emergency triggers must enable rapid response to unforeseen developments without waiting for scheduled reviews or indicator thresholds.Footnote 164 When new technologies fundamentally alter market dynamics or when regulatory failures create immediate risks, streamlined procedures must allow expedited adaptation while maintaining appropriate safeguards. This requires the use of exogenous event clauses that require reassessment following pre-defined emergency situations.
This multi-modal, multi-actor approach ensures that adaptation can occur through routine learning, responsive adjustment, or crisis intervention as circumstances demand. This creates what complexity scientists term “distributed control” where no single agent commands the system, yet coherent behaviour emerges from the interaction of multiple decision-makers responding to local information.Footnote 165 It transforms the regulatory system from one dependent on a single decision-maker to a distributed network capable of detecting and responding to signals from multiple sources. It prevents both regulatory sclerosis and capture. Importantly, the transparency of these trigger mechanisms resolves the apparent tension between adaptiveness and legal certainty. The EU legal principle of legal certainty does not demand regulatory stasis. As the Court of Justice has consistently held, it requires that “EU rules enable those concerned to know precisely the extent of the obligations imposed on them.”Footnote 166 Transparent triggering mechanisms satisfy this requirement not by freezing rules in time, but by making the conditions and processes of change themselves predictable. Regulated entities can anticipate when reviews will occur, what indicators will prompt adaptation, and through what procedures changes will be implemented.Footnote 167
Once a trigger is pulled, the question shifts from whether to adapt to how adaptation is remembered and refined over time. This is where Principle #4 enters. If triggers are the reflexes of an adaptive regime, institutional memory is its long-term intelligence.
iv. Principle #4: networked institutional memory
Adaptation without learning merely replaces one static configuration with another.Footnote 168 Future-responsive regulation demands institutional structures capable of both recursive evaluation within individual regulations (a hallmark of complex adaptive systems, which modify their behaviour based on accumulated experience, creating what complexity scientists call “path-dependent” institutional learningFootnote 169) and coordination across the regulatory corpus. This requires a two-tier institutional architecture that addresses both vertical learning and horizontal coherence.
At the first tier, dedicated learning bodies must bridge operational experience with policy evolution within each regulatory domain. Following the AI Act model, each major digital regulation must establish an expert body with an explicit mandate to synthesise implementation data and serve as the institutional memory of its regulation, documenting failures and successes to inform future adaptations. Take the DMA. A “Digital Markets Implementation Board” would consolidate quarterly compliance reports, maintain public machine-readable logs of all enforcement actions, including violation patterns and sanctions, conduct annual evaluations of whether gatekeeper obligations achieve their intended effects, and issue binding recommendations for regulatory adaptations when the evidence warrants.
At the second tier, an inter-regulatory coordination body must ensure systemic coherence as individual regulations evolve.Footnote 170 Digital markets operate as interconnected systems where changes in one domain cascade across others. They exhibit “tight coupling,” where perturbations in one component rapidly propagate through interdependencies.Footnote 171 When the DMA adapts its gatekeeper obligations, for instance, corresponding adjustments to the DSA’s platform responsibilities might be necessary to maintain alignment. Yet the current EU Digital Acts evolve in isolation, risking fragmentation and conflicting requirements. A “European Coordination Board” bringing together the AI Office, the European Board for Digital Services, the DORA Oversight Forum and other expert bodies would track spillovers across regimes. It would identify friction points early and ensure that knowledge moves across Acts. Every three years, the Board would conduct a meta-review and publish a “state of digital regulation” report assessing whether the EU Digital Acts are evolving on a coherent path or starting to pull apart.
Under this principle, this coordination body does not possess direct regulatory powers but monitors spillover effects and proposes harmonising adjustments. It serves as the connective tissue between domain-specific learning bodies, thus facilitating knowledge transfer and preventing regulatory silos. When patterns observed in one domain have implications for others, such as emerging business models that cut across regulatory boundaries, this body ensures coordinated response rather than fragmented reaction.
Critically, both tiers must be empowered to convene joint reviews when cross-regulatory issues arise. They must also document every regulatory adaptation in public, machine-readable logs that record triggering conditions, stakeholder inputs, impact assessments and decisional rationales.Footnote 172 This record builds accountability and learning: it allows pattern recognition across cycles of adaptation and secures legal certainty through transparency.
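What such a machine-readable log entry might contain can be sketched as follows. The field names and values are purely illustrative assumptions, not an actual EU schema; the point is only that the four elements named in the text (triggering conditions, stakeholder inputs, impact assessments, decisional rationales) can be captured in a structured, parsable record.

```python
import json
from datetime import date

# Hypothetical adaptation-log entry; all identifiers and values are
# invented for illustration and do not reflect any real enforcement record.
entry = {
    "regulation": "DMA",
    "adaptation_id": "DMA-2027-003",  # hypothetical identifier scheme
    "date": date(2027, 3, 1).isoformat(),
    "trigger": {
        "type": "indicator_threshold",
        "description": "Interoperability indicator fell below review threshold",
    },
    "stakeholder_inputs": ["public consultation 2026-Q4", "SME panel submission"],
    "impact_assessment": "Expected compliance-cost increase offset by entry effects",
    "decision_rationale": "Evidence of persistent foreclosure despite existing obligations",
}

# One JSON object per line keeps the log both human-readable and
# machine-parsable, which is what enables pattern recognition across
# cycles of adaptation.
log_line = json.dumps(entry, sort_keys=True)
record = json.loads(log_line)
```

A convention of this kind, one structured record per adaptation, is what would allow the coordination body at the second tier to query adaptations across all Acts rather than reading narrative reports.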
This principle gains salience as EU ex ante instruments move deeper into terrain that used to fall under ex post enforcement. Under Article 102 TFEU, for example, institutional memory grows case-by-case. Judgments generate evidence about market effects, remedies and enforcement errors. Ex ante regimes like the Digital Markets Act short-circuit this learning cycle. By intervening before harm materialises, and by diverting conduct away from traditional enforcement, the system produces fewer decisions and a thinner factual record. The result is a structural memory deficit. Ex ante rules suppress the jurisprudence that would normally replenish the historical record of regulatory performance. Systematic documentation and evaluation are essential substitutes for the learning that ex post enforcement once generated.
b. Implementation
These principles collectively enable “adaptive capacity,” i.e., the ability of a system to maintain core functions while evolving in response to changing environments.Footnote 173 By embedding variation and retention mechanisms directly into regulatory architecture, future-responsive regulation transforms a static rule system into a complex adaptive system capable of co-evolving with the realities it governs.Footnote 174
These principles do not constitute a complete blueprint but rather foundational elements for constructing future-responsive regulation. They recognise that in complex, rapidly evolving markets, the aspiration for permanent rules must yield to modest ambitions of orderly transformation. Yet their implementation confronts objections at two levels. One is conceptual, about the appropriateness of (even adaptive) regulation in domains of genuine uncertainty. The other is operational; it pertains to compliance costs for regulated entities. Both warrant examination.
The case for adaptive regulation presupposes that regulation itself proves necessary. This assumption requires interrogation.Footnote 175 In domains characterised by rapid technological change and genuine uncertainty about welfare effects, premature regulation, however adaptive in design, risks stifling beneficial innovation whose contours cannot yet be anticipated. The AI Act illustrates this tension. Adopted in 2024, the Regulation codified regulatory responses to general-purpose AI models barely two years after their commercial emergence. A public choice perspective suggests caution about framing adaptive regulation as universally superior to regulatory restraint. That said, adaptive mechanisms are specifically designed to compensate for foundational design choices made under conditions of limited empirical evidence. This capacity for correction distinguishes adaptive from static regulation and, in a sense, limits the negative effects of (too) early regulation.
As for the EU Digital Acts, this article takes their existence as given. This methodological choice reflects political realism rather than normative agnosticism. European policymakers have been regulating digital markets and show no sign of slowing down. Given that regulation will occur, my analysis focuses on how regulatory frameworks can be designed to manage this existing reality. The choice is between adaptive and static regulation, between frameworks capable of learning from implementation experience and frameworks that ossify around initial design choices made under conditions of limited information. Where regulators exercise forbearance, the principles articulated here become unnecessary. Where they do not, these principles offer guidance for optimising regulation.
The second objection concerns operational feasibility, and in particular the compliance burden that adaptive regulation places on small firms. The argument holds that adaptive regulation requires companies to monitor legal changes and recalibrate internal compliance systems. These administrative tasks impose greater relative costs on entities lacking dedicated legal departments or sophisticated compliance infrastructure.
The objection carries weight but founders on closer inspection: it misconstrues the relevant comparison. Static regulation that ossifies into irrelevance imposes its own costs. Firms operating under obsolete rules face uncertainty about enforcement priorities, competitive disadvantages relative to non-compliant actors, and potential liability when regulators belatedly update frameworks through enforcement rather than transparent revision. A pharmaceutical company adhering to outdated safety protocols, for instance, may find itself simultaneously over-investing in unnecessary procedures and under-investing in newly understood risks. The costs of regulatory drift, though less visible than compliance expenditures, are real. The choice is not between adaptive regulation and costless stasis. It is between the costs of orderly revision and the costs of accumulated obsolescence.
Concerns about disproportionate impact on small firms can also be addressed through regulatory design rather than dismissed as insurmountable. Exemptions for enterprises below specified size or user thresholds reduce the burden without sacrificing adaptive capacity. The AI Act establishes reduced documentation requirements for small and medium-sized enterprises and provides regulatory sandbox access to facilitate compliance. Similar provisions appear across the EU Digital Acts corpus. Simplified reporting templates, extended transition periods and technical assistance programmes targeted at smaller entities can further mitigate adjustment costs. These mechanisms require conscious incorporation into adaptive frameworks, but they pose no conceptual obstacle.
A final consideration undermines the objection entirely. Static regulation tends toward irreversible stringency. Once a rule is enacted, relaxing its provisions, should that prove necessary, requires the same legislative effort as initial adoption, and political economy dynamics favour retention over repeal. Adaptive regulation, by contrast, permits adjustment in both directions. Revision mechanisms tighten obligations where evidence reveals harm, and relax requirements where implementation demonstrates that rules are unnecessary or counterproductive. This benefit disproportionately favours small firms, precisely because static compliance costs fall heaviest on them.
All in all, these objections circumscribe but do not invalidate the case for adaptive regulation. A more fundamental objection resides in the fact that implementing the principles articulated here requires not just new regulatory instruments but a fundamental shift in regulatory philosophy, from physics to gardening, from engineering static outcomes to cultivating dynamic processes.
IV. Conclusion
This paper set out to define adaptive regulation, to explain why it matters, to measure it across eight recent EU Digital Acts, and to propose design principles to make it work in practice. The empirical analysis revealed that adaptive mechanisms are present, but they are incomplete and still tethered to neoclassical assumptions. Building on complexity science, I distilled four principles for future-responsive regulation that, together, form an integrated system. Modular architecture defines what can change, distributed sensing detects when change is needed, pluralistic triggering determines who can initiate it, and networked institutional memory ensures that each adaptation compounds rather than resets.
All of which points here: adaptive regulation is not an academic indulgence. It is the only way to steer between two cliffs. On one side lies static law, rules that age badly, drift out of sync with their subject, and end up ineffective, counterproductive, or even dangerous. On the other side lies the temptation to ban regulation altogether, on the theory that if static law is harmful, no law must be better. That is the logic behind the calls in the United States to keep AI regulation off the books for a decade.Footnote 176 Both extremes are bad bets. Static law locks you into Monday’s answers for Tuesday’s questions. No law leaves you with no answers at all. Adaptive regulation keeps the middle ground open. It builds the capacity to intervene if and when necessary, because Tuesdays happen.
Supplementary material
The supplementary material for this article can be found at https://doi.org/10.1017/err.2026.10087
Funding Statement
Open access funding provided by Vrije Universiteit Amsterdam.