
The Brussels Side-Effect: How the AI Act Can Reduce the Global Reach of EU Policy

Published online by Cambridge University Press:  19 February 2024

Marco Almada* and Anca Radu
Department of Law, European University Institute, Florence, Italy
*Corresponding author: Marco Almada; Email: Marco.Almada@eui.eu

Abstract

Over the last few years, artificial intelligence (AI) technologies have become embedded in various domains of social life, prompting legislative efforts at both national and international levels. In the European Union (EU), this drive for legislation has been reflected in various legal instruments, notably the proposed AI Act, which is expected to become a global standard through the “Brussels Effect.” This Article argues that, while the AI Act will likely produce a Brussels Effect of its own, such an outcome will be accompanied by a side effect that undermines the EU’s ambition to spread its legislative texts and values in AI governance. Because the AI Act follows the template of EU product safety legislation, its provisions supply limited protection to some of the values the EU policy intends to promote, such as fundamental rights. These shortcomings are compounded by the EU’s active efforts to shape alternative instruments, such as the Council of Europe’s proposed convention on AI, along the lines of the AI Act. As a result, the diffusion of the AI Act as a global standard will have consequences for the EU policy agenda on AI and for the conceptualization of the Brussels Effect.

Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of the German Law Journal

AI has become a central topic in the EU’s digital regulation agenda. The European Commission has repeatedly emphasized the need to foster the adoption of AI technologies in the EU,Footnote 1 a concern that has also shaped recent legislative reforms such as those dealing with the EU data protection framework.Footnote 2 Policies aimed at various facets of the European digital single market also included provisions focused on AI.Footnote 3 These provisions coexist with the apparent crown jewel of the EU regulatory approach, the AI Act,Footnote 4 a horizontal legal instrument that intends to foster the adoption of safe, trustworthy, and human-centric AI systems in the EU.

The AI Act was initially proposed in April 2021 and is currently undergoing the final steps of the ordinary legislative procedure. It reflects a two-pronged EU AI strategy that seeks both to turn the EU into a “world-class AI hub” and to ensure the safety and trustworthiness of AI systems used within the Union.Footnote 5 Within this scheme, the EU regulatory framework for AI is expected to follow a value-based approach. More than that, it is expected to promote European values worldwide.Footnote 6 However, the development of AI technologies is primarily led by corporations based, for the most part, outside the EU.Footnote 7 How is a legal instrument internal to the EU legal order supposed to have a global impact on the spread of EU values for AI governance?

One of the mechanisms posited for that global influence is the so-called Brussels Effect.Footnote 8 Because the EU single market comprises hundreds of millions of consumers with considerable spending power, access to that market is extremely attractive to the businesses that cater to these consumers. This attractiveness means that, under certain circumstances, companies might adhere to stringent EU standards in their global operations, while other jurisdictions might pattern their own rules after the EU approach. Given the peculiarities of AI-related markets, there has been some debate about whether the AI Act will deliver the Brussels Effect.Footnote 9

In this Article, we argue that the AI Act’s likely Brussels Effect creates the risk of a strong side effect: that is, the spread of the Act as a regulatory template for the world might undermine many of the European values the EU AI strategy is meant to promote. On the policy side, the risk of a Brussels Side Effect maps an internal tension within EU digital policy because promoting the AI Act as a global standard is likely to also spread many of the shortcomings associated with the EU regulatory approach. On the theoretical side, our argument complicates one of the Brussels Effect's tenets: that the EU can set global regulatory agendas because EU regulation is strict. While there is little reason to doubt that stringency contributes to the occurrence of a Brussels Effect, the risk of a side effect in the AI Act suggests that success in spreading the form of EU law may play against some of the policy goals the said effect is expected to promote.

To make these interconnected points, the Article proceeds as follows. In Part A, we provide an overview of the AI Act, showing that some aspects were designed to increase the probability of a Brussels Effect in AI governance. However, the overall regulatory arrangement has considerable limitations when it comes to the protection of fundamental rights, democracy, and the rule of law. In Part B, we argue that these shortcomings are likely to spread globally, as they are not sufficient to prevent the emergence of a Brussels Effect for the AI Act. And, as the shortcomings of the AI Act gain global traction, this success in exporting regulatory standards can lead to a reduced level of fundamental rights protection. Such a concern is particularly salient in the context of the Council of Europe’s (CoE) proposed convention on AI, human rights, democracy, and the rule of law, which we examine in Part C. While the convention could provide an escape valve for the limits of the AI Act regarding fundamental rights, the EU’s current position in the CoE negotiations threatens this potential by shaping the convention into a vehicle for the diffusion of the AI Act approach. The pursuit of a Brussels Effect in AI is thereby put before the values the effect is expected to promote. Finally, we conclude the Article with some remarks on the impacts of this phenomenon on AI regulation and the study of the Brussels Effect.

A. The EU Approach to AI Regulation

As mentioned above, AI regulation in the EU is a two-headed beast. On the one hand, it follows the market imperative at the core of the European project: the AI Act is expected to create a single market for AI,Footnote 10 preventing market fragmentation and supplying market actors with the legal certainty they need to operate in all 27 EU Member States.Footnote 11 On the other hand, it is a value-laden instrument that restricts access to the EU single market to “trustworthy” AI systems compliant with Union values, notably those that protect fundamental rights.Footnote 12 The text of the AI Act reflects a balance between those two aims.

In crafting such a balance, the EU must observe certain constraints. Some of these constraints are constitutional: unlike a sovereign state, the EU can only legislate within the limits of the competencies conferred on it by its Member States.Footnote 13 Most of these competencies are sector-specific, allowing the EU to act in a specific domain, such as environmental protection or antitrust. But, given its goal of avoiding market fragmentation, the AI Act is designed as a horizontal instrument; that is, it is a regulation that applies to all AI systems.Footnote 14 And the protection of fundamental rights does not, in itself, provide a basis for EU legislation.Footnote 15 As a result, the AI Act shoehorns the protection of fundamental rights into the most general legal basis available for legislation: Article 114 of the TFEU.Footnote 16 This provides a general competence for approximating rules on the internal market.Footnote 17

To the extent that the AI Act relies on EU competence to regulate the single market,Footnote 18 it must be framed as an instrument that promotes market integration.Footnote 19 While the EU has a vast repertoire of approximation measures, the AI Act settles upon a specific method: product safety regulation.Footnote 20 By adopting this frame, the EU can benefit from decades of expertise with previous product safety instruments, which have become a global standard.Footnote 21 And, by drawing upon the mechanisms for EU-wide coordination in product risks, the EU can—at least in theory—avoid the enforcement issues currently plaguing its data protection framework.Footnote 22 The AI Act is, therefore, a reasoned response to the goals and constraints of EU AI strategy.

In the following pages, we provide an overview of how the EU adapted its product safety approach to AI and then discuss some of the shortcomings of that approach. These shortcomings hinder the pursuit of goals beyond the original product safety framework, particularly those connected to the protection of fundamental rights. Hence, the AI Act, despite its horizontal scope, is not sufficient to realize the value-setting ambitions of EU policy.

I. The AI Act in a Nutshell

All product regulations require a clear definition of the target product, and the AI Act is no different. In line with a growing international consensus,Footnote 23 the Act’s main regulatory target is the AI system, which is defined as a machine-based systemFootnote 24 that is designed to operate with a certain degree of autonomy, may adapt itself after deployment, and generates outputs such as predictions, recommendations, or decisions based on objectives supplied either explicitly or implicitly to the system.Footnote 25 Some of the Act’s provisions specify the technical requirements of these systems. Others lay down obligations for the various actors involved in placing these systems on the EU market, putting them into service, or using them. But all these obligations are cast in terms of the AI system perceived as a discrete product.

The AI Act relies on a risk-based approach to govern such AI systems. Under the proposed framework, risks are subject to a top-down classification in which the EU legislator defines three categories of risk and specifies which applications of AI fall into each category.Footnote 26 Following a precautionary approach, some applications of AI are prohibited because the risks they pose to individuals’ health, safety, and fundamental rights are deemed unacceptable.Footnote 27 Most other AI systems are not considered to pose a substantive risk by mere virtue of their intended application.Footnote 28 The AI Act does not introduce specific rules for such systems,Footnote 29 leaving their governance to voluntary codesFootnote 30 and to other legislation such as the General Product Safety Regulation.Footnote 31 Instead, it devotes most of its substantive provisions to applications deemed to pose a high risk to health, safety, and fundamental rights.Footnote 32

High-risk AI systems are governed by rules derived from the New Legislative Framework for product safety.Footnote 33 Under such an approach, the providersFootnote 34 of AI systems must ensure that the system meets specific technical requirements before it is placed on the EU single market, put into service, or otherwise used.Footnote 35 Conformity with such requirements is usually evaluated by the providers themselves through internal controls, but external assessment may sometimes be required.Footnote 36 Even if internal controls are sufficient de jure, providers might find themselves de facto required to rely on external forms of validation, such as certification mechanisms or conformity with harmonized technical standards set by the Commission.Footnote 37 Such external validation helps ensure full coverage of the legal requirements, which only provide abstract descriptions of the outcomes the technical measures must achieve.

In the original Commission text, the risk profile of an AI system is based on its intended application. However, such an arrangement is ill-suited to models that can be used for various tasks, such as the large language models popularized in 2023.Footnote 38 This inadequacy follows from the fact that such models are often intended to serve as components for other AI systems. For example, it has been suggested that systems such as ChatGPT can be used by public-sector bodies as components to build their own systems for automating interactions with citizens.Footnote 39 In such circumstances, the provider of the AI system would be the legal or natural person who repurposes this general system for a specific purpose. However, that provider might lack the means to effect technical changes on the tools they use.Footnote 40 Therefore, the effectiveness of the technical requirements described above would be contingent on the technical decisions made by the provider of the general-purpose system used in a high-risk application, decisions which seldom account for all the risks in a particular high-risk context.

To address these shortcomings, both the Council General ApproachFootnote 41 and the political compromise arrived at by the European ParliamentFootnote 42 agreed on the need to adopt specific rules for AI systems without a narrowly-defined purpose.Footnote 43 In the AI Act’s final compromise text, these systems are subject to a special legal framework defined in Title VIIIA. Just like the EU approach to AI systems with a defined purpose, the framework for general purpose AI systems relies on the application of different regulatory obligations in accordance with an ex ante classification of risk. All general purpose AI systems are subject to the information disclosure requirements outlined in Article 52c AI Act, but systems deemed to create “systemic risk” are subject to additional duties regarding the evaluation of such risks and accountability to external actors (Articles 52d and 52e AI Act). Once again, the AI Act directs the bulk of its obligations and enforcement mechanisms towards a small set of systems that are seen as the most critical sources of risk.

Conformity with ex ante requirements addresses the risks anticipated during system design. However, some issues might not be detected beforehand, while others might result from use. To address these gaps in risk response, providers are expected to develop a system of risk management practices for their AI systemFootnote 44 and to collect information about it once it is placed on the EU market.Footnote 45 Providers must take corrective measures or face penalties such as fines or the loss of access to the EU market if any risks are detected—either through the provider’s monitoring practices or by the action of market surveillance authorities.Footnote 46 The AI Act thus tackles the risks associated with AI throughout the entire life cycle of AI systems, from their design to the end of their operation.

II. The Limits of the AI Act

The outline in subheading I above suggests that the AI Act is in an excellent position to achieve the goals that prompted its adoption.Footnote 47 Its reliance on a product safety framework provides legal certainty, as providers can rely on the expectations built under previous product safety lawsFootnote 48 to which they are already subject in many cases.Footnote 49 The extension of that framework to cover risks to fundamental rights addresses many of the concerns raised by AI applications, especially, but not exclusively, in the public sector.Footnote 50 Nonetheless, the AI Act has been subject to extensive critique by scholarsFootnote 51 and civil society organizations.Footnote 52 Why is that?

A complete presentation of the critiques of the AI Act would exceed this Article’s scope. But, for our argument, it is essential to highlight the tension between the fundamental rights aims built into the AI Act’s framework and the product safety instruments used to pursue these goals.Footnote 53 Within product safety legislation, the risk associated with an adverse event can be calculated, at least in principle, by measuring the likelihood of the event and the severity of its consequences; the value of the risk is the product of these two quantities.Footnote 54 Such a definition is ill-suited to capture various risks to fundamental rights, such as those affecting dimensions of fundamental rights that are not amenable to computational representationFootnote 55 or those stemming from the cumulative harmful effects of practices that are not very harmful in and of themselves.Footnote 56 Therefore, the AI Act’s formula of protecting fundamental rights through the same mechanisms used to address health and safety risks introduces a new risk: that important forms of harm to these rights end up neglected.
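
The underlying calculus can be rendered schematically; the formula below is our own illustration of the product safety logic the Act borrows, not a formula prescribed by the Act itself:

\[ R = P \times S \]

where \(R\) is the risk attached to an adverse event, \(P\) its likelihood, and \(S\) the severity of its consequences. A practice whose individual occurrences carry a tiny severity \(s\) yields a negligible per-event \(R\) even when it is repeated across millions of interactions, so the aggregate harm \(n \cdot s\) never registers in the per-event calculation; this is one way cumulative harms to fundamental rights can slip through the framework.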

Another line of concern focuses on the actual rules applied to AI systems. While the rules for high-risk AI are extensive, they are formulated in abstract terms. Consequently, their implementation requires extensive interpretation efforts by the providers of AI systems, who become de facto responsible for determining how the legal requirements are converted into software requirements.Footnote 57 In doing so, they face a technical challenge: expressing the relevant legal requirements as software. Such an expression might be feasible for legal rules that demand little interpretation, but fuzzier rules and legal principles are not so easily represented in computer code.Footnote 58 And, because many AI systems are large-scale systems,Footnote 59 changing them to fix errors in representation—or to cope with changes in the law—can be a slow process.Footnote 60 Reliance on technical measures may thus entrench arbitrary or even wrongful interpretations of the law by the providers of AI systems.Footnote 61
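
The gap between crisp rules and fuzzy principles can be made concrete with a deliberately simplified sketch. Everything in it is hypothetical: the retention period, the function names, and the fairness metric are illustrative assumptions, not requirements drawn from the AI Act or from any harmonized standard.

```python
from datetime import datetime, timedelta

# Hypothetical bright-line rule: logged personal data may be kept for at
# most 90 days. The legal text maps almost mechanically onto a check.
MAX_RETENTION = timedelta(days=90)  # illustrative figure only

def retention_compliant(record_created: datetime, now: datetime) -> bool:
    return now - record_created <= MAX_RETENTION

# Open-ended principle: outputs must not unfairly disadvantage protected
# groups. There is no single correct encoding; the provider must pick a
# metric and a tolerance, and that choice is precisely the interpretive
# power described in the text above.
def fairness_compliant(selection_rates: dict[str, float],
                       tolerance: float = 0.8) -> bool:
    # One of many possible operationalizations: a ratio test on the
    # selection rates observed for each group.
    rates = list(selection_rates.values())
    highest = max(rates)
    return highest == 0 or min(rates) / highest >= tolerance
```

The first function is a faithful rendering of its rule; the second is only one defensible reading among many, and entrenching such a reading in large-scale software is exactly the kind of de facto interpretive choice the critique targets.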

The AI Act includes a few mechanisms that may provide guidance and prevent arbitrary interpretation by providers, such as the (currently narrow) requirements for external certification and the pervasive role that harmonized standards play in evaluating compliance with the Act.Footnote 62 Still, reliance on external actors has issues of its own. Technical standards and certification schemes are both produced by private bodies,Footnote 63 in which deliberations are framed in technical language and are seldom open to the general public.Footnote 64 As a result, external observers and technical decision-makers alike have raised considerable doubts about whether standard-setting organizations and other technical bodies are legitimate parties for specifying norms directed at protecting fundamental rights.Footnote 65

If these and other critiques hold, protecting fundamental rights through the AI Act’s product safety framework might be ineffective. The Act might fail to respond to issues that cannot easily be framed in the kind of quantified risk product safety thrives on, and it might fail to address all dimensions of the risks it does detect. Furthermore, the outsized role of opaque private actors in codifying fundamental rights means that the AI Act may produce negative outcomes for other public interests, such as democratic governanceFootnote 66 and the rule of law.Footnote 67 The AI Act, meant to protect the EU from the risks associated with AI technologies, may itself run against many values the EU is expected to uphold.Footnote 68 And, in doing so, the Act undermines one of its stated purposes.

B. The AI Act as a Global Standard?

Part A situated the AI Act and its potential shortcomings within the EU legal order. However, the Act is not solely directed at producing results within the EU single market. Instead, as discussed in the introduction, it is expected to position the Union as a global leader in AI. Is such leadership possible if the AI Act provides insufficient protection for fundamental rights and other public values?

At first glance, the answer to this question might seem to be “no.” One of the necessary conditions for the Brussels Effect is that the regulation must be stringent, such that compliance with it is sufficient to meet the demands of other jurisdictions. Because sovereign states can establish norms to address fundamental rights directly—and, indeed, are currently doing soFootnote 69—providers of AI systems would need to adopt additional measures to cope with those fundamental rights requirements that cannot be cast in terms of product safety standards. While such a conclusion is reasonable in light of our previous discussion, current analyses of the AI Act lead us to argue, in Part B, subheading I, that a Brussels Effect might happen nonetheless. The limits of the AI Act regarding fundamental rights protection are less salient in some applications, and the other requirements for such an effect are still present. So, the AI Act might become a global standard despite the shortcomings of the product safety framework as a guardian of fundamental rights.

However, success in spreading the AI Act’s regulatory framework does not automatically lead to success in setting the European approach to AI as a global standard. In Part B, subheading II, we argue that the opposite is true. Any Brussels Effect from the AI Act will likely produce a side effect, spreading around the world norms that pay insufficient attention to the values the EU is founded on.Footnote 70 As other jurisdictions pattern their laws after the AI Act, they will adopt a model that, as seen in Part A, does not cover all the dimensions of fundamental rights and public interests it is meant to. Furthermore, the global adoption of an insufficient standard of value protection may come back to haunt the EU; for example, by restricting the possibility of fixing the AI Act’s deficits through international treaties. Under these circumstances, spreading the letter of EU AI regulation can constrain the EU’s ability to shape the values guiding the adoption of AI at the European and global levels.

I. The AI Act’s Potential Brussels Effect

From the outset, the AI Act was designed with its worldwide effects in mind.Footnote 71 Such a global concern reflects the Commission’s ambition to position the EU as a global leader,Footnote 72 informed by its previous experiences with the impact of EU digital regulation on the laws of other jurisdictions and on international treaties.Footnote 73 While the EU cannot force other jurisdictions to follow its lead, it can sway their regulatory approaches through various mechanisms. Some of those involve bilateral or even multilateral action, as is the case of the CoE convention on AI we examine in Part C. But, under certain circumstances, the EU can also exercise unilateral influence via the Brussels Effect.Footnote 74

The Brussels Effect is a market-based mechanism for regulatory exportation. Through the soft coercion enabled by its strong internal market, the EU often spreads its regulatory standards even where its trade partners do not favor them.Footnote 75 In its original formulation, the Brussels Effect was mainly seen as a de facto phenomenon in which companies comply with EU standards—even when formally subject to less strict ones—because economic factors push them to do so.Footnote 76 But it can also take a de jure form as other jurisdictions emulate the EU regulatory approach due to corporate lobbying,Footnote 77 political pressure to catch up with technological change, or other factors.Footnote 78 Either way, scholarship on EU regulation has identified five conditions that must be met for policy diffusion through the Brussels Effect.Footnote 79

If the AI Act is to produce a Brussels Effect, market size is vitally important.Footnote 80 Based on the available evidence, the EU is likely to be a large market for AI systems. The substantial population covered by the EU single market, and its wealth, make it highly attractive to providers of consumer goods based on AI. Similarly, large online platforms are unlikely to forgo access to the millions of users based in EU Member States.Footnote 81 The EU also offers substantial markets for AI systems marketed for business uses and public sector applications.Footnote 82 Because conformity with the AI Act is a prerequisite for selling AI systems in the EU single market, the risk of being pushed out of that market will likely be salient for providers of AI systems who operate globally.

It is also relatively straightforward to show that the AI Act meets the second requirement for a Brussels Effect: regulatory capacity. Footnote 83 Because AI is a novel technology that has undergone significant developments in the last few years,Footnote 84 there is little established knowledge on how to regulate AI systems. The EU adopted two strategies to mitigate this general ignorance. First, it worked to develop extensive expertise in AI. AI technologies were a focal topic in the reform of EU data protection law in the mid-2010s.Footnote 85 The AI Act was preceded by the work of a high-level expert group formed by people from academia and industry,Footnote 86 and national and EU bodies hired AI experts.Footnote 87 Second, reliance on the product safety framework allows the EU to transpose decades of expertise in interpretation and enforcement to the AI Act rather than having to create an entirely new set of institutions and practices.Footnote 88 As such, few jurisdictionsFootnote 89 have the technical and institutional capabilities available to the EU for AI regulation.Footnote 90

A Brussels Effect for the AI Act also requires stringency.Footnote 91 If EU standards are more demanding than those of other jurisdictions, compliance with the former is likely to be enough for the latter. Here, the AI Act stands on less solid ground. Regarding systems that are neither prohibited nor classified as high-risk, the AI Act does not establish a comprehensive legal framework, mostly limiting itself to disclosure requirements, such as those that cover certain applications under Article 52 of the Act. However, some jurisdictions have proposed stricter rules for particular applications, such as online recommender systems.Footnote 92 Others have proposed additional rules for non-high-risk AI, which often rely on mechanisms beyond the technical requirements imposed by the AI Act.Footnote 93 Therefore, conformity with the AI Act might not be sufficient to ensure that a low-risk AI system complies with the laws of every jurisdiction in which it is deployed.

By contrast, the AI Act adopts stringent approaches both for high-risk AI systems and for general purpose AI systems with systemic risk.Footnote 94 But, as discussed in Part A, subheading II, some of the public interest concerns driving AI regulation are not covered by a product safety framework, as is the case for important dimensions of fundamental rights. Any third-country legislation that directly touches upon issues not covered by the AI Act will thus create requirements that go beyond the EU requirements. As of February 2024, there are a few domains in which the AI Act can be said to be more stringent than the practices of other jurisdictions: its new rules for general purpose AI systems with systemic risk, its prohibition of some categories of AI systems under Article 5, and, for high-risk AI systems, the regulation of issues already covered by the product safety framework. Accordingly, a Brussels Effect is more likely when it comes to these aspects of the AI Act.

An additional requirement for a Brussels Effect is that regulation must be directed at an inelastic target: a product or producer that remains captured by the regulatory regime and cannot escape it by relocating elsewhere.Footnote 95 In the case of the AI Act, it is possible to identify two different forms of elasticity. The first one concerns its scope. If providers could simply supply their AI systems from outside the EU, they would have little incentive to comply with a more stringent framework, let alone extend it worldwide. But the AI Act curtails this possibility through a territorial extension mechanism,Footnote 96 which makes its provisions applicable to any AI system whose outputs are used within the EU, even if its providers—or even its users—are based in a third country.Footnote 97 In theory, nothing prevents a provider from leaving the EU market altogether or never entering it, but the market size outlined above might prove too tempting for most large-scale providers to forgo.Footnote 98 And, once the decision to join the EU single market is made, providers have minimal room for manoeuvre to avoid the Act’s scope.Footnote 99 Providers of AI systems used within the EU, or directed towards persons based in the EU, cannot dodge the AI Act as a whole.

A second form of elasticity might arise within the AI Act’s framework. Once a system is subject to the Act, it is assigned to one of the three regulatory frameworks presented in Part A, subheading I, or classified as a general purpose AI system. It might also be subject to additional rules targeted at specific classes of systems.Footnote 100 However, providers can avoid the rules applicable to high-risk AI systems if they declare that their particular system does not pose a high level of risk to the values protected by the AI Act.Footnote 101 This exemption does not require the provider to pursue an external assessment or to have their decision ratified by an authority,Footnote 102 but it must follow the guidelines laid down in Article 6(2a) AI Act and any subsequent criteria specified by the Commission under the powers delegated to it. By contrast, the definition of a general purpose AI system in Article 3 AI Act only features narrow exclusions from its scope. The systemic risk label, as defined in Article 52a AI Act, also does not afford much flexibility to the provider, as it is entirely determined by external evaluation, which can come either from a decision by the Commission or from the application of pre-defined thresholds such as the number of compute operations used to train an AI system. The result of these changes is that the classification of a system as high-risk is elastic, but only to some extent, whereas the classification of general purpose AI systems is much less so.
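
The inelasticity of the systemic risk label can be illustrated with a minimal sketch. This is our own schematic rendering under stated assumptions: the class, the function, and the threshold constant are hypothetical, and the numerical figure stands in for whatever compute threshold the Act and the Commission ultimately fix.

```python
from dataclasses import dataclass

# Placeholder value only: the binding compute threshold is set in the Act
# and may be updated by the Commission; the figure here is illustrative.
COMPUTE_THRESHOLD_FLOPS = 1e25

@dataclass
class GeneralPurposeModel:
    name: str
    training_compute_flops: float
    designated_by_commission: bool = False  # ad hoc designation decision

def has_systemic_risk(model: GeneralPurposeModel) -> bool:
    # The label turns on external, mechanical criteria: a pre-defined
    # compute threshold or a Commission decision, not the provider's own
    # appraisal of the risks the model poses.
    return (model.training_compute_flops >= COMPUTE_THRESHOLD_FLOPS
            or model.designated_by_commission)
```

Unlike the self-declaration route available for high-risk classification under Article 6(2a), nothing in this test depends on the provider’s judgment, which is why the classification leaves so little room for manoeuvre.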

Finally, the Brussels Effect also requires non-divisibility of the regulated object.Footnote 103 If providers can create separate AI systems for the EU market, they do not need to comply with EU standards in other jurisdictions. This non-divisibility is entirely absent from the regulation of prohibited AI systems, as providers can continue commercializing these systems in jurisdictions that allow them to do so.Footnote 104 Some lawful applications, such as AI systems made for the public sector and other tailor-made applications, are also amenable to segmentation, as these products are already highly differentiated for their customers.Footnote 105 Therefore, AI markets such as those are unlikely to see a substantial Brussels Effect.

Still, the current approaches to AI promote non-divisibility in other applications. Most current advances in AI technologies, especially those concerning general purpose AI models, rely on machine learning systems that require vast amounts of data and extensive computing capabilities for their training and use.Footnote 106 Only a few economic actors have the resources necessary to create such systems.Footnote 107 As a result, most AI providers build their AI systems compositionally, starting their work from components or even fully-trained models offered by these large-scale providers,Footnote 108 who effectively become suppliers of digital infrastructure.

The compositional construction of AI technologies further reduces the divisibility of the products regulated by the AI Act. To the extent that AI technologies rely on centralized infrastructures, including general-purpose AI systems,Footnote 109 they preclude smaller providers from spinning off EU-specific versions of their products. Even large providers might find the costs of maintaining an EU-specific version of their technical infrastructure excessive. Market segmentation might be a financially unsound move in those cases because creating EU-specific products is more expensive than global compliance with EU law requirements. Similarly, this reliance on components and general-purpose AI tools promotes non-divisibility within the EU market, as low-risk AI systems built with general-purpose tools will comply with some of the technical requirements imposed on those tools.Footnote 110

Based on the overview above, we agree with those studies that suggest a limited Brussels Effect for the AI Act.Footnote 111 Market factors alone are insufficient to globalize the EU prohibition of certain uses of AI or, indeed, its rules on AI systems falling outside the more strictly regulated classes of AI systems. Even within the latter categories, the spread of EU standards depends on the possibility of product differentiation and the extent to which the product safety framework addresses relevant regulatory concerns. Yet the technical complexity involved in governing AI increases the difficulty of identifying—at least ex ante—situations in which other standards are more stringent than the EU approach. Therefore, the EU standard for high-risk AI will likely shape the governance of these applications worldwide. But, as we shall see, this success in spreading the AI Act framework comes at a price.

II. The Risk of a Brussels Side Effect

Briefly, a Brussels Effect for the AI Act will produce a noticeable side effect: a reduced level of protection for those values that cannot be framed as product safety requirements. The global diffusion of AI safety standards based on the AI Act offers insufficient protection for values such as fundamental rights, democracy, and the rule of law. Even worse, standards based on the AI Act can introduce new risks to these values by imposing norms guided by a restrictive view of them. If these shortcomings of the AI Act are not addressed during its legislative process, the Brussels Effect can lead to a global weakening of values that are dear to the EU legal order.

We postulate that, because of the mechanisms on which the global diffusion of standards relies, the AI Act might weaken the protection of fundamental rights, the rule of law, and other high-level democratic values. First, the aforementioned side effect can be produced de facto through compliance with the AI Act’s technical requirements. Most technical requirements, such as those formulated through technical standards, supply an extensive list of factors that must be observed in their implementation.Footnote 112 Given the EU’s reputation for stringent regulation, providers, users, and the general population will likely believe that conformity with a standard or certification scheme that complies with the AI Act is sufficient to protect fundamental rights and other values.Footnote 113 Yet Part A, subheading II, suggests that such an assumption can break down if the values are formulated in general terms or are not amenable to translation into software rules. If the providers of AI systems are nonetheless expected to comply with the product safety requirements imposed by the AI Act, those fuzzier regulatory goals that escape the product safety framework are likely to be deprioritized in the software design process. Consequently, risks to values not covered by the AI Act might only be detected once they have materialized into harm to individuals and social groups.

Adverse consequences may also emerge from the de jure form of the Brussels Effect. However, such an occurrence is less likely than the abovementioned de facto variant. Most jurisdictions are not subject to the competence constraints that led the EU to frame the AI Act as a product safety instrument. Accordingly, they have the power to adopt other approaches to regulation or supplement product safety regulation with rights-based instruments or other regulatory tools.Footnote 114 However, most forms of AI regulation proposed so far rely extensively on technical knowledge and resources,Footnote 115 which can be in short supply in many jurisdictions.

On the one hand, this technical scarcity might prompt some jurisdictions to adopt regulations more in line with their existing capabilities.Footnote 116 On the other hand, some jurisdictions might outsource the management of regulatory complexity to the EU.Footnote 117 So, legislators worldwide might find themselves replicating the advantages and shortcomings of the AI Act even if they could theoretically do otherwise.

Should the possibility of external side effects from the AI Act be considered in its legislative procedure? A political realist might point out that the EU acts within its powers if it enacts a product safety regulation and has neither the power nor the duty to care about the implications outside the Union’s borders. Such a position, however, would be at odds with the EU’s constitutional duty to promote European values in its relations with the wider world.Footnote 118 And, specifically in the case of the AI Act, it would clash with the stated policy goal of using AI regulation as a vehicle for the global promotion of European values.Footnote 119

Therefore, the Brussels Side Effect postulated in this Article’s title emerges from the peculiar configuration of the AI Act. Given the internal constraints of EU law, the AI Act was shoehorned into a product safety framework. This framework fails to attend to values the EU is constitutionally required to observe, such as respect for democracy, the rule of law, and fundamental rights.Footnote 120 It is nonetheless likely to become a global standard, at least for general purpose AI systems and for high-risk applications, to the extent that the market for AI technologies satisfies the conditions for a Brussels Effect. Under these circumstances, the EU’s ambition of spreading a European approach to AI is derailed by its success in exporting regulatory standards to the world.

C. AI Regulation Between the EU and the Council of Europe

While the EU pioneered the idea of a comprehensive approach to AI regulation, other jurisdictions and international organizations have also crafted their own approaches.Footnote 121 As of 2024, one of the most advanced proposals is the one formulated by the Council of Europe (CoE), an international organization formed by forty-six member States, including all of the EU’s Member States. Because of this direct overlap between the EU’s territorial scope and the potential parties to a CoE treaty on AI, we now turn our analysis to the CoE’s proposal and its interactions with the AI Act.

Since its foundation in 1949, the CoE has acted to protect human rights in Europe. Its activities aim to protect and promote its three pillars: Human Rights, Democracy, and the Rule of Law.Footnote 122 For these purposes, the CoE carries out various activities, notably the elaboration of treaties on topics that affect one or more of these pillars. It is in this capacity that the CoE enters the domain of AI regulation.

Given the potential impact of AI technologies on human rights, democratic values, and the rule of law, the CoE set up an ad hoc Committee on Artificial Intelligence in 2019. This committee, grounded on the Human Rights pillar of the CoE’s competencies,Footnote 123 was set up to examine the feasibility and the potential elements of a convention to deal with the new and future threats posed by AI systems.Footnote 124 The main result of this work, delivered by the end of 2021, was a feasibility study of a legal framework for the development, design, and application of AI based on the CoE’s three pillars.Footnote 125

After delivering this feasibility study, the ad hoc committee was replaced by a new advisory body, the Committee on Artificial Intelligence (CAI). The new committee was meant to follow up on the previous work and draft an “appropriate legal instrument on the development, design, and application of AI systems based on the CoE’s standards on human rights, democracy, and the rule of law, and conducive to innovation, in accordance with the relevant decisions of the Committee of Ministers.”Footnote 126 The addition of “innovation” as a guiding concern for the CAI suggests that the resulting instrument is expected to tackle a problem that also appears in the AI Act:Footnote 127 how to foster the adoption of AI technologies while protecting fundamental public interests.Footnote 128

In December 2023, the CAI decided to publish a “Draft Framework Convention” of its intended Convention on AI, Human Rights, Democracy, and the Rule of Law.Footnote 129 This publication was accompanied by a disclaimer that the draft “does not preclude the final outcome of negotiations in the CAI.”Footnote 130 Still, the published text suggests some convergences and divergences between the CAI’s view of how to regulate AI and the EU’s approach to the AI Act.

The similarities cover essential aspects of both proposals. Beyond the similarity between the CAI’s goals and those guiding the AI Act, the two texts also adopt similar framings of the object of regulation. The CAI text stipulates that AI regulation is directed at “activities within the lifecycle of artificial intelligence systems,”Footnote 131 which could potentially “interfere with human rights, democracy and the rule of law.”Footnote 132 Both instruments establish substantive requirements for protecting human and fundamental rights,Footnote 133 which apply to the uses of AI systems in the public and private sectors.Footnote 134 They both propose horizontal rules for AI that coexist with other legal instruments; for example, the CoE has treaties on data protectionFootnote 135 and cybercrime,Footnote 136 while the AI Act coexists with EU data protection lawFootnote 137 and recent regulations on the digital single market, such as the Digital Markets ActFootnote 138 and the Digital Services Act.Footnote 139 Similar needs and concerns have thus introduced much convergence between the EU’s and the CAI’s approaches to AI regulation.

Yet, the divergences between both approaches might be even more substantial. As we have seen in Part A, the AI Act adopts a risk-based approach with a clear focus on high-risk AI systems. By contrast, the CAI approach combines a risk-based approach with principle-based elements,Footnote 140 which allows it to specify a genuinely horizontal method by setting up principles that apply to all AI applications within its scope. This is not to say that the CAI ignores risk judgements entirely, as it includes obligations for a risk management framework.Footnote 141 However, risk classification in the CAI proposal happens after determining the applicable rules,Footnote 142 whereas the AI Act approach looks at risks before determining the applicable regulatory regime for an AI system.

Another distinction between the AI Act and the Draft Framework Convention comes from the values protected by each approach. While the AI Act claims to pursue various public interests,Footnote 143 such as the protection of fundamental rights, democracy, and the rule of law, it mostly does so by appending these values to product safety provisions that require providers to address risks to health and safety. The Draft Framework Convention, instead, includes general principles such as equality, non-discrimination,Footnote 144 human dignity, privacy, accountability, transparency and oversight, safe innovation, inclusive democratic processes, and the preservation of public health and the environment. To support these manifold goals, the CAI supplements the Draft Framework Convention with a draft methodology entitled the Human Rights, Democracy and the Rule of Law Risk and Impact Assessment (HUDERIA). This methodology seeks to supply “clear, concrete and objective criteria” to identify sensitive contexts in which AI systems are likely to pose “significant levels of risk to the enjoyment of human rights, the functioning of democracy and the observance of the rule of law.”Footnote 145 Additionally, HUDERIA lays down procedural mechanisms for risk and impact assessment, access to remedies, and regular monitoring, to be put in place by AI developers, users, and intermediaries. This holistic approach to risk assessment clearly contrasts with the product safety mechanisms discussed in Part A, so the CAI’s full vision for AI regulation departs from the AI Act even more than the divergences identified in the Draft Framework Convention alone would suggest.

Such divergences create a few issues for the AI Act’s potential Brussels Effect. If the CAI approach turns out to be more stringent than the EU’s, the EU instrument loses some of its appeal as a global standard.Footnote 146 In contexts where the AI Act and the CAI instrument overlap, there is also the risk of inconsistencies between the regulatory approaches. These inconsistencies may follow from the differences between the product safety and principle-based approaches to regulationFootnote 147 or from differences in balancing market imperatives and human rights (and, in the case of the CAI approach, democratic values and the rule of law). Either way, the resolution of these inconsistencies will pose problems for jurisdictions that must apply AI Act-style requirements and CAI-style requirements at the same time.Footnote 148

How do we resolve these potential clashes between the instruments? Because institutional limits to EU competences constrain the AI Act,Footnote 149 it cannot be altered to match the full scope of the CAI approach. So, any convergence between these two instruments would come in one of two ways: either the AI Act’s scope is reduced, or the CAI provisions change and become more similar to the AI Act.

Under the former approach, the AI Act would be stripped of its fundamental rights requirements, which the CAI approach to human rights would then cover. Such a reframing would cast the AI Act as a pure product safety instrument, removing the need for the compromises made to extend that framework to the protection of fundamental rights. However, it would require the EU to cede part of its role in defining fundamental rights rules for AI regulation. While all twenty-seven EU Member States and the EU participate in the CAI, the negotiations for an international law instrument involve all forty-six CoE member States and observer states such as Argentina, Australia, Canada, Costa Rica, Israel, Japan, Mexico, Peru, the United States, and Uruguay.Footnote 150 Any rules on fundamental rights produced in this context would be heavily influenced by EU perspectives, but they would also entail some compromise between EU values and interests and those of the non-EU parties involved in the negotiation.Footnote 151

Instead, the EU has decided to solve conflicts between the AI Act and the CAI approach by pushing the CoE towards the former. In November 2022, the EU Council authorized the Commission to start negotiations on behalf of the EUFootnote 152 to ensure consistency between both approaches.Footnote 153 The specific positions the Commission is expected to pursue are detailed in an addendum to the decision, which—at least in its draft version—states that the Union should push the CoE towards a risk-based approachFootnote 154 that is fully compatible with the AI ActFootnote 155 and recognizes an important role for technical standards and certification mechanisms.Footnote 156 In short, the Commission’s position in the negotiations should be that of shaping the CoE convention into an approach that includes the core elements of the AI Act.

This direct exercise of influence in the negotiations cannot be mistaken for a Brussels Effect, as it requires the EUFootnote 157 to act in a multilateral forum rather than unilaterally. Yet this multilateral action is intricately connected to the EU’s unilateral influence on regulation. On the one hand, the potential replication of AI Act provisions in the CoE convention removes a potential competitor to the AI Act in the global sphere. On the other hand, the expectation of a Brussels Effect from the AI Act strengthens the EU’s negotiating position in the final rounds of negotiation for the convention.Footnote 158 The EU’s position in the CoE negotiations is thus coherent with its ambitions to shape the regulation of AI technologies globally.Footnote 159 But the side effects of regulating AI through a product safety approach might then harm the protection of fundamental rights, democracy, and the rule of law in the EU and worldwide.

D. Conclusion

The above arguments have implications for the theoretical debates on the Brussels Effect and for the regulatory debates on AI governance. Our contribution to the scholarship on the Brussels Effect is narrow. While we believe the Side Effect may appear in other areas—particularly in other branches of EU digital regulation—we make no direct attempt to establish its existence elsewhere. Still, these arguments show some of the difficulties in assessing the stringency criterion for the occurrence of the Brussels Effect.

Notably, Bradford herself has arguedFootnote 160 that the Brussels Effect provides an alternative to the “race to the bottom” models of regulatory competition. As seen in Part B, subheading II, the existence of a Brussels Side Effect in AI regulation would produce a situation in which the stringent regulatory standards of the AI Act lead to weaker standards for the protection of fundamental rights. Such a scenario suggests the need to distinguish between the global diffusion of specific regulatory instruments and the diffusion of regulatory goals and framings, which might not accompany, or even be undermined by, the former under the Brussels Effect.

Regarding the regulation of AI technologies, the side effect mapped in Part B, subheading II, suggests that the AI Act may be a double-edged sword for the EU. On the one hand, it provides a template that will shape other regulatory efforts by the simple fact of being the first substantive regulation of AI, thus preserving the EU’s role as a global rule-maker. On the other hand, this very success may prevent the EU from developing the instruments it needs to address the shortcomings of the product safety framework for the protection of fundamental rights. The European approach to AI may be undermined by the very instrument meant to establish it.

In December 2023, negotiating teams from the Parliament and the Council reached a political compromise regarding the AI Act’s text.Footnote 161 The final compromise text was closed on February 2, 2024, and it is expected to be formally approved by both legislative institutions before the end of summer 2024.Footnote 162 The CoE AI convention is expected to follow a similar timeframe, as its final compromise text is currently expected by mid-March 2024.Footnote 163 This timetable, and the institutional context detailed above, leave little hope that either the AI Act or the CoE AI Convention will undergo any substantial changes to address the issues raised above. It remains to be seen, therefore, whether success in spreading the AI Act will be anything more than a Pyrrhic victory for the EU’s ambitions of spreading values through AI regulation.

Acknowledgements

The authors thank Giovanni Sartor, Mădălina Busuioc, Chris Marsden, Gregory Lewkowicz, Nicolas Petit, Thomas Streinz, Soizic Penicaud, Vagelis Papakonstantinou, Axel Beelen, and Mireille Hildebrandt for their comments on previous versions of this manuscript. They would also like to thank Deirdre Curtin, Nicolas Petit, and Thomas Streinz for their comments on related work.

Competing Interests

The authors declare none.

Funding Statement

Marco Almada’s work on this Article was partially funded by doctoral grants from Fundación Carolina and the EUI ASPIRE programme.

References

1 See, generally and as an example, European Commission, Fostering a European Approach to Artificial Intelligence, No. COM/2021/205 final (2021).

2 Paul Nemitz, Constitutional Democracy and Technology in the Age of Artificial Intelligence, 376 Phil. Trans. Royal Soc’y A 8–10 (2018).

3 See generally Troels Krarup & Maja Horst, European Artificial Intelligence Policy as Digital Single Market Making, 10 Big Data & Soc’y 20539517231153811 (2023).

4 Proposal for a Regulation of the European Parliament and of the Council laying down harmonized rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, COM(2021) 206 final (2021). Throughout this article, references to this particular document refer to the Explanatory Memorandum that accompanies the original legislative proposal. References to specific articles of the AI Act, or to the non-binding recitals that precede its text, are drawn instead from the final compromise text produced in February 2024, unless specified otherwise: Council of the European Union, Proposal for a Regulation of the European Parliament and of the Council Laying down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts - Analysis of the Final Compromise Text with a View to Agreement, No. 5662/24 (Jan. 2024).

5 Commission, supra note 1, at 1.

6 Id. at 7.

7 See generally Nur Ahmed et al., The Growing Influence of Industry in AI Research, 379 Science 884 (American Association for the Advancement of Science Mar. 2023).

8 See ANU BRADFORD, THE BRUSSELS EFFECT: HOW THE EUROPEAN UNION RULES THE WORLD (Oxford University Press Mar. 2020).

9 See Part B.I.

10 Recital 5 AI Act.

11 See, e.g., Recital 6 AI Act.

12 Recital 1 AI Act.

13 Article 5(2) TEU: Consolidated version of the Treaty on European Union, 2016 OJ (C 202) 13.

14 COM(2021) 206 final at 3.

15 Article 51(2) CFR: Charter of Fundamental Rights of the European Union, 2016 O.J. (C 202) 389.

16 Consolidated version of the Treaty on the Functioning of the European Union, 2016 OJ (C 202) 47.

17 For an overview of the potential legal bases available for horizontal AI regulation, see Pieter Van Cleynenbreugel, EU By-Design Regulation in the Algorithmic Society: A Promising Way Forward or Constitutional Nightmare in the Making?, in Constitutional Challenges in the Algorithmic Society 202, 209–12 (Hans-W. Micklitz et al. eds., Cambridge University Press 2021).

18 According to Recital 2 AI Act, three sets of provisions are grounded on the EU’s competence to lay down rules on the protection of personal data under Article 16 TFEU: provisions on AI systems used for law enforcement purposes to supply real-time biometric identification, risk assessments of natural persons, or biometric categorization. Everything else is grounded on Article 114 TFEU.

19 Action under Article 114 TFEU must have some connection with the market approximation goal of that treaty provision, even if the ECJ tends to allow a broad scope for such approximation measures: Case C-376/98, Federal Republic of Germany v European Parliament and Council of the European Union, 2000 E.C.R. I–08419, paras 106–108.

20 On the Commission rationale leading to the product safety frame for the AI Act, see Gabriele Mazzini & Salvatore Scalzo, The Proposal for the Artificial Intelligence Act: Considerations around Some Key Concepts, in La via europea per l’Intelligenza artificiale (Carmelita Camardi ed., Cedam 2022) 2–5.

21 Charlotte Siegmann & Markus Anderljung, The Brussels Effect and Artificial Intelligence: How EU Regulation Will Impact the Global AI Market (Technical Report, Centre for the Governance of AI, Aug. 2022), 78.

22 See generally Gloria González Fuster et al., The Right to Lodge a Data Protection Complaint: Ok, but Then What? An Empirical Study of Current Practices under the GDPR (Technical Report, Data Protection Law Scholars Network, Jun. 2022).

23 See, e.g., OECD, Recommendation of the Council on Artificial Intelligence (May 2019); UNESCO, The Recommendation on the Ethics of Artificial Intelligence (Feb. 2020).

24 Annex I of the original AI Act proposal specified technical approaches that counted as AI, but such an approach was dropped to align the AI Act with definitions proposed at the international level, notably by the OECD: Council of the European Union, supra note 4, at 3.

25 Article 3(1) AI Act.

26 On the difference between this approach and the risk-based approaches adopted in other EU digital instruments, see generally Giovanni De Gregorio & Pietro Dunn, The European risk-based approaches: Connecting constitutional dots in the digital age, 59 Common Mkt. L. Rev. 473 (2022).

27 Article 5 AI Act. On the consumer protection roots of this provision, see generally Catalina Goanta, Regulatory Siblings: The Unfair Commercial Practices Directive Roots of the AI Act (SSRN Research Paper, Jan. 2023), https://ssrn.com/abstract=4337417 (last visited Feb. 9, 2024).

28 Between 85% and 95% of the AI systems in the EU single market, according to Commission estimates: Impact Assessment Accompanying the Proposal for a Regulation of the European Parliament and of the Council Laying down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, No. SWD(2021) 84 final 71 (Apr. 2021).

29 Article 52 AI Act specifies transparency rules for certain kinds of AI systems, such as those used for emotion recognition, which apply regardless of the risk level ascribed to the system.

30 Article 69 AI Act.

31 Recital 82 AI Act, referring to Regulation (EU) 2023/988 of the European Parliament and of the Council of 10 May 2023 on general product safety, amending Regulation (EU) No 1025/2012 of the European Parliament and of the Council and Directive (EU) 2020/1828 of the European Parliament and of the Council, and repealing Directive 2001/95/EC of the European Parliament and of the Council and Council Directive 87/357/EEC (Text with EEA relevance), 2023 O.J. (L 135) 1.

32 Article 6 AI Act specifies two kinds of high-risk AI systems. An AI system is a high-risk AI system if it is a product covered by one of the pieces of harmonizing product safety legislation listed in Annex II, or a component of such a product. Otherwise, an AI system is a high-risk system if it is intended for one of the applications listed in Annex III of the Act.

33 For an overview of the New Legislative Framework and its application in the AI Act, see Michael Veale & Frederik Zuiderveen Borgesius, Demystifying the Draft EU Artificial Intelligence Act — Analysing the good, the bad, and the unclear elements of the proposed approach, 22 Comput. L. Rev. Int’l. 97, 102–6 (2021).

34 Other actors, such as importers and the users of AI systems, are subject to certain obligations: Articles 24–29 AI Act. But the bulk of the Act is directed at providers, even if the Parliament compromise text proposes an expansion of the list of obligations under Article 28 AI Act.

35 Articles 10–15 AI Act.

36 Article 43 AI Act.

37 See Martin Ebers, Standardizing AI - The Case of the European Commission’s Proposal for an Artificial Intelligence Act, in The Cambridge Handbook of Artificial Intelligence: Global Perspectives on Law and Ethics 338 (Larry A. DiMatteo et al. eds., Cambridge University Press 2022).

38 See generally OECD, AI Language Models: Technological, Socio-Economic and Policy Considerations, No. DSTI/CDEP/AIGO(2022)1/FINAL (Organisation for Economic Co-operation and Development Apr. 2023).

39 See General Secretariat of the Council of the European Union, ChatGPT in the Public Sector – Overhyped or Overlooked? (Publications Office of the European Union, Apr. 2023).

40 On the challenges of targeting the regulation of general-purpose AI, see Marco Almada & Nicolas Petit, The EU AI Act: A Medley of Product Safety and Fundamental Rights?, RSC Working Paper No. 2023/59 (European University Institute 2023), 13–17.

41 Council of the European Union, Proposal for a Regulation of the European Parliament and of the Council Laying down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts - General Approach, No. 14336/22, art. 4a (Nov. 2022).

42 Brando Benifei & Ioan-Dragoş Tudorache, Report on the Proposal for a Regulation of the European Parliament and of the Council on Laying down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts (COM(2021)0206 – C9-0146/2021 – 2021/0106(COD)), No. PE731.563v02-00 (May 2023), art. 28b. The Parliament text defined “general purpose AI” but did not use the concept in any binding provisions, focusing its obligations on a related concept: the so-called “foundation models.”

43 Agreement on the need to regulate such systems did not translate into agreement about the form such regulation should take, as the legal framework for general-purpose AI systems remained a highly debated topic until the final moments of the interinstitutional negotiations: Luca Bertuzzi, EU Countries Give Crucial Nod to First-of-a-Kind Artificial Intelligence Law, EURACTIV, https://www.euractiv.com/section/artificial-intelligence/news/eu-countries-give-crucial-nod-to-first-of-a-kind-artificial-intelligence-law/ (last visited Feb. 8, 2024). For a guide to the EU legislative procedure, see generally Tiago Sérgio Cabral, A Short Guide to the Legislative Procedure in the European Union, 6 UNIO – EU Law Journal No. 1, 161 (2020).

44 Article 9 AI Act.

45 On the post-market monitoring requirements, see Article 61 of the AI Act.

46 Article 65(5) AI Act.

47 Commission, supra note 4, at 3.

48 For an overview of the EU product safety framework, see Geraint Howells & Jonathan Watson, European Consumer Law, in European Union Law 723, 735–37 (Steve Peers & Catherine Barnard eds., Oxford University Press 4th ed. 2023).

49 Article 6(1) AI Act specifies that the high-risk label—and the ensuing regulatory framework—applies to systems already covered by specific product harmonization laws at the EU level. Furthermore, Recital 82 AI Act declares that product safety law is seen as a “safety net” for AI regulation, which suggests that most AI systems are expected to fall into the definition of product used elsewhere in EU law.

50 See generally EU Fundamental Rights Agency, Getting the Future Right – Artificial Intelligence and Fundamental Rights (Publications Office of the European Union, Dec. 2020).

51 See generally Veale & Borgesius, supra note 33; Nathalie Smuha et al., How the EU can achieve trustworthy AI: A response to the European Commission’s proposal for an Artificial Intelligence Act 64 (LEADS Lab May 2021); Lilian Edwards, Regulating AI in Europe: Four Problems and Four Solutions (Ada Lovelace Institute Jan. 2022); Vera Lúcia Raposo, Ex machina: Preliminary critical assessment of the European Draft Act on artificial intelligence, 30 Int’l J.L. & Info. Tech. 88 (2022).

52 See, e.g., EDRi et al., An EU Artificial Intelligence Act for Fundamental Rights. A Civil Society Statement. Open letter. (Nov. 2021).

53 For more extensive treatments of the expressive limits of the AI Act’s risk framework, see generally Almada & Petit, supra note 40, 17–25; Alessandro Mantelero, Beyond Data: Human Rights, Ethical and Social Impact Assessment in AI (Springer Nature 2022); Sofia Palmieri & Tom Goffin, A Blanket That Leaves the Feet Cold: Exploring the AI Act Safety Framework for Medical AI, 30 Eur. J. Health Law 406 (2023).

54 Article 3(1a) of the AI Act explicitly incorporates this definition into the final compromise text.

55 Mireille Hildebrandt, Privacy as Protection of the Incomputable Self: From Agnostic to Agonistic Machine Learning, 20 Theoretical Inquiries L. 83 (2019).

56 See generally Burkhard Schafer, Death by a Thousand Cuts: Cumulative Data Effects and the Corbyn Affair, 45 Datenschutz und Datensicherheit - DuD 385 (2021); Hin-Yan Liu et al., Governing Boring Apocalypses: A New Typology of Existential Vulnerabilities and Exposures for Existential Risk Research, 102 Futures 6 (2018).

57 Van Cleynenbreugel, supra note 17, at 203–8; Marco Almada, Regulation by Design and the Governance of Technological Futures, 14 Eur. J. Risk Reg. 697, 699–702 (2023).

58 Bert-Jaap Koops & Ronald Leenes, Privacy Regulation Cannot Be Hardcoded. A Critical Comment on the ‘Privacy by Design’ Provision in Data-Protection Law, 28 Int’l Rev. L. Computs. & Tech. 159, 6–8 (2014).

59 See generally Simone Vannuccini & Ekaterina Prytkova, Artificial Intelligence’s New Clothes? A System Technology Perspective, J. Info. Tech. 02683962231197824 (2023).

60 On the temporality of technical change, see generally Lyria Bennett Moses & Monika Zalnieriute, Law and Technology in the Dimension of Time, in Time, Law, and Change: An Interdisciplinary Study 303 (Sofia Ranchordás & Yaniv Roznai eds., Hart Publishing 2020).

61 On technical norms as a source of regulatory entrenchment, see Almada, supra note 57, at 704–706.

62 Article 42 AI Act. For an overview of the role of standardization in the AI Act, see Sybe de Vries et al., Internal Market 3.0: The Old “New Approach” for Harmonising AI Regulation, 8 European Papers 583.

63 Or, in the case of EU harmonized standards, quasi-private ones. See Marta Cantero Gamito, Europeanization through Standardization: ICT and Telecommunications, 37 Y.B. Eur. L. 395 (2018).

64 See Olia Kanevskaia, Governance of ICT Standardization: Due Process in Technocratic Decision-Making, 45 N.C. J. Int’l L. 549 (2019).

65 For an example, see Corinne Cath, The Technology We Choose to Create: Human Rights Advocacy in the Internet Engineering Task Force, 45 Telecomm. Pol’y 102144 (2021).

66 See generally Carles Boix, AI and the Economic and Informational Foundations of Democracy, in The Oxford Handbook of AI Governance (Justin Bullock et al. eds., Oxford University Press 2022).

67 See Emre Bayamlıoğlu & Ronald Leenes, The ‘Rule of Law’ Implications of Data-Driven Decision-Making: A Techno-Regulatory Perspective, 10 Law Innovation & Tech. 295 (Routledge 2018).

68 Article 2 TEU.

69 See, e.g., the Brazilian draft regulatory framework for AI: Projeto de Lei n° 2338, Senado Federal (2023). For an English-language overview of that proposal, see generally Evangelos Sakiotis et al., Brazil’s Senate Committee Publishes AI Report and Draft AI Law, Inside Privacy (Jan. 27, 2023), https://www.insideprivacy.com/emerging-technologies/brazils-senate-committee-publishes-ai-report-and-draft-ai-law/ (last visited Feb. 9, 2024).

70 Article 2 TEU.

71 On the AI Act as a pioneer in global AI regulation, see Mazzini & Scalzo, supra note 20, at 1.

72 Commission, supra note 1, at 4.

73 Bradford, supra note 8, Ch. 5.

74 This is not to say that other jurisdictions are merely passive recipients of EU influence. Instead, they modulate reception of EU influences through the lenses of local regulatory frameworks and priorities. See generally Paul M Schwartz, Global Data Privacy: The EU Way, 94 N.Y.U. L. Rev. 771 (2020); Laura Schertel Mendes & Bruno R Bioni, O regulamento europeu de proteção de dados pessoais e a lei geral de proteção de dados brasileira: mapeando convergências na direção de um nível de equivalência, 124 RDC 157 (2019); Emmanuel Pernot-Leplay, China’s Approach on Data Privacy Law: A Third Way Between the U.S. and the E.U.?, 8 Penn St. J. L. & Int’l. Aff. 49 (2020).

75 Bradford, supra note 8, at xiv.

76 Id. at 6.

77 Id. at 84.

78 On the pacing problem in law and technology, see generally Gary E. Marchant, Addressing the Pacing Problem, in The Growing Gap Between Emerging Technologies and Legal-Ethical Oversight: The Pacing Problem 199 (Gary E. Marchant et al. eds., Springer Netherlands 2011).

79 For an overview of the conditions needed for the Brussels Effect and the effects of their absence, see Bradford, supra note 8, Ch. 2.

80 Id. at 26–30.

81 See generally Alex Engler, The EU AI Act Will Have Global Impact, but a Limited Brussels Effect, Brookings (Aug. 6, 2022), https://www.brookings.edu/research/the-eu-ai-act-will-have-global-impact-but-a-limited-brussels-effect/ (last visited Feb. 9, 2024).

82 National governments and the EU institutions themselves are eager to adopt AI technologies: Colin van Noordt & Gianluca Misuraca, Artificial intelligence for the public sector: results of landscaping the use of AI in government across the European Union, 39 Gov’t. Info. Q. 101714, 9–11 (2022).

83 Bradford, supra note 8, at 30–37.

84 In fact, part of the appeal of AI technologies resides not in their current capabilities but in their promises of future results: Hartmut Hirsch-Kreinsen, Artificial Intelligence: A “Promising Technology,” AI & Soc’y, early access, 9–10 (2023). This speculative character often lends itself to overstated claims by AI proponents and critics alike. For critical analyses of some of those claims, see generally Franz Seifert & Camilo Fautz, Hype After Hype: From Bio to Nano to AI, 15 Nanoethics 143 (2021); Inioluwa Deborah Raji et al., The Fallacy of AI Functionality, 2022 ACM Conference on Fairness, Accountability, and Transparency 959 (ACM Jun. 2022).

85 Nemitz, supra note 2, at 8–10.

86 See generally AI HLEG, Policy and Investment Recommendations for Trustworthy AI (Jun. 2019).

87 Such as the European Centre for Algorithmic Transparency, European Commission, https://algorithmic-transparency.ec.europa.eu/index_en (last visited Mar. 27, 2023).

88 Mazzini & Scalzo, supra note 20, at 3–4.

89 Exceptions include the United States and China. Indeed, the US National Institute of Standards and Technology and the Cyberspace Administration of China are relevant soft-law sources in the field of AI regulation. See generally NIST, AI Risk Management Framework: AI RMF (1.0), No. NIST AI 100-1 (2023); Helen Toner et al., Translation: Internet Information Service Algorithmic Recommendation Management Provisions, DigiChina (Oct. 1, 2022), https://digichina.stanford.edu/work/translation-internet-information-service-algorithmic-recommendation-management-provisions-effective-march-1-2022/ (last visited Feb. 9, 2024).

90 See, e.g., Cecil Abungu, Algorithmic Decision-Making and Discrimination in Developing Countries, 13 Case W. Res. J.L. Tech. & Internet 39, 64 (2022).

91 Bradford, supra note 8, at 37–48.

92 See generally the Cyberspace Administration of China’s Internet Information Service Algorithmic Recommendation Management Provisions (2022), translated by Toner et al., supra note 89.

93 Such as the individual rights proposed in the Brazilian AI bill. See generally Sakiotis et al., supra note 69.

94 This hypothesis is grounded on the level of detail in the AI Act and its accompanying standards and the global diffusion of the EU approach to product safety: Siegmann & Anderljung, supra note 21, at 78–80.

95 For example, consumer markets are relatively inelastic because a producer must meet a jurisdiction’s requirements before serving that market, whereas stock markets are more elastic because of the possibilities afforded by international capital flows: Bradford, supra note 8, at 48–53.

96 See generally Joanne Scott, Extraterritoriality and Territorial Extension in EU Law, 62 Am. J. Comp. L. 87 (2014).

97 Article 2(1)(b) AI Act.

98 For example, the CEO of OpenAI made public remarks in May 2023 to the effect that the AI Act might prompt the company to withdraw its tools, such as ChatGPT, from the EU markets. This threat, however, was publicly abandoned in the same week it was made: Shiona McCallum & Andrew Vance, ChatGPT-Maker U-Turns on Threat to Leave EU over AI Law, BBC News (May 25, 2023), https://www.bbc.com/news/technology-65708114 (last visited Feb. 9, 2024).

99 Siegmann & Anderljung, supra note 21, at 39.

100 Such as the transparency requirements from Article 52 of the AI Act, or those aimed at general purpose AI systems with systemic risk.

101 Article 6 AI Act, in the final compromise text, includes a procedure through which AI systems classified as high-risk can avoid the more stringent rules applicable to such systems. The provider of such a system can argue that it does not pose a high risk in light of the mitigating factors set out in Article 6(2a) AI Act, such as its use for narrow procedural tasks or its playing a preparatory role, other than profiling, in the use cases listed in Annex III.

102 Providers are required to document their assessments (Article 6(2b) AI Act) and follow any additional evaluation guidelines issued by the Commission (Article 6(2c–d) AI Act).

103 Siegmann & Anderljung, supra note 21, at 54–62.

104 See generally Salvatore Orlando, Regole di immissione sul mercato e «pratiche di intelligenza artificiale» vietate nella proposta di Artificial Intelligence Act, Persona e Mercato 346 (2022).

105 Siegmann & Anderljung, supra note 21, at 48.

106 See Jennifer Cobbe et al., Understanding Accountability in Algorithmic Supply Chains, 2023 ACM Conference on Fairness, Accountability, and Transparency 1186 (ACM 2023), 1188–1192.

107 Siegmann & Anderljung, supra note 21, at 30.

108 See generally Jennifer Cobbe & Jatinder Singh, Artificial Intelligence as a Service: Legal Responsibilities, Liabilities, and Policy Challenges, 42 Comput. L. & Sec. Rev. (2021).

109 See generally Philipp Hacker et al., Regulating ChatGPT and Other Large Generative AI Models, 2023 ACM Conference on Fairness, Accountability, and Transparency 1112 (ACM 2023), 1112–1114.

110 More speculatively, one should not dismiss the possibility that some providers of general-purpose AI will adopt some or all of the requirements for high-risk AI in order to pitch their systems as suitable components for high-risk applications.

111 See generally Engler, supra note 81; Siegmann & Anderljung, supra note 21.

112 See generally BRICE LAURENT, EUROPEAN OBJECTS: THE TROUBLED DREAMS OF HARMONIZATION (The MIT Press 2022), ch. 1.

113 See the diffusion of CE marking in product safety: Siegmann & Anderljung, supra note 21, at 79.

114 Some scholars have argued for radically different approaches to AI regulation: see AI HLEG, supra note 86; Edwards, supra note 51; Margot E. Kaminski, Regulating the Risks of AI, 103 B.U. L. Rev. 1347 (2023). The current version of the Brazilian AI Bill adopts a mostly rights-based approach: Projeto de Lei n° 2338, Senado Federal (2023).

115 Except, perhaps, for regulatory approaches based on principles and ethical guidelines, but even those will need to bridge the gap between abstract principles and the technical nuance of real-world uses of AI.

116 See generally Abungu, supra note 90.

117 Bradford, supra note 8, at 253.

118 Article 3(5) TEU.

119 Commission, supra note 1, at 8.

120 Article 2 TEU.

121 See generally Mireille Hildebrandt, Global Competition and Convergence of AI Law, in Elgar Encyclopedia for Comparative Law (Jan M. Smits et al. eds., Edward Elgar 2023).

122 About the Council of Europe, Council of Europe Office in Ukraine, https://www.coe.int/en/web/kyiv/the-coe/about-coe (last visited Jun. 8, 2023).

123 Article 1 Statute of the Council of Europe (1949).

124 Terms of Reference for the Ad Hoc Committee on Artificial Intelligence (CAHAI). Extract from CM(2019)131-Addfinal. 1 (Council of Europe 2019).

125 Council of Europe, Feasibility Study, No. CAHAI 23 (Dec. 2020).

126 Terms of Reference for the Committee on Artificial Intelligence (CAI). Extract from CM(2021)131-Addfinal 1 (Council of Europe 2021).

127 See Part A supra.

128 This language parallels the dual objectives of the AI Act discussed in Part A supra. Further research is needed, however, before one can claim that the AI Act caused this shift.

129 Council of Europe, Draft Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law, No. CAI(2023)28 (18 Dec. 2023) [hereinafter the Draft Framework Convention].

130 Cover sheet to the Draft Framework Convention.

131 Article 4 of the Draft Framework Convention.

132 Article 4 of the Draft Framework Convention. On the grounding for this decision, see Catelijne Muller, The Impact of AI on Human Rights, Democracy and the Rule of Law, in Towards regulation of AI systems: Global perspectives on the development of a legal framework on Artificial Intelligence systems based on the Council of Europe’s standards on human rights, democracy and the rule of law 21 (Council of Europe Dec. 2020).

133 Article 6 of the Draft Framework Convention.

134 Article 3 of the Draft Framework Convention. There is, however, some controversy among potential parties to the convention about the extension of rules to the private sector, as we discuss in Part D of this Article.

135 Modernised convention for the protection of individuals with regard to the processing of personal data (Convention 108+), CETS 181 (2018).

136 Convention on Cybercrime, CETS 185 (2001).

137 For a brief overview of the EU data protection framework and its evolution, see generally Eleni Kosta, A Divided European Data Protection Framework: A Critical Reflection on the Choices of the European Legislator Post-Lisbon, in Research Handbook on EU Data Protection Law 68 (Eleni Kosta & Ronald Leenes eds., Edward Elgar Publishing 2022).

138 Regulation (EU) 2022/1925 of the European Parliament and of the Council of 14 September 2022 on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828 (Digital Markets Act) (Text with EEA relevance), 2022 O.J. (L 265) 1.

139 Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act) (Text with EEA relevance), 2022 O.J. (L 277) 1.

140 Articles 6–13 of the Draft Framework Convention.

141 Article 16 of the Draft Framework Convention.

142 See generally Victoria Hendrickx & Peggy Valcke, The Council of Europe’s Road towards an AI Convention: Taking Stock, Law, Ethics & Policy of AI (Jan. 25, 2023), https://www.law.kuleuven.be/ai-summer-school/blogpost/Blogposts/AI-Council-of-Europe-draft-convention (last visited Feb. 9, 2024).

143 Recital 1 AI Act.

144 Including an explicit prohibition of discrimination ‘based on a combination of one or more of the [safeguarded] grounds’: Article 9 Draft Framework Convention.

145 Outline of HUDERIA Risk and Impact Assessment Methodology, No. CAI-BU(2022)03 (May 2022).

146 On the importance of stringency for the Brussels Effect, see Part B.I supra.

147 See Almada & Petit, supra note 40, at 13–18.

148 The EU Member States would be in this position, as they are also members of the CoE and involved in the CAI negotiation procedure. However, similar issues would face any jurisdiction that is pushed, even partially, towards the AI Act through the Brussels Effect.

149 See Part A supra.

150 The Council Of Europe’s Relations with Observer States, Council of Europe, https://www.coe.int/en/web/der/observer-states (last visited Jun. 9, 2023).

151 On the diverse perspectives held by nations on matters of AI regulation, see Isaac Ben-Israel et al., Towards Regulation of AI Systems: Global Perspectives on the Development of a Legal Framework on Artificial Intelligence Systems Based on the Council of Europe’s Standards on Human Rights, Democracy and the Rule of Law, No. DGI (2020) 16 (Dec. 2020).

152 Council Decision (EU) 2022/2349 of 21 November 2022 authorising the opening of negotiations on behalf of the European Union for a Council of Europe convention on artificial intelligence, human rights, democracy and the rule of law, 2022 O.J. (L 311) 1.

153 Id. Recital 7.

154 Recommendation for a Council Decision authorizing the opening of negotiations on behalf of the European Union for a Council of Europe convention on artificial intelligence, human rights, democracy and the rule of law, COM(2022) 414 final (2022), point 11.

155 COM(2022) 414 final, point 12.

156 COM(2022) 414 final, point 17.

157 And its Member States, which are expected to support the Commission’s positions in light of the duty of sincere cooperation under Article 4(3) TEU.

158 In fact, the Commission has sought to actively delay negotiations on the convention to give more time for the AI Act’s legislative procedure: See generally Luca Bertuzzi, EU Commission Postponed AI Treaty Negotiations with Further Delays in Sight, EURACTIV (May 10, 2022), https://www.euractiv.com/section/digital/news/eu-commission-postponed-ai-treaty-negotiations-with-further-delays-in-sight/ (last visited Feb. 9, 2024).

159 Luca Bertuzzi, EU Prepares to Push Back on Private Sector Carve-Out from International AI Treaty, EURACTIV, https://www.euractiv.com/section/artificial-intelligence/news/eu-prepares-to-push-back-on-private-sector-carve-out-from-international-ai-treaty/ (last visited Feb. 11, 2024).

160 Bradford, supra note 8, at 52–53.

161 Luca Bertuzzi, European Union Squares the Circle on the World’s First AI Rulebook, EURACTIV (Sep. 12, 2023), https://www.euractiv.com/section/artificial-intelligence/news/european-union-squares-the-circle-on-the-worlds-first-ai-rulebook/ (last visited Feb. 9, 2024).

162 Bertuzzi, supra note 43.

163 Developments of the Negotiations Regarding the Council of Europe Convention on Artificial Intelligence, Delegation of the European Union to the Council of Europe, https://www.eeas.europa.eu/delegations/council-europe/developments-negotiations-regarding-council-europe-convention-artificial-intelligence_en (last visited Feb. 9, 2024).