
Clashing Frameworks: the EU AI Act and Arbitration

Published online by Cambridge University Press:  27 August 2025

Sara Migliorini
Affiliation:
Faculty of Law, University of Macau, Macau SAR, China
João Ilhão Moreira*
Affiliation:
Faculty of Law, University of Macau, Macau SAR, China
Corresponding author: João Ilhão Moreira; Email: joaomoreira@um.edu.mo

Abstract

The EU AI Act (Regulation (EU) 2024/1689) represents a significant departure from the EU’s traditionally restrained regulatory approach to commercial arbitration. The Act classifies certain use cases of AI in arbitration as potentially “high-risk” and introduces stringent compliance obligations for legal tech providers, arbitral institutions and arbitrators. This article argues that the Act’s application to arbitration disrupts the long-standing balance between party autonomy, procedural flexibility and regulatory oversight that has characterised the EU’s treatment of the field. It also highlights the challenges of reconciling its rigid framework with key aspects of arbitration – namely, party autonomy, confidentiality and procedural flexibility. The article concludes by proposing a full or partial carve-out of commercial arbitration from the scope of the AI Act’s high-risk provisions.

This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press

Introduction

Regulation (EU) 2024/1689 (hereinafter the “EU AI Act” or the “Act”),Footnote 1 which entered into force on 1 August 2024, marks a landmark step in the governance of artificial intelligence (AI).Footnote 2 It establishes an ambitious framework designed to address potential risks and encourage innovation across a wide range of sectors.Footnote 3 The Act also represents the centrepiece of the EU’s strategy to position itself as a global leader in AI regulation, with significant implications that are expected to extend far beyond the EU’s borders.Footnote 4

Within its broad scope, the Act specifically addresses the use of AI systems in both adjudication and alternative dispute resolution (ADR).Footnote 5 In the field of ADR, the Act classifies certain uses of an AI system as potentially “high-risk”Footnote 6 when the dispute resolution process produces legal effects for the parties involved.Footnote 7 Although the Act does not distinguish among types of ADR, its provisions on high-risk applications are particularly relevant to arbitration procedures, because other forms of ADR often do not produce legal effects for the parties.

Arbitration is, however, used across highly diverse contexts, ranging from consumer disputes to high-stakes transnational commercial cases involving sophisticated parties with access to specialised legal counsel. In acknowledgement of these differences, the EU legislator has traditionally regulated arbitration only in certain sectors, most notably consumer protection,Footnote 8 with intervention aimed at ensuring procedural guarantees.Footnote 9 By contrast, commercial arbitration has remained largely unregulated by EU law,Footnote 10 especially with respect to procedural aspects.

This EU regulatory intervention is particularly significant given the transformative impact AI is having on the practice of arbitration.Footnote 11 Companies are increasingly developing AI-driven platforms tailored to arbitration, offering tools that streamline access to legal databases, generate procedural documents and summarise case materials.Footnote 12 Arbitral institutions are integrating these technologies into their case management systems or developing proprietary AI tools to enhance efficiency and transparency.Footnote 13 Meanwhile, individual arbitrators are turning to commercially available AI systems and law-firm-developed tools to support their decision-making processes.Footnote 14 This growing reliance on AI by arbitration practitioners compounds the need to examine how the EU AI Act is likely to impact the future of the field.

Building on the emerging literature on the interface between the EU AI Act and commercial arbitration,Footnote 15 this article calls for a full or partial carve-out of commercial arbitration from the scope of the “high-risk” provisions of the Act. To support this proposal, this article highlights the challenges that the Act is likely to pose to legal tech providers, arbitral institutions and arbitrators. The identified issues stem mainly from the difficulty of applying a rigid regulatory framework to arbitration, which relies on party autonomy, especially for procedural matters. Confidentiality in arbitration also makes enforcement of the Act more difficult, while the Act’s extraterritorial reach complicates its application in cross-border disputes.

This article is structured as follows: Section 1 explains how the EU AI Act disrupts the long-standing balance between EU law and commercial arbitration. Section 2 outlines the Act’s key provisions relevant to arbitration. Section 3 analyses the Act’s impact on the primary stakeholders in commercial arbitration – companies, arbitral institutions and arbitrators – and the difficulties of reconciling the Act’s framework with specific aspects of the field. Finally, the conclusion offers a perspective on how the EU legislator could exclude commercial arbitration from the scope of the Act’s high-risk provisions.

1. The EU’s traditional approach to commercial arbitration

Commercial arbitration has generally been portrayed as an independent legal order, evolving alongside, but separately from, the EU legal framework, with only limited and sporadic interactions between the two.Footnote 16 This perspective is often reiterated:

“Although the relationship between EU law and international arbitration has long been treated as one of mutual indifference, the two systems increasingly interact and conflict with each other rather than simply coexisting neutrally. Nonetheless, EU law does not regulate commercial arbitration, and there is a current diversity of arbitration laws and practices across the European Union.”Footnote 17

Over the years, this relationship has been tacitly reaffirmed by EU institutions,Footnote 18 and despite the potential legislative avenues available, the EU’s competence to regulate arbitration has been significantly underutilised. For example, Article 81 of the Treaty on the Functioning of the European Union (TFEU) empowers the EU to adopt harmonisation measures for “the development of alternative methods of dispute settlement.” However, with few exceptions, EU lawmakers have systematically excluded arbitration from the scope of harmonisation measures based on Article 81 of the TFEU.Footnote 19 In addition, the Court of Justice of the European Union (CJEU) has traditionally interpreted these exclusions broadly, confirming the EU’s general restraint in the regulation of arbitration.Footnote 20 On occasion, the EU legislator has utilised other legal bases to regulate dispute resolution mechanisms, notably in the context of consumer protection.Footnote 21 Yet, commercial arbitration, as a whole, has remained largely untouched by direct EU regulatory intervention across all potential legal bases. Such legislative and judicial abstention has sustained the doctrinal view that the EU’s legal order and commercial arbitration would evolve in parallel, with minimal interference between them.Footnote 22

Therefore, prior to the EU AI Act’s entry into force, the interaction between EU law and commercial arbitration was confined primarily to issues of substantive law. Accordingly, the classic problem of EU law and commercial arbitration has been the risk that arbitrators may commit errors of law and “the very limited avenues that, unlike in litigation, exist to correct these errors of law.”Footnote 23 In all instances, the concern has been the tension between ensuring the application of certain important provisions of EU substantive law and upholding party autonomy as well as the principle of the finality of arbitral awards. The relevant cases have concerned matters such as arbitrability,Footnote 24 public policy,Footnote 25 and their interaction with the judicial procedures for the annulment, recognition and enforcement of arbitral awards.Footnote 26 These issues have been addressed by EU rules, derived mostly from the cited decisions of the CJEU, which, for the most part, has subscribed to a conception of the EU’s legal order and commercial arbitration as evolving separately.

By contrast, procedural matters have largely remained outside the scope of EU law. Although arbitral procedure potentially raises issues of due process and procedural fairness, EU law has not established rules or mandates concerning how commercial arbitral proceedings are to be conducted. Purely procedural aspects of commercial arbitration are generally determined by agreement between the parties, often by reference to standardised arbitration rules adopted by private arbitral institutions.Footnote 27 Arbitration laws stipulate additional requirements or provide further guidance in matters of procedure, but here, too, EU law has been consistently absent. Instead, rules on issues such as arbitrators’ duty to disclose conflicts of interest or the production of evidence have been left to national legislatures and, more importantly, to the work of the arbitral community itself.Footnote 28

This approach aligns with the preferences of the arbitral community, which has consistently advocated for minimal top-down regulation.Footnote 29 The reluctance of EU decision-makers to intervene in the arbitration market has allowed the arbitral community to design its own institutions, governance mechanisms and unique methods for selecting service providers that dominate the field.Footnote 30 Arbitral institutions and practitioners have successfully preserved arbitration as a largely autonomous system in which most regulatory norms are community-driven. In particular, the arbitral community has been successful in advocating the doctrine of party autonomy, which posits that, as a key tenet of arbitration, parties are free to determine the mechanism through which their disputes will be resolved. This principle has gained widespread acceptance, with legislators typically endorsing the notion that arbitration should evolve according to its own internal logic, free from excessive external interference.Footnote 31 The CJEU has repeatedly adopted a narrative that is extremely favourable to party autonomy and limits the extent to which EU law might interfere with commercial arbitration.Footnote 32

It is against this backdrop that the EU AI Act’s decision to regulate ADR – without carving out commercial arbitration – departs significantly from the EU’s established approach to the field. In particular, with the EU AI Act, the EU legislator has laid down rules that directly constrain commercial arbitration, restricting the role of party autonomy and of the institutions and professionals that have traditionally self-regulated the market.

2. Main features of the EU AI Act

Before analysing how the AI Act applies to arbitration, it is important to first outline the Act’s key features. This section focuses on the provisions of the EU AI Act that are relevant to commercial arbitration, presenting an overview of their structure and content. This explication serves as the foundation for Section 3, which examines the Act’s implications for commercial arbitration.

The section begins by discussing the Act’s scope of application, identifying the types of AI systems it targets and the entities subject to its obligations. It then addresses the Act’s territorial reach, which, under certain conditions, extends to entities outside the EU. Next, it explores the classification of “high-risk” AI use cases, including those related to ADR. Finally, the section outlines the compliance obligations assigned to entities commercialising and using high-risk systems.

2.1. Material scope of the EU AI Act

The EU AI Act regulates AI-based tools. To this end, it differentiates between two main categories, namely “AI systems” and “AI models.” In addition, the category of AI systems includes a subcategory of tools called “general-purpose AI systems.” These three concepts are briefly described in turn below.

AI systems are the Act’s main object of regulation. Under the definition adopted by the Act, AI systems are machine-based systems that (i) are designed to operate with varying levels of autonomy; (ii) may exhibit adaptiveness after deployment; and (iii) infer, from the input they receive, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.Footnote 33 This definition encompasses some of the most prominent and recognisable applications of AI, such as biometric systems, voice assistants and recommendation systems.Footnote 34

Within the category of AI systems, the Act further defines a subcategory called “general-purpose AI systems.” These are “AI systems based on a general-purpose AI model serving a variety of purposes, both for direct use as well as for integration into other AI systems.”Footnote 35 Generative AI commercial products, such as ChatGPT, fall into this category.Footnote 36

The Act’s second regulatory focus is on “AI models,”Footnote 37 which are distinguished from AI systemsFootnote 38 most notably because they cannot be utilised directly by users; rather, they must be integrated into commercial products that offer additional functionalities, such as a user interface.Footnote 39 A prime example of an AI model is the GPT-4 large language model (LLM), which serves as the foundation for the commercial chatbot ChatGPT.Footnote 40

At the time of writing, the AI models category holds limited relevance for companies and other actors offering services in commercial arbitration.Footnote 41 Conversely, within the context of commercial arbitration, companies and arbitral institutions are currently developing or utilising AI tools that can be classified as AI systems or general-purpose AI systems under the Act. Arbitrators, in turn, have access to a variety of nonspecialised, commercial AI systems that also fall within the same categories.Footnote 42 Consequently, this article focuses on the provisions of the AI Act that pertain to AI systems and general-purpose AI systems; it does not discuss the provisions specifically related to AI models.

2.2. Personal scope of the EU AI Act

The EU AI Act creates obligations for a series of actors along the AI value chain,Footnote 43 namely “providers,” “downstream providers,” “importers,” “distributors” and “deployers.” These entities, which may be public or private,Footnote 44 are collectively defined as “operators” and are assigned different obligations under the Act.Footnote 45

Providers are natural or legal persons that either develop an AI system or have an AI system developed by a third party and then commercialise it under their own name or trademark, whether for payment or free of charge.Footnote 46 In addition, the EU AI Act applies to downstream providers, which are “providers of an AI system, including a general-purpose AI system, which integrates an AI model.”Footnote 47 The EU AI Act also applies to importers (i.e., entities located in the EU that place on the EU market an AI system bearing the name or trademark of an entity established in a non-Member State)Footnote 48 and distributors (i.e., entities other than the provider or the importer that make an AI system available on the EU market).Footnote 49 Finally, the Act applies to entities that use an AI system. Users of AI systems are called “deployers” and are defined as the “natural and legal persons using an AI system under their own authority, except where the AI system is used in the course of a personal, non-professional activity.”Footnote 50

2.3. Geographical scope of the EU AI Act

The EU AI Act’s vast geographical reach extends beyond entities and situations entirely connected to EU territory.Footnote 51 This geographical reach is determined by Article 2(1) of the Act, which enumerates various scenarios to which its provisions apply.

First, the Act applies to entities established or located within the EU. Specifically, these include deployers of AI systems that have their place of establishment within the EU,Footnote 52 with the Act further applying to all “affected persons” located within the EU.Footnote 53 Second, the Act’s applicability extends to certain entities, regardless of their geographical location – notably, it applies to all providers placing AI systems or general-purpose AI models on the EU market.Footnote 54 Third, providers and deployers of AI systems based in third countries must comply with the EU AI Act when “the output produced by the AI system is used in the Union.”Footnote 55
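By way of illustration only, the alternative connecting factors of Article 2(1) can be rendered schematically. The following Python sketch is an editorial simplification, not a restatement of the Act: the predicate names are ours, the operator roles are reduced to providers and deployers, and contested interpretive questions – such as what counts as “use” of an output – are assumed away.

```python
from dataclasses import dataclass

@dataclass
class Operator:
    role: str                   # "provider" or "deployer" (simplified)
    established_in_eu: bool     # place of establishment or location in the EU
    places_on_eu_market: bool   # places an AI system or GPAI model on the EU market
    output_used_in_eu: bool     # output produced by the AI system is used in the EU

def act_applies(op: Operator, affected_persons_in_eu: bool) -> bool:
    """Illustrative reading of Art. 2(1): any single connecting factor suffices."""
    if op.established_in_eu:                              # Art. 2(1)(a)-(b)
        return True
    if op.role == "provider" and op.places_on_eu_market:  # Art. 2(1)(a)
        return True
    if op.output_used_in_eu:                              # Art. 2(1)(c)
        return True
    return affected_persons_in_eu                         # Art. 2(1)(g)

# On a broad reading of "output ... used in the Union," a non-EU arbitrator
# deploying an AI tool in an EU-seated arbitration satisfies the third branch.
non_eu_arbitrator = Operator("deployer", False, False, True)
assert act_applies(non_eu_arbitrator, affected_persons_in_eu=False)
```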

2.4. The definition of “high-risk” AI systems

The EU AI Act adopts a risk-based regulatory framework, under which AI systems are classified into three categories based on the perceived risk they pose to fundamental rights.Footnote 56 Certain AI systems are deemed to pose unacceptable risks and are prohibited.Footnote 57 Other AI systems are considered “high-risk” because they pose potential threats to the fundamental rights of individuals, including the right to an effective remedy and to a fair trial as protected by the EU Charter of Fundamental Rights.Footnote 58 These high-risk AI systems are subject to stringent requirements under the Act, such as transparency and human oversight.Footnote 59 All other AI systems that do not fall within the prohibited or high-risk categories are regulated mostly through industry-led codes of conduct.Footnote 60

The Act identifies high-risk AI systems by reference to two distinct definitions.Footnote 61 One definition includes AI systems that also qualify as products or safety components of products under existing EU product safety legislation.Footnote 62 Another open-ended definition includes all other AI systems that are deployed in specific sectors involving fundamental rights protected by the EU Charter of Fundamental Rights.Footnote 63 These high-risk sectors are outlined in annex III of the AI Act and include areas such as border control, employment, education, and – particularly relevant to the topic of this article – adjudication and ADR.Footnote 64

The definition of high-risk AI systems in annex III is based on the concept of intended purpose.Footnote 65 According to the Act, an AI system’s intended purpose is determined by the information made available by the provider, which includes not only technical documentation and instructions for use but also promotional materials and public statements.Footnote 66 This means that providers have some level of influence over whether their system is classified as high-risk.Footnote 67 However, this flexibility mainly applies in situations where the AI system does not have a clear single usage, such as in the case of general-purpose AI systems like ChatGPT.Footnote 68

The Act also includes an exception to the classification of an AI system as high-risk. AI systems employed in one of the critical sectors listed in annex III are not deemed high-risk if they do not pose “a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making.”Footnote 69 According to a literal reading of the Act, this exception is triggered exclusively under one of four specified conditions,Footnote 70 which are further clarified by recital (53) with specific use cases.

Under the combined reading of Article 6(3) and recital (53), a series of use cases are excluded from the high-risk category.Footnote 71 For example, AI systems designed to transform unstructured data or classify documents are categorised as performing “narrow procedural tasks” and are therefore not high-risk. Also excluded are AI applications that enhance the language of documents, where this function is an adjunct to human effort rather than a replacement, so that the role of human judgment is maintained in the process. Another exception encompasses AI systems that identify deviations from established decision-making patterns and are intended to complement human assessments, thereby augmenting oversight without undermining human decision-making authority. Finally, AI systems engaged in preparatory tasks that have minimal impact on subsequent evaluations are likewise exempt; such tasks encompass indexing and data-processing activities that provide support without directly influencing high-stakes assessments.
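Schematically, the combined operation of annex III, point 8(a) and Article 6(3) amounts to a two-step test: does the system’s intended purpose fall within the listed ADR use cases and, if so, does one of the exempting conditions nonetheless apply? The sketch below is offered purely as an aid to exposition; the condition labels paraphrase recital (53), and the difficult work of matching real tools to these categories is precisely what the Act leaves open.

```python
# Labels paraphrase annex III, point 8(a) and recital (53); they are ours,
# not the Act's, and the test is deliberately simplified.
ANNEX_III_ADR_PURPOSES = {
    "researching and interpreting facts and the law",
    "applying the law to a concrete set of facts",
}

ART_6_3_EXEMPTIONS = {
    "narrow procedural task",         # e.g., structuring data, classifying documents
    "improves prior human activity",  # e.g., enhancing the language of a human draft
    "detects decisional deviations",  # flags departures from past decision patterns
    "preparatory task",               # e.g., indexing with minimal downstream impact
}

def is_high_risk(intended_purpose: str, claimed_exemption: str | None) -> bool:
    """Two-step test: annex III purpose first, Art. 6(3) exemption second."""
    if intended_purpose not in ANNEX_III_ADR_PURPOSES:
        return False
    return claimed_exemption not in ART_6_3_EXEMPTIONS
```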

The Act allows the Commission to clarify the exception outlined in Article 6(3) by adopting guidelines that detail its practical implementation.Footnote 72 These guidelines must include a comprehensive list of practical examples distinguishing high-risk from non-high-risk use cases of an AI system.Footnote 73 In formulating these guidelines, the Commission is to pay special attention to the needs of small and medium-sized enterprises and to sectors most likely to be impacted by the Act.Footnote 74 Furthermore, under certain conditions, the Commission has the authority to issue delegated acts that can modify annex III;Footnote 75 this includes the ability to add new use cases for high-risk AI systems, change existing ones or remove specific cases from the list altogether.Footnote 76

2.5. Allocation of obligations along the AI value chain

When an AI system is classified as high-risk under the Act, sections 2 and 3 of chapter III lay out a significant number of requirements and obligations that are allocated to different entities. Failure to comply with all the requirements and obligations laid out by the Act for deployers and providers is heavily sanctioned.Footnote 77

Providers and downstream providers of high-risk AI systems (“providers,” as previously defined, are entities that place an AI system on the market or into service) are subject to the most stringent set of obligations.Footnote 78 For example, they are required to establish a risk management system,Footnote 79 complete procedures for the validation and testing of used data sets,Footnote 80 and draw up technical documentation related to the AI system.Footnote 81 They are also under an obligation to preserve an AI system’s automatically generated logsFootnote 82 and to register the AI system in a newly created registry kept by the Commission.Footnote 83

The Act also delineates obligations for deployers of high-risk AI systemsFootnote 84 (“deployers,” as previously defined, are entities using an AI system under their authority in the course of a professional activity). In particular, the Act lists a series of obligations, including assigning human oversight of the AI system to natural persons who have the necessary competence and training,Footnote 85 preserving the AI system’s automatically generated logs,Footnote 86 and monitoring the operation of the high-risk system and informing relevant authorities if certain serious risks arise.Footnote 87
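In condensed form, and for orientation only, the allocation just described can be summarised as a mapping from role to obligations. The summary below is editorial and non-exhaustive, and the article numbers are supplied as signposts to be checked against the Act itself.

```python
# Condensed, non-exhaustive summary of the obligations discussed above.
HIGH_RISK_OBLIGATIONS = {
    "provider": (
        "establish a risk management system (Art. 9)",
        "validate and test data sets (Art. 10)",
        "draw up technical documentation (Art. 11)",
        "preserve automatically generated logs (Arts. 12 and 19)",
        "register the system in the Commission's database (Art. 49)",
    ),
    "deployer": (
        "assign human oversight to competent, trained persons (Art. 26(2))",
        "preserve automatically generated logs (Art. 26(6))",
        "monitor operation and report serious risks (Art. 26(5))",
    ),
}

def obligations_for(role: str) -> tuple[str, ...]:
    return HIGH_RISK_OBLIGATIONS.get(role, ())
```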

The Act also provides for the case in which a deployer might assume the role of provider because of modifications they make to an AI system.Footnote 88 In particular, under Article 25(1)(c), a deployer assumes all the obligations of a provider when “they modify the intended purpose of an AI system, including a general-purpose AI system, which has not been classified as high-risk and has already been placed on the market or put into service.” Such modification must alter the intended purpose of the AI system “in such a way that the AI system concerned becomes a high-risk AI system in accordance with Article 6” of the Act. Notwithstanding these indications, the scope of substantial modification under Article 25(1)(c) remains unclear.Footnote 89 The Commission is empowered to adopt guidelines to clarify the practical implementation of the provisions related to this substantial modification.Footnote 90
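The trigger of Article 25(1)(c), as just described, reduces to a conjunction: repurposing plus resulting high-risk status. A minimal sketch, on the same illustrative assumptions as above:

```python
def assumes_provider_obligations(modified_intended_purpose: bool,
                                 now_high_risk: bool) -> bool:
    """Illustrative reading of Art. 25(1)(c): a deployer that repurposes a
    non-high-risk system already on the market steps into the provider's
    role if the system thereby becomes high-risk."""
    return modified_intended_purpose and now_high_risk

# E.g., an arbitrator repurposing a general-purpose chatbot to apply the law
# to the facts of a dispute would, on this reading, satisfy both limbs.
assert assumes_provider_obligations(True, True)
```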

When an AI system is not classified as high-risk, providers and deployers are still subject to certain obligations. For example, providers and deployers have a duty to take measures that ensure, to the maximum possible extent, a sufficient level of AI literacy among their staff and other persons dealing with the operation and use of AI systems on their behalf.Footnote 91 In addition, the EU AI Act establishes a framework for the adoption of codes of conduct reflecting contemporary technological solutions and industry best practices.Footnote 92 Codes of conduct may also provide for the extension to non-high-risk systems of all the obligations and requirements applicable to high-risk systems.Footnote 93 They may be developed by individual AI providers or in partnership with representative organisations, including civil society and academic institutions,Footnote 94 and their establishment is facilitated by the AI Office and national authorities.Footnote 95

3. Implications of the EU AI Act for commercial arbitration

This section discusses the implications of the EU AI Act for key stakeholders in commercial arbitration, taking into account the current state of AI usage in the field. First, the section focuses on AI-based tools specifically designed for arbitration and the entities that are developing them, which are classified as “providers” under the Act. The second part of this section focuses on the challenges surrounding the use of AI tools by arbitrators themselves, distinguishing between the use of high-risk systems and general-purpose AI systems such as ChatGPT. Unlike the entities that develop AI tools specifically for arbitration, arbitrators utilising specialised or more open-ended AI-based tools for different tasks will primarily be considered “deployers” of AI systems under the Act. Third, this section discusses the additional complications relating to the Act’s vast geographical scope of application. Finally, the section discusses the consequences of infringing the Act and the suitability of the Act’s enforcement structure for the field of commercial arbitration.

3.1. Implications for companies and arbitral institutions providing AI-based tools for commercial arbitration

The Act categorises AI systems used in commercial arbitration as high-risk when they are intended to be used for two types of tasks, namely (1) assisting an arbitrator in researching and interpreting facts and the law, and (2) applying the law to a concrete set of facts.Footnote 96 Because the market for AI tools for legal professionals in commercial arbitration is growing rapidly, many existing products designed to assist arbitrators may qualify as “high-risk” systems under this definition. This classification imposes a substantial compliance burden on entities classified as providers of such systems. The first part of this subsection briefly describes the landscape of AI usage in commercial arbitration. The second part discusses the implications of the Act for the actors developing these tools.

3.1.1. AI tools for arbitration developed by companies and institutions

Specialised companies are releasing increasingly sophisticated tools for commercial arbitration that merge multiple functions. For example, some specialised companies provide AI systems that enable the research of legal materials across multiple commercial databases and arbitral institution repositories,Footnote 97 in addition to offering summarisation and text generation functions.Footnote 98 These AI systems, which are specifically designed for commercial arbitration and promoted as such in their providers’ marketing materials, might be classified as high-risk systems. Such classification could stem from the fact that the “intended purpose” of these systems, as defined by the providers, is to be used in researching and interpreting the law.Footnote 99

In addition, certain tools that have been developed for use in law firms to carry out various legal tasks might be used to support the work of arbitrators in ways that may trigger their classification as “high-risk.”Footnote 100 For example, “Harvey,”Footnote 101 adopted by several law firms internationally, offers an AI-based assistant that allows legal practitioners to analyse large volumes of documents and also answers legal, regulatory and tax questions. Similarly, another system with widespread adoption, “Robin AI,”Footnote 102 drafts contract summaries, allows for the retrieval of specific contract data and can generate reports focusing on terms or obligations. All these uses might, in certain contexts, qualify as “researching and interpreting facts and the law” and “applying the law to a concrete set of facts.”

Arbitral institutions are starting to integrate AI tools into their processes, and although this trend is still in its early stages, it is expected to grow rapidly in the coming years. For example, the ICC International Court of Arbitration announced a collaboration with an AI company to roll out an online case management system, “ICC Case Connect,”Footnote 103 in which all case documents will be centralised, making them accessible as data to be processed by other AI tools.Footnote 104 In the United States, the American Arbitration Association (AAA) has developed ClauseBuilder AI (Beta), a tool designed to streamline the drafting of arbitration and mediation clauses.Footnote 105 Another example is the Arbitration and Mediation Center of Peru’s Lima Chamber of Commerce, which offers a platform that integrates ChatGPT and allows users to receive assistance through the chatbot.Footnote 106 In China, the Guangzhou Arbitration Commission has developed an AI arbitration secretary designed to help with many of the administrative tasks that would otherwise need to be completed by the parties, the arbitral tribunal or the institution.Footnote 107

3.1.2. Implications of the Act for companies and arbitral institutions

Although the above-mentioned tools certainly qualify as AI systems under the Act, one could contend that some of the tasks do not fall within annex III, point 8, or, alternatively, that they qualify for the exception set forth in Article 6(3) and therefore are not high-risk. For example, AI systems that assist arbitrators with tasks such as proofreading, data retrieval, presenting search results in a structured format or highlighting deviations from established decision-making patterns may arguably be exempt from high-risk classification.Footnote 108 However, the conditions governing this exception, as well as the specific use cases that may qualify for exemption, are articulated in an open-ended manner,Footnote 109 which increases uncertainty for companies investing in the development of these tools.

Moreover, even though some use cases are expressly identified as exempt under the Act,Footnote 110 a significant number of tasks performed by AI systems in commercial arbitration occupy a grey area and may be perceived as influencing decision-making and fulfilling one or more of the conditions in Article 6(3). For instance, activities such as summarising documents or selecting specific documents or excerpts from a larger set could be construed as having a material impact on a tribunal’s decision because how information is curated and presented can shape human decision-makers’ understanding and perceptions.Footnote 111

An additional layer of unpredictability arises from the requirement to register high-risk systems, which also extends to AI systems that may qualify for exemption from the high-risk classification under Article 6(3).Footnote 112 Companies and institutions designated as providers will be required to prepare extensive documentation for the registration process, which is likely to be a complex and resource-intensive endeavour.Footnote 113 Further, there is a looming risk that the competent authorities might reassess and reclassify these AI systems registered as non-high-risk under Article 6(3), potentially changing their status to high-risk after registration.Footnote 114

Finally, it is worth noting that even if certain AI tools do not qualify as high-risk under the AI Act, they may still face regulatory hurdles. For instance, AAA ClauseBuilder, mentioned earlier, merely assists with drafting arbitration clauses and is unlikely to be classified as high-risk;Footnote 115 however, its designation as an AI system under the Act still subjects it to certain obligations, such as ensuring adequate AI literacy among staffFootnote 116 as well as complying with voluntary codes of conduct.Footnote 117 Although these codes are not mandatory, they may put pressure on stakeholders to comply with strict standards,Footnote 118 conceivably restricting innovation and creating obstacles to market access for smaller players.

3.2. Implications of the EU AI Act for arbitrators using AI systems

In addition to impacting companies and arbitral institutions, the EU AI Act is likely to have significant implications for arbitrators who use AI-based tools to facilitate decision-making.Footnote 119 The “high-risk” classification of certain AI systems used in dispute resolution, combined with the Act’s stringent compliance obligations, raises questions for arbitrators intending to rely on AI tools during arbitral proceedings.

Two distinct scenarios must be considered to understand how the Act applies to arbitrators. The first scenario involves the use of AI systems designed specifically to assist arbitrators in researching legal facts, interpreting the law or applying the law to the facts of a particular dispute.Footnote 120 In this case, arbitrators using such systems are classified as deployers of high-risk AI systems under the Act and are consequently subject to the relevant obligations. The second scenario concerns arbitrators’ use of general-purpose AI systems, such as ChatGPT, for similar tasks.Footnote 121 Although general-purpose AI systems are not designed specifically for arbitration, their use in ways that align with high-risk activities may nonetheless bring them within the scope of the high-risk provisions of the Act.Footnote 122

3.2.1. Usage of high-risk AI systems by arbitrators

In the first scenario, where arbitrators use AI systems classified as high-risk, they are subject to the obligations set forth in Article 26 of the Act. This is because, under such circumstances, arbitrators qualify as deployers.Footnote 123 In particular, under Article 26(2) of the Act, arbitrators must ensure human oversight of the AI system to prevent outputs that might adversely affect fundamental rights, health or safety.Footnote 124 This requirement obliges arbitrators to retain ultimate decision-making authority and intervene whenever an AI system’s outputs compromise the fairness or legality of proceedings.Footnote 125

This requirement is intended to ensure due process – a cornerstone of arbitration – by guaranteeing that a human is involved in all material decisions. At the same time, it constrains party autonomy, given that it limits the extent to which arbitrators can rely on AI tools, even in situations where the parties have explicitly agreed to their unrestricted use. Nonetheless, it is worth noting that there already appears to be a prevailing understanding within the arbitral community that arbitrators must not replace their independent analysis of the facts, the law and the evidence with AI-generated outputs.Footnote 126

In addition, Article 86 of the Act requires arbitrators to disclose to the parties whenever a high-risk AI system is used in the decision-making process. More specifically, Article 86 confers on individuals affected by a decision based on high-risk-system output the right to obtain from the deployer “clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken.” Therefore, based on the language of Article 86, arbitrators will have to not only disclose that they used an AI system but also provide information about the system in question and how it impacted the decision-making process.Footnote 127 This transparency obligation ensures that parties are fully informed about the usage of AI within the arbitration proceeding, thereby fostering accountability and preserving trust in the arbitral process. This disclosure obligation also reflects an emerging consensus within the arbitral community: guidelines and opinions are converging on the principle that the use of AI systems in arbitration must be disclosed to the parties.Footnote 128

3.2.2. Usage of general-purpose AI systems by arbitrators

The second scenario, involving the use of general-purpose AI systems by arbitrators, raises more complex questions. The use of general-purpose AI systems is not per se classified as high-risk under the Act.Footnote 129 However, if used for high-risk activities listed in annex III of the Act, their classification may change. Articles 6(2) and 25(1)(c) of the Act suggest that when arbitrators use general-purpose AI tools in a manner that aligns with high-risk activities, they may be classified as providers under the Act.Footnote 130 Consequently, arbitrators would not only be subject to obligations typically imposed on deployers (i.e., those established under Article 26, such as assigning human oversight and monitoring the operation of the high-risk AI system) but also to certain obligations applicable to providers, such as ensuring data quality and maintaining compliance documentation under Article 16.Footnote 131

Determining whether the use of a general-purpose AI system constitutes high-risk use depends on the specific task for which the system is deployed. As previously described, annex III, particularly point 8(a), identifies two activities as high-risk: “researching and interpreting facts and the law” and “applying the law to a concrete set of facts.”Footnote 132

Relevant again here is the language of annex III and the exception established in Article 6(3). If an AI system is used for tasks such as proofreading or enhancing the language and tone of a decision, such usage does not fall within the scope of annex III, point 8(a), because it is not intended to assist an arbitrator in assessing facts and applying the law. Moreover, proofreading or enhancing the language of a text would also be classified as merely improving “the result of a previously completed human activity” under Article 6(3)(b) and, therefore, would not qualify as high-risk. Conversely, the use of a general-purpose system to perform any core decision-making task in the proceedings would unequivocally be classified as high-risk under Article 25(1)(c).

Many tasks, however, fall into a grey area between these two extremes. Ancillary tasks, such as legal research, drafting procedural documents and reviewing parties’ claims and evidence, can often be accomplished with the assistance of general-purpose AI systems not specifically designed for these purposes. Whether such tasks qualify as high-risk would then depend on whether they materially influence the outcome of decision-making, as per Article 6(3) and the clarification provided by recital (53) of the Act.Footnote 133 In practice, these ancillary tasks, although not constituting decision-making per se, may nonetheless shape how the arbitral tribunal interprets facts and applies the law.Footnote 134 Consequently, the EU AI Act’s broad language may be interpreted as significantly curtailing the use of LLMs and other general-purpose AI systems for such uses, given the potential for these tools to influence the outcome of the arbitral process.Footnote 135

3.3. Implications of the Act’s broad geographical scope

The Act’s broad extraterritorial scope introduces further complexities for companies, arbitral institutions and arbitrators. As noted, the Act applies not only to providers and deployers located within the EU but also to those who provide or deploy AI systems whose outputs are used within the EU, as well as to all affected parties based in the EU.Footnote 136 Given the inherently cross-border nature of many commercial arbitration proceedings, determining the circumstances in which the Act applies is not always straightforward.

In certain cases, the applicability of the Act is clear. This is the situation for EU-based companies, arbitral institutions and arbitrators who either market or deploy AI systems for arbitration within the Union. Such actors are directly subject to the Act’s provisions under Article 2(1)(a) and (b).

However, the Act’s extraterritorial reach, as outlined in Article 2(1)(c), significantly broadens its scope. Under this provision, as previously described, the Act applies where the output of an AI system is used within the EU. Consequently, it can be argued that the Act extends to non-EU arbitral institutions and arbitrators who, although physically outside the EU, conduct arbitral proceedings that are legally seated in the EU.Footnote 137 In such cases, the connection to the EU arises from the legal framework governing the arbitration rather than the geographic location of the arbitrators or the institution.

The broad and ambiguous language of the Act may also encompass situations with even more tenuous connections to the EU. For example, if the concept of “use of the output of an AI system” is interpreted to include the enforcement of an arbitral award, the Act could apply to proceedings where enforcement is sought in an EU court, even if the arbitration itself was conducted outside the EU and administered by a non-EU arbitral institution.Footnote 138 Similarly, the notion of an “affected party located in the EU” as per Article 2(1)(g) could further extend the Act’s applicability. In such cases, the Act may apply simply because one party to the arbitration has a presence in the EU, even if the arbitration is conducted entirely outside the EU, administered by a non-EU institution and involves facts with no substantive ties to the Union.

In practice, the Act’s application will, therefore, entirely depend on how key terms – such as “use of an output” and “affected party” – are interpreted.Footnote 139 For arbitrators and institutions operating outside the EU, this represents a significant level of legal uncertainty because they may find themselves inadvertently subject to the Act’s stringent compliance requirements, even in cases with only marginal connections to the EU.Footnote 140

3.4. Enforcement and fines

Despite the uncertainty stemming from the Act’s requirements and its broad geographical scope, noncompliance with the Act exposes companies, arbitral institutions and other stakeholders to significant risks.Footnote 141

The most immediate consequence of noncompliance with the Act is the risk of significant fines. Article 99(4) provides for penalties of up to €15 million or 3 per cent of a company’s total global annual turnover, whichever is higher.Footnote 142 Additional fines and nonmonetary sanctions are to be stipulated by the Member States.Footnote 143 Such fines could be unsustainable for individual arbitrators, small companies or arbitral institutions operating as small enterprises. Consequently, these smaller stakeholders may be disincentivised from fully exploring the potential benefits of AI technologies.Footnote 144 This, in turn, risks consolidating the market for AI tools in arbitration around a few well-resourced providers, thereby undermining competition and reducing the diversity of solutions available to arbitral institutions and practitioners.
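The asymmetry of this ceiling is easily made concrete. Because Article 99(4) takes the higher of €15 million and 3 per cent of worldwide annual turnover, the fixed €15 million floor governs any undertaking with turnover below €500 million – a description that fits most arbitral institutions and boutique legal tech providers. A worked example, with illustrative figures only:

```python
def max_fine_art_99_4(worldwide_annual_turnover_eur: float) -> float:
    """Ceiling under Art. 99(4): EUR 15m or 3% of total worldwide annual
    turnover for the preceding financial year, whichever is higher."""
    return max(15_000_000.0, 0.03 * worldwide_annual_turnover_eur)

print(max_fine_art_99_4(10_000_000))     # 15000000.0 -> 150% of annual turnover
print(max_fine_art_99_4(2_000_000_000))  # 60000000.0 -> 3% of turnover
```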

More indirectly, arbitrators who misuse AI systems may also suffer significant reputational damage, likely adversely affecting their professional standing and future opportunities, or even conceivably subjecting them to civil liability for violating the implied terms of their mandate.Footnote 145 Further, given the lack of case law in this area, there is now – and for the foreseeable future – a lack of clarity regarding the extent to which misuse of AI by an arbitral tribunal or an arbitral institution might affect the validity of an arbitral award.Footnote 146

In addition, enforcing compliance regarding the use of AI in arbitration poses unique difficulties. Given the confidential nature of proceedingsFootnote 147 and the inherent challenges in verifying the deployment and oversight of AI systems, it will often be difficult to determine the extent to which these systems were used and whether their usage complied with the Act.Footnote 148 Indeed, it may not even be apparent that an AI tool was used at all.Footnote 149 For example, an arbitrator’s reliance on an AI system for legal research or procedural document drafting may not be evident from the procedural record or the award itself.

4. Conclusion

The EU AI Act represents a bold regulatory intrusion into parties’ procedural autonomy, which has long underpinned commercial arbitration. By subjecting arbitration to the Act’s risk-based regulatory framework, the EU is set to affect the delicate equilibrium between self-regulation, party autonomy and due process that has allowed commercial arbitration to thrive as a flexible, efficient mechanism for resolving transnational disputes.Footnote 150

Although the Act’s ambition is to safeguard fundamental rights and procedural fairness in adjudication, its coverage of all types of arbitration proceedings – lumped indistinctly within the “alternative dispute resolution” category of annex III – fails to account for the unique characteristics of commercial arbitration and the proven capacity of its stakeholders to respond and adapt to technological advancements without heavy-handed oversight.Footnote 151

Commercial arbitration operates in an ecosystem fundamentally distinct from other forms of ADR. In consumer or employment disputes, where significant power asymmetries exist, robust procedural safeguards are necessary to protect vulnerable parties.Footnote 152 By contrast, commercial arbitration typically involves experienced and well-resourced parties negotiating procedural frameworks on equal footing, often with the assistance of specialised institutions.Footnote 153

The EU AI Act’s uniform regulatory approach fails to account for these differences, imposing compliance obligations that are not only impractical but also undermine arbitration’s foundational principles of flexibility and party autonomy. This one-size-fits-all approach also appears to disregard the long-standing EU approach to arbitration, which has traditionally left procedural matters untouched to ensure that commercial arbitration remains a party-driven and adaptable form of dispute resolution.

Therefore, to reconcile the EU AI Act’s objectives with the needs of commercial arbitration, the EU should address this sector specifically under its powers established in Articles 6 and 7 of the Act.Footnote 154 The most suitable solution would be to exclude commercial arbitration from annex III’s scope through a delegated act under Article 7(3). Such exclusion should be understood as covering matters arising from all relationships of a commercial nature, contractual or otherwise,Footnote 155 to the exclusion of those matters covered by the ADR Directive.Footnote 156 This carve-out would align with the principle of proportionality, thereby ensuring that regulatory interventions target only those arbitration proceedings – such as consumer arbitrations – where weaker parties require protection.Footnote 157

Alternatively, the Commission could issue guidelines under Article 6(5) to narrow the scope of high-risk AI use cases in arbitration.Footnote 158 To achieve this objective, targeted consultation with arbitral institutions, practitioners and users would help differentiate between AI tools that genuinely threaten fundamental rights and those that enhance efficiency without compromising due process.Footnote 159 These guidelines could establish a tiered compliance system that exempts routine AI applications while reserving scrutiny for systems that directly impact adjudicative outcomes.

References

1 Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) [2024] OJ L 2024/1689.

2 The EU institutions have framed the EU AI Act as the world’s first comprehensive AI law. See “EU AI Act: First Regulation on Artificial Intelligence” (European Parliament, 18 June 2024) <www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence> accessed 29 January 2025. See also “Statement by President von der Leyen on the Political Agreement on the EU AI Act” (European Commission, 8 December 2023) <ec.europa.eu/commission/presscorner/detail/en/statement_23_6474> accessed 29 January 2025. Although the Act undeniably has an ambitious scope, aiming to regulate AI applications across all sectors, it is important to acknowledge that many other jurisdictions have made significant progress in the governance of AI. On the development of AI regulation in other jurisdictions, see, e.g., Anu Bradford, Digital Empires: The Global Battle to Regulate Technology (Oxford University Press 2023); Damian Okaibedi Eke and others, Responsible AI in Africa: Challenges and Opportunities (Springer Nature 2023); Nathalie A Smuha, “From a ‘Race to AI’ to a ‘Race to AI Regulation’: Regulatory Competition for Artificial Intelligence” (2021) 13(1) Law, Innovation and Technology 57; Keith Jin Deng Chan, Gleb Papyshev and Masaru Yarime, “Balancing the Tradeoff Between Regulation and Innovation for Artificial Intelligence: An Analysis of Top-Down Command and Control and Bottom-Up Self-Regulatory Approaches” (2024) 79 Technology in Society 102747; Sara Migliorini, “China’s Interim Measures on Generative AI: Origin, Content and Significance” (2024) 53 Computer Law & Security Review 105985; Devyani Pande and Araz Taeihagh, “Navigating the Governance Challenges of Disruptive Technologies: Insights from Regulation of Autonomous Systems in Singapore” (2023) 26(3) Journal of Economic Policy Reform 298.

3 As examples of the emerging literature on the Act, see, e.g., Irina Carnat, “Addressing the Risks of Generative AI for the Judiciary: The Accountability Framework(s) Under the EU AI Act” (2024) 55 Computer Law & Security Review 106067; Johann Laux, Sandra Wachter and Brent Mittelstadt, “Trustworthy Artificial Intelligence and the European Union AI Act: On the Conflation of Trustworthiness and Acceptability of Risk” (2024) 18(1) Regulation & Governance 3; Hannah van Kolfschooten, “EU Regulation of Artificial Intelligence: Challenges for Patients’ Rights” (2022) 59(1) Common Market Law Review 81; Qiang Ren and Jing Du, “Harmonizing Innovation and Regulation: The EU Artificial Intelligence Act in the International Trade Context” (2024) 54 Computer Law & Security Review 106028; Frans af Malmborg, “Narrative Dynamics in European Commission AI Policy – Sensemaking, Agency Construction, and Anchoring” (2023) 40(5) Review of Policy Research 757; Sara Migliorini, “‘More than Words’: A Legal Approach to the Risks of Commercial Chatbots Powered by Generative Artificial Intelligence” (2024) 15 European Journal of Risk Regulation 719; Rostam J Neuwirth, “Prohibited Artificial Intelligence Practices in the Proposed EU Artificial Intelligence Act” (2023) 48 Computer Law & Security Review 105798.

4 Malmborg (n 3) 765; Laux, Wachter and Mittelstadt (n 3) 17. See also Marco Almada and Anca Radu, “The Brussels Side-Effect: How the AI Act Can Reduce the Global Reach of EU Policy” (2024) 25(4) German Law Journal 646; Gerhard Wagner, “Liability Rules for the Digital Age – Aiming for the Brussels Effect” (2023) 13(3) Journal of European Tort Law 191.

5 EU AI Act, Art. 6(2) and annex III, point 8.

6 EU AI Act, Art. 6(3) and annex III, point 8.

7 The term “alternative dispute resolution” is not defined as such by the Act. However, recital (61) states that “AI systems intended to be used by alternative dispute resolution bodies … should also be considered to be high-risk when the outcomes of the alternative dispute resolution proceedings produce legal effects for the parties.” In the Commission Proposal, there was no mention of ADR mechanisms. Subsequently, the European Parliament at first reading introduced it in annex III. See European Parliament legislative resolution of 13 March 2024 on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts (COM (2021) 0206 – C9-0146/2021 – 2021/0106 (COD)) (Ordinary legislative procedure: first reading).

8 Regulation (EU) No 524/2013 of the European Parliament and of the Council of 21 May 2013 on online dispute resolution for consumer disputes and amending Regulation (EC) No 2006/2004 and Directive 2009/22/EC (Regulation on Consumer ODR) [2013] OJ L 165/1; Directive 2013/11/EU of the European Parliament and of the Council of 21 May 2013 on alternative dispute resolution for consumer disputes and amending Regulation (EC) No 2006/2004 and Directive 2009/22/EC (Directive on Consumer ADR) [2013] OJ L 165/63.

9 See Norbert Reich, “Party Autonomy and Consumer Arbitration in Conflict: A Trojan Horse in the Access to Justice in the EU ADR-Directive 2013/11” (2015) 4 Penn State Journal of Law & International Affairs 290, 308–12.

10 Manuel Penades Fons, “The Effectiveness of EU Law and Private Arbitration” (2020) 57(4) Common Market Law Review 1069, 1070–2.

11 In this context, a significant amount of literature has emerged on the intersection between AI and arbitration. See, e.g., Maxi Scherer, “Artificial Intelligence and Legal Decision-Making: The Wide Open?” (2019) 36(5) Journal of International Arbitration 539; Maxi Scherer, “International Arbitration 3.0 – How Artificial Intelligence Will Change Dispute Resolution” in Christian Klausegger and others (eds), Austrian Yearbook on International Arbitration (MANZ Verlag 2019) 503; Joe Liu, “The Human Impact on Arbitration in the Emerging Era of Artificial Intelligence” (2024) 17(1) Contemporary Asia Arbitration Journal 91; Lucy Reed, “AI Versus IA: End of the Enlightenment?” in Cavinder Bull, Loretta Malintoppi and Constantine Partasides (eds), Arbitration’s Age of Enlightenment? (ICCA Congress Series No 21) (Wolters Kluwer 2023) 65; Sara Migliorini, “Automation & Augmentation: Artificial Intelligence in International Arbitration” (2024) 1(1) Jus Mundi Arbitration Review 119; Maroof Rafique, “Why Artificial Intelligence Is a Compatible Match for Arbitration” (2022) 88(2) Arbitration: The International Journal of Arbitration, Mediation and Dispute Management 310; Abhishek Das and Bhanu Ranjan, “Assessing the Impact of Artificial Intelligence on the Arbitration Process” (2024) 17(2) Contemporary Asia Arbitration Journal 133; Derick H Lindquist and Ylli Dautaj, “AI in International Arbitration: Need for the Human Touch” (2021) 1 Journal of Dispute Resolution 39; Mohammad Azam Hussain and others, “The Potential Prospect of Artificial Intelligence (AI) in Arbitration from the International, National and Islamic Perspectives” (2023) 19(1) Journal of International Studies 95; Jordan Bakst and others, “Artificial Intelligence and Arbitration: A US Perspective” (2022) 16(7) Dispute Resolution International 7.

12 See, e.g., “Lexis Plus AI” (LexisNexis) <www.lexisnexis.com/en-us/products/lexis-plus-ai.page> accessed 2 January 2025 and “Jus AI” (Jus Mundi) <https://jusmundi.com/en/jus-ai> accessed 2 January 2025.

13 See generally João Ilhão Moreira and Jiawei Zhang, “ChatGPT as a Fourth Arbitrator? The Ethics and Risks of Using Large Language Models in Arbitration” (2024) Arbitration International 1, 3–5.

14 See, e.g., Guangzhou Arbitration Commission, “The Guangzhou Arbitration Commission Introduces Its Pioneering AI Arbitration Secretary in Guangzhou’s Nansha District” (Guangzhou Arbitration Commission, 31 August 2023) <www.gzac.org/gzxw/6302> accessed 31 January 2025.

15 Maxi Scherer, Ole Jensen and Russell Childree, “Regulating the Use of Artificial Intelligence in International Arbitration: The EU AI Act and Beyond” (2024) 3 Cahiers de l’Arbitrage (Paris Journal of International Arbitration) 653; Elizabeth Chan and others, “Harnessing Artificial Intelligence in International Arbitration Practice” (2023) 16 Contemporary Asia Arbitration Journal 263. See also Gustavo Moser, “The EU AI Act and International Arbitration: A New Era for Regulatory Competition?” (Gustavo Moser, 28 November 2024) <https://gustavomoser.com/the-eu-ai-act-and-international-arbitration/> accessed 6 February 2025.

16 As noted by Bermann, “International arbitration concerned itself very little, if at all, with what mattered most to the EU. Except in the case of State-to-State disputes, international arbitration’s diet for years consisted almost entirely of private contract-based commercial disputes.” George A Bermann, “European Union Law and International Arbitration at a Crossroads” (2019) 42(3) Fordham International Law Journal 967, 969. See also Morten Broberg and Niels Fenger, “The Law of Arbitration and EU Law – Like Oil and Water?” (2022) 7(1) European Investment Law and Arbitration Review 87, 111; George A Bermann, “Navigating EU Law and the Law of International Arbitration” (2012) 28 Arbitration International 397.

17 Federico Ferre, “EU Internal Market Law and the Law of International Commercial Arbitration: Have the EU Chickens Come Home to Roost?” (2020) 22 Cambridge Yearbook of European Legal Studies 133, 134.

18 For example, the European Parliament issued a report noting that EU law does not regulate commercial arbitration and that there is currently a diversity of arbitration laws and practices across the EU. See European Parliament (Directorate General for Internal Policies – Policy Department C: Citizens’ Rights and Constitutional Affairs), Legal Instruments and Practice of Arbitration in the EU (2014) 186 <www.europarl.europa.eu/RegData/etudes/STUD/2015/509988/IPOL_STU(2015)509988_EN.pdf> accessed 2 January 2025. Since its origin, the European regime of free circulation of court judgments deems arbitration to be a self-contained system, benefiting from the New York Convention on the Recognition and Enforcement of Foreign Arbitral Awards (adopted 10 June 1958, entered into force 7 June 1959) 330 UNTS 3, and excludes it from its scope of application. See 1968 Brussels Convention on Jurisdiction and the Enforcement of Judgments in Civil and Commercial Matters [1968] OJ L 299, Art. 1; Regulation (EU) No 1215/2012 of the European Parliament and of the Council of 12 December 2012 on jurisdiction and the recognition and enforcement of judgments in civil and commercial matters (recast) [2012] OJ L 351/1, Art. 1(2)(d). See also Report by P Jenard on the Convention of 27 September 1968 on jurisdiction and the enforcement of judgments in civil and commercial matters [1979] OJ C 59/1. This approach seems confirmed by the Commission’s latest report on the possible future recast of the Brussels Ibis Regulation: European Commission, ‘Report from the Commission to the European Parliament, the Council and the European Economic and Social Committee on the Application of Regulation (EU) No 1215/2012 of the European Parliament and of the Council of 12 December 2012 on Jurisdiction and the Recognition and Enforcement of Judgments in Civil and Commercial Matters (Recast)’ COM(2025) 268 final.

19 A number of legislative instruments, based on Art. 81 of the TFEU, exclude arbitration from their scope of application. The relevant provisions and instruments include Art. 1(2)(d) of Regulation (EU) No 1215/2012 of the European Parliament and of the Council of 12 December 2012 on jurisdiction and the recognition and enforcement of judgments in civil and commercial matters (recast) [2012] OJ L 351/1; Art. 1(2)(e) of Regulation (EC) No 593/2008 of the European Parliament and of the Council of 17 June 2008 on the law applicable to contractual obligations (Rome I) [2008] OJ L 177/6; and recital (73) of Regulation (EU) 2015/848 of the European Parliament and of the Council of 20 May 2015 on insolvency proceedings (recast) [2015] OJ L 141/19.

20 For example, in Marc Rich, the CJEU held that if the substance of a lawsuit concerns arbitration procedures – as it did in that case – litigation relating to the appointment of arbitrators falls outside the scope of the Brussels regime (now the Brussels Ibis Regulation), thereby adopting a broad interpretation of the arbitration exclusion. See Case C-190/89, Marc Rich & Co AG v Societa Italiana Impianti PA [1991] ECR I–3855, paras 26–29.

21 See Regulation on Consumer ODR (n 8); Directive on Consumer ADR (n 8). See also Pavel Loutocký, “Practical Impacts of the EU Regulation on Online Dispute Resolution for Consumer Disputes” in Klára Drličková and Tereza Kyselovská (eds), COFOLA INTERNATIONAL 2016: Resolution of International Disputes Public Law in the Context of Immigration Crisis (Masarykova Univerzita Nakladatelství 2016) 254, 255–6. Another sector where EU law has regulated ADR is digital services. See Regulation (EU) 2022/1925 of the European Parliament and of the Council of 14 September 2022 on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828 (Digital Markets Act) [2022] OJ L 265/1; Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and amending Directive 2000/31/EC (Digital Services Act) [2022] OJ L 277/1. See also the discussion in Fons (n 10).

22 George A Bermann, “European Union Law and International Arbitration at a Crossroads” (n 16); Ferre (n 17).

23 Fons (n 10) 1071.

24 In Mostaza Claro, the CJEU held that a national court reviewing an action for annulment of an arbitration award must assess the validity of the underlying arbitration agreement. Specifically, if the agreement includes a term deemed unfair under the Unfair Contract Terms Directive, the court is obligated to annul the award. This requirement holds true even if the consumer did not raise the issue of invalidity during the arbitration proceedings but only in the subsequent annulment action (see Council Directive 93/13/EEC of 5 April 1993 on unfair terms in consumer contracts [1993] OJ L 95/29, Art. 3 and annex; Case C-168/05, Elisa María Mostaza Claro v Centro Móvil Milenium SL [2006] ECR I-10421, para 39).

25 Some commentators have argued that additional constraints on arbitrability may be derived from the CJEU’s decisions in Ingmar (Case C-381/98, Ingmar GB Ltd v Eaton Leonard Technologies Inc [2000] ECR I-09305) and Unamar (Case C-184/12, United Antwerp Maritime Agencies (Unamar) NV v Navigation Maritime Bulgare [2013] ECLI:EU:C:2013:663), which recognised certain provisions of EU secondary law as “overriding mandatory provisions.” See Giesela Rühl, “Extending Ingmar to Jurisdiction and Arbitration Clauses: The End of Party Autonomy in Contracts with Commercial Agents?” (2007) 15(6) European Review of Private Law 891.

26 An obstacle to the recognition and enforcement of awards may arise from the EU rules on competition (Case C-126/97, Eco Swiss China Time Ltd v Benetton International NV [1999] ECR I-3055) as well as more broadly from the principle of effectiveness as applied by the CJEU. See Fons (n 10) 1079. See also Case C-700/20 London Steam-Ship Owners’ Mutual Insurance Association Ltd v Kingdom of Spain [2022] ECLI:EU:C:2022:488.

27 See, generally, Karl-Heinz Böckstiegel, “The Role of Party Autonomy in International Arbitration” (1997) 52(3) Dispute Resolution Journal 1.

28 Jack Brett, “EU Law and Procedural Autonomy in International Commercial Arbitration” (2021) 29(4) European Review of Private Law 583, 589. See also João Ilhão Moreira, The Regulation of International Commercial Arbitration: Arbitrators’ Duties and the Emerging Arbitral Market (Bloomsbury Publishing 2024) 164.

29 Alexis Mourre, “Is the Free-Market of Adjudication Dysfunctional?” in Albert Jan van den Berg (ed), International Arbitration: The Coming of a New Age? (ICCA Congress Series No 17) (Wolters Kluwer Law & Business 2013) 67–72.

30 On the mechanisms that regulate commercial arbitration, see Moreira (n 28).

31 George A Bermann, “The Self-Styled ‘Autonomy’ of International Arbitration” (2020) 36(2) Arbitration International 221, 226–8. See also Brett (n 28) 590.

32 For example, as mentioned above, in Marc Rich, the CJEU concluded that if the substantive nature of the proceedings is related to arbitration, the proceedings fall within the scope of the arbitration exception. See Case C-190/89, Marc Rich, paras 19–23. In the subsequent case Van Uden, the CJEU reiterated that arbitration should be entirely excluded from the scope of the Brussels regime. See Case C-391/95, Van Uden Maritime BV, trading as Van Uden Africa Line v Kommanditgesellschaft in Firma Deco-Line [1998] ECR I-7091. In addition, in Eco Swiss, the CJEU elaborated on the concept of EU public policy and determined that violations of EU public policy should be a ground for parties to seek annulment or nonenforcement of arbitral awards (Case C-126/97, Eco Swiss, paras 36–40). See also Case C-700/20 London Steamship, paras 41–47, and Case C-40/08, Asturcom Telecomunicaciones SL v Cristina Rodríguez Nogueira [2009] ECR I–09579, paras 39–42.

33 Art. 3(1) of the EU AI Act defines an AI system as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

34 However, its broad scope may also capture a variety of other software types, which could lead to excessive regulation in certain areas. See Martin Ebers and others, “The European Commission’s Proposal for an Artificial Intelligence Act – A Critical Assessment by Members of the Robotics and AI Law Society (RAILS)” (2021) 4 Multidisciplinary Scientific Journal 589, 590–91. See also C(2025) 924 final Commission Guidelines on the definition of an artificial intelligence system established by Regulation (EU) 2024/1689 (AI Act) <https://digital-strategy.ec.europa.eu/en/library/commission-publishes-guidelines-ai-system-definition-facilitate-first-ai-acts-rules-application> accessed 27 February 2025.

35 EU AI Act, Art. 3(66).

36 At the time of writing, the classification under the Act of generative AI products, such as ChatGPT, appears to be controversial. When the draft version of the final text of the AI Act was made public in January 2024, it appeared that generative AI products – and not only the underlying algorithms – would fit into the category of AI models. See Sandra Wachter, “Limitations and Loopholes in the EU AI Act and AI Liability Directives: What This Means for the European Union, the United States, and Beyond” (2024) 26(3) Yale Journal of Law & Technology 671, 674–93. Conversely, others have argued persuasively that the definition of AI systems adopted by the Act encompasses generative AI products insofar as it includes tools capable of producing “output, such as texts, images and videos.” See David Fernández-Llorca and others, “An Interdisciplinary Account of the Terminological Choices by EU Policymakers Ahead of the Final Agreement on the AI Act: AI System, General Purpose AI System, Foundation Model, and Generative AI” [2024] Artificial Intelligence and Law. Seemingly confirming this view, see Emilija Leinarte, “The Classification of High-Risk AI Systems Under the EU Artificial Intelligence Act” (2024) 1 Journal of AI Law and Regulation 262. Along these lines, a literal reading of the EU AI Act appears to suggest that generative AI products are indeed included in the definition of AI systems, whereas the category of AI models is limited to the underlying algorithms that permit generative AI products to function. See also Guidelines on the definition of an artificial intelligence system (n 34).

37 This category was inserted at a later stage in the legislative process. The EU Commission’s initial draft had targeted only AI systems (see Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending Certain Union Legislative Acts, COM/2021/206 Final). However, the popularisation of generative AI tools, such as ChatGPT, prompted the EU lawmakers to add a separate category of “general-purpose AI models.” The text of the Act defines only “general-purpose AI models” (Art. 3(63) of the EU AI Act), but recital (97) uses the expression “AI models” interchangeably. The same is true, for example, of the European Parliament’s explainer on the EU AI Act. See European Parliament, “EU AI Act: First Regulation on Artificial Intelligence” (1 June 2023) <www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence> accessed 13 January 2025.

38 First, AI models demonstrate generality, enabling them to perform a diverse range of tasks. Second, they differ from traditional AI systems in their design, as they are usually trained on extensive datasets using methods like self-supervised, unsupervised, or reinforcement learning, and they can be further refined into new models (see Art. 3(63) of the EU AI Act). Recital (98) further explains that “models with at least a billion of parameters and trained with a large amount of data using self-supervision at scale should be considered to display significant generality and to competently perform a wide range of distinctive tasks.” Conversely, Art. 3(63) of the Act remains neutral regarding how AI models are placed on the market. Recital (97) also specifies the methods of commercialisation that may be used for AI models, such as “libraries, application programming interfaces (APIs), as direct download, or as physical copy.”

39 Fernández-Llorca and others (n 36) 7–9.

40 Ibid.

41 This is because developers of AI models target broader audiences and seek greater scalability of their models; they do not specifically target certain sectors.

42 These products include both commercial generative AI products and those tailored specifically to the legal field, such as Robin AI or Harvey, as well as general-purpose generative AI products such as ChatGPT. These tools and the impact of the EU AI Act on them are discussed in Section 3.

43 The concept of the “AI value chain,” which is adopted but not defined in the EU AI Act, describes the stages involved in developing, deploying, and utilising AI technologies. See EU AI Act, Art. 3(8).

44 In particular, public bodies and agencies can be “providers”, “deployers”, and “downstream providers”, according to the respective definitions of these terms under Art. 3.

45 EU AI Act, recital (9).

46 EU AI Act, Art. 3(3).

47 EU AI Act, Art. 3(68).

48 EU AI Act, Art. 3(6).

49 EU AI Act, Art. 3(7).

50 EU AI Act, Art. 3(4). Providers of general-purpose AI systems can also be “downstream providers” even when the AI model is “provided by themselves and vertically integrated.”

51 On the Act’s extraterritorial reach, see, e.g., Yan Wang, “Do Not Go Gentle into That Good Night: The European Union’s and China’s Different Approaches to the Extraterritorial Application of Artificial Intelligence Laws and Regulations” (2024) 53 Computer Law & Security Review 105965, 105967. See also Marta Cantero Gamito and Christopher T Marsden, “Artificial Intelligence Co-Regulation? The Role of Standards in the EU AI Act” (2024) 32(1) International Journal of Law and Information Technology 1, 3.

52 EU AI Act, Art. 2(1)(d). In addition, the Act applies to importers and distributors of AI systems entering the EU market (Art. 2(1)(b)). It also covers authorised representatives of providers established outside the EU (Art. 2(1)(f)), as well as product manufacturers who market or put into service an AI system under their own name or trademark (Art. 2(1)(e)).

53 EU AI Act, Art. 2(1)(g).

54 EU AI Act, Art. 2(1)(a).

55 EU AI Act, Art. 2(1)(c). Recital (12) explains that, for the purposes of the Act, “outputs generated by the AI system reflect different functions performed by AI systems and include predictions, content, recommendations or decisions.”

56 Under this approach, the EU AI Act categorises AI systems into four tiers based on the risks they pose. In particular, certain AI systems are banned entirely due to their inherent dangers (Art. 5). All other systems are permitted but are subject to different risk classifications and obligations. See Martin Ebers, “Truly Risk-Based Regulation of Artificial Intelligence – How to Implement the EU’s AI Act” [2024] European Journal of Risk Regulation 1, 3–4; Giovanni De Gregorio and Pietro Dunn, “The European Risk-Based Approaches: Connecting Constitutional Dots in the Digital Age” (2022) 59(2) Common Market Law Review 473, 475–6; Carnat (n 3) 9–11; Wolfgang Hoffmann-Riem, “Artificial Intelligence as a Challenge for Law and Regulation” in Thomas Wischmeyer and Timo Rademacher (eds), Regulating Artificial Intelligence (Springer 2020); Regine Paul, “European Artificial Intelligence ‘Trusted Throughout the World’: Risk-based Regulation and the Fashioning of a Competitive Common AI Market” (2024) 18(4) Regulation & Governance 1065, 1075–8; Johanna Chamberlain, “The Risk-Based Approach of the European Union’s Proposed Artificial Intelligence Regulation: Some Comments From a Tort Law Perspective” (2023) 14 European Journal of Risk Regulation 1, 4–9.

57 EU AI Act, Art. 5. See generally Rostam J Neuwirth, “Prohibited Artificial Intelligence Practices in the Proposed EU Artificial Intelligence Act (AIA)” (2023) 48 Computer Law & Security Review 105798.

58 EU AI Act, recital (48).

59 EU AI Act, ch III, ss 2 and 3. The applicability of these provisions is staggered over time and depends on certain factors (EU AI Act, Art. 111). Here it is assumed that, by 2027, most AI systems qualifying as high-risk and available on the market will be subject to the Act’s relevant provisions.

60 EU AI Act, Art. 95.

61 EU AI Act, Art. 6(1).

62 EU AI Act, Art. 6(1).

63 EU AI Act, recital (52); Art. 6(2).

64 As previously noted, ADR was not classified as a high-risk sector under the initial proposal submitted by the European Commission but was introduced during the legislative procedure by the European Parliament. See (n 37).

65 All the use cases in annex III are defined by the expression “AI systems intended to be used” for a certain activity or task. See also Fernández-Llorca and others (n 36).

66 EU AI Act, Art. 3(12).

67 Natali Helberger and Nicholas Diakopoulos, “ChatGPT and the AI Act” (2023) 12(1) Internet Policy Review 1, 4.

68 Fernández-Llorca and others (n 36).

69 EU AI Act, Art. 6(3).

70 To fall within this exception, at least one of the following conditions must be met: (a) the AI system is designed to carry out a specific procedural task; (b) the AI system aims to enhance the results of a prior human activity; (c) the AI system is focused on identifying decision-making patterns or deviations from past patterns and is not intended to replace or influence previous human evaluations without appropriate human oversight; or (d) the AI system is intended to assist in tasks that are preparatory to assessments relevant to the use cases outlined in annex III.

71 For an analysis of the application of these conditions to judicial authorities, see Carnat (n 3).

72 EU AI Act, Art. 6(5). The deadline for the adoption of the guidelines is February 2026.

73 EU AI Act, Art. 6(5).

74 EU AI Act, recital (143).

75 EU AI Act, Art. 7.

76 With respect to the removal of use cases, Art. 7(3) provides that the Commission is empowered to remove certain AI systems from the list of high-risk systems in annex III when both of the following conditions are met: (a) the high-risk AI system concerned no longer poses any significant risks to fundamental rights, health or safety, taking into account the criteria listed in paragraph 2; (b) the deletion does not decrease the overall level of protection of health, safety and fundamental rights under Union law.

77 EU AI Act, Art. 99.

78 EU AI Act, ch. III, s 2.

79 EU AI Act, Art. 9.

80 EU AI Act, Art. 10.

81 EU AI Act, Art. 11.

82 EU AI Act, Art. 12.

83 EU AI Act, Arts. 49 and 71. This obligation remains applicable even if the provider has concluded that the system is not in fact high-risk according to the exception provided in Art. 6(3) (Arts. 6(4) and 49(2)). Moreover, Art. 80 empowers national surveillance authorities to evaluate the AI system based on the criteria set out in Art. 6(3) and the Commission Guidelines. If the evaluation determines that the AI system is indeed high-risk, the authority will require the provider to take the necessary steps to ensure compliance with the EU AI Act, subject to fines.

84 EU AI Act, Art. 26.

85 EU AI Act, Art. 26(2).

86 EU AI Act, Art. 26(6).

87 EU AI Act, Art. 26(5).

88 EU AI Act, Art. 25(1)(c).

89 Vera Lúcia Raposo, “Ex Machina: Preliminary Critical Assessment of the European Draft Act on Artificial Intelligence” (2022) 30(1) International Journal of Law and Information Technology 88, 98.

90 EU AI Act, Art. 96(1)(c).

91 EU AI Act, Art. 4.

92 The codes of conduct are expected to include clear objectives and key performance indicators to measure the achievement of those objectives, focusing on several critical areas.

93 EU AI Act, Art. 95(1).

94 EU AI Act, Art. 95(3).

95 EU AI Act, Art. 95(2).

96 The full text of annex III, point 8(a) reads: “AI systems intended to be used by a judicial authority or on their behalf to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts, or to be used in a similar way in alternative dispute resolution.”

97 See, e.g., “Lexis Plus AI” (n 12); “Jus AI” (n 12).

98 “Jus AI” (n 12).

99 EU AI Act, recital (64).

100 As noted by sociological studies of the arbitral community, arbitrators are often partners at law firms. See Florian Grisel, “Competition and Cooperation in International Commercial Arbitration: The Birth of a Transnational Legal Profession” (2017) 51(4) Law & Society Review 790, 815–16; João Ilhão Moreira, “Arbitration Vis-à-Vis Other Professions: A Sociology of Professions Account of International Commercial Arbitrators” (2022) 49(1) Journal of Law and Society 48, 61–3.

101 “Harvey AI” (Harvey) <www.harvey.ai> accessed 2 January 2025.

102 “Reports” (Robin AI) <www.robinai.com/reports> accessed 2 January 2025.

103 Since October 2022, users of ICC Arbitration have been encouraged to file their requests for arbitration via ICC Case Connect, which enables more streamlined communication and file-sharing among the parties, the arbitral tribunal, and ICC case management teams. See “ICC Partners with Opus 2 to Shape Future of Dispute Resolution” (ICC, 14 June 2023) <https://iccwbo.org/news-publications/news/icc-partners-with-opus-2-to-shape-future-of-dispute-resolution/> accessed 2 February 2025.

104 Ahmet Cemil Yıldırım, “The Use of Technology in Case Management in International Investment Arbitration: A Realistic Approach” (2024) 40 Arbitration International 233, 239.

105 “ClauseBuilder AI (Beta)” (American Arbitration Association) <https://adr.org/index.php/blog/clausebuilder-api> accessed 31 January 2025.

106 “Faro de Transparencia (Transparency Lighthouse)” (Arbitration and Mediation Center of the Lima Chamber of Commerce) <www.arbitrajeccl.com.pe/en/test-gpt/> accessed 1 February 2025.

107 This AI tool functions like a general LLM for interactive questioning and may also be used to identify different speakers during arbitral proceedings, transcribing their words into text. After a hearing concludes, it is able to automatically generate a record and provide a summary of the hearing. See Moreira and Zhang (n 13). See also Guangzhou Arbitration Commission, “The Guangzhou Arbitration Commission Introduces Its Pioneering AI Arbitration Secretary in Guangzhou’s Nansha District” (Guangzhou Arbitration Commission, 31 August 2023) <www.gzac.org/gzxw/6302> accessed 31 January 2025.

108 EU AI Act, recital (53).

109 Federico Galli and Claudio Novelli, “The Many Meanings of Vulnerability in the AI Act and the One Missing” (2024) 2024(1) BioLaw Journal (Rivista di BioDiritto) 53, 57.

110 EU AI Act, recital (53).

111 See, generally, Chris Guthrie, Jeffrey J Rachlinski and Andrew J Wistrich, “Inside the Judicial Mind” (2001) 86 Cornell Law Review 777, 779–84; Claire I Tsai, Joshua Klayman and Reid Hastie, “Effects of Amount of Information on Judgment Accuracy and Confidence” (2008) 107(2) Organizational Behavior and Human Decision Processes 97; Irwin P Levin and others, “Framing Effects in Judgment Tasks With Varying Amounts of Information” (1985) 36(3) Organizational Behavior and Human Decision Processes 362.

112 EU AI Act, Art. 49. See also Henry Fraser and José-Miguel Bello y Villarino, “Acceptable Risks in Europe’s Proposed AI Act: Reasonableness and Other Principles for Deciding How Much Risk Management Is Enough” (2024) 15 European Journal of Risk Regulation 431, 436.

113 EU AI Act, Art. 71.

114 EU AI Act, Art. 80.

115 “ClauseBuilder” (ClauseBuilder) <www.clausebuilder.org> accessed 2 January 2025.

116 EU AI Act, Art. 4.

117 EU AI Act, Art. 95. See also recital (165).

118 Recital (165) states that providers and, as appropriate, deployers of all AI systems (high-risk or not) and AI models should also be encouraged to apply on a voluntary basis additional requirements related, for example, to the elements of the Union’s Ethics Guidelines for Trustworthy AI, environmental sustainability, AI literacy measures, inclusive and diverse design and development of AI systems, including attention to vulnerable persons and accessibility to persons with disability, stakeholders’ participation with the involvement, as appropriate, of relevant stakeholders such as business and civil society organisations, academia, research organisations, trade unions and consumer protection organisations in the design and development of AI systems, and diversity of the development teams, including gender balance.

119 As already noted by Scherer, Jensen and Childree (n 15) 545–6.

120 EU AI Act, annex III, point 8(a).

121 There have been reports of judges and arbitrators using ChatGPT in their decision-making: Hibaq Farah, “Court of Appeal Judge Praises ‘Jolly Useful’ ChatGPT After Asking It for Legal Summary” The Guardian (15 September 2023) <www.theguardian.com/technology/2023/sep/15/court-of-appeal-judge-praises-jolly-useful-chatgpt-after-asking-it-for-legal-summary>; Nate Raymond, “US Judge Runs ‘Mini-Experiment’ With AI to Help Decide Case” Reuters (7 September 2024) <www.reuters.com/legal/transactional/us-judge-runs-mini-experiment-with-ai-help-decide-case-2024-09-06>.

122 This possibility is also implied by the language of recital (86).

123 EU AI Act, Art. 3(4).

124 EU AI Act, Art. 26(2).

125 Scherer, Jensen and Childree (n 15) 554.

126 In the SVAMC Guidelines on the Use of Artificial Intelligence in Arbitration, Guideline 6 provides: “An arbitrator shall not delegate any part of their personal mandate to any AI tool. This principle shall apply to the arbitrator’s decision-making process. The use of AI tools by arbitrators shall not replace their independent analysis of the facts, the law, and the evidence.” See also Moreira and Zhang (n 13) 8–10; Scherer, Jensen and Childree (n 15) 554; Moser (n 15).

127 At the time of writing, Art. 86 is the subject of the first request for a preliminary ruling on its interpretation, concerning the extent of the disclosure it requires: Case C-806/24, Yettel Bulgaria, request for a preliminary ruling of 25 November 2024.

128 SVAMC Guidelines on the Use of Artificial Intelligence in Arbitration, Guideline 3. See also Moreira and Zhang (n 13); Elizabeth Chan and Katrina Limond, “Striking the Right Balance: Approaching Disclosure of Generative AI-Assisted Work Product in International Arbitration” (2024) 2024(1) Belgian Review of Arbitration 1, 5.

129 Leinarte (n 36) 278–9.

130 EU AI Act, Arts. 6(2) and 25(1)(c).

131 How this requirement would be interpreted and applied in the context of arbitration remains uncertain. It appears that fulfilling all the provider’s obligations would be challenging for individual arbitrators.

132 EU AI Act, annex III, point 8(a).

133 Scherer, Jensen and Childree (n 15) 549–51.

134 Regarding this point, there are parallels to be drawn between the tasks that AI might perform and the role fulfilled by an arbitral secretary. See Moreira and Zhang (n 13) 7.

135 Other technologies have been shown to influence decision-making. For a discussion of the influence of technology on the evaluation of evidence, see Mihaela Apostol, “Arbitration Tech Toolbox: Blind Spots in Arbitration – When Technology Distorts Evidence Without Direct Human Intervention” (Kluwer Arbitration Blog, 1 February 2025) <https://arbitrationblog.kluwerarbitration.com/2025/02/01/arbitration-tech-toolbox-blind-spots-in-arbitration-when-technology-distorts-evidence-without-direct-human-intervention/> accessed 6 February 2025.

136 EU AI Act, Art. 2(1). See also Yan Wang (n 51); Scherer, Jensen and Childree (n 15) 545; Wachter (n 36) 676; Carlo Casonato and Giulia Olivato, “AI Regulation in Europe: Exploring the Artificial Intelligence Act” in A Fabris and S Belardinelli (eds), Digital Environments and Human Relations (Springer 2024) 87–112.

137 This derives from the distinction between “seat” and “venue” in arbitration. The “seat” of an arbitration refers to the location selected by the parties as the legal place of the arbitration. By contrast, the “venue” refers to the specific geographical location where the tribunal and the parties agree to meet. See generally Jonathan Hill, “Determining the Seat of an International Arbitration: Party Autonomy and the Interpretation of Arbitration Agreements” (2014) 63(3) International & Comparative Law Quarterly 519. In the context of the AI Act, see Scherer, Jensen and Childree (n 15).

138 Scherer, Jensen and Childree (n 15) 547.

139 Ibid 7–8.

140 Ibid 8.

141 Manuel Wörsdörfer, “Mitigating the Adverse Effects of AI with the European Union’s Artificial Intelligence Act: Hype or Hope?” (2024) 43(3) Global Business and Organizational Excellence 106.

142 EU AI Act, Art. 99.

143 Additional fines and nonmonetary sanctions are to be specified by the Member States. See EU AI Act, Art. 99(1).

144 AI has also been noted to have the potential to widen the justice gap between parties, given that only corporations can bear the cost of sophisticated technology. See Chloe J Duge, “AI: Increasing Alternatives in Alternative Dispute Resolution” (2024) 12(1) Resolved: Journal of Alternative Dispute Resolution 21, 54.

145 See, generally, Ramón Mullerat and Juliet Blanch, “The Liability of Arbitrators: A Survey of Current Practice” (2007) 1(1) Dispute Resolution International 99.

146 See Horst Eidenmüller and Faidon Varesis, “What Is an Arbitration? Artificial Intelligence and the Vanishing Human Arbitrator” (2020) 17 New York University Journal of Law & Business 49, 75–81; Moreira and Zhang (n 13) 7; Scherer, Jensen and Childree (n 15) 541. See also David Horton, “Forced Robot Arbitration” (2023) 109 Cornell Law Review 679, 710–12.

147 See, generally, Nobumichi Teramura and Leon Trakman, “Confidentiality and Privacy of Arbitration in the Digital Era: Pies in the Sky?” (2024) 40(3) Arbitration International 277.

148 Scherer, Jensen and Childree (n 15) 548; Horton (n 146) 688.

149 See, e.g., Ruixiang Tang, Yu-Neng Chuang and Xia Hu, “The Science of Detecting LLM-Generated Text” (2024) 67(4) Communications of the ACM 50.

150 Mourre (n 29).

151 Moreira (n 28).

152 Reich (n 9) 306–07; Fons (n 10) 1075–6. See also João Ilhão Moreira, “The Limits to Voluntary Arbitration in Establishing a ‘Fair,’ ‘Independent’ and ‘Accessible’ Dispute Resolution Mechanism Outside Large Contractual Disputes” in Leonardo VP de Oliveira and Sara Hourani (eds), Access to Justice in Arbitration: Concept, Context and Practice (Wolters Kluwer 2020).

153 But see Catherine A Rogers, “The Arrival of the Have-Nots in International Arbitration” (2007) 8 Nevada Law Journal 341, documenting the increasingly complex landscape of arbitration, which now encompasses a more diverse range of actors.

154 On the practice of, and the choice among, different delegated powers, see Liesbet A Campo, “Delegated Versus Implementing Acts: How to Make the Right Choice?” (2021) 22(2) ERA Forum 193; Annalisa Volpato, Delegation of Powers in the EU Legal System (Routledge 2022).

155 See Art. 1 of the UNCITRAL Model Law (UNCITRAL Model Law on International Commercial Arbitration 1985, with amendments as adopted in 2006).

156 Directive on Consumer ADR (n 8), Arts. 1 and 4(1)(a) and (b).

157 In the context of the New York Convention and national legislation, a wider definition of commercial arbitration is often used. See, generally, Gary Born, International Commercial Arbitration (3rd edn, Wolters Kluwer 2023) s 2.03.

158 The Commission has begun adopting guidelines to clarify the interpretation of the Act under Art. 96, with more such documents expected to follow. See Guidelines on the definition of an artificial intelligence system (n 34); Commission Guidelines on prohibited artificial intelligence (AI) practices, as defined by the AI Act, <https://digital-strategy.ec.europa.eu/en/library/commission-publishes-guidelines-prohibited-artificial-intelligence-ai-practices-defined-ai-act>.

159 The EU Commission engaged in stakeholder consultations when adopting its first guidelines under Art. 96 of the Act to specify prohibited practices (Guidelines on prohibited artificial intelligence (AI) practices (n 158)). It also utilised stakeholder input to develop interpretative guidelines under Art. 35(3) of the Digital Services Act (European Commission, “Guidelines for Providers of Very Large Online Platforms and Very Large Online Search Engines on the Mitigation of Systemic Risks for Electoral Processes” (8 February 2024); Digital Services Act (n 21)).