Introduction
Regulation (EU) 2024/1689 (hereinafter the “EU AI Act” or the “Act”),Footnote 1 which entered into force on 1 August 2024, marks a landmark step in the governance of artificial intelligence (AI).Footnote 2 It establishes an ambitious framework designed to address potential risks and encourage innovation across a wide range of sectors.Footnote 3 The Act also represents the centrepiece of the EU’s strategy to position itself as a global leader in AI regulation, with significant implications that are expected to extend far beyond the EU’s borders.Footnote 4
Within its broad scope, the Act specifically addresses the use of AI systems in both adjudication and alternative dispute resolution (ADR).Footnote 5 In the field of ADR, the Act classifies certain uses of an AI system as potentially “high-risk”Footnote 6 when the dispute resolution process produces legal effects for the parties involved.Footnote 7 Although the Act does not distinguish among types of ADR, its provisions on high-risk applications are particularly relevant to arbitration, because other forms of ADR often do not produce legal effects for the parties.
Arbitration is, however, used across highly diverse contexts, ranging from consumer disputes to high-stakes transnational commercial cases involving sophisticated parties with access to specialised legal counsel. In acknowledgement of these differences, the EU legislator has traditionally regulated arbitration only in certain sectors, most notably consumer protection,Footnote 8 with interventions aimed at ensuring procedural guarantees.Footnote 9 By contrast, commercial arbitration has remained largely unregulated by EU law,Footnote 10 especially with respect to procedural aspects.
This EU regulatory intervention is particularly significant given the transformative impact AI is having on the practice of arbitration.Footnote 11 Companies are increasingly developing AI-driven platforms tailored to arbitration, offering tools that streamline access to legal databases, generate procedural documents and summarise case materials.Footnote 12 Arbitral institutions are integrating these technologies into their case management systems or developing proprietary AI tools to enhance efficiency and transparency.Footnote 13 Meanwhile, individual arbitrators are turning to commercially available AI systems and law-firm-developed tools to support their decision-making processes.Footnote 14 This growing reliance on AI by arbitration practitioners heightens the need to examine how the EU AI Act is likely to impact the future of the field.
Building on the emerging literature on the interface between the EU AI Act and commercial arbitration,Footnote 15 this article calls for a full or partial carve-out of commercial arbitration from the scope of the “high-risk” provisions of the Act. To support this proposal, this article highlights the challenges that the Act is likely to pose to legal tech providers, arbitral institutions and arbitrators. The identified issues stem mainly from the difficulty of applying a rigid regulatory framework to arbitration, which relies on party autonomy, especially for procedural matters. Confidentiality in arbitration also makes enforcement of the Act more difficult, while the Act’s extraterritorial reach complicates its application in cross-border disputes.
This article is structured as follows: Section 1 explains how the EU AI Act disrupts the long-standing balance between EU law and commercial arbitration. Section 2 outlines the Act’s key provisions relevant to arbitration. Section 3 analyses the Act’s impact on the primary stakeholders in commercial arbitration – companies, arbitral institutions and arbitrators – and the difficulties of reconciling the Act’s framework with specific aspects of the field. Finally, the conclusion offers a perspective on how the EU legislator could exclude commercial arbitration from the scope of the Act’s high-risk provisions.
1. The EU’s traditional approach to commercial arbitration
Commercial arbitration has generally been portrayed as an independent legal order, evolving alongside, but separately from, the EU legal framework, with only limited and sporadic interactions between the two.Footnote 16 This perspective is often reiterated:
“Although the relationship between EU law and international arbitration has long been treated as one of mutual indifference, the two systems increasingly interact and conflict with each other rather than simply coexisting neutrally. Nonetheless, EU law does not regulate commercial arbitration, and there is a current diversity of arbitration laws and practices across the European Union.”Footnote 17
Over the years, this relationship has been tacitly reaffirmed by EU institutions,Footnote 18 and despite the potential legislative avenues available, the EU’s competence to regulate arbitration has been significantly underutilised. For example, Article 81 of the Treaty on the Functioning of the European Union (TFEU) empowers the EU to adopt harmonisation measures for “the development of alternative methods of dispute settlement.” However, with few exceptions, EU lawmakers have systematically excluded arbitration from the scope of harmonisation measures based on Article 81 of the TFEU.Footnote 19 In addition, the Court of Justice of the European Union (CJEU) has traditionally interpreted these exclusions broadly, confirming the EU’s general restraint in the regulation of arbitration.Footnote 20 On occasion, the EU legislator has utilised other legal bases to regulate dispute resolution mechanisms, notably in the context of consumer protection.Footnote 21 Yet, commercial arbitration, as a whole, has remained largely untouched by direct EU regulatory intervention across all potential legal bases. Such legislative and judicial abstention has sustained the doctrinal view that the EU’s legal order and commercial arbitration would evolve in parallel, with minimal interference between them.Footnote 22
Therefore, prior to the EU AI Act’s entry into force, the interaction between EU law and commercial arbitration was confined primarily to issues of substantive law. Accordingly, the classic problem of EU law and commercial arbitration has been the risk that arbitrators may commit errors of law and “the very limited avenues that, unlike in litigation, exist to correct these errors of law.”Footnote 23 In all instances, the concern has been the tension between ensuring the application of certain important provisions of EU substantive law and upholding party autonomy as well as the principle of the finality of arbitral awards. The relevant cases have concerned matters such as arbitrability,Footnote 24 public policy,Footnote 25 and their interaction with the judicial procedures for the annulment, recognition and enforcement of arbitral awards.Footnote 26 These issues have been addressed by EU rules, derived mostly from the cited decisions of the CJEU, which, for the most part, has subscribed to a conception of the EU’s legal order and commercial arbitration as evolving separately.
By contrast, procedural matters have largely remained outside the scope of EU law. Although such matters may raise issues of due process and procedural fairness, EU law has not established rules or mandates concerning how commercial arbitral proceedings are to be conducted. Purely procedural aspects of commercial arbitration are generally determined by agreement between the parties, often by reference to standardised arbitration rules adopted by private arbitral institutions.Footnote 27 National arbitration laws stipulate additional requirements or provide further guidance in matters of procedure, but here, too, EU law has been consistently absent. Instead, rules on issues such as arbitrators’ duty to disclose conflicts of interest or the production of evidence have been left to national legislatures and, more importantly, to the work of the arbitral community itself.Footnote 28
This approach aligns with the preferences of the arbitral community, which has consistently advocated for minimal top-down regulation.Footnote 29 The reluctance of EU decision-makers to intervene in the arbitration market has allowed the arbitral community to design its own institutions, governance mechanisms and unique methods for selecting service providers that dominate the field.Footnote 30 Arbitral institutions and practitioners have successfully preserved arbitration as a largely autonomous system in which most regulatory norms are community-driven. In particular, the arbitral community has been successful in advocating the doctrine of party autonomy, which posits that, as a key tenet of arbitration, parties are free to determine the mechanism through which their disputes will be resolved. This principle has gained widespread acceptance, with legislators typically endorsing the notion that arbitration should evolve according to its own internal logic, free from excessive external interference.Footnote 31 The CJEU has repeatedly adopted a narrative that is extremely favourable to party autonomy and limits the extent to which EU law might interfere with commercial arbitration.Footnote 32
It is against this backdrop that the EU AI Act’s decision to regulate ADR – without carving out commercial arbitration – departs significantly from the EU’s established approach to the field. In particular, with the EU AI Act, the EU legislator has laid down rules that directly constrain commercial arbitration, restricting the role of party autonomy and of the institutions and professionals that have traditionally self-regulated the market.
2. Main features of the EU AI Act
Before analysing how the AI Act applies to arbitration, it is important to first outline the Act’s key features. This section focuses on the provisions of the EU AI Act that are relevant to commercial arbitration, presenting an overview of their structure and content. This explication serves as the foundation for Section 3, which examines the Act’s implications for commercial arbitration.
The section begins by discussing the Act’s scope of application, identifying the types of AI systems it targets and the entities subject to its obligations. It then addresses the Act’s territorial reach, which, under certain conditions, extends to entities outside the EU. Next, it explores the classification of “high-risk” AI use cases, including those related to ADR. Finally, the section outlines the compliance obligations assigned to entities commercialising and using high-risk systems.
2.1. Material scope of the EU AI Act
The EU AI Act regulates AI-based tools. To this end, it differentiates between two main categories, namely “AI systems” and “AI models.” In addition, the category of AI systems includes a subcategory of tools called “general-purpose AI systems.” These three concepts are briefly described in turn below.
AI systems are the Act’s main object of regulation. Under the definition adopted by the Act, AI systems are machine-based systems that (i) are designed to operate with varying levels of autonomy; (ii) may exhibit adaptiveness after deployment; and (iii) infer, from the input they receive, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.Footnote 33 This definition encompasses some of the most prominent and recognisable applications of AI, such as biometric systems, voice assistants and recommendation systems.Footnote 34
Within the category of AI systems, the Act further defines a subcategory called “general-purpose AI systems.” These are “AI systems based on a general-purpose AI model serving a variety of purposes, both for direct use as well as for integration into other AI systems.”Footnote 35 Generative AI commercial products, such as ChatGPT, fall into this category.Footnote 36
The Act’s second regulatory focus is on “AI models,”Footnote 37 which are distinguished from AI systemsFootnote 38 most notably because they cannot be utilised directly by users; rather, they must be integrated into commercial products that offer additional functionalities, such as a user interface.Footnote 39 A prime example of an AI model is the GPT-4 large language model (LLM), which serves as the foundation for the commercial chatbot ChatGPT.Footnote 40
At the time of writing, the AI models category holds limited relevance for companies and other actors offering services in commercial arbitration.Footnote 41 By contrast, within the context of commercial arbitration, companies and arbitral institutions are currently developing or utilising AI tools that can be classified as AI systems or general-purpose AI systems under the Act. Arbitrators, in turn, have access to a variety of non-specialised, commercial AI systems that also fall within the same categories.Footnote 42 Consequently, this article focuses on the provisions of the AI Act that pertain to AI systems and general-purpose AI systems; it does not discuss the provisions specifically related to AI models.
2.2. Personal scope of the EU AI Act
The EU AI Act creates obligations for a series of actors along the AI value chain,Footnote 43 namely “providers,” “downstream providers,” “importers,” “distributors” and “deployers.” These entities, which may be public or private,Footnote 44 are collectively defined as “operators” and are assigned different obligations under the Act.Footnote 45
Providers are natural or legal persons that either develop an AI system or have an AI system developed by a third party and then commercialise it under their own name or trademark, whether for payment or free of charge.Footnote 46 In addition, the EU AI Act applies to downstream providers, which are “providers of an AI system, including a general-purpose AI system, which integrates an AI model.”Footnote 47 The EU AI Act also applies to importers (i.e., entities located in the EU that place on the EU market an AI system bearing the name or trademark of an entity established in a non-Member State)Footnote 48 and distributors (i.e., entities other than the provider or the importer that make an AI system available on the EU market).Footnote 49 Finally, the Act applies to entities that use an AI system. Users of AI systems are called “deployers” and are defined as the “natural and legal persons using an AI system under their own authority, except where the AI system is used in the course of a personal, non-professional activity.”Footnote 50
2.3. Geographical scope of the EU AI Act
The EU AI Act’s vast geographical reach extends beyond entities and situations entirely connected to EU territory.Footnote 51 This geographical reach is determined by Article 2(1) of the Act, which enumerates various scenarios to which its provisions apply.
First, the Act applies to entities established or located within the EU. Specifically, these include deployers of AI systems that have their place of establishment within the EU,Footnote 52 with the Act further applying to all “affected persons” located within the EU.Footnote 53 Second, the Act’s applicability extends to certain entities, regardless of their geographical location – notably, it applies to all providers placing AI systems or general-purpose AI models on the EU market.Footnote 54 Third, providers and deployers of AI systems based in third countries must comply with the EU AI Act when “the output generated by their AI system is utilised within the Union.”Footnote 55
2.4. The definition of “high-risk” AI systems
The EU AI Act adopts a risk-based regulatory framework under which AI systems are classified into three categories based on the perceived risk they pose to fundamental rights.Footnote 56 Certain AI systems are deemed to pose unacceptable risks and are prohibited.Footnote 57 Other AI systems are considered “high-risk” because they pose potential threats to the fundamental rights of individuals, including the right to an effective remedy and to a fair trial as protected by the EU Charter of Fundamental Rights.Footnote 58 These high-risk AI systems are subject to stringent requirements under the Act, such as transparency and human oversight.Footnote 59 All other AI systems that do not fall within the prohibited or high-risk categories are regulated mostly through industry-led codes of conduct.Footnote 60
The Act identifies high-risk AI systems by reference to two distinct definitions.Footnote 61 One definition includes AI systems that also qualify as products or safety components of products under existing EU product safety legislation.Footnote 62 Another open-ended definition includes all other AI systems that are deployed in specific sectors involving fundamental rights protected by the EU Charter of Fundamental Rights.Footnote 63 These high-risk sectors are outlined in annex III of the AI Act and include areas such as border control, employment, education, and – particularly relevant to the topic of this article – adjudication and ADR.Footnote 64
The definition of high-risk AI systems in annex III is based on the concept of intended purpose.Footnote 65 According to the Act, an AI system’s intended purpose is determined by the information made available by the provider, which includes not only technical documentation and instructions for use but also promotional materials and public statements.Footnote 66 This means that providers have some level of influence over whether their system is classified as high-risk.Footnote 67 However, this flexibility mainly applies in situations where the AI system does not have a clear single usage, such as in the case of general-purpose AI systems like ChatGPT.Footnote 68
The Act also includes an exception to the classification of an AI system as high-risk. AI systems employed in one of the critical sectors listed in annex III are not deemed high-risk if they do not pose a significant threat to “the health, safety, or fundamental rights of individuals, including by not materially affecting decision-making outcomes.”Footnote 69 According to a literal reading of the Act, this exception is triggered exclusively under one of four specified conditions,Footnote 70 which are further clarified by recital (53) with specific use cases.
Under a combined reading of Article 6(3) and recital (53), a series of use cases are excluded from the high-risk category.Footnote 71 For example, AI systems designed to transform unstructured data or classify documents are categorised as performing “narrow procedural tasks” and are therefore not high-risk. Also excluded are AI applications that enhance the language of documents, where this function is an adjunct to human effort rather than a replacement, such that the role of human judgment is maintained in the process. Another exception encompasses AI systems that identify deviations from established decision-making patterns; these are intended to complement human assessments, augmenting oversight without undermining human decision-making authority. Finally, AI systems engaged in preparatory tasks that have minimal impact on subsequent evaluations are also excluded from the high-risk category. Such tasks encompass indexing and data-processing activities that provide support without directly influencing high-stakes assessments.
The Act allows the Commission to clarify the exception outlined in Article 6(3) by adopting guidelines that detail its practical implementation.Footnote 72 These guidelines must include a comprehensive list of practical examples distinguishing high-risk from non-high-risk use cases of an AI system.Footnote 73 In formulating these guidelines, the Commission is to pay special attention to the needs of small and medium-sized enterprises and to sectors most likely to be impacted by the Act.Footnote 74 Furthermore, under certain conditions, the Commission has the authority to issue delegated acts that can modify annex III;Footnote 75 this includes the ability to add new use cases for high-risk AI systems, change existing ones or remove specific cases from the list altogether.Footnote 76
2.5. Allocation of obligations along the AI value chain
When an AI system is classified as high-risk under the Act, sections 2 and 3 of chapter III lay out a significant number of requirements and obligations that are allocated to different entities. Failure to comply with the requirements and obligations that the Act imposes on providers and deployers is heavily sanctioned.Footnote 77
Providers and downstream providers of high-risk AI systems (“providers,” as previously defined, are entities that place an AI system on the market or into service) are subject to the most stringent set of obligations.Footnote 78 For example, they are required to establish a risk management system,Footnote 79 carry out validation and testing procedures for the data sets used,Footnote 80 and draw up technical documentation related to the AI system.Footnote 81 They are also under an obligation to preserve an AI system’s automatically generated logsFootnote 82 and to register the AI system in a newly created registry kept by the Commission.Footnote 83
The Act also delineates obligations for deployers of high-risk AI systemsFootnote 84 (“deployers,” as previously defined, are entities using an AI system under their authority in the course of a professional activity). In particular, the Act lists a series of obligations, including assigning human oversight of the AI system to natural persons who have the necessary competence and training,Footnote 85 preserving the AI system’s automatically generated logs,Footnote 86 and monitoring the operation of the high-risk system and informing relevant authorities if certain serious risks arise.Footnote 87
The Act also provides for the case in which a deployer might assume the role of provider because of modifications they make to an AI system.Footnote 88 In particular, under Article 25(1)(c), a deployer assumes all the obligations of a provider when “they modify the intended purpose of an AI system, including a general-purpose AI system, which has not been classified as high-risk and has already been placed on the market or put into service.” Such a modification must alter the intended purpose of the AI system in such a way that the AI system concerned becomes a high-risk AI system within the meaning of Article 6 of the Act. Notwithstanding these indications, the scope of the modifications captured by Article 25(1)(c) remains unclear.Footnote 89 The Commission is empowered to adopt guidelines to clarify the practical implementation of the provisions related to such modifications.Footnote 90
When an AI system is not classified as high-risk, providers and deployers are still subject to certain obligations. For example, providers and deployers have a duty to take measures that ensure, to the maximum possible extent, a sufficient level of AI literacy among their staff and other persons dealing with the operation and use of AI systems on their behalf.Footnote 91 In addition, the EU AI Act establishes a framework for the adoption of codes of conduct reflecting contemporary technological solutions and industry best practices.Footnote 92 Codes of conduct may also provide for the extension to non-high-risk systems of all the obligations and requirements applicable to high-risk systems.Footnote 93 They may be developed by individual AI providers or in partnership with representative organisations, including civil society and academic institutions,Footnote 94 and their establishment is facilitated by the AI Office and national authorities.Footnote 95
3. Implications of the EU AI Act for commercial arbitration
This section discusses the implications of the EU AI Act for key stakeholders in commercial arbitration, taking into account the current state of AI usage in the field. First, the section focuses on AI-based tools specifically designed for arbitration and the entities that are developing them, which are classified as “providers” under the Act. The second part of this section focuses on the challenges surrounding the use of AI tools by arbitrators themselves, distinguishing between the use of high-risk systems and general-purpose AI systems such as ChatGPT. Unlike the entities that develop AI tools specifically for arbitration, arbitrators utilising specialised or more open-ended AI-based tools for different tasks will primarily be considered “deployers” of AI systems under the Act. Third, this section discusses the additional complications relating to the Act’s vast geographical scope of application. Finally, the section discusses the consequences of infringing the Act and the suitability of the Act’s enforcement structure for the field of commercial arbitration.
3.1. Implications for companies and arbitral institutions providing AI-based tools for commercial arbitration
The Act categorises AI systems used in commercial arbitration as high-risk when they are intended to be used for two types of tasks, namely (1) assisting an arbitrator in researching and interpreting facts or the law, and (2) applying the law to a specific set of facts.Footnote 96 Because the market for AI tools for legal professionals in commercial arbitration is growing rapidly, many existing products designed to assist arbitrators may qualify as “high-risk” systems under this definition. This classification imposes a substantial compliance burden on entities classified as providers of such systems. The first part of this subsection briefly describes the landscape of AI usage in commercial arbitration. The second part discusses the implications of the Act for the actors developing these tools.
3.1.1. AI tools for arbitration developed by companies and institutions
Specialised companies are releasing increasingly sophisticated tools for commercial arbitration that merge multiple functions. For example, some provide AI systems that enable the research of legal materials across multiple commercial databases and arbitral institution repositories,Footnote 97 in addition to offering summarisation and text generation functions.Footnote 98 These AI systems, which are specifically designed for commercial arbitration and promoted as such in their providers’ marketing materials, might be classified as high-risk systems. Such classification could stem from the fact that the “intended purpose” of these systems, as defined by their providers, is to assist in researching and interpreting the law.Footnote 99
In addition, certain tools that have been developed for use in law firms to carry out various legal tasks might be used to support the work of arbitrators in ways that may trigger their classification as “high-risk.”Footnote 100 For example, “Harvey,”Footnote 101 adopted by several law firms internationally, offers an AI-based assistant that allows legal practitioners to analyse large volumes of documents and also answers legal, regulatory and tax questions. Similarly, another system with widespread adoption, “Robin AI,”Footnote 102 drafts contract summaries, allows for the retrieval of specific contract data and can generate reports focusing on terms or obligations. All these uses might, in certain contexts, qualify as “researching and interpreting facts and the law” and “applying the law to a concrete set of facts.”
Arbitral institutions are starting to integrate AI tools into their processes, and although this trend is still in its early stages, it is expected to grow rapidly in the coming years. For example, the International Court of Arbitration of the International Chamber of Commerce (ICC) announced a collaboration with an AI company to roll out an online case management system, “ICC Case Connect,”Footnote 103 in which all case documents will be centralised, making them accessible as data to be processed by other AI tools.Footnote 104 In the United States, the American Arbitration Association (AAA) has developed ClauseBuilder AI (Beta), a tool designed to streamline the drafting of arbitration and mediation clauses.Footnote 105 Another example is the Arbitration and Mediation Center of Peru’s Lima Chamber of Commerce, which offers a platform that integrates ChatGPT and allows users to receive assistance through the chatbot.Footnote 106 In China, the Guangzhou Arbitration Commission has developed an AI arbitration secretary designed to help with many of the administrative tasks that would otherwise need to be completed by the parties, the arbitral tribunal or the institution.Footnote 107
3.1.2. Implications of the Act for companies and arbitral institutions
Although the above-mentioned tools certainly qualify as AI systems under the Act, one could contend that some of the tasks do not fall within annex III, point 8, or, alternatively, that they qualify for the exception set forth in Article 6(3) and therefore are not high-risk. For example, AI systems that assist arbitrators with tasks such as proofreading, data retrieval, presenting search results in a structured format or highlighting deviations from established decision-making patterns may arguably be exempt from high-risk classification.Footnote 108 However, the conditions governing this exception, as well as the specific use cases that may qualify for exemption, are articulated in an open-ended manner,Footnote 109 which increases uncertainty for companies investing in the development of these tools.
Moreover, even though some use cases are expressly identified as exempt under the Act,Footnote 110 a significant number of tasks performed by AI systems in commercial arbitration occupy a grey area and may be perceived as influencing decision-making and fulfilling one or more of the conditions in Article 6(3). For instance, activities such as summarising documents or selecting specific documents or excerpts from a larger set could be construed as having a material impact on a tribunal’s decision because how information is curated and presented can shape human decision-makers’ understanding and perceptions.Footnote 111
An additional layer of unpredictability arises from the requirement to register high-risk systems, which also extends to AI systems that may qualify for exemption from the high-risk classification under Article 6(3).Footnote 112 Companies and institutions designated as providers will be required to prepare extensive documentation for the registration process, which is likely to be a complex and resource-intensive endeavour.Footnote 113 Further, there is a looming risk that the competent authorities might reassess AI systems registered as non-high-risk under Article 6(3), potentially reclassifying them as high-risk after registration.Footnote 114
Finally, it is worth noting that even if certain AI tools do not qualify as high-risk under the AI Act, they may still face regulatory hurdles. For instance, AAA ClauseBuilder, mentioned earlier, merely assists with drafting arbitration clauses and is unlikely to be classified as high-risk;Footnote 115 however, its designation as an AI system under the Act still subjects it to certain obligations, such as ensuring adequate AI literacy among staffFootnote 116 as well as complying with voluntary codes of conduct.Footnote 117 Although these codes are not mandatory, they may put pressure on stakeholders to comply with strict standards,Footnote 118 conceivably imposing restrictive effects on innovation and obstacles to market access for smaller players.
3.2. Implications of the EU AI Act for arbitrators using AI systems
In addition to impacting companies and arbitral institutions, the EU AI Act is likely to have significant implications for arbitrators who use AI-based tools to facilitate decision-making.Footnote 119 The “high-risk” classification of certain AI systems used in dispute resolution, combined with the Act’s stringent compliance obligations, raises questions for arbitrators intending to rely on AI tools during arbitral proceedings.
Two distinct scenarios must be considered to understand how the Act applies to arbitrators. The first scenario involves the use of AI systems designed specifically to assist arbitrators in researching legal facts, interpreting the law or applying the law to the facts of a particular dispute.Footnote 120 In this case, arbitrators using such systems are classified as deployers of high-risk AI systems under the Act and are consequently subject to the relevant obligations. The second scenario concerns arbitrators’ use of general-purpose AI systems, such as ChatGPT, for similar tasks.Footnote 121 Although general-purpose AI systems are not designed specifically for arbitration, their use in ways that align with high-risk activities may nonetheless bring them within the scope of the high-risk provisions of the Act.Footnote 122
3.2.1. Usage of high-risk AI systems by arbitrators
In the first scenario, where arbitrators use AI systems classified as high-risk, they are subject to the obligations set forth in Article 26 of the Act. This is because, under such circumstances, arbitrators qualify as deployers.Footnote 123 In particular, under Article 26(2) of the Act, arbitrators must ensure human oversight of the AI system to prevent outputs that might adversely affect fundamental rights, health or safety.Footnote 124 This requirement obliges arbitrators to retain ultimate decision-making authority and intervene whenever an AI system’s outputs compromise the fairness or legality of proceedings.Footnote 125
This requirement is intended to ensure due process – a cornerstone of arbitration – by guaranteeing that a human is involved in all material decisions. At the same time, it constrains party autonomy, given that it limits the extent to which arbitrators can rely on AI tools, even in situations where the parties have explicitly agreed to their unrestricted use. Nonetheless, it is worth noting that there already appears to be a prevailing understanding within the arbitral community that arbitrators must not replace their independent analysis of the facts, the law and the evidence with AI-generated outputs.Footnote 126
In addition, Article 86 of the Act requires arbitrators to disclose to the parties whenever a high-risk AI system is used in the decision-making process. More specifically, Article 86 confers on individuals affected by a decision based on high-risk-system output the right to obtain from the deployer “clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken.” Therefore, based on the language of Article 86, arbitrators will have to not only disclose that they used an AI system but also provide information about the system in question and how it impacted the decision-making process.Footnote 127 This transparency obligation ensures that parties are fully informed about the usage of AI within the arbitration proceeding, thereby fostering accountability and preserving trust in the arbitral process. This disclosure obligation also reflects an emerging consensus within the arbitral community: guidelines and opinions are converging on the principle that the use of AI systems in arbitration must be disclosed to the parties.Footnote 128
3.2.2. Usage of general-purpose AI systems by arbitrators
The second scenario, involving the use of general-purpose AI systems by arbitrators, raises more complex questions. The use of general-purpose AI systems is not per se classified as high-risk under the Act.Footnote 129 However, if used for high-risk activities listed in annex III of the Act, their classification may change. Articles 6(2) and 25(1)(c) of the Act suggest that when arbitrators use general-purpose AI tools in a manner that aligns with high-risk activities, they may be classified as providers under the Act.Footnote 130 Consequently, arbitrators would not only be subject to obligations typically imposed on deployers (i.e., those established under Article 26, such as assigning human oversight and monitoring the operation of the high-risk AI system) but also to certain obligations applicable to providers, such as ensuring data quality and maintaining compliance documentation under Article 16.Footnote 131
Determining whether the use of a general-purpose AI system constitutes high-risk use depends on the specific task for which the system is deployed. As previously described, annex III, particularly point 8(a), identifies two activities as high-risk: “researching and interpreting facts and the law” and “applying the law to a concrete set of facts.”Footnote 132
Relevant again here is the language of annex III and the exception established in Article 6(3). If an AI system is used for tasks such as proofreading or enhancing the language and tone of a decision, such usage does not fall within the scope of annex III, point 8(a), because it is not intended to assist an arbitrator in assessing facts and applying the law. Moreover, proofreading or enhancing the language of a text would also be classified as merely improving “the result of a previously completed human activity” under Article 6(3)(b) and, therefore, would not qualify as high-risk. Conversely, the use of a general-purpose system to perform any core decision-making task in the proceedings would unequivocally qualify as high-risk use, bringing the arbitrator within Article 25(1)(c).
Many tasks, however, fall into a grey area between these two extremes. Ancillary tasks, such as legal research, drafting procedural documents and reviewing parties’ claims and evidence, can often be accomplished with the assistance of general-purpose AI systems not specifically designed for these purposes. Whether such tasks qualify as high-risk would then depend on whether they materially influence the outcome of decision-making, as per Article 6(3) and the clarification provided by recital (53) of the Act.Footnote 133 In practice, these ancillary tasks, although not constituting decision-making per se, may nonetheless shape how the arbitral tribunal interprets facts and applies the law.Footnote 134 Consequently, the EU AI Act’s broad language may be interpreted as significantly curtailing the use of LLMs and other general-purpose AI systems for such uses, given the potential for these tools to influence the outcome of the arbitral process.Footnote 135
3.3. Implications of the Act’s broad geographical scope
The Act’s broad extraterritorial scope introduces further complexities for companies, arbitral institutions and arbitrators. As noted, the Act applies not only to providers and deployers located within the EU but also to those who provide or deploy AI systems whose outputs are used within the EU, as well as to all affected parties based in the EU.Footnote 136 Given the inherently cross-border nature of many commercial arbitration proceedings, determining the circumstances in which the Act applies is not always straightforward.
In certain cases, the applicability of the Act is clear. This is the situation for EU-based companies, arbitral institutions and arbitrators who either market or deploy AI systems for arbitration within the Union. Such actors are directly subject to the Act’s provisions under Article 2(1)(a) and (b).
However, the Act’s extraterritorial reach, as outlined in Article 2(1)(c), significantly broadens its scope. Under this provision, as previously described, the Act applies where the output of an AI system is used within the EU. Consequently, it can be argued that the Act extends to non-EU arbitral institutions and arbitrators who, although physically outside the EU, conduct arbitral proceedings that are legally seated in the EU.Footnote 137 In such cases, the connection to the EU arises from the legal framework governing the arbitration rather than the geographic location of the arbitrators or the institution.
The broad and ambiguous language of the Act may also encompass situations with even more tenuous connections to the EU. For example, if the concept of “use of the output of an AI system” is interpreted to include the enforcement of an arbitral award, the Act could apply to proceedings where enforcement is sought in an EU court, even if the arbitration itself was conducted outside the EU and administered by a non-EU arbitral institution.Footnote 138 Similarly, the notion of an “affected party located in the EU” as per Article 2(1)(g) could further extend the Act’s applicability. In such cases, the Act may apply simply because one party to the arbitration has a presence in the EU, even if the arbitration is conducted entirely outside the EU, administered by a non-EU institution and involves facts with no substantive ties to the Union.
In practice, the Act’s application will therefore depend entirely on how key terms – such as “use of an output” and “affected party” – are interpreted.Footnote 139 For arbitrators and institutions operating outside the EU, this creates significant legal uncertainty, as they may find themselves inadvertently subject to the Act’s stringent compliance requirements, even in cases with only marginal connections to the EU.Footnote 140
3.4. Enforcement and fines
Despite the uncertainty stemming from the Act’s requirements and its broad geographical scope, noncompliance with the Act exposes companies, arbitral institutions and other stakeholders to significant risks.Footnote 141
The most immediate consequence of noncompliance with the Act is the risk of significant fines. Article 99(4) provides for penalties of up to €15 million or 3 per cent of a company’s total global annual turnover, whichever is higher.Footnote 142 Additional fines and nonmonetary sanctions are to be stipulated by the Member States.Footnote 143 Such fines could be unsustainable for individual arbitrators, small companies or arbitral institutions operating as small enterprises. Consequently, these smaller stakeholders may be disincentivised from fully exploring the potential benefits of AI technologies.Footnote 144 This, in turn, risks consolidating the market for AI tools in arbitration around a few well-resourced providers, thereby undermining competition and reducing the diversity of solutions available to arbitral institutions and practitioners.
More indirectly, arbitrators who misuse AI systems may also suffer significant reputational damage, which is likely to adversely affect their professional standing and future opportunities and may even expose them to civil liability for violating the implied terms of their mandate.Footnote 145 Further, given the lack of case law in this area, there is now – and for the foreseeable future – a lack of clarity regarding the extent to which the misuse of AI by an arbitral tribunal or an arbitral institution might affect the validity of an arbitral award.Footnote 146
In addition, enforcing compliance regarding the use of AI in arbitration poses unique difficulties. Given the confidential nature of proceedingsFootnote 147 and the inherent challenges in verifying the deployment and oversight of AI systems,Footnote 148 it will often be difficult to determine whether an AI tool has been used at all,Footnote 149 let alone whether its use complied with the Act’s requirements. For example, an arbitrator’s reliance on an AI system for legal research or procedural document drafting may not be evident from the procedural record or the award itself.
4. Conclusion
The EU AI Act represents a bold regulatory intrusion into parties’ procedural autonomy, which has long underpinned commercial arbitration. By subjecting arbitration to the Act’s risk-based regulatory framework, the EU is set to disturb the delicate equilibrium between self-regulation, party autonomy and due process that has allowed commercial arbitration to thrive as a flexible, efficient mechanism for resolving transnational disputes.Footnote 150
Although the Act’s ambition is to safeguard procedural fundamental rights and procedural fairness in adjudication, its coverage of all types of arbitration proceedings – lumped indistinctly within the “alternative dispute resolution” category of annex III – fails to account for the unique characteristics of commercial arbitration and the proven capacity of its stakeholders to respond and adapt to technological advancements without heavy-handed oversight.Footnote 151
Commercial arbitration operates in an ecosystem fundamentally distinct from other forms of ADR. In consumer or employment disputes, where significant power asymmetries exist, robust procedural safeguards are necessary to protect vulnerable parties.Footnote 152 By contrast, commercial arbitration typically involves experienced and well-resourced parties negotiating procedural frameworks on an equal footing, often with the assistance of specialised institutions.Footnote 153
The EU AI Act’s uniform regulatory approach fails to account for these differences, imposing compliance obligations that are not only impractical but also undermine arbitration’s foundational principles of flexibility and party autonomy. This one-size-fits-all approach also appears to disregard the EU’s long-standing stance on arbitration, which has traditionally left procedural matters untouched to ensure that commercial arbitration remains a party-driven and adaptable form of dispute resolution.
Therefore, to reconcile the EU AI Act’s objectives with the needs of commercial arbitration, the EU should address this sector specifically under its powers established in Articles 6 and 7 of the Act.Footnote 154 The most suitable solution would be to exclude commercial arbitration from annex III’s scope through a delegated act under Article 7(3). Such an exclusion should be understood as covering matters arising from all relationships of a commercial nature, contractual or otherwise,Footnote 155 while leaving untouched those matters covered by the ADR Directive.Footnote 156 This carve-out would align with the principle of proportionality, thereby ensuring that regulatory interventions target only those arbitration proceedings – such as consumer arbitrations – where weaker parties require protection.Footnote 157
Alternatively, the Commission could issue guidelines under Article 6(5) to narrow the scope of high-risk AI use cases in arbitration.Footnote 158 To achieve this objective, targeted consultation with arbitral institutions, practitioners and users would help differentiate between AI tools that genuinely threaten fundamental rights and those that enhance efficiency without compromising due process.Footnote 159 These guidelines could establish a tiered compliance system that exempts routine AI applications while reserving scrutiny for systems that directly impact adjudicative outcomes.