I. Introduction
The EU AI Act is the world’s first comprehensive law regulating the development and deployment of AI systems and general-purpose AI models (GPAIMs), with the aim of ensuring “a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy, the rule of law and environmental protection … and supporting innovation.”Footnote 1 The obligations imposed by the Act are as general as its ambitious objective. For example, GPAIM providers have to assess and mitigate all “systemic” risks of a model.Footnote 2 Codifying general requirements appears sensible under conditions of rapid technological change, when detailed rules would quickly become outdated.Footnote 3 Another benefit is that AI operators enjoy discretion in how to implement their obligations efficiently, while authorities are able to respond flexibly to novel regulatory challenges.Footnote 4
Details and updates regarding the statutory requirements will be laid out in non-legislative, tertiary instruments, several of which are provided for in the AI Act:Footnote 5
-
Delegated Acts to be adopted by the European Commission on the basis of a delegation of power, for example in order to amend annexes to the Act in light of evolving technological developments;Footnote 6
-
Implementing Acts to be adopted in a comitology procedure, for example to establish “common specifications” for the obligations of providers of high-risk AI systems and GPAIMs;Footnote 7
-
Guidelines “on the practical implementation” of all parts of the AI Act, which set out the Commission’s interpretation and application of it;Footnote 8
-
Harmonised Standards requested by the Commission from European standardisation organisations to ensure that high-risk AI systems or GPAIMs meet the requirements under the Act;Footnote 9
-
Benchmarks, whose development shall be encouraged by the Commission in cooperation with relevant stakeholders, and to which the AI Board and the scientific panel of independent experts shall contribute, for example regarding the technical aspects of how to measure the appropriate levels of accuracy and robustness of high-risk AI systems;Footnote 10
-
Codes of Conduct (CoCs), encouraged and facilitated by the AI Office (AIO) and the Member States and drawn up by AI system operators or their representative organisations regarding inter alia the voluntary application of some or all requirements set out for high-risk AI systems to non-high-risk AI systems;Footnote 11 and
-
Codes of Practice (CoPs), drawn up “at Union level” to facilitate the effective implementation of obligations to detect and label AI-generated or manipulated content and to contribute to the proper application of Chapter V of the Act regarding GPAIMs.Footnote 12
This article focusses on the last-mentioned regulatory tool, which is of particular interest for two reasons. Firstly, the AI Act’s CoPs present a novel type of co-regulation. A search for “Code of Practice” in the EUR-Lex database produces only one other result where an EU legal act uses this terminology in its operative parts, namely the “European Statistics Code of Practice,” which sets out 16 principles to govern the operations of EU and national statistical authorities.Footnote 13 This CoP was originally prepared as a “self-regulatory instrument” by national statistical institutes, endorsed by the Commission and later updated by the European Statistical System Committee, which is composed of the representatives of national statistical institutes and chaired by the Commission (Eurostat), with the aim of ensuring public trust in European statistics.Footnote 14 As will be explained below, CoPs under the AI Act emerge from a very different process and have a different legal effect, which is more akin to yet still markedly different from “Codes of Conduct” under the General Data Protection Regulation (GDPR)Footnote 15 and the Digital Services Act (DSA).Footnote 16 The second reason why CoPs under the AI Act deserve separate treatment is that the first CoP on GPAIMs was adopted in the summer of 2025,Footnote 17 and a second process towards a CoP on the transparency of certain AI systems according to Art. 50(7) AI Act is ongoing at the time of writing.Footnote 18 In spite of this practical relevance, AI Act CoPs have not yet been studied in detail. This article aims to close this gap. It is based on publicly available documents and on the experiences that the author gained as one of the “independent experts” appointed by the European Commission to co-chair a Working Group of the GPAI CoP.Footnote 19
II. The AI Act’s framework for codes of practice
Before recounting the process towards the first GPAI CoP in detail, it is useful to outline the AI Act’s framework for CoPs. The AI Act refers to CoPs twice: in Chapter IV setting out “Transparency obligations for providers and deployers of certain AI systems,” and in Chapter V on GPAIMs. This places CoPs outside of the original proposal for the AI Act, now contained in Chapters II and III, regarding prohibited AI practices and high-risk AI systems, and thereby also beyond the references to the New Legislative Framework for product safety regulation, which only concern high-risk AI systems.Footnote 20 The history and structure of the AI Act thus already signal that CoPs are a new form of co-regulation.
The subject matter of CoPs also concerns novel technical issues regarding generative AI.Footnote 21 CoPs under Article 50(7) AI Act deal with the detection and labelling of artificially generated or manipulated content. In addition, CoPs under Article 56 AI Act address GPAIMs, i.e., foundation models developed with large amounts of training compute and capable of generating language in various forms.Footnote 22 Except for the obligation of deployers of an AI system to disclose AI “deep fakes” according to Art. 50(4) AI Act, none of these issues were part of the Commission’s original proposal of 2021.Footnote 23 At that time, the potential but also the risks involved in generative AI were not yet fully recognised by the public. The release of ChatGPT in November 2022 changed the situation dramatically.Footnote 24 The EU Council quickly responded with additional rules for “general purpose AI systems” to which the requirements for high-risk AI systems would have applied, provided that the Commission adopted a corresponding implementing act via the standard comitology procedure.Footnote 25 The European Parliament (EP) instead proposed the codification of “general principles” of ethical and trustworthy AI plus more concrete statutory requirements for “foundation models.”Footnote 26 Only one of these obligations, namely the one concerning the environmental impact of foundation models, was subject to the publication of an applicable harmonised standard and to be complemented by a Commission Guideline.Footnote 27 The EP furthermore suggested that the AIO should institutionalise a “regular dialogue with the providers of foundation models about the compliance … with … this Regulation, and about industry best practices for self-governance.”Footnote 28
Thus, neither the Council nor the EP proposed to resort to CoPs or an equivalent instrument. CoPs were only added to Article 50 and Chapter V of the AI Act during the Trilogue, which is not publicly documented.Footnote 29 They apparently emerged from a November 2023 “non-paper,” in which the governments of Italy, France and Germany opposed subjecting “foundation models” to “un-tested norms” and instead suggested “to build in the meantime on mandatory self-regulation through codes of conduct” and mandatory “model cards,” which could be complemented by future European standards.Footnote 30
In the final version of the AI Act, CoPs are indeed “central” tools “to facilitate the effective implementation” of Article 50(2) and (4) AI Act and to ensure a “proper application” of Articles 53 and 55 AI Act.Footnote 31 In Article 50 AI Act, they are the only co-regulatory instrument, while in Chapter V, CoPs function as a quick but preliminary solution “until a harmonised standard is published.”Footnote 32 Such a standard is, however, not on the horizon. The Commission has not even requested one.Footnote 33
The legal effect of a CoP is weaker than that of a published harmonised standard. While compliance with the latter grants providers a presumption of conformity,Footnote 34 adherence to a CoP is – like adherence to a GDPR CoC and similar to a DSA CoC – merely a tool to “demonstrate compliance” without constituting “conclusive” evidence of compliance, which is always to be assessed on a case-by-case basis.Footnote 35 This limited legal effect moreover presupposes that the Commission has “approved” a CoP.Footnote 36
The AI Act also reflects the somewhat paradoxical request in the three Member States’ “non-paper” to resort to “mandatory self-regulation.”Footnote 37 CoPs are voluntary tools that providers may or may not sign, but addressees of the AI Act “who do not adhere to an approved code of practice … shall demonstrate alternative adequate means of compliance for assessment by the Commission.”Footnote 38 The Commission has announced that signatories to a CoP will generally enjoy increased trust, and that enforcement measures will consequently focus on whether a signatory actually adheres to its commitments.Footnote 39 Commitments implemented in line with a CoP may furthermore be taken into account as a mitigating factor when fixing the amount of fines.Footnote 40 In contrast, non-signatories will have to report their alternative compliance measures to the AIO, and they may be subject to a larger number of requests for information.Footnote 41 There are thus significant incentives to sign a CoP and thereby reduce the exposure to enforcement actions.Footnote 42
The Commission may furthermore give a CoP “general validity within the Union” via an implementing act.Footnote 43 The consequences of such a Commission act are not entirely clear. The AI Act does not state that a CoP with “general validity” has a different legal quality than a regular CoP. It thus remains a tool to “demonstrate compliance.” “General validity” should consequently be interpreted to mean that the respective CoP presents the only viable set of rules with reference to which an addressee of the Act can demonstrate compliance – whether it is a signatory to the CoP or not.Footnote 44 Alternative means of compliance are categorically ruled out. This far-reaching consequence suggests that the Commission should only upgrade a CoP to this status if its rules have demonstrably contributed to the proper application of the AI Act and no adequate alternative means of compliance have emerged.
At the other end of the spectrum of possible CoP scenarios is the failure of a CoP process. If no CoP is finalised or if the AIO deems it not adequate, the Commission may provide, by means of implementing acts, “common rules.”Footnote 45 These Commission rules hang like a sword of Damocles over every CoP process. If the addressees of Articles 50, 53 and 55 AI Act fail to contribute to the “proper” implementation of their obligations, the Commission will fill the void in a potentially much more burdensome manner. This possibility is a significant incentive to actively participate in the drawing-up of a CoP.
In sum, CoPs are just one of several possible instruments to concretise the requirements of Chapters IV and V of the AI Act. The elaborate system of tertiary regulatory instruments mirrors the uncertainty under which the EU legislature hastily regulated generative AI. It not only added derivative, non-legal instruments to specify the general statutory requirements but even delegated the decision concerning the best mode of how to concretise the Act to the stakeholders and the Commission. At the same time, Articles 50(7) and 56 AI Act only establish a general framework as to how a CoP is to be drawn up and furnished with legal effect. Article 50(7) AI Act merely states that the AIO “shall encourage and facilitate the drawing up of codes of practice at Union level”Footnote 46 and that the Commission may adopt “implementing acts to approve those codes of practice in accordance with the procedure laid down in Article 56(6).” This paragraph codifies two different processes concerning the assessment and approval of a CoP, while the preceding paragraphs list the actors that may participate in the drawing-up of a CoP and the substantive elements that the AIO “shall aim to ensure” are covered.Footnote 47 Although Article 50(7) second sentence AI Act only refers to paragraph 6 of Article 56 AI Act, that provision should be applied mutatis mutandis in its entirety to CoPs under Article 50(7) AI Act. The AIO appears to share this view because it has announced that the first Article 50 CoP will be drawn up in essentially the same way as the first GPAI CoP.Footnote 48
III. The first general-purpose AI code of practice
The next section documents how the EU institutions operationalised this sketchy statutory framework for the first GPAI CoP.Footnote 49
1. The organisation of the iterative drafting
According to Article 56(9) subpara. 1 AI Act, the CoP had to be “ready at the latest by 2 May 2025,” i.e., a mere nine months after the AI Act entered into force.Footnote 50 This statutory timeframe put intense pressure on the drafting process.Footnote 51 In order to meet the ambitious deadline, the AIO, which the Commission had established on 24 January 2024,Footnote 52 published the call for expression of interest to participate in the drawing-up of the GPAI CoP on 30 July 2024 and thus already three days before the AI Act entered into force.Footnote 53 At the same time, the AIO launched a stakeholder consultation, which contained three sections regarding: (1) the transparency and copyright-related provisions for GPAIMs (Articles 53(1)(a)–(c) AI Act); (2) risk taxonomy, assessment and mitigation for GPAIMs with systemic risk (Article 55 AI Act); and (3) the review and monitoring of the CoP.Footnote 54 The Commission received 427 submissions in response.Footnote 55
The invitation to participate described the process as an “iterative drafting through multi-stakeholder engagement,” illustrated as follows:Footnote 56
[Infographic: iterative drafting through multi-stakeholder engagement]
The AIO also announced the major elements of the drafting process and a tentative timeline:Footnote 57
[Infographic: major elements of the drafting process and tentative timeline]
According to these infographics and the accompanying “call for expression of interest,” the AIO planned and indeed facilitated three drafting rounds.Footnote 58 Only twelve days after the public consultation had closed on 18 September 2024, nearly 1,000 attendees took part in the kick-off plenary.Footnote 59 The first draft was based on the responses to the consultation and a provider workshop and was published on 14 November 2024.Footnote 60 The second and third drafts, published on 19 December 2024 and 11 March 2025 respectively, were also preceded by consultation rounds among the participants of the Code process, plenary meetings, additional provider workshops and meetings with the AI Board and the AI Act Working Group of the European Parliament.Footnote 61 Though this was not entirely clear at the beginning, such written and oral exchanges also took place between the third draft and the publication of the final version.Footnote 62 Thus, a total of four rounds of written and oral feedback shaped the final code – a massive effort for all participants involved. In spite of all efforts to accelerate the drafting, the final version of the Code was published on 10 July 2025 and thus more than two months behind schedule.Footnote 63
2. The structure of the drafting process and the Code
The process also deviated from the original plan as regards its working group (WG) structure. Initially, the AIO established four WGs:Footnote 64
-
WG1, in charge of detailing the transparency obligations under Article 53(1)(a) and (b) AI Act and the copyright policy obligation under Article 53(1)(c) AI Act;
-
WG2, in charge of identification and assessment measures for systemic risks;
-
WG3 concerning technical risk mitigation measures for systemic risks; and
-
WG4 regarding the internal risk management and governance of providers of GPAIMs with systemic risk.
This structure finds little support in the AI Act. The transparency and copyright-related obligations under Article 53(1)(a) and (b) AI Act on the one hand and Article 53(1)(c) AI Act on the other have different scopes of application (regarding GPAIMs released under a free and open-source licence), and they pursue different aims.Footnote 65 Conversely, the formally separate WGs 2-4 were supposed to deal with various aspects of the obligations of providers of GPAIMs with systemic risk, which Article 55(1) AI Act codifies in a comprehensive manner.
The first draft of the GPAI CoP already moved the Code towards this AI Act structure. Section I set out an overarching preamble plus an equally overarching “Objectives of the Code,” which were presented as aims that signatories “can” achieve by adhering to the Code.Footnote 66 Section II contained the “rules for providers of [all, A.P.] general-purpose AI models,” i.e., the topics assigned to WG1, albeit with separate recitals and measures for “transparency” and “copyright.”Footnote 67 Section III provided a “taxonomy of systemic risks,” and Section IV covered all measures addressed to providers of GPAIMs with systemic risk.Footnote 68
The second draft differed substantially from the first.Footnote 69 It expanded the volume of the text from 36 to 66 pages, provided for “commitments,” “measures” and “key performance indicators” instead of “measures” and “sub-measures” and also merged Sections III and IV of the first draft into one single section concerning GPAIMs with systemic risk. Another remarkable feature of the second draft is an “Information box from the AI Office,” which stresses that the notion of “systemic risk” is defined in Article 3(65) AI Act and that therefore the respective considerations in the draft Code “might ultimately be better placed in related AI Office guidelines.”Footnote 70 This statement indicates uncertainties and also divergent opinions about the proper scope of the GPAI CoP in relation to other tertiary instruments, in particular a Commission guidance. It furthermore highlights the fact that several actors were involved in the development of the GPAI CoP and that their respective influence on and responsibility for the outcome are not entirely clear.Footnote 71
The third draft was again presented in a very different manner and structure.Footnote 72 Instead of a single document, the Commission published four. The first document included an overarching preamble, objectives and all eighteen commitments, which were grouped in two parts: one for providers of all GPAIMs (with sub-sections for transparency and copyright), and one with sixteen commitments for GPAIMs with systemic risk. The three other documents contained the “Transparency,” “Copyright” and “Safety and Security” sections – a clear concession that these three issues had to be considered separately. The drifting apart of the Transparency, Copyright and Safety and Security WGs is also observable from their divergent drafting styles. The Transparency section includes a unique “model documentation form”; the Copyright section, drafted by two law professors, is written in dense legalistic language; and the sixty-one-page Safety and Security section contains explanatory materials (drawings, summaries of the changes), “potential material for future” recitals, a glossary, an appendix with the systemic risk taxonomy, and a “recommendation to the AI Office” regarding the need for regular reviews of the Code and implementation support.
The final version was published by the Commission in three separate documents, “consisting of three separately authored chapters” on Transparency, Copyright, and Safety and Security.Footnote 73 In view of this mode of presentation, it has become doubtful whether the process resulted in one code or in three. This is a relevant legal question because three separate codes could clearly be signed selectively,Footnote 74 whereas a partial reliance on a single code could be considered insufficient for demonstrating compliance with the obligations covered by the code. The AI Act fails to provide clear answers to these issues. At times it speaks of CoPs in the plural, in other contexts in the singular, in particular as regards a Commission “approval” of “a” code.Footnote 75 The Commission’s FAQ on the GPAI CoP is silent on the question whether providers may sign the three Chapters selectively or not.Footnote 76 Other public statements by the Commission are equally inconclusive. On the one hand, the Commission consistently referred to the GPAI CoP as a single code throughout the drafting process. The verbatim “Objectives” of the three final Chapters also speak of “this Code of Practice (‘Code’).” In the same vein, the signature form provided by the AIO states that a signatory “hereby commits to follow the Code of Practice” for providers of GPAIMs as published on 10 July 2025 and that it reserves the right “to opt out of signing the GPAI Code of Practice” within a certain period of time.Footnote 77 On the other hand, Article 56(7) second sentence AI Act explicitly allows providers of GPAIMs not presenting systemic risks to limit their adherence to a code “to the obligations provided for in Article 53, unless they declare explicitly their interest to join the full code.” Accordingly, such a signature of a provider of “simple” GPAIMs would only be effective for the Transparency and Copyright Chapters. 
In its “Guidelines on the scope of the obligations for general-purpose AI models” the Commission appears to generalise this selective approach by stating that “[a]ny opt-out from chapters of the code of practice results in losing the benefits of facilitating the demonstration of compliance in that respect.”Footnote 78 It confirmed this view by announcing that the provider “xAI signed up to the Safety and Security Chapter; this means that it will have to demonstrate compliance with the AI Act’s obligations concerning transparency and copyright via alternative adequate means.”Footnote 79 Since this scenario is not covered by Article 56(7) AI Act (which regulates the inverse case: signature of providers of simple GPAIMs of only the Transparency and Copyright Chapters), other selective signatures, for example only of the Transparency or the Copyright Chapters, should also be permissible.
As a result, there are de facto three separate GPAI CoPs, namely one covering the obligations under Article 53(1)(a) and (b), one for Article 53(1)(c) and one for Article 55 AI Act. Since this fragmentation benefits GPAIM providers, it is difficult to imagine a case in which the Court of Justice of the European Union would be called upon to decide whether the Commission practice is in accordance with Article 56 AI Act. Those who sign all or parts of the GPAI CoP will hardly argue that what they voluntarily rely on lacks a proper legal basis. And as long as the Commission accepts a selective signature, no dispute will ensue. For reasons of simplicity, this article nevertheless continues to speak of “the” GPAI CoP.
3. Actor groups and their influence on the code
The Commission presented the GPAI CoP process as a “multi-stakeholder” engagement.Footnote 80 Article 56 AI Act indeed mentions several actors potentially involved in the drawing up and approval of a CoP: the AI Office, the Commission, the AI Board, relevant national competent authorities and private stakeholders such as GPAIM providers, civil society organisations, industry, academia, downstream providers and independent experts. The GPAI CoP process sheds light on the roles of these actors and their influence on the outcome, which cannot be attributed to any of them alone.Footnote 81
The actor mentioned first and most often in Article 56 AI Act is the AIO, which forms part of the Commission’s Directorate-General for Communication Networks, Content and Technology.Footnote 82 The AIO indeed “played a pivotal role throughout the process.”Footnote 83 In its call for expression of interest to participate, the AIO announced that it “oversee[s] and facilitate[s] the entire drafting process.”Footnote 84 It phrased and structured the questions in the initial consultation, and devised and executed the iterative drafting process, including the publication of the drafts and the final version on the Commission website. The AIO also selected the chairs responsible for the drafting, with whom it constantly “work[ed] … to ensure consistency”Footnote 85 and to whom it gave “legal advice,”Footnote 86 sometimes together with other departments of the Commission, in particular the Copyright Unit within DG CNECT.Footnote 87 Finally, the AIO prepared the “template” for the training content summary due under Article 53(1)(d) AI Act entirely in-house, although Article 56(2)(b) AI Act expressly mentions this issue as one item to be covered in a GPAI CoP.Footnote 88 It only presented its preliminary ideas regarding the template to the participants of the Code process once and allowed them to provide written feedback.Footnote 89 The legal implications of keeping the template separate from the Code are significant. While the GPAI CoP is merely a voluntary tool to demonstrate compliance with the AI Act, the training data template must be filled in and published by all GPAIM providers, irrespective of whether they signed up to the GPAI CoP or not.Footnote 90
The European AI Board, i.e., the AI Act’s governance body representing the Member States at Union level, also features prominently in Article 56 AI Act.Footnote 91 However, and in accordance with Article 66 first sentence AI Act, its role was largely advisory or even merely that of an observer, possibly due to the fact that the Board and a special steering group on GPAIMs were only established at the end of 2024 after the Code process had already begun.Footnote 92 Smaller still, indeed non-existent, was the influence of national competent authorities, probably because no such authority was operational before the Code process came to an end.Footnote 93 Although the very idea for a GPAIM CoP can be traced back to a Member State initiative,Footnote 94 Member States thus played a minor role in the first realisation of this regulatory concept, primarily because they were not yet ready to contribute meaningfully to the highly accelerated drafting process.Footnote 95 Member States simply could not keep up with the speed at which the GPAI CoP process was executed by the AIO.
In contrast, the European Parliament, though not referenced in Art. 56 AI Act, was consulted in all iterations of the Code drafting.Footnote 96 These interactions might have been conducive to a generally positive reaction of lead MEPs to the final version of the Code, in which they stated that despite “intense lobbying, time pressure, and geopolitical tensions, … the AI Office and the independent drafters have delivered a compromise that retains core protections.”Footnote 97
Private parties were eligible to participate in one or several Code WGs of their choice under certain conditions established by the AIO and checked prior to the launch of the process. According to these AIO rules, GPAI model and downstream system providers needed to demonstrate existing or planned operations in the EU; civil society organisations had to be present in the EU and have a legitimate interest in participating; and academics and other experts needed to show relevant expertise but required no presence in the Union.Footnote 98 The AIO has not publicly announced whether it rejected any application to participate.Footnote 99 If it did, no documented court case ensued. The relatively small group of GPAIM providers occupied a special position among all participants in that providers were invited to dedicated workshops with the WG chairs.Footnote 100 The live interaction between the chairs and all other stakeholders was instead limited to four “plenary” sessions, in which only a few stakeholders could take the floor. The privileged position of providers raised concerns, but it finds support in Article 56(3) AI Act, which, in its first sentence, assigns to GPAIM providers the role “to participate in the drawing-up of codes of practice,” whereas all other civil society actors may, according to the second sentence, only “support the process.” To grant GPAIM providers a special status and access to the WG chairs also appears justified because they are the only addressees of the relevant AI Act obligations and the only group of actors eligible to sign the GPAI CoP.Footnote 101 Last but not least, the companies that develop the technology at stake know best how it works and what its limits are. Chairs were, in any case, free to hold additional bilateral meetings with all stakeholders who requested such exchanges.Footnote 102
The WG chairs had been appointed by the AIO based on their expertise and independence; they were not paid for performing their role.Footnote 103 Their task was to moderate WG discussions, synthesise stakeholder submissions and, most importantly, prepare the drafts and the final version.Footnote 104 The chairs indeed presented all three drafts in their name,Footnote 105 and they are named on the front pages of the three final Chapters. Their authorship is also acknowledged in a Commission statement according to which the Commission “received the final version” of the GPAI CoP, which was “developed by 13 independent experts.”Footnote 106 Most of the chairs had backgrounds in computer science and/or AI governance studies; only the chairs in charge of the copyright-related rules were lawyers.Footnote 107 Remarkably, a majority of the chairs were affiliated with non-EU institutions during the Code process, namely universities in the United States (Stanford: three vice-chairs), the United Kingdom (Oxford and Cambridge) and Canada (Montréal and Ottawa), and a US/UK-based think tank.Footnote 108 Contrary to its original plan, the AIO did not designate a chair “to take on a central coordination role for ensuring coherence and consolidating drafts across all Working Groups”Footnote 109 – a decision that certainly contributed to the growing fragmentation of the three de facto separate WGs. All chairs had to sign a Confidentiality Declaration, which prohibits them from disclosing details on the drafting process for a period of five years after the publication of the GPAI CoP (i.e., until 10 July 2030) without written approval by the AIO.Footnote 110
To my knowledge, there is no other non-legislative EU instrument where the drafting has been outsourced to independent experts in a similar manner. GDPR and DSA CoCs were prepared as self-regulatory documents by data controllers/processors, online intermediaries and other relevant stakeholders or their representative bodies.Footnote 111 The drafting of these codes may be facilitated by consultants contracted by the stakeholders, who nevertheless retain control and take full responsibility for the outcome.Footnote 112 European harmonised standards are also drafted in a different, “market-driven” manner by European standardisation organisations and are to be based on consensus.Footnote 113 The CEN/CENELEC WGs established to this end are led by a “convenor” tasked not only with preparing drafts but also with ensuring consensus.Footnote 114 The chairs of the GPAI CoP were not bound by such a requirement. They were in principle free to draft and adopt rules that some or even most stakeholders rejected, and this is what happened.Footnote 115 In view of the influence of WG chairs on the outcome of the GPAI CoP process, it is remarkable that the AI Act does not expressly recognise their role.Footnote 116 Article 56(3) AI Act mentions “independent experts” merely among “other relevant stakeholders” who “may support the process.” Recital 116 third sentence AI Act also describes a different scenario in which the AIO should “consult with … experts … for the drawing up” of codes. However, what the GPAI CoP chairs did went far beyond mere consultation with the AIO. They undertook the drafting, and not always in a way that was embraced by the AIO/Commission.Footnote 117
4. “Approval” of the code
What the chairs could not do, however, was give the Code legal effect.Footnote 118 This upgrade required endorsement of the Code by three other actors.
Firstly, GPAIM providers were invited by the AIO to sign the Code as “finalised” on 10 July 2025Footnote 119 and to thereby declare that they want to rely on the Code to demonstrate compliance with their AI Act obligations.Footnote 120 Twenty-six companies of different sizes responded to this invitation, among them major US companies such as Anthropic, Google, Microsoft and OpenAI but also smaller European players like Aleph Alpha (Germany) and Mistral (France).Footnote 121
Secondly, only adherence to a Code that is “approved” allows signatories to demonstrate their compliance with the AI Act.Footnote 122 It is unclear, however, who is supposed to bring about this approval and what the procedure should be. While it is clear that a harmonised standard produces legal effects once the Commission has published a reference in the Official Journal,Footnote 123 Article 56(6) AI Act provides for two different procedures. According to subparagraph 1, the AI Office and the Board shall assess whether a CoP adequately covers the obligations under Articles 53 and 55 AI Act and shall publish their adequacy assessment.Footnote 124 Subparagraph 2 states that the “Commission may, by way of an implementing act, approve a code of practice and give it a general validity within the Union,” and that this approval shall be adopted in accordance with the comitology procedure.Footnote 125 Neither references to Article 56 in other provisions of the Act nor the only recital on this point help to decide which of the two procedures is the one through which a CoP is properly “approved.”Footnote 126 The wording points to Article 56(6) subpara. 2 AI Act, which also uses the verb “approve.”Footnote 127 The Commission’s FAQs on the GPAI CoP appear to share this view by stating that “[i]f approved via implementing act, the Code of Practice obtains general validity, meaning that adherence to the Code of Practice becomes a means to demonstrate compliance with the AI Act.”Footnote 128 Other Commission statements published during the Code process remained vague.Footnote 129 Only in its guidelines on the scope of application of the AI Act, published eight days after the final version of the GPAI CoP, did the Commission unequivocally announce that GPAIM providers can demonstrate compliance with their obligations by adhering to a code of practice “that is assessed as adequate by the AI Office and the Board.”Footnote 130 The respective adequacy assessments were published just in time on 1 August 2025.Footnote 131
Reading “approved” as meaning “assessed as adequate” brings the AI Act in line with the regulation of CoCs under the DSAFootnote 132 and the GDPR, which clearly distinguishes between the approval of a GDPR CoC through an opinion of a supervisory authority on the code’s appropriateness and the decision of the Commission to grant such an “approved” code general validity.Footnote 133 Such an interpretation of Article 56 AI Act reflects the fact that the DSA and the GDPR appear to have informed its wording,Footnote 134 and it also improves the consistency of the EU digital rulebook. Furthermore, it mirrors the two-step structure of Article 56(6) AI Act: the limited legal effect of a CoP is tied to an adequacy assessment as a first step, whereas the more profound effect of a declaration of “general validity”Footnote 135 requires a more formal Commission implementing act as a second step. There are thus good reasons to assume that a CoP acquires legal effect and thereby becomes part of EU lawFootnote 136 through an adequacy assessment according to Article 56(6) subpara. 1 AI Act.
The conclusion of the GPAI CoP process just in time before Chapter V became applicable shows that the AI Act sets the right incentives for co-regulation via CoPs.Footnote 137 Another indicator of success is that the AIO has retained the approach used for the first GPAI CoP for the first CoP under Article 50 of the AI Act.Footnote 138
IV. Conclusion
The experience with the first GPAI CoP revealed both practical and normative problems, two of which shall be highlighted in conclusion: tech-induced acceleration and the diffusion of responsibility.
The AI Act and the GPAI CoP were adopted remarkably quickly. In both instances, technology functioned as an accelerator of regulation. As regards the AI Act, the ChatGPT shock triggered significant public concerns, to which the EU legislature responded with comprehensive requirements for generative AI systems and GPAI models and tight deadlines for their further specification in non-legislative, tertiary instruments. Chapters IV and V of the AI Act were largely hammered out during the Trilogue, the notorious “black box” of EU lawmaking.Footnote 139 The rush in which the rules were drafted certainly contributed to the ambiguities that plague, inter alia, the rules on CoPs. The first GPAI CoP was also prepared under intense time pressure.Footnote 140 In this context, digital technology was not only the reason for but also the enabler of acceleration. Meetings were held almost exclusively online, often at short notice and with participants in different time zones. Hundreds of written submissions were processed via the Futurium platformFootnote 141 and other cloud solutions. The sharing of documents for drafting purposes and the publication of drafts could be accomplished at the touch of a button. “Iterative drafting through multi-stakeholder engagement” across continents is only possible under such conditions of digitality. The GPAI CoP is an inherently digital form of regulating the digital.
Digitising and thereby accelerating regulation can also be problematic. Firstly, it can result in unclear or insufficient rules – in short, low-quality law. Secondly, networked regulation with multi-stakeholder engagement tends to diffuse responsibility and accountability.Footnote 142 The GPAI CoP proves this point. As explained above, several actors influenced its content, in particular the WG chairs in charge of the drafting, GPAIM providers and the AIO/Commission.Footnote 143 These three actor groups (plus the newly established AI Board) had to find common ground in order to arrive at a relevant outcome: a finalised, signed and approved code. Going by the sequence of events, the chairs made a final proposal, providers indicated their willingness to sign and the Commission and the AI Board eventually rubber-stamped the outcome. In practice, however, the final text published on 10 July 2025 already reflected the positions and red lines of all key actors. Otherwise, the signature and approval processes would not have been concluded within the three following weeks.
On the one hand, the support by stakeholders, independent experts, the Commission and the Member States represented in the AI Board provides relatively solid input legitimacy.Footnote 144 On the other hand, accountability remains unclear because no formal voting took place and no actor controlled all three necessary steps (drafting, signing and approval).Footnote 145 Ultimately, the strongest sources of input legitimacy are the positive assessments of the Code by the Commission and the AI Board, which also resulted in the Code becoming part of EU law.
In view of the potential but also the problems of CoPs, future governance research should closely monitor iterations of the GPAI CoP process, which are expected to occur “at least every two years.”Footnote 146 Another promising area for future research would be a holistic study of the AI Act’s menu of non-legislative instruments.Footnote 147 Is the great variety of tertiary acts, standards and codes justified by practical needs of the respective subject matter, or is the EU digital rulebook overly complex in this respect too?Footnote 148 In any case, a simplification of non-legislative rule-making should not be done in a hurry. For haste is never conducive to good law.
Competing interests
From September 2024 to July 2025, the author was Co-Chair of a Working Group of the General-Purpose AI Code of Practice, responsible for the drafting of the copyright-related rules.