In Chapter 4, I analysed how algorithmic regulation (discussed in Chapter 2) affects the rule of law and its principles (discussed in Chapter 3), and identified a threat which I conceptualised as algorithmic rule by law, marking a deviation from the rule of law’s ideal, facilitated by the use of algorithmic systems. In light of that analysis, let me now consider how the EU legal framework deals with this threat, and what safeguards it offers to counter it.
Two legal domains are of particular relevance in this regard: regulation pertaining to the protection of the rule of law (the EU’s rule of law agenda), and regulation pertaining to (automated) personal data processing and the use of algorithmic systems (the EU’s digital agenda). Each of these domains is vast and consists of a broad range of legislation, including not only primary and secondary EU law, but also soft law. In what follows, I therefore confine my investigation to those areas of legislation that are most relevant for the identified concerns, with a primary focus on binding legislation. In terms of safeguards, drawing on the conclusions of Chapter 4, I will evaluate EU legislation based on whether it provides effective mechanisms enabling prior and continuous oversight of, and accountability for, algorithmic regulation, including as regards upstream choices; public participation mechanisms; private and public enforcement, at national and EU level; constitutional checks and balances; and the availability of contestability, as well as opportunities for internal critical reflection.
After some preliminary remarks about the EU’s competence to take legal action in this field (Section 5.1), I respectively examine safeguards provided by regulation pertaining to the rule of law (Section 5.2), to personal data (Section 5.3) and to algorithmic systems (Section 5.4), before concluding (Section 5.5).
5.1 A Note on EU Competences in the Field
Pursuant to the principle of conferral, the European Union can only act based on competences explicitly conferred upon it.Footnote 1 Conversely, “competences not conferred upon the Union in the Treaties remain with the Member States”.Footnote 2 Accordingly, whenever the EU seeks to undertake legal action, whether in the form of proposing the harmonisation of national legislation, or in the form of challenging a Member State’s action in court, it needs to be able to rely on a legal basis to do so.Footnote 3 For each action at the EU level, whether preventative or mitigative in nature, one must hence first identify a legal basis that enables its execution.Footnote 4
In addition, EU action is also constrained by the principles of subsidiarity and proportionality. Pursuant to the former, whenever the EU seeks to act in an area that does not fall under its exclusive competence, “the Union shall act only if and in so far as the objectives of the proposed action cannot be sufficiently achieved by the Member States, either at central level or at regional and local level, but can rather, by reason of the scale or effects of the proposed action, be better achieved at Union level.”Footnote 5 Pursuant to the latter, “the content and form of Union action shall not exceed what is necessary to achieve the objectives of the Treaties”.Footnote 6 Collectively, the constraints posed by the principles of conferral, subsidiarity and proportionality can be considered as giving expression to the principle of legality at the level of the EU, underpinned by the fact that the EU legal order is based on the rule of law, and that the actions of EU institutions likewise need to adhere to its principles.Footnote 7
This demarcation of competences is particularly relevant in an area that deals so intricately with a matter lying close to Member States’ national identity (protected by Article 4(2) TEUFootnote 8), namely the way in which national public authorities exercise their power and take administrative actions vis-à-vis their citizens. Accordingly, “striking a balance between taking the effective action necessary to defend the Rule of Law and respecting the limitations placed on the EU’s competences is tricky”.Footnote 9 At the same time, Article 4(3) TEU also enshrines the principle of sincere cooperation, demanding that the Union and the Member States shall, in full mutual respect, assist each other in carrying out tasks which flow from the Treaties. Furthermore, it also establishes the obligation for Member States to “take any appropriate measure, general or particular, to ensure fulfilment of the obligations arising out of the Treaties or resulting from the acts of the institutions of the Union.”Footnote 10 Member States can hence not invoke ‘national identity’ as an excuse to escape their obligations under EU law.Footnote 11
This raises the question of how much ‘diversity’ EU Member States can maintain as regards the exercise of public power by national authorities, without overly endangering ‘unity’ in their respect for values that are considered to be common to all.Footnote 12 The answer to this question is highly complex, and not one that I will attempt to discuss in this book. Instead, I will formulate a different question, focusing on which EU obligations currently exist that relate to Member States’ need to respect the rule of law, and that can provide protection against the risks posed by algorithmic regulation. After all, “while the EU owes respect to its Member States’ right to organize their government, the latter must observe the rule of law as it is understood in the EU legal order”.Footnote 13
5.2 Regulation Pertaining to the Rule of Law
The EU legal framework contains several mechanisms that are aimed at ensuring Member States’ compliance with the rule of (EU) law. In what follows, I respectively analyse the protection afforded by Article 2 TEU in combination with the procedure of Article 7 TEU (Section 5.2.1), the Conditionality Regulation (Section 5.2.2), and the role played by infringement procedures and challenges before national courts – including through the preliminary reference procedure (Section 5.2.3).
5.2.1 Article 2 and 7 TEU
The rule of law is listed in Article 2 TEU as one of the foundational values of the EU, common to all Member States. It needs to be respected by states that aspire to EU membership,Footnote 14 and it needs to be respected throughout a state’s EU membership.Footnote 15 At the same time, the Treaty does not detail what, precisely, it means by ‘to respect the value of the rule of law’. The drafters of the Treaty explained their selection of the values to be listed in Article 2 TEU based on the fact that these values “have a clear non-controversial legal basis so that the Member States can discern the obligations resulting therefrom which are subject to sanction”.Footnote 16 Given that I had to develop an analytical framework in Chapter 3 to concretise, based on an extensive examination of legal sources, the rule of law requirements that Member States must meet when exercising public power, I would beg to differ.Footnote 17
Be that as it may, Article 7 TEU provides a mechanism of protection in case the values listed in Article 2 TEU are threatened or infringed by a Member State. Although Article 7 TEU hence protects multiple values, it is typically referred to as the rule of law protection mechanism, since it not only protects the rule of law as one value amongst others, but is also a legal provision that literally embodies the protective role of the law in a liberal democratic system. Article 7(1) TEU enables the Council, acting by a majority of four fifths of its members, to determine that there is a “clear risk of a serious breach by a Member State of the values referred to in Article 2”. This determination must be based on a reasoned proposal by one third of the Member States, by the European Parliament or by the European Commission, and it can only be taken after the European Parliament has given its consent. The mechanism of Article 7(1) TEU requires only the clear risk of a serious breach,Footnote 18 and hence serves as a warning mechanism.Footnote 19 In that case, the Council will hear the Member State in question and can address recommendations to it.Footnote 20 While I will not elaborate on this point here, it is worth highlighting that the analysis of Chapter 4 did indicate that the increasingly widespread use of algorithmic regulation, without the adoption of appropriate protection mechanisms to counter its risks, can entail a clear risk of a serious breach of the rule of law.
Article 7(2) TEU goes a step further, as it enables the determination of the ‘existence’ of a ‘serious and persistent’ breach by a Member State of the values referred to in Article 2. In this case, however, the determination can only be made through a unanimous decision of the European Council, based on a proposal by one third of the Member States or by the Commission and after obtaining the consent of the European Parliament.Footnote 21 This is because the stakes of this determination are higher. Once a decision has been taken, Article 7(3) TEU enables the Council, acting by a qualified majority, to decide to suspend certain rights deriving from the application of the Treaties to the Member State in question, including voting rights in the Council. Before the determination of a ‘clear risk’ of a serious breach, or of the ‘existence of a serious and persistent breach’ of EU values, the Member State concerned always has the right to be heard and to submit its observations.
Thus far, the European CommissionFootnote 22 and the European Parliament have sought to trigger Article 7(1) against both Poland and Hungary,Footnote 23 and the Council has organised several hearings to hear the countries’ positions. However, despite the calls by scholars, civil society organisations and even EU institutions to take further action,Footnote 24 and despite the fact that the concerns remain unaddressed (especially in Hungary),Footnote 25 the Council has come to the determination of neither a ‘clear risk’ nor the ‘existence of a serious and persistent breach’ of the rule of law. In addition, the Council’s unwillingness to ensure that the Commission’s recommendations to ameliorate the situation are implemented has led to much frustration.Footnote 26 Given the role of the Council (consisting of representatives from the twenty-seven EU Member States), the procedure in question is primarily political in nature. Considering the extensiveness of the procedure’s potential consequences and the symbolic weight of its invocation, it is only considered a measure of last resort. Moreover, as long as there is at least one other Member State that vetoes the determination of the existence of a ‘serious and persistent breach’, the procedure of Article 7(2) stands no chance given the requirement for unanimity.Footnote 27
The inability of Article 7 TEU to counter the ongoing rule of law threats in some EU Member States and, in particular, the reluctance to deploy its mechanism has been subjected to heavy criticism. It also pinpoints the dilemma that the EU and its Member States face: maintaining the cooperation and goodwill of the Member States on other fronts, while ensuring adherence to the rule of (EU) law without further alienating infringing states. Either way, by the time Article 7 TEU finally comes into play, if ever, a lot of damage will already have been done.
One problem, which is also relevant to the risks identified in the context of algorithmic regulation, is that the erosion of the rule of law often occurs incrementally rather than suddenly.Footnote 28 However, as previously noted, incremental changes will rarely trigger a sufficiently strong counter-reaction so as to lead to the adoption of a measure which is deemed as far-reaching as Article 7 TEU. This gives rise to a tragic observation: the mechanism designed to avoid systemic breaches of the rule of law is, precisely because of the systemic and incremental nature of those breaches, unable to carry out its proper function.
Setting the ineffectiveness of Article 7 TEU aside, the EU’s increased attention to the (dis)respect of the rule of law in Member States (perhaps precisely due to this ineffectiveness) did give rise to a number of soft-law initiatives that monitor the rule of law situation across the Union.Footnote 29 Indeed, in recent years, the Commission’s rule of law toolbox has expanded, and now encompasses several evidence-gathering and documentation mechanisms based on rule of law-indicators. Examples are the ‘European Semester’,Footnote 30 the ‘EU Justice Scoreboard’,Footnote 31 and the annual ‘Rule of Law Reports’Footnote 32 that essentially institutionalise a dialogue between the Commission and Member States. These mechanisms also provide useful information that could support the triggering of Article 7 TEU, the Conditionality Regulation or infringement actions, which I discuss in the following sections.Footnote 33 While non-binding, and while offering no guarantee that they will contribute to a successful triggering of the Article 7 TEU procedure, these initiatives – unlike that procedure – are able to play a preventative role rather than a merely reactive one.Footnote 34 Furthermore, they enable the dissemination of information about the status of the rule of law in every EU Member State to the public at large, thereby contributing to transparency and potentially accountability at the national level too.
Regrettably, however, none of these initiatives currently have indicators about the potentially adverse impact of algorithmic regulation on the rule of law, or on the checks and balances that have been implemented to counter such impact. Quite to the contrary, if discussed at all, ‘digitalisation’ is presented as a desirable feature that Member States should ideally swiftly implement in light of the ‘efficiencies’ it can enable.Footnote 35 The risks attached to it, particularly in the area of public administration, are left unspoken and unmonitored, despite their systemic nature and potential contribution to the systemic breaches of the rule of law. In other words, the effects of algorithmic regulation on the rule of law remain a blind spot. As it currently stands, neither Article 7 TEU nor the soft-law mechanisms aimed at monitoring the rule of law situation seem to offer safeguards against the threat conceptualised in Chapter 4.
5.2.2 The Conditionality Regulation
Besides soft-law initiatives, the European Union also sought to expand its rule of law toolbox with binding mechanisms. The adoption of the Conditionality Regulation, or Regulation 2020/2092 of the European Parliament and of the Council of 16 December 2020 on a general regime of conditionality for the protection of the Union budget, can be seen as a highlight in this regard.Footnote 36 In addition to its legal relevance, I already noted in Chapter 3 that the Conditionality Regulation also has significant definitional relevance, as it provides “the first comprehensive all-encompassing internal-oriented definition of the rule of law adopted by the EU co-legislators”.Footnote 37 While the European Commission had already proposed a conceptualisation of the rule of law and its six principles in 2019,Footnote 38 by means of the Conditionality Regulation, this conceptualisation was also formally adopted by the European Parliament and the Council and enshrined in a piece of secondary legislation.Footnote 39
The Regulation provides for a mechanism to suspend the transmission of EU funds to ‘rogue’Footnote 40 Member States – or, more diplomatically, it makes the transmission of EU funds conditional on compliance with the rule of law. Its primary aim is hence to protect the EU budget and ensure the sound financial management thereof by Member States, while at the same time incentivising rule of law compliance through financial means. In essence, the regulation seeks to prevent EU funds from being used for authoritarian or rule of law-infringing ends.Footnote 41 Some scholarsFootnote 42 noted that the European Union already had a similar mechanism at its disposal through Regulation 1303/2013, laying down common provisions regarding the governance of a number of EU funds, including the European Regional Development Fund, the European Social Fund, the Cohesion Fund, the European Agricultural Fund for Rural Development and the European Maritime and Fisheries Fund.Footnote 43 Article 142 of that Regulation provides for a procedure for the suspension of payments by the Commission if, inter alia, “there is a serious deficiency in the effective functioning of the management and control system of the operational programme, which has put at risk the Union contribution to the operational programme and for which corrective measures have not been taken”.Footnote 44 Based on the idea that “surely, a country without the rule of law cannot generate effective management and control systems”, Kelemen and Scheppele argue that this provision already enabled the suspension of EU funds to rule of law-infringing Member States, yet they “note with disappointment that the Commission has not yet had the will to use the power already in its hands”.Footnote 45
Despite the seeming pre-existence of similar remedies, the adoption of the Conditionality Regulation, as a general regime of conditionality, was not without difficulty or controversy.Footnote 46 As soon as it was adopted, both Hungary and Poland challenged the validity of the regulation’s legal basis before the Court.Footnote 47 They claimed that the EU did not have the legal competence to adopt a regulation that defines the rule of law or determines criteria to establish breaches thereof.Footnote 48 Moreover, they stated that the Regulation was not compatible with Article 7 TEU, which, in their view, already exhaustively provides for a mechanism to protect the rule of law, thus precluding the EU from adopting another one.Footnote 49 As noted in Chapter 3, the EU does not have a ‘general’ legal competence to enforce the rule of law, and hence had to resort to a domain-specific legal basis. Given the link of the Regulation’s suspension mechanism with the multi-annual financial framework (MFF), some have argued that the Regulation would need to be adopted under the MFF’s legal basis, Article 312 TFEU, which requires adoption through unanimity in the Council.Footnote 50
Instead, the Commission resorted to Article 322(1)(a) TFEU as the Regulation’s legal basis, well aware that the unanimity requirement of Article 312 TFEU would prevent the Regulation from ever seeing the light of day, since Poland and Hungary would never agree to it. Article 322(1)(a) TFEU provides that the European Parliament and the Council, acting in accordance with the ordinary legislative procedure (i.e. without unanimity), can adopt regulations to set the financial rules determining “the procedure to be adopted for establishing and implementing the budget and for presenting and auditing accounts”. Accordingly, the Conditionality Regulation can be seen as an example of the EU’s use of legal competences in one particular field (e.g. the EU budget) to promote other aims in a more indirect way (e.g. the rule of law), thereby overcoming the hurdles of potentially limited competences.Footnote 51 This approach, while not uncriticised, is not new. Various legislative initiatives which are based on Article 114 TFEU, aimed at advancing the harmonisation of the EU’s internal market, also contribute (directly or indirectly) to the protection of fundamental rights, an area in which the EU’s competences are also not generalised.Footnote 52
In the past, the Court has already frequently been called upon to ensure that the EU does not overstep its competences by adopting an erroneous legal basis, and has not shied away from invalidating legislation where it deemed this to be the case.Footnote 53 However, in this case, the challenge raised by Poland and Hungary was unsuccessful, and the Court upheld the Regulation’s validity. It found that the Regulation does not create a general rule of law protection mechanism, and only limits itself to those rule of law-infringements that are directly linked to the EU budget, hence remaining within the confines of Article 322 TFEU.Footnote 54 The Court also distinguished the Regulation’s mechanism from the procedure in Article 7 TEU, which hence did not preclude its adoption.
With the Court’s blessing, the Commission thus no longer had an obstacle – or an excuse – standing in the way of launching the suspension mechanism.Footnote 55 In April 2022, it therefore announced the initiation of the procedure against HungaryFootnote 56 – the first step of a lengthy procedure.Footnote 57 Indeed, before the suspension mechanism could actually be implemented, the procedure set out by the Regulation had to be followed, including a written notification to the Member State concerned, setting out the factual elements and specific grounds on which the findings are based; the Member State’s response and potential proposal of remedial measures; the Commission’s notification of its intention to propose an implementing decision in case it considers the remedial measures to be insufficient; yet another opportunity for the Member State to share its observations, particularly regarding the proportionality of the Commission’s envisaged measure; and, finally, an implementing decision by the Council acting by qualified majority, based on the Commission’s proposal.Footnote 58 In March 2022, the Commission also adopted guidelines in which it set out how it will apply the regulation in more detail.Footnote 59
Importantly, the suspension mechanism can only be invoked with regard to a limited set of rule of law-infringements that have a link with the EU budget, hence not providing a general suspension clause for all rule of law-infringements. After all, its purpose and its legal basis concern the protection of the EU budget rather than the rule of law in general. Article 3 of the Regulation sets out which Member State actions may be ‘indicative’ of breaches of principles of the rule of law for the purpose of the regulation, namely:
(a) endangering the independence of the judiciary;
(b) failing to prevent, correct or sanction arbitrary or unlawful decisions by public authorities, including by law-enforcement authorities, withholding financial and human resources affecting their proper functioning or failing to ensure the absence of conflicts of interest;
(c) limiting the availability and effectiveness of legal remedies, including through restrictive procedural rules and lack of implementation of judgments, or limiting the effective investigation, prosecution or sanctioning of breaches of law.
In addition to these ‘general’ indications, Article 4 sets out that action can be undertaken where it is established that “breaches of the principles of the rule of law in a Member State affect or seriously risk affecting the sound financial management of the Union budget or the protection of the financial interests of the Union in a sufficiently direct way”.Footnote 60 To be deemed sufficiently direct, the breaches of the principles of the rule of law should fall under one of the following exhaustively listed categories:Footnote 61
(a) the proper functioning of the authorities implementing the Union budget, including loans and other instruments guaranteed by the Union budget, in particular in the context of public procurement or grant procedures;
(b) the proper functioning of the authorities carrying out financial control, monitoring and audit, and the proper functioning of effective and transparent financial management and accountability systems;
(c) the proper functioning of investigation and public prosecution services in relation to the investigation and prosecution of fraud, including tax fraud, corruption or other breaches of Union law relating to the implementation of the Union budget or to the protection of the financial interests of the Union;
(d) the effective judicial review by independent courts of actions or omissions by the authorities referred to in points (a), (b) and (c);
(e) the prevention and sanctioning of fraud, including tax fraud, corruption or other breaches of Union law relating to the implementation of the Union budget or to the protection of the financial interests of the Union, and the imposition of effective and dissuasive penalties on recipients by national courts or by administrative authorities;
(f) the recovery of funds unduly paid;
(g) effective and timely cooperation with OLAFFootnote 62 and, subject to the participation of the Member State concerned, with EPPOFootnote 63 in their investigations or prosecutions pursuant to the applicable Union acts in accordance with the principle of sincere cooperation;
(h) other situations or conduct of authorities that are relevant to the sound financial management of the Union budget or the protection of the financial interests of the Union.
Following the Regulation’s application to Hungary, on 15 December 2022, the Council decided to freeze about €6.3 billion of budgetary commitments,Footnote 64 citing rule of law breaches pertaining to public procurement procedures and prosecutorial action, as well as conflicts of interest and concerns around the fight against corruption (though not the concerns around judicial independence). At the same time, inspired by the Conditionality Regulation’s approach of financial incentivisation, the Commission also started relying on legal provisions in other funding mechanisms that tie a Member State’s receipt of funding to certain conditions (such as the Resilience and Recovery RegulationFootnote 65 and the Common Provisions RegulationFootnote 66), including the obligation to uphold the Charter of Fundamental Rights. On this basis, on 22 December 2022 the Commission decided to freeze about €21.7 billion of EU cohesion funds until Hungary adopted several measures regarding LGBTQI+ rights, academic freedom, asylum policies and judicial independence.Footnote 67
These actions were initially applauded, yet the applause for the Commission was short-lived. A year later, on 13 December 2023, it decided to unblock €10.2 billion of the frozen EU cohesion funds, justifying its decision based on a ‘thorough assessment’ of Hungary’s newly adopted measures to strengthen the judiciary’s independence.Footnote 68 While critics have characterised those measures as mere window dressing, others have pointed to the politically strategic nature of the Commission’s move, as its decision to unfreeze the funds came a day before European leaders needed Hungary’s cooperation to approve new aid to Ukraine.Footnote 69 In March 2024, the European Parliament therefore decided to challenge the Commission’s decision before the Court of Justice, claiming that the Commission failed to fulfil its obligation to protect the EU budget and ensure that taxpayers’ money is not misused.Footnote 70 If anything, this saga highlights that, despite the availability of several legal mechanisms to freeze EU funds, their application and effect are still highly dependent on the political willingness of EU actors and Member States to use them, and on the overcoming of political hurdles.
Let us, however, bracket this political aspect for a moment, and examine to what extent the Conditionality Regulation could at least theoretically serve as a mechanism to counter the threat of algorithmic rule by law. While it was certainly not conceived to offer protection in the specific context of algorithmic systems, it is worth asking whether it can nevertheless play a role in this context. Based on the contours set out above, a number of observations can be made.
First, given the Regulation’s focus on the governance and sound management of EU funds, it appears to be most suitable for countering actions relating to fraudulent behaviour by officials or corrupt public procurement practices. Conversely, many of the examples discussed in Chapter 4, for instance in the area of social welfare or criminal risk assessments, would hence not easily fall under its scope. As regards the examination of tax fraud (an area in which algorithmic regulation is already widely used), the Regulation requires a link with the EU budget, and hence would not apply to all fraud examinations (for instance under points (c) and (e)). That said, the Court’s judgment in Åkerberg FranssonFootnote 71 already clarified that, since the European Union’s own resources include revenue from Member States’ collection of VAT, the investigation and prosecution of tax fraud relating (even in part) to VAT affects the financial interests of the European Union.Footnote 72 It could hence reasonably be argued that, when public authorities deploy algorithmic systems to detect and prosecute tax fraud (including VAT), and they do so in a manner that does not comply with the six rule of law principles set out above, this would likewise fall under the Regulation’s scope.Footnote 73
Yet one can also raise a more straightforward example of where algorithmic regulation might play a role. When the analysis of elements and risks based on which a procurement contract is allocated (or not) occurs with the help of algorithmic systems,Footnote 74 one could argue that the potentially problematic design and use of these systems can adversely impact the rule of law in a way that is relevant for the purpose of the regulation. Procurement processes are increasingly supported by analyses carried out by algorithmic systems, and one can easily imagine that ‘objectivity’ might also be invoked to this end. And while such automation could make procurement processes more ‘efficient’,Footnote 75 at the same time, one might also imagine the risk that, whether through negligence or deliberate design choices, the outcomes of the automated process favour some tenderers over others. This scenario would, at least in theory,Footnote 76 fit rather neatly under Article 4(2)(a) of the Conditionality Regulation, and hence be capable of triggering (the first step of) the suspension mechanism. Accordingly, it appears that a number of applications of algorithmic regulation could relevantly fall under the scope of the Conditionality Regulation.
However, that fact in and of itself does not yet help us much further. Certainly, the Regulation could in theory disincentivise Member States from adopting algorithmic regulation in an irresponsible manner in the abovementioned areas. However, for this to be the case, Member States must first be aware of the risks posed thereby to the rule of law, which is not a given. Furthermore, these provisions will still not ensure that they set up appropriate transparency, control and oversight mechanisms over the algorithmic systems’ design and deployment process. And while, in theory, the Regulation could provide a means to penalise Member States that do not take the risks emanating from algorithmic regulation seriously, by the time such penalisation arrives, a lot of damage may already have occurred – damage that could potentially have been prevented or at least mitigated in case of ex ante oversight. Moreover, as noted previously, any ex post remedy will in any case be dependent upon the realisation that the algorithmic system was inappropriately designed or used, a realisation that is difficult to achieve if the use of the system is not transparent and open to systemic review rather than mere individual review.Footnote 77
Consequently, while the Conditionality Regulation can at least help prevent more EU funds from being made available to Member States that, with or without the assistance of algorithmic regulation, infringe the rule of law’s principles, its relevance remains confined to areas that have an explicit link with the EU’s financial interests, and to Member States that disregard the rule of law to such an extent that the Commission feels forced to launch the mechanism provided in the regulation. As a preventative tool for Member States that are negligent rather than deliberate in failing to counter the adverse impact that their algorithmic systems can have on the rule of law, it will have little to no effect. It can thus be concluded that other safeguards, preferably of an ex ante character, are needed to address the threat of algorithmic rule by law.
5.2.3 Infringement Actions and Proceedings before National Courts
The two mechanisms I discussed above seek to counter, respectively, Member State breaches of EU values (including the rule of law) that are ‘serious and persistent’; and Member State breaches of the rule of law that directly affect the financial interests of the EU. Yet when it comes to the protection of the rule of EU law (namely Member States’ adherence to European Union law more generally), there are two other mechanisms that can be pointed out. The first concerns the infringement procedure enshrined in Articles 258–260 TFEU, which allows the European Commission or a Member State to bring a case before the CJEU where a fellow Member State has failed to fulfil an obligation under the Treaties. The second concerns the ability to challenge a national act that breaches EU law before a national court, potentially through the preliminary reference procedure laid down in Article 267 TFEU. Both of these procedures enable the judicial review of the legality of Member States’ actions in light of their obligations under EU law.Footnote 78 Indeed, Member States are obliged to take “any appropriate measure, general or particular, to ensure fulfilment of the obligations arising out of the Treaties or resulting from the acts of the institutions of the Union,”Footnote 79 including obligations they might have under both primary and secondary EU legislation.
By virtue of the infringement procedure of Article 258 TFEU, the Commission may sue a Member State before the CJEU when it considers that the latter has failed to fulfil an obligation under the Treaties.Footnote 80 It must first deliver a reasoned opinion on the matter and give the State concerned the opportunity to submit its observations. If the State does not comply with the opinion within the period laid down by the Commission, the Commission may take the matter to Court.Footnote 81 Article 259 TFEU also provides that a Member State which considers that another State has failed to fulfil an obligation under the Treaties can bring the matter to Court – after first bringing it before the Commission.Footnote 82 If the Court finds that the Member State has indeed failed to fulfil an obligation, the State shall be required to take the necessary measures to comply with the judgment of the Court.Footnote 83
Importantly, if the Commission subsequently finds that the Member State did not comply with the Court’s judgment, it may take the matter before the Court again, this time requesting that a lump sum or penalty payment be imposed on the Member State.Footnote 84 Accordingly, the infringement procedure also provides the Commission with financial leverage to ensure that Member States comply.Footnote 85
Note that the threshold for such a procedure does not consist of a ‘serious’, ‘persistent’ or ‘systemic’ breach of EU law, but simply of the failure to fulfil an obligation. This can also concern the mere obligation to transpose a directive into national law within the specified deadline.Footnote 86 It should be noted that the European Commission has discretion as to whether or not it decides to bring a case to court, and it has no obligation to do so.Footnote 87 That said, pursuant to Article 17(1) TEU, it is the Commission’s task to ensure that EU law is duly applied.Footnote 88
Some have argued that the infringement procedure avenue constitutes an important tool for the Commission to counter rule of law infringements in the EU.Footnote 89 Others have claimed, however, that: “despite ten years of EU attempts at reining in Rule of Law violations and even as backsliding Member States have lost cases at the Court of Justice, illiberal regimes inside the EU have become more consolidated: the EU has been losing through winning”.Footnote 90 Admittedly, the infringement procedure is not designed to address systemic deficiencies of the rule of law, but rather provides a mechanism to ensure the enforcement of Member States’ specific EU law obligations. However, used tactically, it can serve to challenge Member States’ actions that adversely impact the rule of law, and the Commission is increasingly doing so. The Court is thereby enabled to carry out a judicial review of the conformity of a Member State’s actions with EU law, and to call out their illegality in case of non-conformity.Footnote 91
Another, more indirect, route to enable judicial review of Member States’ actions at EU level is provided by Article 267 TFEU, which establishes a procedure that can be initiated by natural and legal persons before their national courts, hence complementing public enforcement with private enforcement.Footnote 92 National courts play an essential role in the EU legal order, as they safeguard the application of EU law at the national level. Given the primacy of EU law and its direct effect in national legal orders,Footnote 93 national courts must interpret national acts in line with EU law, and must even set them aside when they breach EU law.Footnote 94 When a national court is uncertain about the way in which it should interpret EU law to assess the validity of a national (administrative) act, it can initiate a preliminary reference procedure and seek guidance from the CJEU.Footnote 95 In theory, the Court can only rule on the interpretation of EU (primary and secondary) law, rather than take a stance on the validity of the national act. However, it should provide the national court with all the information it needs to undertake that assessment by itself.Footnote 96 When, based on such guidance – which is binding on all courts in the EU – the national court concludes that a legal act at national level breaches EU law, it needs to set that act aside.
The disadvantage of this procedure is that a natural or legal person already needs to be involved in a legal challenge before a national court in order to request a preliminary reference procedure. Moreover, the invalidation of the national act does not necessarily imply an invalidation of the public authority’s working method more generally, as the judicial review is limited to the particular act in question rather than being systemic in nature, which showcases the limits of ex post judicial review as compared to an ex ante preventative approach at the system level. But at least it offers natural and legal persons a legal avenue to protect the rights they derive from EU law, and to ensure that Member States fulfil their EU law obligations. In the context of Member States’ citizen surveillance, for instance, this procedure has already proven to be effective in invalidating national (and even EU) legislation that undermines the fundamental right to privacy and data protection.Footnote 97
The question then is: which EU law obligations are relevant for the context of algorithmic regulation deployed by public authorities, and can pertinently be the object of an infringement procedure or of a legal challenge before a national court? Indeed, to trigger the use of these procedures, a specific provision of EU law must be breached – hence requiring the existence of a relevant EU law provision in the first place.
There is currently no general EU regulation or directive setting out the obligations that public authorities must fulfil to comply with the rule of law when taking administrative acts, regardless of whether they do so through reliance on algorithmic systems. The functioning of public administrations has largely remained a matter of national competence, given its entwinement with the exercise of national powers. In situations where algorithmic regulation by public authorities leads to infringements of purely domestic law rather than EU law, the procedures mentioned above cannot be invoked and national remedies – to the extent available – need to be relied on instead. That said, in a wide array of public sector domains, a link to EU law can nevertheless be established given the increasing harmonisation of national law in fields that influence public administration, thus rendering purely domestic situations ever more rare. The EU has, for instance, adopted legislation in the areas of migration law,Footnote 98 social security,Footnote 99 public procurement,Footnote 100 environmental protection,Footnote 101 international transport and border control,Footnote 102 data (re)useFootnote 103 and criminal justice.Footnote 104
Whenever Member States implement EU law, they must respect fundamental rights as enshrined in the CharterFootnote 105 (including, for instance, the right of defence, the presumption of innocence and the right to good administration) as well as general principles of EU law.Footnote 106 Therefore, if public authorities rely on algorithmic regulation while implementing EU law in a way that breaches individuals’ fundamental rights or general principles of EU law – for instance the general principle of equality, due to the discriminatory design or use of an algorithmic system – this constitutes the breach of a Member State’s obligation that can become the object of an infringement action or preliminary reference procedure. Yet when does a public authority implement EU law?
The most obvious case concerns the situation in which Member States’ public authorities act based on an EU regulation, directive or other legal act with binding force. To provide an example of an area of administrative law that has been (partially) harmonised, consider the domain of migration law, and more specifically the right to asylum. This right is enshrined in Article 18 CFREU, and has been further specified by secondary EU legislation that sets out which obligations Member States should respect vis-à-vis asylum applicants, and how they should evaluate an asylum application.Footnote 107 Accordingly, when a mistranslation from law to code (whether deliberate or inadvertent) occurs in the context of algorithmic regulation that helps assess asylum applications, this can give rise to an EU law infringement, both in terms of the public authority’s failure to comply with the EU regulation or directive, and in terms of a potential breach of a fundamental right or general principle of EU law.
Besides harmonisation in vertical domains, there are also relevant pieces of legislation of a horizontal nature. Consider, for instance, Council Directive 2000/43, which prohibits discrimination based on racial or ethnic origin in a number of areas such as social security and healthcare, thus setting out obligations that Member States must comply with.Footnote 108 When algorithmic regulation affects the rights of individuals based on their ethnicity, this can hence constitute a breach of EU law when the administrative act pertains to social security, the provision of healthcare, or another area of administration listed in the directive. Moreover, beyond situations of implementing EU legislation, the case law of the CJEU has indicated that acts “that constitute derogations from provisions of EU law, or acts adopted by the national authorities that only remotely are connected with EU law”,Footnote 109 can also fall under the heading of ‘EU law implementation’.Footnote 110
Finally, some authors have argued that Article 2 TEU could in and of itself be understood as the basis for an EU law obligation based on which an infringement action can be launched in case of non-adherence to the rule of law.Footnote 111 For instance, Scheppele, Kochenov and Grabowska-Moroz have argued that Article 2 TEU could be relied upon to group isolated yet systemic infringements of EU law and, on that basis, to launch an infringement action, denoting this approach as a ‘tool of militant democracy’ through the launch of ‘systemic infringement procedures’.Footnote 112 They believe that this approach could enable the Court, which has already shown itself to be an innovative protector of the principle of judicial independence by relying on Article 19(1) TEU, to play a more significant role in protecting other rule of law principles, which Article 7 TEU currently cannot do due to the abovementioned political impasse. However, this understanding of Article 2 TEU is controversial, and as the authors themselves have noted, it is not widely supported.Footnote 113 Accordingly, the majority of scholars seem to consider that in order to be the object of an infringement procedure, the legal provision in question and the precise obligation it imposes on Member States need to be more concrete, and cannot be brought under a collective Article 2 TEU umbrella.
That said, in December 2022, the Commission for the first time launched an infringement action against Hungary for rule of law-related breaches which not only cites violations of secondary EU legislation, but also the direct violation of Article 2 TEU itself.Footnote 114 Regardless of the outcome of this case, Bonelli and Claes point out that “the autonomous enforceability of Article 2 TEU remains a controversial legal construction, one that, if accepted by the Court, could put its legitimacy and authority under strain”.Footnote 115 They also rightfully question whether “the full judicialization of questions of ‘values’” is truly “the most promising and effective response to the challenges that constitutional backsliding processes create”.Footnote 116
Furthermore, notwithstanding the increasing harmonisation of national legislation, it should be stressed that a link with EU law cannot always be established, as there are still situations that are purely governed by domestic law. As I will discuss below,Footnote 117 this increases the importance of any (new) EU legislative act that can provide safeguards for citizens whenever Member States deploy algorithmic regulation, as such an act can legally create a link with EU law (and hence with EU remedies) even in situations that are in principle considered domestic. Indeed, adopting EU legislation that governs the responsible use of algorithmic regulation by Member States in line with the rule of law would open up an avenue for the enforcement of these provisions both through the infringement procedure of Articles 258–260 TFEU, and through legal challenges brought by individuals before national courts (with the associated preliminary reference procedure pursuant to Article 267 TFEU).
At the same time, the utility of these remedies, even in situations where a concrete obligation under EU law exists, must not be overstated. While the judicial review they enable can certainly play a role in the protection of the rule of (EU) law at Member State level, this is woefully insufficient to tackle the adverse effects of algorithmic regulation as identified in Chapter 4.
First, the ex post nature of these procedures means that the damage has already been done. As noted previously, if the damage is irreversible, any ex post remedy will be of little consolation to those adversely affected. If the damage is not (entirely) irreversible, a lot of time will, in any event, inevitably pass between the occurrence of the damage and the judicial action. In the case of an infringement action, it typically takes the European Commission (which can decide to launch an action at its sole discretion) months if not years to collect sufficient evidence and arrive at a decision to initiate proceedings, if it decides to act at all.Footnote 118 In the case of a legal challenge brought by an individual before a national court, the speed of the potential remedy will depend on how swiftly the administration of justice in a particular country is organised. Moreover, the clock starts running not from the moment that problematic algorithmic regulation is used, but from the moment that someone becomes aware of such use and decides to bring a case.Footnote 119 By the time a condemning judgment arrives, significant harm can have occurred, and it may be too late to meaningfully remedy the situation. In addition, if the national court decides to stay proceedings to submit a preliminary reference to the CJEU, the waiting time is extended by, on average, at least another year.Footnote 120 I invite the reader to reflect on how much damage an infrastructure of algorithmic regulation, and the mass-decision-making it enables regarding millions of individuals, can cause in the time span of one year if left unaddressed.
Second, for an individual to bring a case before a court, she must not only be aware that algorithmic regulation is used (this is not straightforward given that many algorithmic applications are used in an opaque mannerFootnote 121) but she must also have a sufficient incentive to start litigation.Footnote 122 If the damage at individual level is relatively small, the person may be unwilling to invest the time and costs to do so, even if the damage at societal level may be significant. Furthermore, in many jurisdictions one needs to be able to demonstrate individual harm to have standing in court, which may not always be easy to prove when the harm primarily manifests itself at societal level or over the longer term, rather than at individual level and in the short term.Footnote 123 And if individual harm can be proven (for instance when a right was erroneously denied), we have seen that courts may not always be well-equipped to deal with the systemic problems raised by the scaled use of algorithmic systems, as they are typically tasked with case-by-case reviews only.Footnote 124 This means that the upstream design choices will remain untouched. The person may have the benefits that were wrongfully denied to her restored, but this does not necessarily mean that choices at the upstream level will become transparent and contestable, or that harm to other interests will be avoided.
Finally, to invoke the procedures discussed above, a link with EU law must first be argued. As explained, while such a link can often be found, there are also situations that may not be governed by current EU law, hence precluding reliance on an existing EU remedy. Moreover, even if such a link is present, in certain Member States, the executive power is already exercising undue influence on national courts, and has compromised their independence and impartiality.Footnote 125 In those jurisdictions, which already showcase authoritarian tendencies, one can question the effectiveness of national judicial review as a means to prevent the exacerbation of those very tendencies.
For all these reasons, the mechanisms discussed above are inadequate to ensure that the implementation of algorithmic regulation by public authorities occurs in accordance with the rule of law, and that it does not exacerbate the threat of algorithmic rule by law. As stressed in Chapter 4, the extent and scale of the risks associated with algorithmic regulation require preventative action, inter alia through the organisation of ex ante and continuous oversight over the crucial decisions taken in the design and deployment process of algorithmic systems, rather than mere remedial action. Yet the currently available EU mechanisms that deal with the protection of the rule of (EU) law (Article 7 TEU, the Conditionality Regulation, or Articles 258–260 TFEU and 267 TFEU) are, in my view, not tailored to prevent the adverse impact of algorithmic regulation on the rule of law. This is not only so because they lack specific references to algorithmic systems as the potential cause of such adverse impact – and hence lack specific requirements as regards their use – but also because they only enable ex post solutions, which risk being too little too late. In sum, EU regulation pertaining to the protection of the rule of law does not seem to extend far enough to tackle the risks of algorithmic regulation.
The question is now whether EU regulation that pertains to algorithmic systems can play a meaningful role in the protection of the rule of law. While there is, as of yet, no piece of EU legislation that deals specifically with public authorities’ rule of law obligations in the context of algorithmic regulation, there are several regulations that apply to public and private organisations alike when they inform or take their decisions with the assistance of algorithmic systems.Footnote 126 For reasons of time and space, I will focus on the two most relevant ones for the purpose of my investigation: the General Data Protection Regulation (and neighbouring Law Enforcement Directive), which I discuss in Section 5.3, and the Artificial Intelligence Act, which I discuss in Section 5.4.
5.3 Regulation Pertaining to Personal Data: The GDPR
Few pieces of legislation are as relevant for the use of algorithmic regulation today as the General Data Protection Regulation (GDPR)Footnote 127 – and, in the context of criminal matters, the Law Enforcement Directive (LED).Footnote 128 The regime provided in these (highly similar) pieces of legislation shields individuals against infringements of their fundamental right to personal data protection,Footnote 129 by granting them protective rights and imposing obligations on organisationsFootnote 130 that process their personal data.Footnote 131 Since many of the algorithmic systems used to inform or adopt administrative acts process personal data, these rights and obligations play an important role in this domain. The scope of application of the GDPR and the LED is, however, wider than algorithmic regulation, as they apply to “the processing of personal data wholly or partly by automated means”.Footnote 132 These legal instruments set out a dense legal framework and, in what follows, I will only point out some of its most relevant features.
Pursuant to their obligations under these legal instruments, when public authorities deploy algorithmic systems to inform or adopt administrative acts and in the course thereof process personal data, such data must be processed in line with a number of principles, including the principles of lawfulness, fairness and transparency; purpose limitation; data minimisation; accuracy; storage limitation; integrity and confidentiality; and accountability.Footnote 133 Moreover, the lawfulness of the data processing relies on the availability of a legal basis which, in the context of public authorities, will often come down to the fact that “processing is necessary for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller”.Footnote 134
5.3.1 Need for a Legal Basis
To legitimise such data processing, Member States should in principle adopt a law that sets out the purpose of the processing, and that contains more specific provisions about the types of data that are processed; the data subjects concerned; the entities to, and the purposes for which, the personal data may be disclosed; the purpose limitation; storage periods; and any processing operations and procedures.Footnote 135 This is also why the implementation of algorithmic systems by public authorities must have a lawful basis, which typically requires the adoption of a specific law that authorises the use of the system in line with the provisions of the GDPR (or the LED). The legal basis should indicate that the introduction of the algorithmic system, which can be invasive and impactful given the data processed and the scale at which it is deployed, is necessary and proportionate.
Importantly, the existence of a legal basis also renders it possible to subsequently challenge the system’s use if the basis on which it was adopted does not provide sufficient protection against potential adverse impacts of the system, or is not in accordance with human rights law. This is precisely what happened in the Dutch SyRI case, which centred on an algorithmic system aimed at identifying natural and legal persons that, based on a set of risk-indicators, ought to be further examined for social security or tax fraud.Footnote 136 According to the government, the purpose of the system concerned “the prevention of and combating the unlawful use of government funds and government schemes in the area of social security and income-dependent schemes, preventing and combating taxes and social security fraud and non-compliance with labour laws”.Footnote 137 The system allowed for the exchange of data amongst a variety of public authorities to facilitate the identification of fraud. Based on the system’s risk-indications, a risk report was made concerning an individual’s fraud potential. The SyRI-law defined a ‘risk report’ as
the provision of individualised information from [SyRI] containing a finding of an increased risk of unlawful use of government funds or government schemes in the area of social security and income-dependent schemes, taxes and social security fraud or non-compliance with labour laws by a natural person or legal person, and of which the risk analysis, consisting of coherently presented data from [SyRI], forms part.Footnote 138
The system had already been in use for years when, in 2014, a law, known as the SyRI-law, was finally adopted to provide a legal basis for its deployment. When the law was challenged in court, it was, however, found to be an insufficient legal basis to justify the system’s use. The court concluded that it did not provide sufficient safeguards to protect individuals against the impact on their right to privacy, after which the government halted the system’s use. For instance, the risk reports established by the system were put into a register.Footnote 139 This left a clear trace visible to other public authorities, which risked stigmatising the individuals concerned, even if the flagging turned out to be erroneous. Moreover, the persons concerned were not informed of the fact that a risk report had been made about them (unless they specifically requested this information themselves).Footnote 140 And while the law also foresaw that public authorities had to justify the necessity and proportionality of the data exchange for the purpose of the risk analysis, no safeguards were included to ensure an adequate and comprehensive review of those justifications.
This case demonstrates that the GDPR is able to offer individuals protection by ensuring that the use of algorithmic systems needs to have an adequate legal basis. At the same time, it should be noted that the provisions of the GDPR, as such, do not necessarily provide insight into (let alone participation in) any of the upstream decisions made by the coders, such as the assumptions that underlie the system, the interpretation and translation choices, or the model’s optimisation function. Moreover, in many cases, the legal basis can be overly vague or provide overly extensive processing powers so as to avoid having to amend the law whenever new processing activities take place, thus also undermining the protection it offers to those subjected thereto.Footnote 141 Note that the processing of special categories of personal data, such as data revealing racial or ethnic origin, political opinions, biometric data for the purpose of uniquely identifying a natural person, or data concerning a person’s sexual orientation, is in principle prohibited, yet exceptions exist,Footnote 142 and public authorities can typically rely on them to exercise their functions if such processing is authorised by law.Footnote 143
5.3.2 Automated Decision-Making
The GDPR and LED grant certain rights to persons whose personal data is processed, such as the right to information about the data processing, the right of access to their data, the right to rectify their data and the right to erasure.Footnote 144 In addition to these general rights, natural persons also have specific rights in the context of automated decision-making. It should be pointed out that, for the purposes of the GDPR, automated decision-making is much broader than the adoption of an administrative act: it covers any type of decision taken through automated means, including profiling.Footnote 145 Accordingly, all intermediate decisions that are taken by automated means to inform an administrative act (for instance the automated classification of individuals into one category or another) also count as such.Footnote 146 Pursuant to Article 22 of the GDPR,Footnote 147 whenever a decision is being taken about an individual based solely on automated processingFootnote 148 which produces legal effects concerning her, or similarly significantly affects her, that individual has the right not to be subject to such a decision.Footnote 149 While inclusion of the word ‘solely’ seemingly excludes situations where a decision is merely recommended by an algorithmic system and subsequently reviewed and adopted by a human being, such review should in principle be meaningful in order to warrant the exclusion. Merely “fabricating” human involvement will not do.Footnote 150
Moreover, under those circumstances, the data controllerFootnote 151 has the obligation to provide the individual concerned with information about the existence of automated decision-making and “meaningful information about the logic involved”, as well as the significance and the envisaged consequences of such processing for her.Footnote 152 Recital 71 of the GDPR also states that “such processing should be subject to suitable safeguards, which should include specific information to the data subject and the right to obtain human intervention, to express his or her point of view, to obtain an explanation of the decision reached after such assessment and to challenge the decision”. Yet given that recitals are non-binding, scholars disagree as to whether an actual individualised ‘right to explanation’ of an automated decision exists.Footnote 153
These rights, along with other rights listed in the GDPR, can be restricted by Member State law as long as this restriction “respects the essence of the fundamental rights and freedoms and is a necessary and proportionate measure in a democratic society to safeguard” a range of interests, including national and public security, the prevention of criminal offences, and even – rather generally – “other important objectives of general public interest of a Member State”.Footnote 154 In that case, the Member State law should however contain provisions that describe the data processing activity and its purpose, as well as the safeguards to prevent abuse.Footnote 155
Also noteworthy is the fact that, whenever a type of processing (“in particular using new technologies”) is likely to result in a high risk to the rights and freedoms of natural persons, the controller shall, prior to the processing, carry out an assessment of the impact of the envisaged processing operations on the protection of personal data, referred to as a Data Protection Impact Assessment (DPIA).Footnote 156 Pursuant to Article 35(3)(a) of the GDPR, a DPIA is particularly required in the case of “a systematic and extensive evaluation of personal aspects relating to natural persons which is based on automated processing, including profiling, and on which decisions are based that produce legal effects concerning the natural person or similarly significantly affect the natural person”.
Data controllers are, however, not obliged to make this assessment public, and input from the individuals concerned need only be sought “where appropriate” and “without prejudice to the protection of public interests or the security of processing operations”.Footnote 157 Finally, to oversee compliance with these rights and obligations, the GDPR requires Member States to establish national data protection authorities, each acting “with complete independence in performing its tasks and exercising its powers”, which is especially important when supervising the data processing activities of public authorities.Footnote 158
5.3.3 Evaluation: Necessary but Not Sufficient
With this in mind, to what extent can the safeguards afforded by the GDPR help counter the adverse impact of algorithmic regulation on the rule of law? Unfortunately, the conclusion is not overly optimistic. Certainly, the GDPR establishes a critical and necessary set of EU obligations that Member States should respect (which can also become the object of a procedure under Articles 258–260 TFEU or Article 267 TFEU) and provides individuals with important rights they can directly invoke in a national court against public authorities. And since algorithmic regulation very often involves personal data processing, those rights and obligations are unquestionably relevant in this context. That said, these safeguards cannot be called comprehensive.
As observed by Maja Brkan, the provision that aims to protect individuals against the adverse effects of automated decision-making, by containing “numerous limitations and exceptions, looks rather like a Swiss cheese with giant holes in it”.Footnote 159 While individuals should be informed of the fact that decisions are being made about them in an automated way, recall that the right not to be subjected to such decisions is only present in case of a ‘solely’ automated decision with ‘legal effects’ or similar, that exceptions exist in the ‘public interest’ and that no transparency is foreseen about the algorithm’s functioning and the normative assumptions that underpin its design. In general, the GDPR contains no mechanisms that enable prior oversight over the upstream design of the algorithmic system (for instance to ensure its outcomes are accurate and non-biased); no obligatory mechanisms of public participation; no constitutional checks and balances as regards the translation from law to code; and no mechanisms that foster friction and internal critical reflection, for instance by mandating meaningful human oversight.
This is not to say that the GDPR and the LED, along with primary EU law protecting the rights to privacy and data protection, cannot play a role in ensuring that governments process the personal data of their citizens in a responsible manner, as previous legal challenges have demonstrated.Footnote 160 For instance, in June 2022, the CJEU ruled in a preliminary reference procedure regarding the Passenger Name Record (PNR) Directive and the Belgian law implementing it.Footnote 161 The case concerned the automated processing of passenger data by public authorities in the context of border control, enabled by the establishment of large databases to search and identify passengers involved in a terrorist offence or serious crime. While the Court found that the Directive poses an “undeniably serious interference” with the rights to privacy and data protection,Footnote 162 it nevertheless concluded it was compatible with the Charter, as it required the predetermination of the criteria based on which the database could be searched, and required these criteria to be non-discriminatory.Footnote 163 In the same breath, however, the Court noted that this “requirement precludes the use of artificial intelligence technology in self-learning systems (‘machine learning’), capable of modifying without human intervention or review the assessment process and, in particular, the assessment criteria on which the result of the application of that process is based as well as the weighting of those criteria.”Footnote 164 It added that
given the opacity which characterises the way in which artificial intelligence technology works, it might be impossible to understand the reason why a given program arrived at a positive match. In those circumstances, use of such technology may deprive the data subjects also of their right to an effective judicial remedy enshrined in Article 47 of the Charter, for which the PNR Directive, according to recital 28 thereof, seeks to ensure a high level of protection, in particular in order to challenge the non-discriminatory nature of the results obtained.Footnote 165
Accordingly, the PNR judgment affirms the important role that EU privacy legislation can play in the context of algorithmic regulation, and the importance of transparency about the way in which individuals’ data are being processed.Footnote 166
At the same time, the lack of ex ante oversight mechanisms means that external accountability remains largely confined to an ex post stage, after the damage has already occurred. Moreover, the focus lies primarily on harm to individual rather than societal interests, such as the rule of law. It does not touch upon the intricacies of translating legal rules to code, nor does it provide more systemic remedies and oversight.Footnote 167 In sum, these legal instruments do not provide sufficient safeguards to counter the threat of algorithmic rule by law, or even properly to counter the risks raised by algorithmic systems more generally. While this conclusion may be rather glum, it was reached not only by other scholars, but also by the European Commission itself, which in February 2020, when it published its White Paper on AI,Footnote 168 observed that the current framework, including the GDPR, insufficiently protects people against the adverse impact of algorithmic systems.Footnote 169 Consequently, in April 2021, it put forward a proposal for an AI Act in order to fill these legal gaps. I will therefore examine this Act next.
5.4 Regulation Pertaining to Algorithmic Systems: The AI Act
The AI Act has been heralded as the first comprehensive regulation of AI systems in the world, aiming to tackle risks to people’s health, safety and fundamental rights in a horizontal manner – including in the public sector, thus meriting a more extended discussion in this book. While the European Union has certainly not been the only jurisdiction partaking in the global race to AI regulation, other jurisdictions have thus far primarily focused their legislative efforts on sector- or application-specific regulations.Footnote 170
After three years of extensive negotiations, and numerous amendmentsFootnote 171 suggested by the Council and the European Parliament (respectively in December 2022Footnote 172 and in June 2023Footnote 173), the new regulation was formally adopted in 2024.Footnote 174 Much ink had already been spilled over the AI Act’s merits and flaws long before its adoption. Yet the question I am interested in here concerns the extent to which the AI Act’s provisions are able to tackle the threat of algorithmic rule by law. Its novelty rendered it an ideal vehicle to introduce new legal safeguards to address the many concerns identified by scholars and civil society organisations over the past few years, and to bridge the gaps left open by the GDPR. However, does the new regulation fulfil this expectation?
To answer this question, I will respectively analyse the AI Act’s scope and rationale (Section 5.4.1), its regulatory architecture (Section 5.4.2), the set of systems used by public authorities that fall under its provisions (Section 5.4.3), the requirements to which high-risk algorithmic regulation systems are subjected (Section 5.4.4), and the repercussions of its maximum-harmonisation approach (Section 5.4.5). Drawing on that analysis, I will assess the regulatory potential of the AI Act and conclude that it fails to provide a sufficient level of protection. Moreover, I argue that, despite the critique thereof provided in Chapter 4, the AI Act effectively reinstates ‘techno-supremacy’ through its legal infrastructure, resulting in a relatively grim overall evaluation of this new regulation (Section 5.4.6).
5.4.1 The AI Act’s Goals and Scope
5.4.1.a The AI Act’s Origins
To understand the regulation’s rationale, it is useful to briefly revisit its history. In essence, the AI Act builds on the work of the High-Level Expert Group on Artificial Intelligence, set up by the European Commission in June 2018 with the aim of drafting AI Ethics GuidelinesFootnote 175 and Policy Recommendations.Footnote 176 At that time, one month after the GDPR became applicable and modernised European privacy law, new legislation on the use of algorithmic systems seemed unnecessary, as it was the prevailing opinion at the Commission that existing rules already sufficed to protect individual and societal interests. Gradually, this stance changed, with the rise of both internal and external pressure to take action beyond the promotion of non-binding guidelines. Moreover, when submitting its deliverables in the spring of 2019, the High-Level Expert Group concluded that new legislation was needed to fill existing legal gaps, claiming that the risks posed by certain systems required stronger safeguards.Footnote 177 Concretely, the Expert Group proposed a risk-based approachFootnote 178 to regulate AI,Footnote 179 combined with a principle-based approach that avoids over-prescriptiveness, and a precautionary approach “when the stakes are high”, highlighting hazards to some of the EU’s core values, such as human health, the environment and the democratic process.Footnote 180 It also noted that questions about which kinds of risks are deemed unacceptable “must be deliberated and decided upon by the community at large through open, transparent and accountable deliberation”.Footnote 181 As regards the public sector in particular, the group stressed that safeguards were needed to protect “individuals’ fundamental rights, democracy and the rule of law”,Footnote 182 the alignment of which was more generally emphasised across its deliverables.Footnote 183
The Commission listened in part. It started mapping the legal gaps left open by existing pieces of legislation and, in February 2020, it outlined a blueprint for new AI-specific legislation through its White Paper on Artificial Intelligence.Footnote 184 After inviting feedback through a public consultation, the Commission put forward its proposal for an AI Act in April 2021, with clear references to the Expert Group’s work. The Expert Group’s Ethics Guidelines for ‘Trustworthy AI’ (a term coined by the experts to denote systems that are lawful, ethical and robust) listed seven key requirements that should be met throughout the life cycle of AI systems, based on fundamental rights.Footnote 185 In the Explanatory Memorandum of the AI Act, the Commission presented its proposal as providing “a legal framework for trustworthy AI”Footnote 186 and translated these key requirements into a series of legal requirements that should be met whenever AI systems are put into use or placed on the market.Footnote 187
5.4.1.b Objectives and Legal Basis
The AI Act has the dual aim of harmonising Member States’ national legislation to eliminate potential obstacles to trade on the internal market, and protecting the health, safety and fundamental rights of individuals against AI’s adverse effects – in that order. Indeed, as indicated by the AI Act’s very first recital, and as the below discussion will highlight, the creation of an internal market for the free circulation of AI and the promotion of its uptake is the regulation’s primary aim, with the protection of fundamental rights and other values being something to keep in mind while doing so.Footnote 188 Note that, despite the references thereto in the Expert Group’s deliverables, the protection of the rule of law was not mentioned as an objective in the Commission’s original proposal (an omission that scholars have criticised)Footnote 189 and is also barely mentioned in the AI Act’s final version.Footnote 190
Clearly, the regulation primarily pursues an internal market-oriented approach rather than a values-oriented one, in line with its underlying legal basis. Indeed, the Commission opted to rely on Article 114 TFEU (enabling the establishment and functioning of the internal market) as the Regulation’s legal basis. This is not surprising, as the EU lacks a general legal basis to regulate (technology’s adverse impact on) fundamental rights, democracy and the rule of law, and frequently relies on Article 114 TFEU to advance the protection of the interests for which it has no specific competence.Footnote 191 Had the emphasis of the Regulation been on the protection of those values, Article 352 TFEU would arguably have been a more appropriate legal basis, as this Article allows the EU to adopt an act necessary to attain objectives laid down by the treaties whenever the treaties do not provide the necessary powers of action for this purpose.Footnote 192 Reliance on this legal basis, however, requires unanimity in the Council and is hence typically avoided.Footnote 193
By definition, a market-oriented legal basis entails certain limitations, as it renders regulatory intervention in the public sector (especially as regards law enforcement activities, public administration or the justice system) more difficult to justify. The legislator therefore also added Article 16 TFEU as a legal basis, yet only to the extent that, for the purpose of law enforcement, it contains specific rules on the protection of individuals’ personal data which concern restrictions of “the use of AI systems for remote biometric identification”, “the use of AI systems for risk assessments of natural persons” and “the use of AI systems of biometric categorisation”.Footnote 194 One can still question whether the combination of Article 114 TFEU and, very limitedly, Article 16 TFEU constitutes a sufficient legal basis to extend the AI Act to the use of algorithmic systems by public administrations, yet I will not be delving further into this question here. For the remainder of my analysis, I will therefore proceed under the assumption that the AI Act’s legal basis is valid.
5.4.1.c AI’s Definition
Before turning to the regulatory framework and content of the AI Act, let me make a brief note on how it defines AI. As previously stressed, the definition of artificial intelligence constitutes an important battleground as it sets out the contours of the technological applications that fall under the law’s scope, thereby also determining its regulatory relevance.Footnote 195 I shall not repeat here the definition’s history which I discussed in Section 2.1.5, but merely zoom in on the final version of the definition. To recap, AI is defined as a “machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.Footnote 196
Recital 12 of the AI Act clarifies that this covers knowledge- and data-driven methods alike. However, more narrowly than in the original proposal, it excludes from this definition “simpler traditional software systems or programming approaches” and “systems that are based on the rules defined solely by natural persons to automatically execute operations”. It remains to be seen how this definition will be interpreted by the AI Act’s implementers, but the recital raises doubts as to whether the algorithmic systems that are the subject of this book all fall under the AI Act’s scope.Footnote 197 Throughout this section, I will assume it can reasonably be argued that they do, and use the term ‘AI’ interchangeably with algorithmic systems used for algorithmic regulation.
I do wish to stress, however, that this narrowing of AI’s definition is rather unfortunate, as it creates the risk that harmful algorithmic systems can escape the AI Act’s requirements through definitional gaps. This limitation is also unnecessary. Indeed, a broad definition of AI could easily have been maintained, since the systems that fall under the scope of the Act’s requirements are not delineated by this definition alone, but also by the regulation’s specific provisions that categorise AI systems and impose different obligations per category. If the legislator’s main focus had been the values it seeks to protect and the harmful conduct it wishes to avoid, it would not have mattered as much through which underlying algorithmic technique such harm occurred.
Let me also point out that the AI Act introduces a definition of general-purpose AI models, or models that are trained with a large amount of data using self-supervision at scale, that display significant generality and that are capable of “competently performing a wide range of distinct tasks regardless of the way the model is placed on the market”, and that can also be integrated into a variety of downstream systems or applications.Footnote 198
Finally, the regulation also introduces certain exceptions. AI systems and models that are developed and put into service for the sole purpose of scientific research and development fall outside its scope, as do systems that are used for national security, military and defence purposes. An exception also exists for AI systems that are released under free and open-source licences, unless they are placed on the market or put into service for a purpose that falls under one of the AI Act’s explicit categories, which I will discuss next.Footnote 199
5.4.2 The AI Act’s Regulatory Architecture
There are many different ways of regulating (human behaviour related to) AI systems.Footnote 200 The drafters of the AI Act drew their inspiration from product (safety) legislationFootnote 201 rather than, for instance, legislation dealing with the protection of fundamental rights. The regulation hence treats AI as a product or service that must adhere to certain (primarily technical) requirements, meticulously set out in the regulation. The High-Level Expert Group’s recommendation to adopt a principle-based approach to AI’s regulation rather than an overly prescriptive oneFootnote 202 has thus not been taken up. The legislator did take up the group’s suggestion for a risk-based approach, distinguishing different categories of AI systems based on the extent of riskFootnote 203 they pose to health, safety and fundamental rights,Footnote 204 and imposing different obligations for each risk category. The regulation’s emphasis on obligations for AI providers and deployers, with only a very limited number of new rights for those subjected to the systems, further reflects its market-oriented vision.
The AI Act distinguishes fiveFootnote 205 categories:
(1) AI practices that pose an unacceptable level of risk and that are hence prohibited (with exceptions) (chapter II of the Act);
(2) AI systems that must comply with a set of requirements due to posing a high risk to health, safety or fundamental rights, and that must undergo a conformity assessment prior to their use or placement on the market (chapter III of the AI Act). These consist of two subcategories: a) AI systems that are (incorporated into products that are) already subjected to existing product safety legislation (Annex I of the AI Act); and (b) ‘stand-alone’ AI systems (Annex III of the AI Act);
(3) AI systems subjected to additional transparency obligations due to their risk of deceit or intrusiveness (chapter IV of the AI Act);
(4) General-purpose AI models, including a sub-category of models that pose a systemic risk due to their scale and capabilities (chapter V of the AI Act); and
(5) AI systems that are considered to pose only a minimal or no risk.
The last one is a residual category, including all systems and practices that are not explicitly listed under one of the other categories. These systems are not subjected to any new requirements, but can become the object of voluntary codes of conduct and guidelines (chapter X of the AI Act). AI systems that fall under the first four categories are described and listed either directly in the AI Act’s text or – in the case of high-risk systems – in annexes that can be updated over time.Footnote 206 The AI Act’s drafters hence coupled a risk-based approach with a list-based approach, to which I will come back later.
Categories and their requirements can overlap. Some systems can, for instance, be subject both to the requirements imposed on high-risk systems and to the additional transparency obligations. AI providers and deployers must self-assess the category under which their AI system falls. For high-risk systems listed in Annex III, they can even self-assess whether their system – though listed in the annex – is nevertheless exempt from the high-risk requirements if it, in their view, “does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making”.Footnote 207 Coupled with the fact that the conformity assessment of virtually all high-risk systems can be carried out by the systems’ providers themselves, this provides a wide ‘margin of appreciation’ for the very actors the AI Act is supposed to regulate – a highly contentious point to which I will return.
Before discussing which applications of algorithmic regulation fall under the AI Act’s respective categories, let me also briefly describe how the AI Act’s requirements are enforced. The enforcement architecture of the AI Act is relatively complex, and involves several actors at different levels. The main action takes place at the national level. Member States must establish or designate independent national competent authorities (a role that can also be undertaken by the data protection authority) to oversee the AI Act’s requirements.Footnote 208 These take on the roles of notifying authoritiesFootnote 209 and market surveillance authorities,Footnote 210 with the possibility for Member States to appoint different authorities for each taskFootnote 211 as long as there is a “single point of contact”.Footnote 212 The authorities’ oversight primarily takes place ex post, when an investigation reveals that an AI provider or deployer did not comply with the regulation. To coordinate the activities of the national authorities, the AI Act also establishes a European AI Board, composed of Member States’ representatives.Footnote 213
While the Commission’s initial proposal was limited to the above, both the Council and the Parliament underlined the need for a stronger enforcement role at the level of the EU, especially for systems that affect the EU population at large or that cannot easily be monitored by individual Member States. This resulted in the establishment of an AI Office, housed at the European Commission.Footnote 214 The AI Office is responsible for the enforcement of the requirements imposed on general-purpose AI models. Moreover, it provides the secretariat for the Board, convenes the Board’s meetings, and prepares its agenda.Footnote 215 To further complicate the landscape of relevant actors, Articles 67 and 68 also respectively set up an Advisory Forum and a Scientific Panel of Independent Experts. The former consists of a group of stakeholders that provide expertise and advice to the Board and the Commission.Footnote 216 The latter consists of experts who advise and support the AI Office in its enforcement tasks,Footnote 217 for instance by alerting it to possible systemic risks posed by general-purpose AI models or by developing methodologies to evaluate their capabilities.
Finally, let me point out three more characteristics of the AI Act’s architecture. First, to expedite compliance monitoring of the Regulation’s obligations, the AI Act establishes an EU-wide database, managed by the European Commission,Footnote 218 in which certain providers and deployers of AI systems need to register some basic information. Second, given the importance the EU attaches to AI-enabled innovation, the AI Act also provides measures ‘in support of innovation’, by setting up regulatory sandboxes in every Member State.Footnote 219 Third, compliance with the AI Act’s requirements is facilitated by the establishment of harmonised standards (or common specifications).Footnote 220 Conformity with these standards creates a presumption of conformity with the AI Act, which means that, in practice, the standards’ interpretation of the AI Act’s requirements can become the de facto authoritative reading of the regulation. This approach is not without criticism, as European Standardisation Organisations are primarily populated by industry actors and technical experts, with little participation from civil society organisations and experts with an ethical or legal background.Footnote 221 More generally, one can also question whether requirements that are meant to ensure AI systems’ alignment with fundamental rights can ever be captured by a set of technical standards.Footnote 222
In what follows, let me now zoom in on the AI Act’s merits and pitfalls specifically in the context of algorithmic regulation, and the risks it poses to the rule of law.
5.4.3 Algorithmic Regulation in the AI Act
To what extent is algorithmic regulation as defined in this book – namely reliance on algorithmic systems to inform or take administrative acts – covered by the AI Act? At the outset, it can be noted that the AI Act does not fundamentally distinguish between systems used by the private and the public sector when it comes to the requirements it imposes on AI systems. In most cases, these requirements are the same, regardless of whether a system is deployed by a private or a public actor. There are two notable exceptions: the obligation for public authority deployers to register the high-risk systems they use in the EU database, and the obligation to carry out a fundamental rights impact assessment pursuant to Article 27 of the AI Act.Footnote 223 That said, applications of algorithmic regulation can be found in all of the AI Act’s categories, either explicitly (when a categorised system serves to inform or adopt administrative acts) or implicitly (when a categorised system can serve this purpose).
5.4.3.a Prohibited Practices
Article 5 of the AI Act enumerates eight practices that are prohibited in light of the unacceptable risk they pose.Footnote 224 Focusing only on the practices that are most relevant for the public sector, the AI Act prohibits generalised social scoring to evaluate or classify people based on their social behaviour (or based on their known, inferred or predicted characteristics), though only if it leads to their detrimental or unfavourable treatment in social contexts that are unrelated to those in which the data was collected, or to an unjustified or disproportionate treatment.Footnote 225 It also prohibits the use of AI systems to carry out risk assessments of natural persons that assess or predict the risk of a criminal offence based solely on the person’s profiling or the assessment of her personality traits and characteristics. However, the prohibition does not apply to systems used “to support the human assessment of the involvement of a person in a criminal activity, which is already based on objective and verifiable facts directly linked to a criminal activity”.Footnote 226
Public and private actors are also not allowed to use AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage,Footnote 227 or to use biometric categorisation systems that individually categorise natural persons based on their biometric data to infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation. They can, however, still use the latter systems to infer other traits (as long as they comply with the GDPR and the LED). Moreover, this prohibition does not cover the “labelling or filtering of lawfully acquired biometric datasets, such as images, based on biometric data or categorizing of biometric data in the area of law enforcement”, which is hence exempt therefrom.Footnote 228
No prohibition is foreseen for public authorities’ use of emotion recognition systems,Footnote 229 despite the fact that their use is scientifically so unsound that it is nearly impossible to come up with a single legitimate purpose for authorities to rely thereon.Footnote 230 Moreover, the prohibition on the use of facial recognition (or biometric identification systems) is so limited that it hardly merits this designation. It only applies to ‘remote’ and ‘real-time’ biometric identification, only in publicly accessible spaces, and only for the purposes of law enforcement (and thus not for the purposes of e.g. border control or other areas of public administration).Footnote 231 Furthermore, even this limited prohibition is subject to exceptions, as remote biometric identification systems can still be used for the targeted search for victims or missing persons, the prevention of certain imminent threats, and the localisation or identification of (even) suspects of some criminal offences.Footnote 232
Undoubtedly, this list of practices (very limited and full of exceptions) risks being incomplete, both in terms of problematic AI practices that already exist today, and practices that may emerge in the next few years. Article 5 will be subject to a periodic assessment by the Commission of the need for amendments, which will be submitted to the Parliament and the Council. However, the only way to amend this list in practice would be to re-subject the regulation to the ordinary legislative procedure, which is unlikely to occur in the near future.
5.4.3.b Systems Requiring Additional Transparency
Three types of systems are subject to additional transparency obligations, which in essence entail the mandatory disclosure that a person is subjected to such a system. First, systems that directly interact with natural persons – such as chatbots, which are increasingly used by public authorities to ‘more efficiently’ provide citizens with information – must be developed in such a way that people are informed they are interacting with an AI system.Footnote 233 Second, whenever public authorities deploy algorithmic systems for the purpose of emotion recognition or biometric categorisation, they are likewise required to inform individuals of the fact that they are being subjected thereto.Footnote 234 Last, the AI Act also imposes disclosure obligations on providers and deployers of systems that generate synthetic content or deepfakes, whether it concerns audio, image, video or text.Footnote 235
Note how, in line with the AI Act’s focus on product requirements, these provisions merely impose obligations on AI providers and deployers, rather than granting individuals a right to be informed. Furthermore, all of these provisions have exceptions where the system’s use “is authorised by law to detect, prevent, investigate or prosecute criminal offences”. Finally, it can be noted that these provisions do not seem to be the object of a robust targeted periodic assessment or revision process.
5.4.3.c General-Purpose AI Models
The AI Act also imposes a set of obligations on providers of general-purpose AI models. Virtually all those providers are private actors, rendering these obligations less relevant for public authorities. That said, a growing number of authorities have started using systems that incorporate general-purpose AI models – referred to as ‘general-purpose AI systems’ in the AI Act – such as chatbots that help citizens to retrieve information, answer questions or fill in forms. This means these requirements are nevertheless relevant whenever authorities procure or develop such systems, especially for a high-risk purpose.
The AI Act’s requirements for general-purpose AI models mainly concern tailored transparency measures to enable downstream AI providers that rely on those models to comply with their own obligations under the AI Act. Providers of such models must, for instance, draw up and keep up to date the model’s technical documentation, including its training and testing process and the results of its evaluation.Footnote 236 They must also adopt a policy to ensure compliance with copyright law, and make available certain information and documentation to AI providers who intend to integrate the general-purpose AI model into their system (including information that enables those providers to have a good understanding of the capabilities and limitations of the model and to comply with their obligations under the AI Act).Footnote 237 In addition, they must make publicly available “a sufficiently detailed summary about the content used for training of the general-purpose AI model, according to a template provided by the AI Office”.Footnote 238 Only the latter information is made accessible to the public at large, thus limiting the information that will be accessible for citizens who wish to learn more about the models underlying the systems used by public authorities.
As a reflection of the legislator’s risk-based approach, providers of general-purpose AI models that pose a ‘systemic risk’ are subject to additional obligations. These models must be notified to the Commission in order to be designated as such, akin to the gatekeeper designation process established by the Digital Markets Act.Footnote 239 General-purpose AI models that pose a systemic risk are defined as having “high impact capabilities evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks”.Footnote 240 What, precisely, counts as a systemic risk is not defined. Furthermore, despite the constantly evolving nature of AI systems’ computational capabilities, the drafters of the AI Act oddly enough decided to introduce a presumption of such capabilities “when the cumulative amount of computation used for its training measured in floating point operations is greater than 10²⁵”. This rather arbitrary threshold can be amended by the Commission through delegated acts to ensure it keeps reflecting “the state of the art”.Footnote 241
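To give a sense of the order of magnitude this threshold implies, consider a rough heuristic sometimes used in the technical literature, which estimates training compute at roughly six floating point operations per model parameter per training token. The figures below are purely hypothetical and serve only to illustrate the arithmetic, not to describe any actual model:

6 × (7 × 10¹⁰ parameters) × (1.5 × 10¹³ training tokens) ≈ 6.3 × 10²⁴ FLOPs, just below 10²⁵.

On these assumptions, such a model would escape the presumption of high-impact capabilities, while merely doubling the amount of training data would push it above the threshold, which illustrates how blunt a single numerical cut-off is as a proxy for ‘systemic risk’.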
Providers of general-purpose AI models with systemic risk must also perform model evaluations (including adversarial testing), and assess and mitigate possible systemic risks at Union level. They must report relevant information about serious incidents and corrective measures to address them to the AI Office and, as appropriate, to national competent authorities, and ensure an adequate level of cybersecurity protection for the model and its physical infrastructure. Interestingly, despite the systemic risks these models pose, and their use and integration by countless downstream providers and deployers (including public authorities), none of these obligations requires independent verification of the model’s compliance with EU law prior to its use.
5.4.3.d High-Risk Systems
Let me now turn to the category that is most relevant for the context of algorithmic regulation, namely high-risk AI systems. While the requirements that apply to such systems (listed in Articles 8 to 15 AI Act) are the same regardless of whether a public or a private entity is involved, Annex III does list numerous AI systems that are solely used in the public sector, reflecting the legislator’s recognition that many such uses merit heightened attention and responsibility, given the asymmetrical power relationship between public authorities and individuals.
Annex III lists eight domains or purposes for which AI systems can be used. To be considered high-risk, a system must be explicitly listed as a use-case under one of the eight domain headings. While these lists can be updated by the Commission,Footnote 242 the headings themselves can only be altered by amending the regulation.Footnote 243 In other words, if an AI system does not fall under any of the eight listed domains, despite posing a high risk, it cannot be categorised as such without revisiting the entire legislative process.
Applications of algorithmic regulation covered by Annex III include, under the heading of ‘biometrics’, systems used for remote biometric identification, systems used for biometric categorisation based on sensitive or protected attributes, and systems used for emotion recognition.Footnote 244 The heading of ‘critical infrastructure’ is relevant too, for it includes systems “intended to be used as safety components in the management and operation of critical digital infrastructure, road traffic, or in the supply of water, gas, heating or electricity”.
Under the domain of ‘education and vocational training’, systems used to determine students’ admission, to assign them to educational institutions, and to evaluate their learning outcomes are considered high risk. Returning to the illustrations of Chapter 4, this means that a system similar to Ofqual’s algorithm in the UK or the Admission Post Bac algorithm in France would be covered. The domain also includes systems to monitor and detect prohibited behaviour of students during tests, and to assess the level of education that an individual should receive or access.Footnote 245 To the extent public authorities rely on algorithmic regulation in recruitments, promotions, dismissals and the monitoring and evaluation of public officials, this is likewise considered high risk, under the heading ‘employment, workers management and access to self-employment’.Footnote 246
Many of the other examples discussed in Chapter 4 fall under the heading ‘access to and enjoyment of essential private services and essential public services and benefits’.Footnote 247 This includes systems used to evaluate the eligibility of natural persons for essential public assistance benefits and services, as well as systems to grant, reduce, revoke or reclaim them.Footnote 248 Furthermore, systems used to dispatch or to establish priority in the dispatching of emergency first response services by police, firefighters and medical aid (including emergency healthcare) are likewise included.Footnote 249
As regards ‘law enforcement’, which has a separate heading, the annex covers an array of applications, including systems used to make an assessment of the risk of natural persons (re)offending or becoming victims of criminal offences, and systems used to profile natural persons in the course of criminal investigations. The list also includes systems that are used as polygraphs, and systems used to evaluate the reliability of evidence during investigations.Footnote 250
AI applications for ‘migration, asylum and border control management’ are grouped under a separate heading, and include inter alia systems to assess risks related to security, irregular migration and the health of individuals; systems to examine applications for asylum, visa and residence permits, and to examine complaints associated thereto; systems used to assess the reliability of evidence; and systems used to detect, recognise or identify individuals. In addition, notwithstanding their scientific unsoundness, systems used as polygraphs or to detect the emotional state of natural persons are also included in this list rather than being prohibited, despite the highly vulnerable state of the individuals subjected thereto.Footnote 251
The last heading of the annex is titled ‘administration of justice and democratic processes’.Footnote 252 This includes two sets of systems: those used by a judicial authority to assist it with researching and interpreting facts and the law and applying the law to a concrete set of facts, and those used to influence the outcome of an election or referendum (or people’s voting behaviour). Interestingly, in its position on the AI Act, the Parliament also suggested including AI systems used by or on behalf of an administrative body (and hence not only by a judicial authority) to research and interpret facts and the law, and apply the law to a concrete set of facts. This could have been an important addition, as it would have provided a broader basis to cover algorithmic regulation applications under the high-risk list. The Parliament’s suggestion, however, was rejected, leaving decisions taken by administrative authorities outside the scope of this heading.
Let me pause here for a moment and make some observations. The systems listed above are without a doubt liable to cause adverse effects on the rights and interests of individuals (and society) if not used responsibly. The fact that they are listed as ‘high risk’ and that they will be subject to mandatory requirements which need to be fulfilled before their use is hence a positive development. However, a first observation is that, as already hinted at, some of these applications can reasonably be found to pose an ‘unacceptable’ rather than a ‘high’ risk, and their use, especially in a public sector context, may merit being prohibited altogether.Footnote 253
Second, this list appears to legitimise the use of the systems it contains, as it provides that their use, though risky, is acceptable as long as the requirements attached thereto are fulfilled. Accordingly, public deliberation about whether certain of these applications should be used in the first place risks being bypassed. Some of these applications may require, in consonance with the GDPR, a separate legal basis in the form of legislation that sets out the permissible uses of the technology, which would at least enable parliamentary debate and hence some level of democratic oversight prior to their implementation.Footnote 254 This, however, may not be the case for all these systems, and in any case depends on how (well) Member States fulfil their obligations under the GDPR, and how they interpret concepts like the ‘public interest’.
Third, I have already noted that the requirements these high-risk systems must meet (discussed in more detail in the next section) are verified through a conformity self-assessment. This means the AI Act foresees no independent ex ante oversight over why or how these systems are designed and used, despite the significance of the impact they can have when deployed at scale by public authorities.Footnote 255 On the one hand, their inclusion in the high-risk annex signals the acknowledgement that the risk associated with these systems is high. On the other hand, however, independent oversight over their use and sound development is only possible ex post and, meanwhile, the system can simply be self-assessed (often by people who, as was already discussed above, have little clue about the intricacies of the application of general legal rules to specific cases).
Finally, this list-based approach to applications that constitute a high risk is deeply problematic, as it is bound to be under-inclusive, overlooking other algorithmic systems that can also have an adverse impact on individual, collective and societal interests.Footnote 256 Why did the legislator not opt to include all algorithmic systems used to inform or adopt administrative acts? Can it not be reasonably argued that these systems are by definition ‘high risk’? Or even more broadly, could one not consider including all systems that can have an adverse impact on fundamental rights, the democratic process and the rule of law? Undoubtedly, such a broader formulation would provide less legal certainty for providers and deployers subject to the AI Act. Would it not, however, offer more protection for individuals subjected to the adverse effects of algorithmic regulation? In Chapter 4, I discussed the importance of letting the law play its role. This includes embracing its inherent tensions, and the push-and-pull marriage between discretion and rules, flexibility and stability, vagueness and precision, openness and closeness. The list-based approach of the AI Act, unfortunately, risks overly emphasising the latter.Footnote 257 This is worrisome since, as noted above, a legalistic approach tends not to lead to justice, but to legalism.
What is more, during the negotiations, the EU legislator included a provision that introduces a so-called filter for high-risk systems, which enables the circumvention of the high-risk requirements if system providers can argue that – despite falling under Annex III – their particular AI application does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons. Accordingly, even if a system is classified as high risk by the AI Act, system providers can avoid the high-risk requirements if they self-assess that their system does not pose a significant risk.Footnote 258 To avoid abuse of this potential escape route, the AI Act does provide a procedure through which providers should justify their (rebuttable) exclusion from the high-risk requirements, and they still need to register their system in the EU database. Nevertheless, while the aim of this additional layer is to mitigate the over-inclusiveness of the high-risk list (yet not the under-inclusiveness), it will likely only add to the complexity of this Act’s regulatory architecture, lead to more red tape, and diminish legal certainty for all those involved.Footnote 259
For the sake of continuing my analysis, let me temporarily bracket these concerns and examine the applications of algorithmic regulation that the AI Act does designate as high risk, arguably constituting the most relevant category in this context. To what extent do the requirements imposed on such systems provide protection against their adverse impact, particularly on the rule of law?
5.4.4 High-Risk Algorithmic Regulation
5.4.4.a Requirements for High-Risk Systems
Chapter III of the AI Act sets out the requirements that high-risk applications must comply with before being placed on the market or put into service. Article 9 provides that a ‘risk management system’ be established, implemented, documented and maintained, as part of a continuous iterative process running through the entire lifecycle of the system. This compels AI providers inter alia to identify and analyse the known and reasonably foreseeable risks associated with the system; to estimate and evaluate the risks that may emerge when the system is used in accordance with its intended purpose and ‘under conditions of reasonably foreseeable misuse’; and to adopt ‘appropriate and targeted risk management measures’.Footnote 260 The systems also need to be tested to identify the most appropriate risk management measures, based on their ‘intended purpose’.Footnote 261
Reflection on and documentation of those elements is to be welcomed. At the same time, technical developers who are not trained in concepts like fundamental rights and the rule of law will hardly be able to identify risks pertaining thereto. The proper identification of such risks, which enables subsequent measures of mitigation, necessarily requires input from others, including public officials who are trained in the law’s application and who will be using the system, people with expertise in the ethical and legal impact of algorithmic systems and, most importantly, those who will be subjected to the system or can be adversely affected thereby. Unfortunately, the AI Act does not foresee the need to seek input and feedback from domain experts or from those who may be adversely impacted.
It can also be noted that Article 9(3) limits the risks that must be considered to “those which may be reasonably mitigated or eliminated through the development or design of the high-risk AI system, or the provision of adequate technical information”. One can wonder what this means for risks that cannot be mitigated through the development and design of the system or by providing technical information. Are those risks simply to be ignored? Furthermore, Article 9(5) provides that any ‘relevant residual risk’ that cannot be eliminated or mitigated, as well as the ‘overall residual risk’ of the system, must be judged ‘acceptable’. The judgment of whether or not a residual risk is acceptable hence resides solely with the system’s coders, not with those who will be subjected thereto. Yet, as noted elsewhere, outsourcing the ‘acceptability’ of ‘residual risks’ to high-risk AI providers is hardly acceptable.Footnote 262
As regards the requirement pertaining to data governance, Article 10 requires that the training, validation and testing data sets are subjected to appropriate data governance and management practices, which must include in particular: the relevant design choices; data collection processes; data preparation operations; the formulation of assumptions (with respect to the information that the data are supposed to measure and represent, which can be understood as the ‘proxies’ that are being used); an assessment of the availability, quantity and suitability of the data sets that are needed; the examination of possible biases as well as measures to detect and mitigate those biases; and the identification of relevant data gaps or shortcomings.Footnote 263 Furthermore, data sets must be relevant, sufficiently representative and ‘to the best extent possible’ free of errors and complete in view of the intended purpose. They should also have ‘appropriate statistical properties’, including as regards the persons or groups in relation to whom the system is intended to be used.Footnote 264 These elements (though not specific to the public sector) could in theory help provide insight into how legal provisions are being translated to code in the context of algorithmic regulation, as they force coders to be explicit about the design choices they make and the relevant assumptions underlying their ‘translations’. There is, however, a catch.
First, this information need not be made public. Admittedly, providers need to draw up technical documentation pursuant to Article 11 of the AI Act, which demonstrates that the system complies with the requirements. However, that documentation is only meant to provide the relevant supervisory authorities with the necessary information to assess compliance ex post, in case an investigation ever arises. Citizens do not have access thereto, and it is not covered by the (rather minimalistic) information that providers and deployers are meant to include in the ‘EU database of stand-alone high-risk systems’.Footnote 265 Arguably, if the provider is a public authority rather than a private company to which the system’s development is outsourced (and perhaps also in that case), citizens could invoke a national right of access to information and submit an ‘access to documents’ request. However, as various illustrations in Chapter 4 have demonstrated, such a right does not always enable individuals to receive information about the system itself. More generally, the fact that such documentation is not rendered public-by-default whenever it concerns a system deployed to inform or take administrative acts is a missed opportunity. This may in part be due to the fact that the AI Act imposes a single set of requirements on both private and public actors, without acknowledging their crucial differences, particularly as regards the enhanced need for transparency in the public sector, which is meant to act in the public interest in a controllable manner.
Second, this article still leaves significant discretion to the system’s coders. For instance, statistical properties should be ‘appropriate’, yet what that precisely means is left to the provider, and does not necessarily need to be spelled out or justified. Moreover, the formulation of relevant assumptions and proxies does not in itself prevent reliance on misguided proxies. As noted elsewhere, nothing in the AI Act seems to, for instance, “prevent public authorities from using arrest data as a proxy for crimes committed (while not all arrested persons are charged or convicted, and many crimes occur for which no arrests are made). Given that these assumptions are not publicly accessible, their misguided nature may not easily come to light.”Footnote 266
In order to enhance traceability and transparency, Article 12 requires record-keeping, while Article 13 imposes certain information obligations, requiring that high-risk systems be designed and developed in such a way as to enable deployers to interpret a system’s output and use it appropriately.Footnote 267 The system should also come with a set of instructions for use that provide, inter alia, information about the ‘characteristics, capabilities and limitations of performance of the system’; ‘foreseeable circumstances’ that may lead to ‘risks to the health and safety or fundamental rights’; ‘when appropriate, its performance regarding specific persons or groups’; and human oversight measures (including technical measures) put in place to facilitate the interpretation of the system’s output.Footnote 268 Note, however, that all this information is only meant to be provided to deployers of the system (in casu, the public officials who will be using it) rather than to those subjected to or affected by the system.Footnote 269 Once again, individuals adversely affected by algorithmic regulation are given a much more limited role in the regulation.
Besides requirements around the accuracy, robustness and cybersecurity of the system in Article 15, and requirements of record-keeping and automatic logging in Article 12, the AI Act also contains a requirement on human oversight. Article 14 provides that high-risk systems should be designed and developed in such a way that they can be effectively overseen by natural persons, and that such oversight should aim to “prevent or minimise the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse”.Footnote 270 The oversight measures must be commensurate with the risks, autonomy level and context of the system’s use, and should be put in place either by the provider or by the deployer (or both), depending on what is most appropriate.Footnote 271 Pursuant to Article 14(4) of the AI Act, the system’s deployers must be enabled to understand the system’s relevant capacities and limitations; to monitor its operation so as to detect and address anomalies and dysfunctions; to remain aware of their possible tendency towards automation bias; to ‘correctly’ interpret the system’s output; to decide not to use the system or to otherwise disregard, override or reverse its output; and to intervene in its operation or interrupt it through “a ‘stop’ button or a similar procedure that allows the system to come to a halt in a safe state”.
The intention behind this provision is certainly to be applauded, as it is aimed at mitigating the risk of ‘mindless rule-following’ identified in Chapter 4. As such, it could help public officials maintain their agency and hence their sense of responsibility for the administrative acts taken or informed by algorithmic systems. However, in many instances, a meaningful failsafe is impossible to secure in practice, given that the entire premise of data mining is to generate insights that are beyond the capacity of human cognition. This inevitably also means that the human being who needs to exercise oversight over the system will often not be able to second-guess the validity of the system’s outputs, except in limited cases where human intuition may detect obvious failures or outliers. Moreover, the problem of automation bias is unlikely to be overcome through this provision, despite the laudable intentions.Footnote 272
It would therefore be important to ensure that this provision does not remain a dead letter, and that besides ‘technical measures’, deployers also implement ‘non-technical’ oversight measures, such as adequate education and training for public officials,Footnote 273 the logging of oversight activities, and easily accessible review and redress mechanisms. While much will depend on the internal organisation of public authorities (and the importance they attach to speed and efficiency KPIs), it is to be hoped that this provision can nevertheless contribute to a more responsible use of algorithmic regulation by providing some friction and a much-needed opportunity for critical internal reflection.
5.4.4.b Additional Obligations for Deployers
In the original proposal, most of the responsibilities for high-risk AI systems fell on system providers.Footnote 274 In the final version of the AI Act, this has been somewhat rebalanced to also include obligations for system deployers.Footnote 275 The former are chiefly responsible for the conformity assessment of their system with the above high-risk requirements (and the concomitant affixation of the CE marking),Footnote 276 setting up a quality management system,Footnote 277 and ensuring documentation keeping and automated logging,Footnote 278 while the latter must take technical and organisational measures to use the system in accordance with the provider’s instructions for use, ensure human oversight, monitor the system, and keep the system’s logs.Footnote 279
Interestingly, certain deployers have a few additional obligations.Footnote 280 First, all deployers of high-risk AI systems referred to in Annex III “that make decisions or assist in making decisions related to natural persons” shall inform those persons that they are subject to the use of such a system.Footnote 281 This is a highly important (new) obligation, especially when coupled with two rights introduced in the AI Act: the right for individuals to lodge a complaint with a market surveillance authority if they believe the AI Act is not complied with,Footnote 282 and the right for individuals to receive an explanation of a decision taken about them,Footnote 283 the combination of which could facilitate redress. Second, deployers of a post-remote biometric identification system in the context of a criminal investigation must obtain ex ante judicial or administrative authorisation to do so.Footnote 284 Third, deployers of high-risk systems who are public authorities must register their use of a high-risk system in the EU database to make such use known.Footnote 285
Finally, public authorities that deploy high-risk AI systems must also undertake a fundamental rights impact assessment (FRIA).Footnote 286 This obligation was fiercely advocated for by civil society organisations throughout the AI Act’s negotiations, and the Parliament ultimately managed to push it through in Article 27.Footnote 287 Inspired by the concept of data protection impact assessments,Footnote 288 it is meant to force deployers to reflect on how their system can affect people’s rights prior to its use, as well as to document and mitigate those effects. While this could constitute an important safeguard, the feedback gathered during the piloting phase of the Ethics Guidelines for Trustworthy AI revealed that most organisations, whether private or public, have little idea of what such an assessment entails. Even for trained lawyers, assessing the impact of an AI system on all fundamental rights is not an easy task.
There are hence fears that this obligation will either be too difficult to comply with or, if interpretative guidance is provided, could turn into an empty box-ticking exercise, leading to yet another façade of legality without any substantive protection. The latter unfortunately seems rather likely, since Article 27 provides a relatively rigid list of elements that the assessment should describe, and tasks the AI Office with developing “a template for a questionnaire, including through an automated tool, to facilitate deployers in complying with their obligations under this Article in a simplified manner”. This reflects the legislator’s desire to ensure the AI Act’s measures can be implemented in an easy and straightforward manner, ideally with the help of some technical tools – even if there is nothing easy and straightforward about carrying out a proper assessment of the ways in which those systems can affect people’s rights. This is an inevitably complex matter that requires the balancing of various interests and considerations, which cannot be bypassed through a simple checklist. Furthermore, the obligation’s narrow focus on individual rights also neglects the societal interests that can be affected by such systems, especially by virtue of the systemic effects they can have on values like democracy and the rule of law.
5.4.5 A Low Ceiling
Considering these shortcomings, it is worth asking whether Member States can still remedy these gaps by adopting stronger safeguards through national legislation. In essence, this question comes down to whether the AI Act, and its aim to harmonise rules on algorithmic systems across the EU to establish an internal AI market, aspires to a form of minimum or maximum harmonisation. Minimum harmonisation “sets a common floor of regulation, which all Member States must respect, but it does not set a ceiling”, whereas maximum harmonisation “serves as both floor and ceiling”.Footnote 289 The AI Act is directly applicable in national legal orders and Member States will need to ensure that individuals can benefit from the protection it affords.
However, can Member States go further than what the regulation requires and provide stronger safeguards against AI’s adverse impact? If the AI Act seeks to ensure the maximum harmonisation of national rules, this would effectively prevent a Member State from addressing the AI Act’s shortcomings, as that would be contrary to the regulation’s objective and hence contrary to EU law.Footnote 290 As observed by Weatherill, this question thus expresses “a battle for the soul of the internal market”,Footnote 291 as it essentially determines at which level regulatory power is held, and how much regulatory diversity the EU internal market can tolerate.
While the initial proposal of the AI Act still left the answer to this question somewhat ambiguous, the final version is clearer: “This Regulation ensures the free movement, cross-border, of AI-based goods and services, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation”.Footnote 292 It hence appears that the AI Act is targeting the maximum harmonisation of national legislation, and that Member States cannot provide a higher level of protection by adopting stricter rules, as this may cause unwanted ‘market fragmentation’. There are only a few exceptions, such as the protection of workers, for which the regulation states that it “does not preclude the Union or Member States from maintaining or introducing laws, regulations or administrative provisions which are more favourable to workers in terms of protecting their rights in respect of the use of AI systems by employers”.Footnote 293 This explicit exception only seems to confirm the regulation’s ceiling-imposing nature.
In theory, a maximum harmonisation approach is meant to ensure an equal level of protection in all Member States. Yet in practice, given the AI Act’s deficiencies, this may in fact come down to an equally low level of protection of the rule of law. Certainly, the AI Act establishes EU law provisions that public authorities must comply with. It could thereby also serve as a basis to invoke EU remedies aimed at ensuring Member States’ compliance with their EU law obligations.Footnote 294 However, the fact that it does not impose stronger obligations on Member States to ensure that their use of algorithmic regulation does not lead to algorithmic rule by law, whether inadvertently or intentionally, is a missed opportunity. And the fact that Member States may be legally unable to level this protection up through national legislation, unless they can argue that such legislation falls outside the AI Act’s scope, only makes this problem worse.
Admittedly, the AI Act does state that the harmonised rules it provides “should be without prejudice to existing Union law, in particular on data protection, consumer protection, fundamental rights, employment, and protection of workers, and product safety, to which this Regulation is complementary.”Footnote 295 It also mentions that a system’s classification as ‘high-risk’ should not be interpreted as indicating the lawfulness of its use under other Acts of Union law or national law implementing Union law, “such as on the protection of personal data, the use of polygraphs or the use of emotion recognition systems”.Footnote 296 However, the AI Act’s clear intention to comprehensively regulate systems that pose a risk to fundamental rights makes it difficult to argue that an application listed in Annex III is nevertheless unlawful. While not excluded in principle, the person claiming such unlawfulness (for instance, due to an incompatibility with a fundamental right) would have to surmount a significant burden of proof, which may only be met where a national law exists that explicitly sets out the system’s unlawfulness.Footnote 297 Moreover, in that case too, the risk remains that an AI provider or deployer challenges the national law on internal market grounds, by claiming it constitutes an unlawful market restriction on the very product the AI Act is meant to promote.Footnote 298
Finally, the maximum ceiling imposed by the AI Act also has repercussions for Member States’ participation in negotiations on AI regulation at the international level. For instance, during the Council of Europe’s negotiations of a new Convention on AI and human rights, democracy and the rule of law,Footnote 299 which largely took place in parallel with the negotiations of the AI Act,Footnote 300 the Commission requested and obtained a mandate to be the sole negotiator on behalf of EU Member States, rather than having Member States negotiate their own positions.Footnote 301 From its request, it was evident that the Commission considers matters related to the Council of Europe’s Convention (meaning, matters related to AI’s impact on human rights, democracy and the rule of law) to be already comprehensively dealt with under the AI Act. This assumption is, however, fanciful, considering the inadequate attention to the rule of law in the AI Act, as well as its underwhelming protection of fundamental rights and democratic participation in decisions on AI’s use. Nevertheless, the Commission wanted to ensure the Convention’s alignment with the AI Act, thereby also precluding Member States from advocating for stronger protection. This is despite the fact that, as already hinted at, one could legitimately wonder whether the EU can claim the competence to exhaustively regulate aspects pertaining to algorithmic regulation in public administration on an internal market legal basis.
It hence appears that the EU legislator not only left open important legal gaps in the AI Act, but that it is also preventing Member States from filling these gaps, in the name of countering market fragmentation. This raises the question of whether the Union’s interest in being ‘the first’ AI legislator to reap the economic benefits of an internal market for AI is deemed more important than safeguarding its core values.
5.4.6 Evaluation: The Return of Techno-supremacy
Considering the above analysis, what should we now make of the AI Act in the context of the concerns identified in Chapter 4? Can it offer any help in countering the threat of algorithmic rule by law? Undoubtedly, the new regulation does introduce important legal safeguards to better protect individuals against the risks that some AI systems pose to their health, safety and fundamental rights. The establishment of a public enforcement mechanism; the introduction of prohibitions and requirements that must be met ex ante; as well as the inclusion of several rights for individuals that can fortify private enforcement channels are all to be welcomed. Furthermore, by imposing documentation and logging obligations that should facilitate ex post review, and by setting up an EU database to register the use of high-risk systems by public authorities, the AI Act also enhances transparency around algorithmic regulation.Footnote 302
However, overall, the protection the AI Act provides remains insufficient to meet the identified concerns. Its compliance process is primarily based on self-assessment, and is not tailored to the specific risks arising in the context of algorithmic regulation. There are no proper mechanisms for public participation and input (for instance regarding the decision to deploy algorithmic regulation in the first place, or even just to help identify and assess the risks attached thereto); there are only limited transparency obligations towards those potentially affected by the systems; there is no obligation to make the extensive documentation of the system’s functioning accessible to researchers, civil society organisations or the public at large; there are no provisions around the procurement of systems, or limitations as regards who should be able to undertake translations from law to code, with which training, and with which constitutional checks and balances; there are no ex ante independent oversight mechanisms, nor mandatory periodic audits that can help ensure continuous oversight; and there are no provisions that enable systemic review.
These substantive shortcomings are coupled with concerns around the AI Act’s regulatory architecture. First, despite the role of the European AI Office, when it comes to public authorities’ use of AI, oversight is primarily organised at the national level, which means the extent to which people can enjoy the AI Act’s protection depends on the resources and skills of Member States. As the enforcement practice of the GDPR has shown, these vary significantly from one state to another.Footnote 303 And while national supervisory authorities ought to be independent, especially given their task to also review the actions of public authorities, experience with national data protection authorities has likewise indicated that such independence may not always be straightforward, even in countries that are not yet considered to be undergoing ‘an autocratic turn’.Footnote 304
Second, the AI Act’s reliance on conformity assessment and technical standardisation, as part of the New Legislative Framework (NLF) approach, is inadequate to deal with the rule of law risks posed by algorithmic regulation. As briefly noted above, this is an approach the Commission typically uses in the context of EU product regulation and safety standards. As such, it makes sense to subject high-risk systems that are already covered by NLF legislation (listed in Annex I, including machinery, medical devices and toys) to the same compliance mechanisms as before, with the addition of the new requirements of the AI Act. However, applying the same model of safety legislation to the high-risk systems listed in Annex III is problematic, as the risks they pose – not only to fundamental rights, but also to societal interests such as the rule of law – are not comparable. AI systems are treated as tangible products which simply need to conform to technical requirements and have a ‘CE’ marking affixed to them, rather than as socio-technological systems. This approach might work reasonably well for fridges and dishwashers, yet “it is hardly appropriate for a digital technology which, on the Commission’s own account, may pose significant risk to the protection of nontangible values”.Footnote 305 As noted elsewhere, while the “text is infused with fundamental rights-language, it seems to take an overly technocratic approach to the protection of fundamental rights”,Footnote 306 by essentially translating those rights into a set of prescriptive rules, exhaustive lists, detailed technical safety standards, as well as handy templates and checklists. In addition, the reliance on harmonised standards that provide a ‘presumption of conformity’ in fact implies that the interpretation of the AI Act’s requirements is outsourced to the (primarily technical) experts of standardisation organisations,Footnote 307 despite their lack of representativeness and democratic accountability.Footnote 308
Interestingly, and sadly, parallels can hence be drawn with one of the risks pointed out in Chapter 4, namely that reliance on algorithmic regulation may exacerbate a techno-scientific approach to the law.Footnote 309 As noted previously, the danger associated with algorithmic rule by law is that the legal protection of individuals and the translation of open-ended legal rules into code are seen as a mere techno-scientific enterprise, to the detriment of human rights, the rule of law, and other essential values. When the regulated risk concerns the safety of machines, food or pharmaceuticals, a techno-scientific approach is neither new nor surprising. In those areas, it is common for the legislator to set out broad norms stating the objective of protecting the health and safety of individuals, which are subsequently elaborated and complemented with detailed technical requirements by domain experts who translate the objective of ‘safety’ into quantifiable, measurable and demonstrable safety standards that need to be complied with.Footnote 310
Yet when it comes to protecting fundamental rights or the rule of law, this type of approach falls woefully short, as it is unable to do justice to the intricate contextual assessment, and the balancing of various interests, that such protection requires. Nevertheless, this is precisely the AI Act’s approach, as it erroneously reduces “the careful balancing exercise between fundamental rights to a technocratic process, thus rendering the need for such balancing invisible”.Footnote 311 In this way, legal concepts such as the right to non-discrimination and the principle of equality are treated as elements that can be embodied by technical standards expressed in quantifiable and measurable specifications, checked by an internal ‘quality management process’, and algorithmically programmed, thereby maintaining the primacy of techno-rationality.Footnote 312 Worse still, the fact that this assessment and quality management occur entirely in-house, without external oversight or ex ante accountability mechanisms, leaves these translation decisions entirely in the hands of technical experts, thus maintaining the supremacy of coders set out above.Footnote 313
Lastly, the ceiling imposed by the AI Act’s maximum harmonisation means that Member States who want to offer a higher level of protection than the AI Act currently provides are practically unable to do so. Despite the regulation’s laudable intentions, and despite its introduction of valuable new safeguards, the overall picture that results from this analysis is therefore still rather bleak when it comes to the AI Act’s ability to counter the threat of algorithmic rule by law.
5.5 Concluding Remarks
In Chapter 4, I conducted a systematic analysis of the way in which algorithmic regulation can impact the core principles of the rule of law, and conceptualised the threat emanating therefrom as algorithmic rule by law. I noted that this threat needs to be counter-balanced by appropriate legal safeguards, apt to address the challenges raised by public authorities’ increased reliance on algorithmic regulation. In this chapter, I therefore critically assessed whether the current EU legal framework is up to this task, and respectively analysed legislation pertaining to the rule of law, and legislation pertaining to the risks posed by (personal) data processing activities and by algorithmic systems. Although various protection mechanisms exist, and while the AI Act should be able to make a relevant contribution in this regard, my evaluation leads me to conclude that these are insufficient to address the adverse impact of algorithmic regulation on the rule of law.
On the one hand, the legal domain pertaining to the rule of law does not consider algorithmic regulation to be a particular threat, and rule of law monitoring initiatives that can help trigger such legislation do not pay attention to it. Moreover, the legal mechanisms to protect the rule of law currently seem unable to tackle more than individual infringements or budget-related concerns, whereas the deployment of algorithmic regulation without adequate safeguards can result in, or exacerbate, systemic deficiencies. On the other hand, the legal domain pertaining to algorithmic systems does not consider the rule of law to be a particularly impacted value. It hence does not pay specific attention to the fact that algorithmic regulation can increase the executive’s power, erode the protective role of the law and, given its systemic effects, undermine the normative pillars of liberal democracy. In other words, these two legal domains are currently not on speaking terms, despite the urgent need for them to enter into a serious dialogue.
This urgency is only exacerbated by the new AI Act, which aims to be future-proof and in which hope has now been vested for years to come. Yet by seeking to put forward a comprehensive piece of legislation to deal with the risks of algorithmic systems in an exhaustive manner, the EU legislator’s ambition, though praiseworthy in itself, undermines its own purpose. First, there is no such thing as an exhaustive way to tackle the risks posed by algorithmic systems, even if we were to assume that the EU has such competences,Footnote 314 and the many gaps in the complex and legalistic list-based approach of the AI Act testify to that. Second, the Act appears to pre-empt stronger safeguards at Member State level, due to its objective to counter market fragmentation and ensure harmonised Member State legislation. Third, it approaches the protection of fundamental rights and, to the extent these are on the radar, other values, as a technocratic endeavour, which can be solved by identifying the right technical standard that providers should implement, crowned by a CE marking. Fourth, the combination of a single set of requirements for applications used in the public sector and the private sector, with a few useful yet narrowly tailored exceptions, overlooks the particular rule of law-related risks that are associated with the former, and might explain the AI Act’s disregard thereof. Instead, the regulation approaches the harm caused by algorithmic systems in an isolated fashion, assessing for each individual application which risks it can pose to each individual interest. Yet by looking at the trees, it risks overlooking the forest: the systemic, networked, long-term, widespread impact of algorithmic regulation on societal interests, including the preservation of the protective role of the law, and the integrity of the legal system as a whole.
Unfortunately, despite these flaws, the existence of the AI Act may nevertheless provide a false sense of security that the risks raised by algorithmic regulation are aptly dealt with, and that its use by public authorities can now be further promoted and sponsored in the name of efficiency and innovation. Some commentators have even claimed that the AI Act is in fact overprotective, and “may come at the price of digital innovation”,Footnote 315 seemingly forgetting that innovation is not an end in and of itself, but that the aim of regulating the risks of this technology is to delineate the contours in which socially beneficial innovation can thrive. Finally, it must be recalled that these developments occur against a background in which illiberal and authoritarian practices are on the rise, and in which the implementation of the law – whether in textual or algorithmic form – is already being used, inadvertently or deliberately, to further those problematic ends.