1. Introduction
Over the last decades, the rise of digitalisation and automation has undoubtedly become the ‘talk of the town’, disrupting the way in which a wide range of tasks and activities are carried out. This has also been the case in the field of decision-making, where sophisticated technologies, including algorithmic and artificial intelligence (hereafter ‘AI’) tools, are increasingly deployed by public authorities and private operators alike to support or even fully automate the way decisions are taken about individuals (what is known as ‘automated decision-making’, hereafter ‘ADM’).Footnote 1 From border control management, taxation and social security, to online advertising, financial services and insurance, to name just a few examples, automated assessments are applied in a wide array of sectors within the European Union (hereafter ‘EU’), enhancing the efficiency, speed and precision of decision-making.Footnote 2 The outputs delivered through these assessments may take a myriad of forms, ranging from ratings, rankings and recommendations, to more adverse decisions relating, for instance, to the closure of bank accounts, the dismissal of employees or the refusal to grant credit.Footnote 3
The functioning of ADM systems usually rests on data mining and profiling techniques employing algorithms.Footnote 4 By discovering correlations and patterns in data, algorithms divide individuals into different groups based on common traits, with the aim of making predictions about them according to their membership of a specific ‘cluster’ and thus driving decision-making accordingly.Footnote 5 Yet, such algorithmic classifications and the decisions taken on the basis of them may entail far-reaching consequences for the rights and interests of the natural persons concerned, who may be selected, scored, rejected, disqualified or otherwise affected by the decisions in question.Footnote 6 Acknowledging this potentially serious impact of automated decisions on individuals, the EU General Data Protection Regulation (hereafter ‘the GDPR’) has laid down in its Article 22 a general prohibition of decision-making practices based solely on automated processing of personal data which produce legal effects on the persons concerned or similarly significantly affect them, save in exceptional circumstances and subject to strict safeguards.Footnote 7
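To make this profiling logic concrete, the following minimal sketch (hypothetical data, features and thresholds, not drawn from any system discussed in this article) shows how clustering can drive a decision: a new applicant inherits the prediction attached to the group of ‘similar’ individuals into which the algorithm places them.

```python
# Minimal, hypothetical sketch of profiling-driven ADM: individuals are
# clustered on behavioural data, and a decision about a new applicant is
# derived from the historical outcome rate of the cluster they fall into.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy behavioural features for past customers (eg, spending, activity).
past_customers = rng.normal(size=(300, 4))
past_defaults = rng.integers(0, 2, size=300)   # 1 = defaulted on payments

# Step 1: discover 'clusters' of similar individuals in the data.
model = KMeans(n_clusters=5, n_init=10, random_state=0).fit(past_customers)

# Step 2: estimate an outcome rate per cluster from historical data.
default_rate = {c: past_defaults[model.labels_ == c].mean() for c in range(5)}

# Step 3: a new applicant inherits the prediction of their cluster, and the
# decision follows mechanically from that group-level statistic.
applicant = rng.normal(size=(1, 4))
cluster = int(model.predict(applicant)[0])
decision = "refuse credit" if default_rate[cluster] > 0.5 else "grant credit"
print(f"cluster={cluster}, default rate={default_rate[cluster]:.2f}: {decision}")
```

The applicant is judged not on their individual conduct but on the aggregate behaviour of a statistically constructed group, which is precisely why such classifications can become legally problematic.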
In particular, the use of algorithms in decision-making has been found prone to generating potentially discriminatory outcomes, perpetuating existing stereotypes and exacerbating structural inequalities.Footnote 8 This inherent risk that algorithmic and especially AI-driven decisions will discriminate against particular individuals or societal groups has not gone unnoticed by the EU legislature, which has integrated equality and non-discrimination concerns into its digital policy machinery,Footnote 9 such as in the GDPRFootnote 10 and the Artificial Intelligence Act (hereafter ‘the AI Act’).Footnote 11 Against this background, the legal challenges surrounding the notion of so-called ‘algorithmic discrimination’, sometimes also referred to as ‘digital’Footnote 12 or ‘emergent’ discrimination,Footnote 13 in EU law have already been the object of substantial scholarly discussion in recent years. That discussion has predominantly focused on the limits of EU non-discrimination legislation, which comprises most notably a number of Equality DirectivesFootnote 14 and Article 21(1) of the Charter of Fundamental Rights (hereafter ‘the Charter’),Footnote 15 as well as on the potential relevance of the data protection rules contained in the GDPR for providing effective enforcement tools to individuals affected by discriminatory algorithmic decisions.Footnote 16
It seems, however, that the term ‘discrimination’ is sometimes used interchangeably with ‘bias’ to describe instances of unfair differentiation between people caused by ADM, as if the two notions were synonymous. Some clarifications are thus warranted at this point. In general, even though the concept of ‘bias’ has a neutral meaning, indicating a sort of slant or deviation from a standard, it is commonly used with a rather negative moral connotation.Footnote 17 Accordingly, in the technical context of algorithms, whereas ‘bias’ generally refers to skewed or incorrect results of algorithmic systems,Footnote 18 it is mostly deployed as encompassing any kind of disadvantage against an individual or a group that could be deemed ethically wrong.Footnote 19 As such, the term algorithmic ‘bias’ is much broader than the legal notion of algorithmic ‘discrimination’, which covers only cases of unjustified disadvantageous treatment of certain population categories protected by law because of their specific attributes (eg, gender, race, sexual orientation, etc) and exclusively in certain social contexts (eg, employment).Footnote 20 Hence, legally speaking, these concepts overlap only in part, in the sense that not all cases of biased algorithmic decisions fall within the scope of non-discrimination law.Footnote 21 For those instances of algorithmic bias that do not qualify as discriminatory, I consider the term ‘unfair algorithmic differentiation’ more appropriate.Footnote 22
However vivid the academic debate around these issues of algorithmic bias, discrimination and unfairness has been in the context of EU law, little attention has been paid thus far to the way in which such instances have been dealt with in practice by courts across the EU. Although most of the judicial decisions delivered in this regard have already been spotlighted by scholars because of their far-reaching implications in terms of data protection and privacy law, their relevance from a non-discrimination perspective has been barely explored, if at all.Footnote 23 In view of this gap in the existing literature, this article examines how domestic courts of EU Member States as well as the Court of Justice of the EU (hereafter ‘the CJEU’) have approached cases of algorithmic bias in ADM through recourse to discrimination-related considerations. For the purposes of my analysis, I propose the following taxonomy of judgments dealing with cases of algorithmic bias: given the legally crucial difference between the notions of discrimination and bias/unfairness explained above, a first distinction can be drawn between judgments relating to cases of algorithmic discrimination and those concerning cases of unfair algorithmic differentiation. Depending on the extent to which courts take into account any risks of discrimination in cases falling under the second category, I further distinguish, in descending order of relevance, between judgments of ‘discrimination reflection’, judgments of ‘discrimination awareness’, and judgments of ‘discrimination silence’. Based on this classification, I then attempt to shed more light on how non-discrimination and data protection considerations may interact in court cases of algorithmic bias.
The reason for conducting such a comparative case law analysis lies in the particular architecture of the EU judicial system, which entrusts the full application of EU law in all Member States as well as the protection of the rights that individuals derive therefrom to both national courts and the CJEU.Footnote 24 In this context, the uniform interpretation and consistency of EU non-discrimination and data protection law are guaranteed through a dialogue between the domestic courts of Member States and the CJEU established by the preliminary reference procedure under Article 267 of the Treaty on the Functioning of the EU (hereafter ‘TFEU’).Footnote 25 It is precisely because of this judicial dialogue that I focus on jurisprudence, taking only a cursory look at the significant administrative practice of the Member States’ competent sectoral authorities, namely national equality bodies and data protection authorities (hereafter ‘DPAs’), to complement my case law findings where necessary.Footnote 26
The article is structured as follows: Section 2 examines the judgments rendered on cases of algorithmic discrimination, while Section 3 looks into the judgments on cases of unfair algorithmic differentiation, further distinguishing between those of discrimination reflection, those of discrimination awareness, and those of discrimination silence (Sections 3A, 3B and 3C). In the aforementioned sections, the article delves indicatively into certain judicial decisions, which merely serve as examples showcasing the particularities of each category of my proposed case law taxonomy, without, however, purporting to form an exhaustive or representative list of all the judgments that could potentially fall under the respective categories. Then, in Section 4, after setting out the theoretical framework that explains the interrelation between non-discrimination and data protection law in general (Section 4A), the article moves on to explore how considerations relating to these two legal regimes may interact in practice in cases of algorithmic bias, by highlighting the degree of such interplay in judicial reasoning (Section 4B), as well as its framing by different courts across the EU (Section 4C). Finally, the article concludes in Section 5 with some reflections on the prevailing tendency to address equality concerns through recourse to data protection rules.
2. Judgments on cases of algorithmic discrimination
The judgments comprising this category deal with cases covered by the protective scope of EU non-discrimination law, in the sense that they relate to the discriminatory effects of algorithmic systems against individuals or groups with legally protected traits in certain areas of life. As such, these cases resemble the ones concerning instances of ‘traditional’, non-algorithmic discrimination that are already familiar to non-discrimination scholars, encompassing instances of both direct and indirect discrimination. More specifically, when membership in a protected group is inputted as a negatively weighted variable in an algorithmic model, the ensuing outcome of the decision-making process may fall within the ambit of direct (algorithmic) discrimination.Footnote 27 Indirect discrimination, for its part, may capture a large spectrum of situations where the algorithmic output itself or the relevant factors used for its generation are apparently neutral but could put a protected group at a specific disadvantage: this is the case, for example, with biased training data leading to under- or over-representation of certain groups, or proxy variables which correlate with a protected ground.Footnote 28
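The doctrinal distinction can be illustrated with a deliberately simplified sketch. The model and weights below are invented for illustration and represent no real scoring system: in the first score, a protected trait is itself a negatively weighted input; in the second, the model is facially neutral but a proxy variable may carry the protected trait’s effect.

```python
# Hypothetical linear scores contrasting the two routes to algorithmic
# discrimination: protected trait as input (direct) v proxy variable (indirect).
applicant = {"income": 42_000, "is_woman": 0, "postcode_risk": 0.8}

# Direct: group membership enters the model as a negatively weighted variable,
# so the disadvantage attaches to the protected trait itself.
direct_score = 0.001 * applicant["income"] - 15 * (1 - applicant["is_woman"])

# Indirect: no protected attribute appears among the inputs, but postcode may
# correlate strongly with, say, ethnic origin, disadvantaging a protected
# group through a facially neutral criterion.
indirect_score = 0.001 * applicant["income"] - 30 * applicant["postcode_risk"]

print(f"direct: {direct_score:.1f}, indirect: {indirect_score:.1f}")
```

Whether such a proxy amounts to direct or indirect discrimination depends, as noted below, on the strength of its link with the protected ground.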
The main distinguishing feature of algorithmic discrimination cases compared with ‘traditional’ instances of discrimination seems to be the use of some sort of algorithm- or AI-driven tool to differentiate between individuals in a given decision-making process. Given its technology-neutral character, though, EU non-discrimination law is not prevented from applying to automated assessments in areas that fall under its material scope.Footnote 29 Yet, algorithmic discrimination sits uneasily with the well-established dichotomy between direct and indirect discrimination.Footnote 30 In fact, the dividing line between these two concepts in the context of complex algorithmic decisions may prove thin,Footnote 31 especially in the case of proxies, which, depending on the degree of their link with a protected ground, can amount to either direct or indirect discrimination.Footnote 32
To date, there are only a handful of judicial decisions dealing with cases of algorithmic discrimination based on legally protected characteristics. Two notable examples, in this regard, are the judgments delivered by the Finnish Non-Discrimination and Equality Tribunal and the Bologna District Court in the Svea Ekonomi and Deliveroo cases, respectively.
A. Svea Ekonomi (Finnish Non-Discrimination and Equality Tribunal)
Following a request from Finland’s Non-Discrimination Ombudsman, the Finnish Non-Discrimination and Equality Tribunal (Yhdenvertaisuus- ja tasa-arvolautakunta) was called upon to adjudicate on a discrimination claim brought against the financial company Svea Ekonomi in the field of credit scoring.Footnote 33 The Finnish tribunal ruled that, by refusing on the basis of a scoring algorithm to grant credit to a client for online purchases, Svea Ekonomi directly discriminated against him on several grounds.Footnote 34
In particular, the score deployed by Svea Ekonomi was not established by conducting an individual assessment of the loan applicant’s financial situation but, instead, by using abstract statistical data obtained from an external service provider on the basis of various factors, including the applicant’s sex, language, place of residence and age.Footnote 35 According to the method used for the score calculation, men scored fewer points than women, native Finnish speakers scored fewer points than native Swedish speakers, people living in remote rural areas scored fewer points than those living in residential areas, and age had a similar impact on the number of points scored. For these reasons, the Finnish tribunal concluded that Svea Ekonomi’s scoring practice amounted to multiple discrimination against the client concerned.Footnote 36
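The mechanics of such a points-based score can be sketched as follows. The weights and threshold below are entirely invented, since the actual model was not disclosed, but the logic mirrors the tribunal’s description: group-based characteristics determine the points, and no individual assessment of financial capacity enters the calculation.

```python
# Hypothetical reconstruction of a points-based credit score of the kind at
# issue in Svea Ekonomi; all weights and the threshold are invented for
# illustration and are not taken from the actual system.
POINTS = {
    "sex":       {"female": 10, "male": 4},
    "language":  {"swedish": 10, "finnish": 5},
    "residence": {"residential": 8, "remote_rural": 2},
    "age_band":  {"25-60": 8, "18-24": 3},
}
THRESHOLD = 22

def score(applicant: dict) -> int:
    # The score is a sum of group-based points; nothing about the
    # applicant's individual solvency is consulted.
    return sum(POINTS[factor][value] for factor, value in applicant.items())

applicant = {"sex": "male", "language": "finnish",
             "residence": "remote_rural", "age_band": "25-60"}
s = score(applicant)
print(s, "credit granted" if s >= THRESHOLD else "credit refused")
```

An applicant who is male, a native Finnish speaker and lives in a remote rural area is refused on those traits alone, which is the pattern the tribunal qualified as multiple direct discrimination.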
B. Deliveroo (Bologna District Court)
In December 2020, the Labour division of the Bologna District Court (Tribunale Ordinario di Bologna, Italy) found that the reputational ranking algorithm deployed by Deliveroo’s platform to give its riders access to a booking system for working sessions was indirectly discriminatory on grounds related to the riders’ trade union activity.Footnote 37 The said system ranked each rider for the purpose of prioritising bookings for the attribution of services on the basis of statistics determined by two parameters: the rider’s reliability, in the sense of her actual participation in the booked working session, and her availability during peak hours for food deliveries. Accordingly, riders who did not participate in the booked session or cancelled it beyond the time limits prescribed by Deliveroo’s contractual regulation would see their ‘scores’ decrease, thereby eventually being marginalised from the priority order and having fewer opportunities to access work in the future.
However, as noted by the Italian court, the way in which Deliveroo’s system would ‘penalise’ riders by affecting their statistics did not take into account the reasons behind the riders’ absence from work, such as their participation in a strike. By deeming irrelevant the reasons for non-participation in a booked session or its late cancellation, the algorithm in question was found to treat in the same way riders who do not participate for trivial reasons and those who abstain because they are exercising their right to strike or for other legitimate reasons (eg, illness, disability, needs related to the care of a disabled person or minor children, etc). Under these circumstances, the Bologna Court concluded that Deliveroo’s seemingly neutral cancellation policy, applied through its algorithm-driven booking system, resulted in placing riders who adhere to collective actions or otherwise lawfully abstain from work at a particular disadvantage, thus indirectly discriminating against them.Footnote 38
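The reason-blindness at the heart of the ruling is easy to express in code. The sketch below is hypothetical (the parameter names and penalty are invented, not Deliveroo’s actual implementation), but it captures the contested design choice: the reason for an absence is available yet deliberately ignored when the reliability statistic is updated.

```python
# Hypothetical sketch of a reason-blind reliability statistic: the penalty for
# missing a booked session applies identically whatever the reason, which is
# the design feature the Bologna court found indirectly discriminatory.
def update_reliability(stats: dict, attended: bool, reason: str | None) -> dict:
    # 'reason' is received but never consulted, as in the contested system.
    if not attended:
        stats["reliability"] = max(0.0, stats["reliability"] - 0.1)
    return stats

rider_on_strike = update_reliability({"reliability": 0.9}, attended=False, reason="strike")
rider_no_show = update_reliability({"reliability": 0.9}, attended=False, reason=None)
assert rider_on_strike == rider_no_show  # identical penalty despite the strike
```

Because riders exercising their right to strike see their access to work reduced exactly as casual no-shows do, the facially neutral rule produces the particular disadvantage that indirect discrimination doctrine targets.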
3. Judgments on cases of unfair algorithmic differentiation
Contrary to the judgments featuring under the previous category, the ones classified hereunder do not deal with cases of discrimination as understood from a legal point of view. Rather, the differentiated treatment of people resulting from the algorithmic decision-making practices at issue in these cases is not based on any legally protected grounds or proxies thereof, or at least it is not proved to be so. Such a situation may arise due to the algorithms’ ability to identify complicated patterns and statistical correlations by relying on granular forms of data, for the purpose of making predictions about people and categorising them accordingly into new, non-traditional groups.Footnote 39 These algorithmically created groups may be based on seemingly irrelevant characteristics (eg, being a dog owner or a video gamer), or parameters that are even incomprehensible to humans (eg, browsing behaviour and various electronic signals),Footnote 40 or other attributes that are currently not recognised by law as worthy of protection (eg, socioeconomic status, education, income, etc) or a combination thereof.Footnote 41 As such, classifications resulting from data-driven forms of profiling might often be ephemeral, arbitrary or random,Footnote 42 and might not necessarily correspond to any real-life social groups,Footnote 43 thereby falling between the cracks of non-discrimination law.
That said, it has been correctly argued that these ‘new’ types of algorithmic differentiation may still be unfair or at least ethically controversial depending on the circumstances of each specific case, even if they do not harm people with protected traits.Footnote 44 Unfairness could emerge, in particular, when such differentiation reinforces structural inequalities in society to the detriment of already vulnerable groups (eg, higher prices charged to low-income consumers), or is based on seemingly innocuous criteria (eg, denial of credit on the basis of a consumer’s postal code or type of car), or leads to incorrect predictions about individuals (eg, a large deposit requested from a consumer who is wrongly predicted to default on her payments).Footnote 45 Thus, despite the inapplicability of non-discrimination legislation to these instances of unfair algorithmic differentiation occurring irrespective of legally protected identities, discrimination-related concerns may still be of some relevance depending on the context of each case.
This is not to say, though, that any instance of unfair algorithmic differentiation is necessarily relevant from a non-discrimination law perspective. It is only where a given ADM practice raises a sufficiently plausible risk or suspicion of discriminatory effects that judges may touch upon the underlying discrimination matters, and even then with a varying degree of explicitness and detail, if at all. Accordingly, in relation to this kind of unfair algorithmic decision, my classification of relevant judgments is based on whether the courts concretely reflect on the discrimination risks of those automated decisions (‘judgments of discrimination reflection’); whether they appear merely aware of the discriminatory potential of ADM in general (‘judgments of discrimination awareness’); or whether they remain completely silent on these matters (‘judgments of discrimination silence’).
A. Judgments of discrimination reflection
In the rulings classified under this category, the judges examine in concreto the potentially discriminatory nature or effects of the algorithmic systems at issue, duly considering the ensuing harms for the persons affected under the specific circumstances of each particular case. Hence, discrimination-related considerations form an integral part of the courts’ reasoning and weigh on their final conclusions. To better illustrate this judicial approach, I will indicatively refer to the SyRI judgment of the District Court of the Hague, the GALOP II judgment of the Dutch Administrative High Court, the CJEU’s ruling in Ligue des droits humains, as well as to the German Federal Constitutional Court’s judgment on the envisaged use of certain policing systems in Hessen and Hamburg.
SyRI (District Court of the Hague)
In a seminal ruling delivered in February 2020, the District Court of the Hague (Rechtbank Den Haag) pronounced itself on the legality of the ‘SyRI’ algorithm (Systeem Risico Indicatie, System Risk Indication), a digital welfare fraud detection system developed by the Dutch government, through which data from various sources were linked and analysed, generating risk reports about persons deemed worthy of investigation.Footnote 46 Despite the critiques voiced against this tool, mainly due to the disproportionately extensive types of personal data collected and processed,Footnote 47 the relevant legislation allowing for the deployment of SyRI was eventually adopted and the system was applied in four Dutch municipalities, prompting a challenge by a coalition of civil society organisations and two citizens.
In its verdict, the Court of the Hague found that the SyRI legislation violated the right to private life under Article 8 of the European Convention on Human Rights (hereafter ‘ECHR’), as interpreted in light of the data protection principles deriving from the EU Charter and the GDPR, by failing to provide sufficient safeguards to comply with the criterion of being ‘necessary in a democratic society’ under Article 8(2) ECHR.Footnote 48 Interestingly, though, the Dutch court inferred from the ECHR right to private life ‘the right to equal treatment in equal cases and the right to protection against discrimination, stereotyping and stigmatisation’.Footnote 49 In this regard, the Court of the Hague noted that the SyRI system had been applied only to neighbourhoods with higher concentrations of poorer and vulnerable groups of people (the so-called ‘problem districts’), as also highlighted by the claimants and the United Nations Special Rapporteur on extreme poverty and human rights acting as an amicus curiae in the case.Footnote 50 Given the large amounts of data processed by the system, there was, in the court’s view, ‘a risk that SyRI inadvertently creates links based on bias, such as a lower socio-economic status or an immigration background’.Footnote 51 Nevertheless, the opacity surrounding the system’s risk indicators and the functioning of its risk model prevented the court from assessing whether or not such a discrimination risk was sufficiently neutralised.Footnote 52
GALOP II (Dutch Administrative High Court)
In the aftermath of the SyRI judgment, another algorithmic risk model, which was used in the context of the ‘GALOP II’ project in the Netherlands for the purposes of identifying addresses in certain areas worthy of further investigation into potentially fraudulent entitlement to social assistance, came under judicial scrutiny before the Dutch Administrative High Court (Centrale Raad van Beroep).Footnote 53 The case was initiated by an individual who had been flagged and subjected to home visits on the basis of that risk model and who claimed that he had been selected for investigation on discriminatory grounds.Footnote 54
Against this background, the Dutch court emphasised that, when applying risk profiles for investigation purposes, public authorities must comply with the prohibition of discrimination under Article 14 ECHR and Article 1 of Protocol 12 to the ECHR, and refrain from unjustifiably infringing the right to privacy under Article 8(2) ECHR, while also respecting other principles such as transparency, the prohibition of arbitrariness, and legal certainty.Footnote 55 However, by explicitly relying on the findings of the District Court of the Hague in SyRI, the High Court admitted that the lack of transparency regarding the method used by the system in question to select addresses for targeted checks made it impossible to assess whether the aforementioned principles were respected in the case at issue.Footnote 56 Accordingly, the court concluded that the claimant’s suspicion of discriminatory treatment could not be refuted and thus the home visits carried out as part of the project were unlawful.Footnote 57
Ligue des droits humains (CJEU)
In the context of a preliminary reference procedure initiated by the Belgian Constitutional Court (Cour constitutionnelle/Grondwettelijk Hof) in Ligue des droits humains,Footnote 58 the CJEU was asked to review the validity of Directive 2016/681 which regulates the collection of Passenger Name Record (‘PNR’) data by air carriers and the transfer of such data to the Member States’ law enforcement authorities (hereafter ‘the PNR Directive’).Footnote 59 In its lengthy Grand Chamber ruling, the Court interpreted the Directive in light of Articles 7, 8 and 21(1) of the Charter, laying down a number of obligations and safeguards with which Member States need to comply when implementing the Directive to ensure its consistency with the fundamental rights to privacy, data protection and non-discrimination enshrined in those Charter provisions, respectively.Footnote 60
Interestingly, even though the Belgian court’s request for a preliminary ruling did not originally include any reference to Article 21(1) of the Charter, the CJEU decided of its own motion to extend the scope of its interpretation of the PNR Directive, by also addressing issues of discrimination through the lens of that provision.Footnote 61 More specifically, the CJEU ruled that any processing of PNR data by automated means cannot be based on pre-determined criteria or databases that include factors relating to a person’s race or ethnic origin, political opinions, religion or philosophical beliefs, trade union membership, health, sexual life or sexual orientation, the use of which may result in discrimination.Footnote 62 In this regard, the Court pointed out that the prohibition of discrimination based on such characteristics covers both direct and indirect discrimination, so that any pre-determined criteria must be defined in such a way that, even if formulated in a neutral fashion, their application does not place persons bearing any of those characteristics at a particular disadvantage.Footnote 63 Accordingly, national authorities may not, as a general rule, rely on such protected personal traits to target individuals, but must instead take into account special features in the factual conduct of persons giving rise to reasonable suspicions.Footnote 64 Lastly, the Court emphasised that the national authorities in charge of collecting PNR data need to review individually, through non-automated means, any positive matches resulting from the automatically performed comparison against databases or pre-determined criteria, in order to exclude any discriminatory outcomes generated by automated operations.Footnote 65
Policing systems in Hessen and Hamburg (German Federal Constitutional Court)
Faced with actions challenging two pieces of legislation adopted by the German Länder of Hessen and Hamburg, which authorised the police to process stored personal data for law enforcement purposes through the use of automated data analysis (Hessen Act) or automated data interpretation (Hamburg Act) respectively, the German Federal Constitutional Court (Bundesverfassungsgericht) declared the relevant provisions unconstitutional. Carrying out a detailed proportionality analysis, the German court considered that the two Acts breached the general right of personality enshrined in the German Basic Law, in its manifestation as the right to informational self-determination, due to the broad powers conferred on police authorities.Footnote 66
In the court’s view, the interference with these rights becomes even greater when automated investigation methods increase the risk of objectively innocent persons being targeted for further investigation, thereby amplifying discrimination in the context of police activity.Footnote 67 The German court acknowledged that, although the greater automation of police work has the potential to prevent discrimination, it also harbours specific risks of amplifying it.Footnote 68 The court noted, in this respect, that the severity of the interference with the personality rights of the persons affected may be influenced by the permissible methods of data analysis or interpretation, depending on factors such as the margin of error, the likelihood of discrimination and the difficulty of scrutinising the algorithms involved.Footnote 69 Where an automated investigation method allows the processing of large amounts of data with the aim of detecting statistical correlations, sufficient safeguards must be provided to prevent any inappropriately distorting or discriminatory effects.Footnote 70 Most prominently, the German Constitutional Court highlighted the adverse consequences stemming from the use of self-learning AI systems by police authorities, noting that a specific challenge is posed by the need to prevent the emergence of algorithmic discrimination.Footnote 71
B. Judgments of discrimination awareness
This category comprises judgments where the risk of algorithmic discrimination is identified only in abstracto and mentioned in passing in the courts’ reasoning, without ultimately having any particular influence on the outcome of the respective cases. In other words, although the courts appear to generally acknowledge the discriminatory potential of ADM as well as the importance of appropriate safeguards to prevent the materialisation of such a risk, they refrain from drawing any concrete conclusions therefrom. This was, for instance, the approach adopted by the CJEU in SCHUFA and by the Italian Council of State in the Buona Scuola line of judgments examined below.
SCHUFA (CJEU)
In a case that arose before the Administrative Court of Wiesbaden (Verwaltungsgericht Wiesbaden), concerning an individual who was denied a loan by a financial institution due to a negative credit score established by the German credit reporting agency SCHUFA, the CJEU was asked to render a preliminary ruling on the question whether such a credit scoring practice constitutes ‘automated individual decision-making’ within the meaning of Article 22(1) GDPR.Footnote 72 In the Court’s view, the automated establishment, by a credit information agency, of a probability value in the form of a credit score falls in itself within the scope of Article 22 GDPR, where a third party draws strongly on that value to establish, implement or terminate a contractual relationship with the person concerned.Footnote 73
To reach this conclusion, the Court corroborated its interpretation of Article 22(1) GDPR by pointing to the purpose pursued by this provision with regard to the protection of individuals against the particular risks of automated data processing, including any potentially discriminatory effects arising therefrom on the basis of protected characteristics, as per recital 71 GDPR.Footnote 74 In this regard, the Court also stressed the need for automated decisions to be accompanied by appropriate measures safeguarding the rights of the persons concerned in a way that prevents, among others, the emergence of discriminatory effects.Footnote 75 Nevertheless, the CJEU recognised these risks of discrimination simply by repeating the text of recital 71 verbatim, without reference to any concrete risks posed by the algorithmically generated credit score at issue for the individual affected.Footnote 76
Buona Scuola (Italian Council of State)
In a series of cases brought against the Italian Ministry of Education, the Italian Council of State (Consiglio di Stato) pronounced itself on the legitimacy of an algorithm-based mobility mechanism which formed part of the so-called ‘Buona Scuola’ educational reform and led to the misplacement of thousands of school teachers across Italy.Footnote 77 In particular, the Italian court inferred from EU law three principles that must be taken into consideration when using automated tools in decision-making: firstly, the principle of knowability (principio di conoscibilità), which requires that individuals be aware of the existence of an ADM process and be able to receive meaningful information about the logic involved therein; secondly, the principle of non-exclusivity of an algorithmic decision (principio di non esclusività), pursuant to which individuals have the right not to be subjected to decisions based solely on automated means; and thirdly, the principle of algorithmic non-discrimination (principio di non discriminazione algoritmica).Footnote 78 However, the Council of State concluded that the placement algorithm used by the Ministry did not seem to comply with these principles, especially since it was impossible to understand why the legitimate expectations of the teachers affected were frustrated and how the available posts were allocated.Footnote 79
Most notably, algorithmic non-discrimination was explicitly recognised by the Italian court as a fundamental principle (principio fondamentale) that derives from and is defined by recital 71 GDPR: data controllers must use appropriate mathematical or statistical procedures for profiling; implement technical and organisational measures appropriate to ensure that factors resulting in inaccuracies of personal data are corrected and the risk of errors is minimised; and secure personal data in a manner that takes account of the potential risks involved for the interests and rights of the individuals concerned and prevents discriminatory effects on natural persons on the basis of prohibited grounds (eg, race or ethnic origin, political opinions, religion, etc) or measures having such an effect.Footnote 80 In this context, the Council of State further noted that, even if an algorithm does not constitute the sole ground of a decision, it must not assume a discriminatory character; in such a scenario it would be incumbent on the decision-maker to rectify the input data in order to prevent any discriminatory effects in the decision’s output.Footnote 81
C. Judgments of discrimination silence
Contrary to the two previous categories of judgments, in the rulings classified hereunder the courts refrain entirely from dealing with the discriminatory potential of unfair algorithmic classifications. As such, judgments of this kind may at first sight look like a mismatch with a taxonomy built around non-discrimination considerations. However, it is the existence of a sufficiently plausible risk or suspicion of discrimination resulting from the use of the ADM systems at issue that explains the relevance of these rulings for the purposes of my taxonomy, even though any relevant non-discrimination concerns are eventually overlooked by the judges.Footnote 82 As indicative examples of the judicial approach described above, I will examine the judgments delivered by the French Council of State and the French Constitutional Council with regard to the Parcoursup student admission platform, and the judgment rendered by the District Court of the Hague in a case concerning Bunq’s de-risking practices in banking transactions.
Parcoursup (French Council of State and Constitutional Council)
In 2018, a national platform for admission to university studies was established in France under the name ‘Parcoursup’, enabling secondary school students to pre-register, submit their course preferences and respond to admission offers from various higher education institutions.Footnote 83 Following Parcoursup’s implementation, the French Defender of Rights received several complaints alleging lack of transparency as well as potential discrimination in the selection process due to the ranking of applications on the basis of the candidates’ high school of origin (lycée d’origine), which could work to the detriment of applicants coming from schools with a lower reputation, mostly located in disadvantaged areas.Footnote 84 In response to these allegations, the Defender of Rights addressed recommendations to the Ministry of Education, emphasising, in particular, that the use of the school of origin as a criterion for the selection of candidates may give rise to discrimination if it results in exclusionary outcomes, by favouring certain candidates or disfavouring others on the basis of their school’s geographical location.Footnote 85
Against this background, the National Student Union of France requested three French universities, namely the Universities of the Antilles, Réunion and Corsica, to disclose the algorithms and source codes used in the context of reviewing applications via the Parcoursup platform. After its requests were rejected, the union challenged the rejection decisions in court. In the case against the University of the Antilles, the French Council of State (Conseil d’État) ruled that access to documents relating to the algorithmic processes deployed by higher education institutions is restricted to individual candidates, may be exercised only after the decision concerning them has been made, and concerns only the relevant criteria and methods used for examining their own application.Footnote 86 As regards the cases against the Universities of Réunion and Corsica, albeit reaching the same conclusion, the Council of State decided to refer the issue to the French Constitutional Council (Conseil constitutionnel),Footnote 87 which eventually found that these limitations on the right of access to information infringed neither the right of access to administrative documents nor the right to effective legal protection.Footnote 88 Nevertheless, despite the explicit warning of the Defender of Rights about the discriminatory potential of the algorithmic admission procedure, neither the Council of State nor the Constitutional Council raised any concern in this regard.
Bunq’s de-risking practices (District Court of the Hague)
In a case concerning an individual whose accounts had been blocked by a bank called ‘Bunq’ following an automated due diligence investigation, the District Court of the Hague (Rechtbank Den Haag) was asked to rule on that person’s right under the GDPR to obtain information about the assignment of his risk profile, the reasons behind the monitoring and subsequent blocking of his accounts, as well as the underlying logic of the bank’s transaction screening system.Footnote 89 Automated tools are increasingly used by financial and other institutions to monitor financial transactions and identify ‘suspicious’ transfers for the detection of underlying criminal activities, in compliance with EU and national rules on anti-money laundering and countering the financing of terrorism.Footnote 90 Yet, such tools have been found likely to exclude perfectly innocent customers, particularly those belonging to certain nationalities or ethnic or religious groups, from access to the regular banking system.Footnote 91
In the case at issue, however, no discrimination concerns were raised either by the claimant or by the Dutch court. The Court of the Hague confined itself to rejecting the claimant’s access request, considering that there was no automated decision involved in Bunq’s due diligence process within the meaning of Article 22(1) GDPR: whereas the identification of an allegedly suspicious payment transaction giving rise to further investigation was automatically performed by the bank’s algorithmic monitoring system, all subsequent measures taken, including the blocking of the claimant’s accounts, required the intervention of Bunq’s employees.Footnote 92
4. Deciphering the case law: the interplay between non-discrimination and data protection law
It follows from my proposed case law taxonomy as developed above that the judicial decisions delivered in the field of algorithmic bias can be highly diverse.Footnote 93 This diversity concerns not only the subject matter at issue but also the legal argumentation deployed. What is most remarkable is how differently the judges in each case engage with non-discrimination and data protection considerations at the same time. The lack of a coherent and uniform body of case law in this regard should not come as a surprise, though; it rather reflects the intricate interplay between non-discrimination and data protection law underlying the reasoning of the relevant judgments, which may often be articulated differently by various courts across the EU.
Before examining how these two areas of law may interact in judgments relating to concrete instances of algorithmic bias, it is necessary first to delineate the theoretical framework that explains their interrelationship in general. I will then explore to what degree non-discrimination and data protection considerations may intertwine in judicial reasoning, and on what terms such interplay may be framed by different courts across the EU.
A. The theoretical framework of the interplay
The parallels and close relationship between EU non-discrimination and data protection law have already been noted by scholars.Footnote 94 It has been argued, in particular, that both non-discrimination and data protection law encompass elements of fairness.Footnote 95 This is also acknowledged by recital 71 GDPR, which indicates that fair processing of personal data requires the implementation of measures that prevent any discriminatory effects on natural persons.Footnote 96 Accordingly, apart from being potentially captured by EU non-discrimination rules, data-driven or algorithmic discrimination will further amount to unfair data processing contrary to Article 5(1)(a) GDPR.Footnote 97 Similarly, where the discriminatory outcomes in question derive from biased or otherwise incorrect data, the principle of accuracy under Article 5(1)(d) GDPR will also be violated.Footnote 98 At the same time, since the fairness of data processing is inextricably linked to its lawfulness, as indicated by the wording of Article 5(1)(a) GDPR,Footnote 99 it follows that the non-discriminatory character of data processing also constitutes a precondition for its lawfulness.
The interaction between non-discrimination and data protection law is perhaps more evident when looking at Article 9 GDPR, which in principle prohibits the processing of personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, genetic data, biometric data, data concerning health or data concerning a natural person’s sex life or sexual orientation, subject to certain derogations and safeguards.Footnote 100 By a simple reading of the text of Article 9(1) GDPR, one cannot help but notice that the categories of data considered as ‘sensitive’ largely correspond to the prohibited grounds of discrimination under the EU Equality Directives and Article 21 of the Charter, even though they do not fully overlap with those.Footnote 101 These data are afforded greater protection under data protection legislation because of their inherent risks for the fundamental rights and freedoms of individuals, notably their increased potential to give rise to discrimination in case of misuse.Footnote 102 Sometimes, however, using such data as variables in ADM models may be necessary to prevent discrimination.Footnote 103 Be that as it may, Article 9 GDPR clearly illustrates that certain personal traits are recognised by both non-discrimination and data protection regimes as deserving special legal guarantees. It has even been suggested, in this respect, that the GDPR and a broad understanding of the notion of ‘sensitive data’ could indirectly contribute to expanding the limited scope of the Equality Directives.Footnote 104
Yet, it is in the context of ADM processes that the convergence of non-discrimination and data protection considerations becomes most evident. Since structural inequalities, biases and stereotyping patterns are often reflected in the data used by algorithmic or AI systems as training material, the outputs generated therefrom may further reproduce or magnify existing discrimination against persons belonging to certain vulnerable social groups.Footnote 105 Given the particularities of algorithmic decision-making, any discriminatory or otherwise unfair outcomes produced by ADM systems can spread at a wider scale and at a much quicker pace than human decisions, thereby exacerbating their ramifications for a greater number of people.Footnote 106 Because of these risks, Article 22 GDPR confers on individuals the right not to be subject to automated data processing, unless certain conditions are met and provided that the data controller – responsible for determining the means and purposes of the processing – implements appropriate measures to safeguard the rights and freedoms of the individuals concerned.Footnote 107 As per recital 71 GDPR, these measures are intended, among others, to prevent inaccuracies, errors and discriminatory effects of automated decisions. Such an explicit connection between non-discrimination and data-related considerations is also made, albeit beyond the field of data protection, by Article 10 AI Act, which requires that the datasets used for the training, validation and testing of high-risk AI systems be ‘relevant, sufficiently representative, and to the best extent possible, free of errors and complete’, and that they be subject to data governance and management practices involving the examination for possible biases likely to have a negative impact on fundamental rights or lead to discrimination, as well as appropriate measures to detect, prevent and mitigate any biases identified. Nevertheless, an analysis of these requirements goes beyond the scope and purpose of the present article.
In the context of automated decisions, whether algorithmically generated or AI-based, the interrelation between non-discrimination and data protection considerations usually revolves around the concepts of ‘transparency’, ‘explainability’ or ‘interpretability’ of the ADM systems concerned.Footnote 108 Due to their complex internal functioning, commonly described as a ‘black box’, algorithms and particularly advanced AI machine-learning models are typically not only unknown but also incomprehensible to laypersons, so that understanding how a particular algorithmic output was derived from specific data inputs, and whether it is discriminatory, often proves challenging.Footnote 109 Such opacity of algorithmic tools significantly impedes victims of discriminatory or otherwise unfair outcomes from detecting and proving algorithmic bias within a given automated decision, especially for the purposes of establishing a prima facie case of discrimination.Footnote 110 As a matter of fact, the persons affected by ADM systems may sometimes not even be aware of the occurrence of a discriminatory practice against them.Footnote 111 Thus, absent sufficient knowledge or understanding of the functionality of automated systems, the affected persons cannot successfully contest the relevant decisions concerning them, while judges will be unable to properly assess the facts of the case brought before them.Footnote 112 Accordingly, transparency plays an instrumental role in ensuring non-discrimination in ADM processes, not only by facilitating the prevention and detection of biases but also by enabling the persons affected by biased decisions to contest them.Footnote 113 As such, it proves crucial for safeguarding the individuals’ right to effective judicial redress against automated decisions of a discriminatory or otherwise unfair character.Footnote 114
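The evidentiary point can be made concrete with a deliberately simple sketch. The figures below are invented, and the ‘four-fifths’ threshold shown is a heuristic borrowed from US employment practice rather than an EU legal test; the point is that even this elementary disparity screen presupposes access to decision outcomes broken down by group, which opacity denies to the affected person.

```python
# Hypothetical disparity screen: comparing favourable-outcome rates between
# two groups of decision subjects. Without transparency about who was decided
# what, an affected person cannot even run this elementary check.
def selection_rate(outcomes: list[int]) -> float:
    return sum(outcomes) / len(outcomes)

group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # favourable decisions, comparator group
group_b = [0, 1, 0, 0, 1, 0, 0, 0]  # favourable decisions, suspected group

ratio = selection_rate(group_b) / selection_rate(group_a)
print(f"impact ratio = {ratio:.2f}")  # ratios well below 0.8 suggest disparity
```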
In view of the crucial function of transparency in ADM processes, Article 15(1)(h) GDPR enshrines the right of individuals to be informed about the existence of automated data processing and to have access to ‘meaningful information about the logic involved as well as the significance and the envisaged consequences’ of such processing for them.Footnote 115 Following the CJEU’s ruling in Dun & Bradstreet Austria, this right has explicitly been deemed tantamount to ‘a genuine right to an explanation’ about the functioning of the mechanism involved in the ADM process and the result of that decision.Footnote 116 Accordingly, the persons affected by algorithmic outcomes will be entitled to receive an explanation of the procedure, principles and input data actually used to obtain the specific result concerning them, in a concise, transparent, intelligible and easily accessible form; general information about complex mathematical formulas, such as algorithms, will not suffice.Footnote 117 The disclosure of the most important features of an automated individual decision does not only enable the individuals concerned to effectively exercise the rights conferred on them by the GDPR;Footnote 118 it also contributes to the detection of discrimination, by enabling affected persons to determine to what extent a given ADM process might have been driven by variables overlapping or correlated with prohibited grounds of discrimination.Footnote 119 It is precisely for this reason that the so-called ‘judgments of discrimination silence’ become relevant from the standpoint of non-discrimination law: insofar as these rulings reflect the attempt of individuals to gain insight into the way in which unfair algorithmic outputs are produced, they pave the way for those persons to take follow-on actions by exercising the rights granted to them under non-discrimination legislation, including the right to file a discrimination claim before a national equality body and the right to request compensation or reparation for the loss and damage sustained.Footnote 120
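What such a decision-level explanation might look like can be sketched as follows. The model, weights and variable names are hypothetical; the sketch simply illustrates the kind of concise, input-specific account that Dun & Bradstreet Austria points to, as opposed to the disclosure of a full mathematical formula.

```python
# Hypothetical decision-level explanation for a linear score: list which actual
# inputs drove this particular result, ordered by the size of their effect.
WEIGHTS = {"income": 0.4, "arrears": -1.5, "postcode_risk": -0.8}

def explain(inputs: dict) -> list[str]:
    contributions = sorted(
        ((name, WEIGHTS[name] * value) for name, value in inputs.items()),
        key=lambda pair: abs(pair[1]),
        reverse=True,
    )
    return [f"{name}: contribution {c:+.2f}" for name, c in contributions]

print("\n".join(explain({"income": 1.2, "arrears": 2, "postcode_risk": 0.9})))
# A contribution list of this kind also lets the person check whether a
# variable overlapping with a prohibited ground (eg, postcode as a proxy)
# drove the outcome, which is the link to non-discrimination enforcement
# noted above.
```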
Against this background, it has been convincingly advocated in academic literature that data protection rules, notably the GDPR, and EU non-discrimination law could complement each other for the purpose of addressing any biased or discriminatory effects that may arise from ADM operations.Footnote 121 The potential synergies between the two areas of law concern not only the substance but also, and primarily, enforcement, with the GDPR providing the appropriate mechanisms that may contribute both to the ex ante control and prevention of unfair biases or discrimination, and to their correction ex post where applicable.Footnote 122 By way of illustration, the obligation imposed on data controllers under Article 35 GDPR to carry out a Data Protection Impact Assessment whenever the processing of personal data concerned is likely to result in a high risk to the individuals’ rights and freedoms, including their right to non-discrimination, ensures that controllers assess the likelihood and severity of risks of errors, biases or discrimination before the processing takes place, and take any measures necessary to mitigate such risks accordingly.Footnote 123 In addition, the right to receive information/explanations pursuant to Article 15(1)(h) GDPR, as further clarified by the CJEU in Dun & Bradstreet Austria, constitutes a promising tool which enables the individuals affected by discriminatory or otherwise unfair algorithmic results to be granted a certain degree of transparency about the ADM operations concerning them and thus to establish, as the case may be, a prima facie case of discrimination. Finally, apart from private enforcement through individual rights, the GDPR further provides for collective redress mechanisms and grants substantial public enforcement powers to DPAs, such as the conduct of audits and investigations and the imposition of hefty fines, thus compensating for the existing shortcomings of EU non-discrimination law in this regard.Footnote 124 This is also why close collaboration between national equality bodies and DPAs is deemed essential in the field of algorithmic bias.Footnote 125
B. The degree of the interplay in judicial reasoning
Having set the theoretical framework about how non-discrimination and data protection considerations may conceptually and legally converge, I will now examine how this interplay may be practically reflected in court cases of algorithmic bias. It is submitted from the outset that the two areas of law do not necessarily interact in a fixed and balanced manner in judicial reasoning. Instead, as my suggested taxonomy of judgments indicates, the degree of their interplay will usually be dynamic and sometimes even totally asymmetrical.
On the one hand, the courts’ reasoning in cases of algorithmic discrimination seems to predominantly revolve around arguments and concepts of non-discrimination law, whether of EU or national origin or both, leaving aside any data protection concerns. For instance, in Svea Ekonomi, the Finnish Non-Discrimination and Equality Tribunal reached its conclusion about the discriminatory nature of the credit scoring system in question on the basis of the personal characteristics protected under the Constitution of Finland, the Finnish Non-Discrimination Act and the Act on Equality between Women and Men, while also referring to Directive 2004/113/EC and its interpretation by the CJEU in Test-Achats.Footnote 126 In a similar vein, the District Court of Bologna in Deliveroo interpreted the Italian Legislative Decree transposing Directive 2000/78/EC in light of the CJEU’s case law to establish that the discrimination on grounds of trade union activity that Deliveroo’s riders suffered through the platform’s scoring system is covered by the protected ground of personal beliefs; to uphold the legal standing of associations or organisations in cases of discrimination against an unspecified group of workers; and to confirm that discrimination can be found even in the absence of an identifiable complainant.Footnote 127 In both cases mentioned above, the national courts seised of the disputes approached them solely from the perspective of non-discrimination rules, without considering the potential relevance of Article 22 GDPR and the consequences arising therefrom.Footnote 128 Given that the source of discrimination in such instances lies precisely in the use or omission of data relating to certain protected personal characteristics, any relevant data protection considerations will most likely be completely ‘absorbed’ by a non-discrimination law analysis and thus eventually become ‘invisible’ in the judicial reasoning.
As concerns judgments pertaining to cases of unfair algorithmic differentiation, on the other hand, the interplay between non-discrimination and data protection norms is far more complex and nuanced. Given that, in the absence of any (proven) correlation with legally protected attributes, instances of algorithmic bias are not captured by non-discrimination legislation, courts across the EU will be mainly called upon to address questions about specific data protection aspects of the ADM processes deployed in the cases concerned, notably regarding the (lack of) transparency of the algorithmic systems used.Footnote 129 Thus, recourse to data protection rules seems to be the only option of legal redress available under EU law for individuals affected by this kind of biased decision. Nevertheless, a certain degree of interaction with non-discrimination considerations still exists in some of these cases, depending on the context and the distinction drawn between judgments of discrimination reflection, those of discrimination awareness, and those of discrimination silence.
In judgments of discrimination reflection, the courts tend to expressly engage with the equality concerns raised in relation to the specific ADM processes in the cases at hand, only to ultimately subsume them within the broader framework of data protection law. One can think, for example, of the ruling in SyRI where the District Court of the Hague, albeit expressly acknowledging the risk of discrimination emerging from the use of the algorithmic system at issue, addressed that risk through the lens of its principal data protection/privacy line of argumentation.Footnote 130 As regards judgments of discrimination awareness, non-discrimination assumes a merely marginal role, by corroborating the data protection framework only at the level of abstract principles. This is exactly what happened, for instance, in SCHUFA where the CJEU briefly referred to the discriminatory effects that automated decisions may entail in general to support its interpretation of Article 22 GDPR.Footnote 131 As for the category of judgments of discrimination silence, courts across the EU will focus solely on data protection issues, such as transparency concerns, while completely disregarding any potential discrimination risks arising therefrom, as was the case in the ruling delivered by the District Court of the Hague in Bunq.Footnote 132
C. The framing of the interplay by courts
A closer look at the judicial decisions listed in this paper enables one to observe that the interplay between non-discrimination and data protection considerations in the context of algorithmic bias in ADM processes is framed in various ways by different courts across the EU. Yet, certain recurrent themes seem to exist: the interplay between the two legal regimes is mostly anchored in the principles of lawfulness/fairness of data processing and in the notion of transparency, often combined with the right to effective judicial protection.
As a first point, it is noted that the interrelation between non-discrimination considerations and the lawfulness/fairness of data processing is explicitly acknowledged by the CJEU in Ligue des droits humains, where the Court stated that the Member States’ competent authorities must ensure the lawfulness of the automated processing of PNR data, in particular its non-discriminatory nature.Footnote 133 Similarly, such a link underlies the reasoning of the District Court of the Hague in SyRI, where the concerns about the discriminatory potential of the SyRI algorithm were raised in the context of assessing whether the conditions of Article 8(2) ECHR, which reflect the principle of lawful processing, were fulfilled.Footnote 134
Accordingly, given the overlaps between non-discrimination and data protection principles, the finding of algorithmic discrimination by a court or a national equality body does not preclude the simultaneous or subsequent finding of data protection violations by a DPA, or vice versa. Albeit not in a judicial context, this was the case, for instance, with the use of a categorical upper age limit in Svea Ekonomi’s credit assessment system, which was not only deemed discriminatory by the Finnish Non-Discrimination Tribunal but was also found unlawful by Finland’s Data Protection Ombudsman. The latter took the view that the mere age of clients ‘does not describe their solvency, willingness to pay or ability to deal with their commitments’ and ordered Svea Ekonomi to change its data processing practices.Footnote 135 Another notable example in this regard is offered by the decision of the Dutch DPA on the so-called ‘childcare benefits scandal’ in the Netherlands, which concerned the deployment of a self-learning algorithm by the Dutch Tax Administration to assess childcare benefit applications and combat fraud. The DPA concluded that the way in which the Tax Administration had processed the nationality data of childcare benefit applicants for years amounted to a serious breach of the GDPR and was also discriminatory, since it introduced an unjustified distinction on the basis of nationality and thus infringed the principle of fairness under Article 5(1)(a) GDPR.Footnote 136 Dealing with the follow-up discrimination claims brought by the victims of the benefits case, the Netherlands Institute for Human Rights also concluded that the selection criteria used by the Tax Administration for the discontinuation and recovery of childcare benefits from the complainants indirectly discriminated against them on the basis of their foreign origin.Footnote 137
Turning to transparency-related concerns, these constitute perhaps the most prominent point of convergence between non-discrimination and data protection, recurring as a leitmotif in the courts’ rulings examined in this paper. For instance, the District Court of Bologna pointed out in Deliveroo that the failure of the delivery company to clarify the concrete mechanism of its algorithm and the specific calculation criteria it deployed precluded a more in-depth judicial examination of the system’s prejudicial effects on the riders concerned. Yet this remark related only to the defendant company’s failure to discharge its burden of rebutting the prima facie case of discrimination brought against it and thus did not prevent the Italian court from upholding the existence of algorithmic discrimination in that case.Footnote 138 By contrast, courts across the EU have declared on several occasions that they were unable to properly assess the discriminatory potential of certain ADM systems due to their lack of sufficient transparency. This was the case, most notably, in SyRI,Footnote 139 GALOP II,Footnote 140 as well as in the Buona Scuola line of judgments.Footnote 141
A clear illustration of the tripartite relation between transparency, effective judicial protection and discrimination-related concerns, as explained above, is provided by the CJEU’s reasoning in Ligue des droits humains. The Court emphasised there that the individuals concerned must be able to understand how algorithmic programs and the criteria used therein work, so that they can decide with full knowledge of the relevant facts whether or not to exercise their right to judicial redress, in order to call into question, as the case may be, the discriminatory nature of the said criteria.Footnote 142 In the CJEU’s view, this is particularly so with regard to AI-driven self-learning tools, whose opacity renders it impossible to understand the way decisions are taken and is thus likely to compromise the right of the persons concerned to an effective judicial remedy under Article 47 of the Charter, especially when it comes to challenging the discriminatory nature of the results obtained.Footnote 143 Similarly, in the case of the policing systems in Hessen and Hamburg, the German Constitutional Court recognised the difficulties in scrutinising complex algorithmic tools, as well as the far-reaching consequences arising therefrom for both the legal protection of the individuals who suffered from erroneous assessments and the administrative oversight exercised by state authorities.Footnote 144 Such a connection between the inability to understand how a certain automated decision has been reached by a given ADM system and the right of individuals to defend themselves against that decision was also drawn by the District Court of the Hague in SyRI.Footnote 145 Interestingly enough, although the plaintiffs in that case had also argued for a violation of their right to a fair trial under Article 6 ECHR, the Dutch court, having already established a violation of Article 8 ECHR, did not address this claim.Footnote 146
5. Concluding remarks
Reading the existing case law concerning instances of algorithmic bias in ADM operations through the lens of non-discrimination law reveals a highly fragmented EU judicial landscape. The great diversity that permeates the interplay between non-discrimination and data protection considerations can be attributed primarily to the specific claims brought forward by the parties in the course of legal proceedings, which largely determine whether, and to what extent, courts will engage with one legal regime, the other, or both at the same time.Footnote 147 Taking a step back, though, the plaintiffs’ choice as to the exact articulation of their legal argumentation inevitably depends on their ability to successfully discharge the burden of proof required by each set of rules. As regards non-discrimination law, this burden proves disproportionately high: apart from the existing barriers to discrimination-related litigation in general,Footnote 148 the granularity and opacity of ADM processes make it particularly hard to identify and prove the discriminatory potential of the decisions concerned.
Such obstacles, along with the inapplicability of non-discrimination law to instances of unfair algorithmic differentiation beyond legally protected attributes, may explain why victims of algorithmic bias tend to frame the harm they suffered mostly in data protection terms when seeking justice before courts, reserving only a secondary role for non-discrimination considerations, if any at all.Footnote 149 That said, as long as the latter are organically integrated into data protection norms, such as those relating to fair and lawful data processing or transparency, this approach should be welcomed, since it respects the particularities of both non-discrimination and data protection law while at the same time reflecting their complementarities. Conversely, where equality concerns are completely overshadowed by data protection law and ‘reduced’ to rather technical data-related issues, the social functions performed by non-discrimination law remain unaddressed, thereby disregarding the objectives of substantive equality that the latter is meant to accomplish.Footnote 150
In other words, data protection considerations alone fail to properly acknowledge and condemn the social stigma and the disadvantage suffered by victims of algorithmic discrimination.Footnote 151 To quote Hakkarainen, by framing discrimination in purely data-centric terms, the GDPR and data protection law more broadly treat it merely as a ‘mechanical question about abusive data processing’ and, consequently, tend to ‘lose sight of the human behind the data’.Footnote 152 Therefore, leaving aside clear-cut cases of algorithmic discrimination, it seems to me that the hybrid paradigm represented by the so-called ‘judgments of discrimination reflection’, which organically combine aspects of both non-discrimination and data protection law, could serve as the appropriate benchmark guiding the interplay between the two sets of rules in future cases of algorithmic bias that may reach the courtrooms.
Acknowledgements
An earlier draft of this article was presented at the Third Max Planck European Law Group Conference ‘Equality in an Unequal Europe? Assessing Anti-Discrimination Law in Context’ held in Berlin in October 2024, and at the ELU-S Annual Conference ‘European Law Unbound: What Kind of Europe Can We Reach For?’ that took place in Prague in September 2025. I extend my gratitude to the organisers and participants of these conferences for giving me the opportunity to discuss my ideas and receive feedback. I would also like to particularly thank Professor Christa Tobler for her insightful suggestions during the drafting stage of this article as well as the two anonymous reviewers for their valuable comments. The usual disclaimer applies.
Funding statement
None.
Competing interests
The author has no conflicts of interest to declare.