
The Risks of Trustworthy Artificial Intelligence: The Case of the European Travel Information and Authorisation System

Published online by Cambridge University Press:  13 May 2022

Charly Derave*
Perelman Centre for Legal Philosophy, Faculty of Law and Criminology, Université Libre de Bruxelles (ULB), Brussels, Belgium
Nathan Genicot
Perelman Centre for Legal Philosophy, Faculty of Law and Criminology, Université Libre de Bruxelles (ULB), Brussels, Belgium
Nina Hetmanska
Perelman Centre for Legal Philosophy, Faculty of Law and Criminology, Université Libre de Bruxelles (ULB), Brussels, Belgium
*Corresponding author. Email:


In recent years, the European Union (EU) has strongly promoted a human-centric and trustworthy approach to artificial intelligence (AI). The 2021 proposal for a Regulation on AI that the EU seeks to establish as a global standard is the latest step in the matter. However, little attention has been paid to the EU’s use of AI to pursue its own purposes, despite its wide use of digital technologies, notably in the field of border management. Yet, such attention allows us to confront the highly moral discourse that characterises EU institutions’ communications and legislative acts with a concrete example of how the promoted values are realised “on the ground”. From this perspective, this paper takes the case study of the European Travel Information and Authorisation System (ETIAS), an EU information technology system (planned to become operational in May 2023) that will provide travel authorisation to visa-exempt third-country nationals using a profiling algorithm. The paper shows, on the one hand, that ETIAS constitutes another piece in the massive infrastructure of digital surveillance of third-country nationals that the EU has been building for years. On the other hand, ETIAS’s algorithmic process is shown to be an instrument of differential exclusion that could well have an adverse impact on certain groups of foreign travellers. Ultimately, this paper argues that far from falling outside the scope of the trustworthy approach to AI championed by the EU, ETIAS – and more broadly the systematic risk evaluation predominant in the EU’s use of AI – is a constitutive part of it.

© The Author(s), 2022. Published by Cambridge University Press

I. Introduction

Ethical approaches to artificial intelligence (AI) in general and the concept of trustworthy AI in particular have been in vogue in recent discussions on the social challenges posed by new technologies. The notion of trustworthy AI has been endorsed by the European Union’s (EU) institutions in their discourse and legislative practices. Nevertheless, little attention has been paid to the realisation of these values by the very same institutions in their use of technological solutions to tackle social problems.

In its communication of 8 April 2019, the European Commission declares that the European approach to AI should contribute to building confidence in AI: “[T]rust is a prerequisite to ensure a human-centric approach to AI: AI is … a tool that has to serve people with the ultimate aim of increasing human well-being”.Footnote 1 This Communication is part of the broader EU strategy for AI,Footnote 2 according to which Europe has to champion an approach that places people at the centre of technological progress, which should benefit society as a whole. Drawing on the recommendations of the High-Level Expert Group on AI (HLGAI),Footnote 3 the Commission details several key requirements that constitute “human-centric AI”: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability. In the White Paper on AI,Footnote 4 the Commission calls for an “ecosystem of trust”, noting that although AI applications are already subject to EU legislation (on fundamental rights, consumer protection, etc.), “some specific features of AI (e.g. opacity) can make the application and enforcement of this legislation more difficult”.Footnote 5 The Commission’s proposal for a legal framework on AI in April 2021 marks a significant step in the European AI strategy.Footnote 6 The proposal follows a risk-based approach in which the level of regulation depends on the level of risk posed by AI systems.Footnote 7 Through this proposal, the EU confirms its desire to distinguish itself from other international powers such as the USA and China by becoming “a global leader in the promotion of trustworthy AI” and spearheading “the development of new ambitious global norms”.Footnote 8

Interestingly, in parallel with its efforts to create an ecosystem of trust and to develop a human-centric approach to AI, the EU has also shown interest in the use of digital technologies and AI applications in the implementation of its own migration and external border control policies.Footnote 9 Indeed, since the 1990s, the EU has been building a vast digital infrastructure that includes various information systems and databases with the aim of expanding the surveillance and control of the movement of third-country nationals. In the last decade, the trend towards the “digitisation of control”Footnote 10 has been clearly confirmed, notably in the Communication “Smart Borders – Options and the Way Ahead”, in which digital technologies are presented as essential to ensuring “swift and secure border crossings”.Footnote 11

Among the latest major developments in the field is the adoption by the European Parliament and the Council of the Regulation 2018/1240 establishing a European Travel Information and Authorisation System (ETIAS)Footnote 12 – an information technology (IT) system that will provide travel authorisation to visa-exempt foreigners.Footnote 13 ETIAS imposes a new condition of entry on non-EU citizens who are currently able to enter European territory without any prior authorisation. When ETIAS becomes operational (which is planned for May 2023),Footnote 14 every visa-exempt traveller will be compelled to hold an authorisation,Footnote 15 issued on the basis of an assessment of the presumed security, illegal immigration and/or high epidemic risks. This risk assessment will notably be performed by a profiling algorithm, called “screening rules” in the Regulation’s nomenclature. The establishment of these screening rules is a major milestone in the history of the management of external borders and the use of digital technologies, as it constitutes the first European automated risk-profiling system that will be used in migration management.Footnote 16

Taking ETIAS as a case study permits us to examine the human-centric AI narrative in light of the EU’s own use of AI to pursue its own purposes. The highly moral discourse that characterises EU institutions’ communications and legislative acts on the matter becomes more precise when confronted with a concrete example of how the promoted values are realised “on the ground”. ETIAS seems to be a well-suited object of study, since it will be one of the biggest automated information systems using profiling techniques developed by the EU with far-reaching consequences for foreign travellers, who are nonetheless not required to possess a visa to cross the EU’s external borders.

Notwithstanding the above, it should be stressed that it remains an open question whether ETIAS’s screening rules and their underlying profiling algorithm will actually be based on AI. The ETIAS Regulation only provides a framework, without explaining in detail how the algorithm will be established. The newly adopted delegated decision by the EU Commission does not give much information about the algorithmic process,Footnote 17 and it cannot be taken for granted that machine learning algorithms – with which AI is often associated – will be implemented. Because it is not clear that AI will be used in ETIAS, one might ask whether it is a well-suited case study for assessing the EU’s narrative on AI in light of its actions. Still, we contend that it is, for at least two reasons.

Firstly, as we will address later in our analysis, there is evidence from the EU itself that machine learning techniques may be used in ETIAS, whether from the outset or at a later stage. Secondly, the very notion of AI,Footnote 18 which is so fashionable today (including within EU institutions), is often used in a vague and imprecise way. Indeed, legal and political approaches to AI do not operate with exactly the same meaning of AI as the one computer engineers and data scientists place under the umbrella of “artificial intelligence”. In this regard, some experts argue that the definition of AI set out by the EU Commission in Annex I of the proposed Regulation on AI is so broad that any computer program could qualify as an AI system under its provisions.Footnote 19 Interestingly, Article 83 of this proposal excludes from its scope AI systems that are components of large-scale IT systems in the Area of Freedom, Security and Justice – a list that includes ETIAS.Footnote 20 This demonstrates that the Commission itself assumes that ETIAS is likely to constitute an AI system within the meaning of the proposed Regulation on AI; otherwise, there would be no need for an exception. Thus, regardless of the exact functioning of the screening rules, we argue that ETIAS is undeniably part of the EU’s narrative on AI.

This paper is divided into two main parts. In the first (Section II), we briefly depict the digitalisation of the EU’s external borders with an overview of the six large-scale information systems that have been set up by EU institutions as significant tools for their migration policy and border management. We then turn to our case study by briefly developing the historical context that gave rise to ETIAS. We explain the decision and operational processes of ETIAS that specifically aim at enhancing surveillance at external borders. In the second part (Section III), we focus on the profiling algorithm that is set forth in Article 33 of ETIAS Regulation. Based on the information available to date, we show how the algorithm would work once operational. Promoted as a tool of rationalisation and optimisation of border control, ETIAS may well, on the contrary, produce arbitrary and groundless results, as we intend to show by drawing on the seminal paper of Barocas and Selbst.Footnote 21 We argue that ETIAS is an instrument of differential exclusion, playing a key role in European border management.

By way of conclusion (Section IV), we tackle the ambiguities that lie behind the supposedly naïve European discourse on AI. Indeed, we show that the risk-based approach (applied to the “undesirable” foreign population) constitutes a prerequisite of the “ecosystem of trust” (promoted among European citizens).

II. A new piece in the jigsaw of the EU’s large-scale information systems

1. The digitalisation of the EU’s external borders

The digitalisation of the EU’s external borders can be traced back to the achievement of the EU’s Area of Freedom, Security and Justice. The resulting lifting of internal border controls and subsequent loss of Member States’ sovereignty over their own borders were compensated for by increased cooperation at the external borders, particularly in the fight against illegal immigration.Footnote 22 This development reached new heights following 9/11 and the terrorist attacks perpetrated in Madrid in 2004 and in London in 2005. Migration policies became the main lever of the “war on terror”, shedding light on an “intertwining between immigration and security” or a “migration-risk nexus”.Footnote 23 Borders have become a privileged site for public investment in arms and high-tech industries. Drones, thermal and facial recognition cameras and biometric identification tools have been tested and deployed primarily on migrant populations, as well as a number of other technologies aiming at automating the treatment of visa, asylum or residence permit applications.Footnote 24 The technological change has been accompanied by a conceptual one: human mobility has been represented in terms of manageable and dynamic flows (motilitiesFootnote 25) and captured through novel technologies and digitalisation that allow a wide range of non-EU citizens’ data to be collected and stored in large-scale databases with multiple purposes.

To date, there are six EU databases on third-country nationals in the EU (see Figure 1). Three of them are operational: the Schengen Information System (SIS II), which enables national authorities to have access to alerts on persons and property during border controls;Footnote 26 Eurodac, established in the framework of the Dublin system,Footnote 27 which registers asylum seekers’ and undocumented people’s fingerprints with the aim of their identification, as well as the determination of the Member State responsible for examining an asylum application;Footnote 28 and the Visa Information System (VIS), which collects all data relating to any application for a short-stay visa, a long-stay visa and a residence permit in order to implement the common visa policy.Footnote 29 In the wake of the so-called “migration crisis” of 2015 and the terrorist attacks in Paris in 2015 and Brussels in 2016, three other large-scale EU information systems (which are not yet functional) have been created: the Entry/Exit System (EES), which will record a range of data of any foreigner crossing the EU’s external borders for the purpose of a short stay (no more than ninety days) and pursue more than a dozen objectives;Footnote 30 ETIAS, which will be studied in detail in what follows; and the European Criminal Records Information System for Third-Country Nationals (ECRIS-TCN), which collects information on the criminal records of non-European citizens.Footnote 31

Figure 1. Overview of the European Union (EU) information systems (our illustration). BMS = biometric matching service; CIR = common identity repository; ECRIS-TCN = European Criminal Records Information System for Third-Country Nationals; EES = Entry/Exit System; ESP = European search portal; ETIAS = European Travel Information and Authorisation System; IT = information technology; MID = multiple-identity detector; SIS II = Schengen Information System; VIS = Visa Information System.

This patchwork of six successively created databases considerably diverges from the initial intent of the EU Parliament, which planned to create a single EU information system in the 1990s.Footnote 32 Indeed, the Commission itself has described the resulting compartmentalisation as an obstruction.Footnote 33 To overcome what was perceived as a weakness, the new concept of interoperability has made its way from computer science to the politics of European data management. Standardised in an International Organization for Standardization (ISO) norm,Footnote 34 interoperability refers to the ability of several functional units – such as information systems – to share and exchange data. Two regulations implementing interoperability were enacted at the EU level. The first one deals with borders and visas, whereas the second one is concerned with police and judicial cooperation, asylum and immigration.Footnote 35 Their common objectives cover the common visa and asylum policies, the fight against illegal immigration and the management of security and public order. According to these two regulations,Footnote 36 interoperability takes the form of four components: the European search portal (ESP),Footnote 37 the shared biometric matching service (BMS),Footnote 38 the multiple-identity detector (MID)Footnote 39 and the common identity repository (CIR). The CIR will be the core of the EU digital infrastructure in the future as it will contain a personal record of every single data subject registered in the EES, VIS, ETIAS, Eurodac and/or ECRIS-TCN.Footnote 40 Moreover, it is worth noting that a central repository for reporting and statistics (CRRS) is established “for the purposes of supporting the objectives of the EES, VIS, ETIAS and SIS … and to provide cross-system statistical data and analytical reporting for policy, operational and data quality purposes”.Footnote 41

The progressive development of the various databases detailed here, from the first SIS to the latest techno-legal interoperability devices, is marked by the same trend of expansion. The primary purpose of a database is to ease the application of a specific legal mechanism, such as the “Schengen acquis” for the SIS II or the identification of the Member State responsible for examining an asylum application for Eurodac. Then, new objectives are added to the initial one(s), notably the fight against irregular immigration or law enforcement. As a result, the categories of personal data collected and processed are extended, thus encompassing biometric data in particular. Finally, with interoperability, the collection and processing of personal data serve yet other purposes, which were not initially included in the respective legal instruments, such as to facilitate the identification of unknown persons.Footnote 42

As Niovi Vavoula rightly notes, the raison d’être of this expansion mirrors the paradigm of filling in “information gaps” and can be qualified as “data greediness”.Footnote 43 The current landscape of information systems effectively covers all groups of third-country nationals, providing various types of data on their status and mobility. The Foucauldian metaphor of the panopticon has often been mobilised to account for this “emerging know-it-all surveillance system, whereby authorities would be able to achieve total awareness of the identities of the individuals”.Footnote 44 Recourse to Foucauldian theory is surely well grounded in the analysis of migration control devices. Nevertheless, ETIAS cannot be reduced to just another window in the prison officer’s tower or a simple tool of mass surveillance. It also speaks to a recent paradigm shift from migration control (reactive, focusing on concrete individuals) to migration management (proactive, focusing on potential migrant populations). In this sense, ETIAS is plainly part of the theoretical framework developed in Foucault’s Discipline and Punish and Security, Territory, Population. Both of these roles – a tool of mass surveillance of foreigners and an instrument of individualised population management – will be explored in the pages that follow.

2. ETIAS: background and constitutive elements

Developing an automated European travel authorisation for visa-exempt third-country nationals was first considered in 2008 when the Commission launched a study on establishing an Electronic System of Travel Authorisation, called “EU-ESTA” in reference to its US counterpart.Footnote 45 This project gave rise to divergent opinions. On the one hand, the European Parliament, following the lead of the European Data Protection Supervisor,Footnote 46 questioned the necessity and proportionality of such a system and criticised the underlying assumption that “all travellers are potentially suspect and have to prove their good faith”.Footnote 47 On the other hand, other EU institutions unequivocally favoured preventative and anticipatory actions based on the use of new technologies and the massive collection of data for risk assessment.Footnote 48 A security narrative (“intelligence-led policies”Footnote 49) was used to ground the need for information. In 2011, the auditing firm PricewaterhouseCoopers (PwC), commissioned to study the feasibility of EU-ESTA, concluded that the development of such a system “would not … be responding to fully unambiguous, well-identified and fully understood needs and problems at this stage”.Footnote 50 Eventually, the Commission gave up on the project on the grounds that the “potential contribution [of an EU-ESTA] to enhancing the security of the Member States would neither justify the collection of personal data at such a scale nor the financial cost and the impact on international relations”.Footnote 51

However, it took less than five years for the EU executive to put a draft travel authorisation for visa-exempt foreign travellers back on the table. In a 2016 Communication, the Commission asserted the need to strengthen external border controls, citing (the migratory consequences of) the Syrian conflict and the threat to internal security arising from the terrorist attacks in Paris in 2015 and Brussels in 2016.Footnote 52 The Commission argued that there were “shortcomings” in EU information systems, notably “gaps in the EU’s architecture of data management”.Footnote 53 This, coupled with the visa liberalisation policy, is “particularly relevant” for the monitoring of external (land)Footnote 54 borders.Footnote 55 This time, the European Parliament warmly welcomed the Commission’s initiative and invited it to propose a text.Footnote 56

On 16 November 2016, the Commission tabled a proposal for a Regulation establishing ETIAS.Footnote 57 Reading the proposal raises at least two major concerns. Firstly, although the Commission stated that it was aware of the major impact ETIAS is likely to have on human rights, no ex ante assessment (neither in general nor with regard to human rights in particular) was carried out by its services. There was a feasibility study conducted by PwC between June 2016 and October 2016,Footnote 58 but contrary to what the Commission claimed in the proposal, it focused exclusively on the feasibility of establishing ETIAS. This ignores the terms of the 2016 Interinstitutional Agreement on Better Law-Making, which obliges the Commission to carry out an impact assessment where its initiatives, such as the one at stake here, are likely to have a significant economic, environmental or social impact.Footnote 59 Secondly, the very fact that the feasibility study was again entrusted to PwC – the same company that was charged with the study on EU-ESTA five years earlier – raises some questions. The ubiquity of PwCFootnote 60 in the field of external border and migration management reveals a worrying privatisation of policies that should remain under the public sector’s roof, especially in situations where people’s lives and rights are at stake. This privatisation reaches its paroxysm here, as the Commission seems to have drawn directly from PwC’s study as regards ETIAS’s operational process and IT architecture.Footnote 61

Although these concerns were raised in the European Parliament’s Committee on Civil Liberties, Justice and Home Affairs (LIBE Committee),Footnote 62 to which the legislative proposal was submitted for discussion and vote, they were not followed up, and the Regulation was adopted by the co-legislators in the summer of 2018.Footnote 63 Hence, serious shortcomings can be identified in the democratic process, in which “building trust” does not seem to have been of concern.

a. ETIAS’s general architecture

The contextual overview shows that ETIAS and its legal framework are fully a product of the long-lasting but still ongoing digitalisation of external borders detailed above. ETIAS’s primary objective is to determine whether the short stay of visa-exempt foreigners on Member States’ territory “would pose a security, illegal immigration risk or a high epidemic risk”.Footnote 64 But there is more: ETIAS also aims at making the prevention, detection and investigation of terrorist offences or of other serious criminal offences more effective.Footnote 65 These are indisputable examples of the migration–security nexus already identified.

ETIAS consists of a large-scale information system (managed by the European Agency for the Operational Management of Large-Scale IT Systems in the Area of Freedom, Security and Justice – eu-LISA), a Central Unit (created within the European Border and Coast Guard Agency – Frontex) and National Units (which Member States have to set up).Footnote 66 The information system (Figure 2) provides the technical infrastructure that supports ETIAS,Footnote 67 while the units – both Central and National – are responsible for the manual processing of applications submitted by travellers.Footnote 68 This multiplication of stakeholders entails a fragmentation of responsibilities between EU agencies and Member States. Therefore, accountability, which is fundamental in the AI trust-building process, could well be undermined.

Figure 2. Overview of the European Travel Information and Authorisation System (ETIAS) information system (our illustration). ECRIS-TCN = European Criminal Records Information System for Third-Country Nationals; EES = Entry/Exit System; EU = European Union; eu-LISA = European Agency for the Operational Management of Large-Scale IT Systems in the Area of Freedom, Security and Justice; SCI = secure communication infrastructure (fourth-generation Trans European Services for Telematics between Administrations – TESTA-ng); SIS II = Schengen Information System; SLTD = Interpol Stolen and Lost Travel Document database; SR = ETIAS screening rules; TDAWN = Interpol Travel Documents Associated with Notices database; VIS = Visa Information System; WL = ETIAS watchlist.

b. ETIAS’s decision process

ETIAS Regulation stipulates that every visa-exempt third-country national who wishes to enter a Member State’s territory must complete an online form containing more than twenty personal data fields,Footnote 69 such as common identity data (including nicknames and nationality), contact details (including domicile or place of residence), travel documents, place of birth and parental first name(s), level of educationFootnote 70 and current profession.Footnote 71 The applicant is also required to answer three specific questions relating to any previous criminal convictions, stays in a war or conflict zone and orders to leave the territory of a Member StateFootnote 72 or return decisions to which they could have been subject. Once the application is submitted and declared admissible, the information system automatically creates an application file and records and stores the IP address from which the application form is submitted.Footnote 73 These data are kept for the period of validity of the travel authorisation (which is in principle set at three years) or for five years from the last decision refusing, cancelling or revoking the authorisation.Footnote 74

ETIAS then examines the application, based on automated and individual processing, to determine whether there is a positive answer (“hit”) or, to put it differently, whether non-EU citizens pose a risk.Footnote 75 If the assessment processing does not result in a hit, the travel authorisation is automatically issued to the applicant.Footnote 76 If, on the contrary, there is a hit, the application is first examined by the Central Unit, which is responsible for checking whether the personal data in the application file correspond to the data that triggered the hit: the aim is to remove the “false positives” caused by data errors.Footnote 77 If the identity of the applicant is confirmed or doubts remain, the application is sent to the National Unit of the Member State responsible for manual processing,Footnote 78 which decides whether to issue the travel authorisation.Footnote 79 Where the national authority refuses, the applicant has the right to appeal “in the Member State that has taken the decision on the application and in accordance with the national law of that Member State”.Footnote 80 Nevertheless, the effectiveness of this right seems to be undermined by the lack of a detailed motivation for the decisionFootnote 81 and practical obstacles to accessing European judicial bodies from abroad.
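The triage logic just described can be sketched schematically. The following is a minimal illustration only, not the actual implementation (which is unpublished); all names, data structures and the identity-check callable are our own assumptions.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Callable, List


class Outcome(Enum):
    ISSUED_AUTOMATICALLY = auto()  # no hit: authorisation issued without human intervention
    MANUAL_REVIEW = auto()         # hit confirmed or doubtful: the National Unit decides
    HIT_DISCARDED = auto()         # false positive removed by the Central Unit


@dataclass
class Application:
    data: dict                                   # the applicant's form data
    hits: List[str] = field(default_factory=list)  # records that triggered a hit


def triage(app: Application,
           identity_matches_hit: Callable[[Application], bool]) -> Outcome:
    """Hypothetical triage of an admissible application, mirroring the
    flow described above: automatic issuance absent a hit, otherwise a
    Central Unit false-positive check before manual National Unit review."""
    if not app.hits:
        return Outcome.ISSUED_AUTOMATICALLY
    # Central Unit check: does the application file really correspond
    # to the data that triggered the hit?
    if identity_matches_hit(app):
        return Outcome.MANUAL_REVIEW  # the responsible National Unit issues or refuses
    return Outcome.HIT_DISCARDED
```

The sketch makes visible how narrow the human role is: manual assessment only enters the process once an automated hit has survived the Central Unit's verification.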

For the sake of completeness, it should be noted that carriers (air, sea or coachFootnote 82; before boarding) as well as border authoritiesFootnote 83 (at the border crossing point) consult the ETIAS information system to verify whether foreign travellers are in possession of a valid travel authorisation, since it is a new condition of entry into EU territory.Footnote 84

c. ETIAS’s operational process

What remains to be described is the operational process of the ETIAS information system or, in other words, the operations it entails when it examines an application. These operations are of three kinds.

Through the first operation, ETIAS launches “a query by using the ESP to compare” the applicant’s data with those “contained in a record, file or alert recorded in an application file stored in the ETIAS central system, SIS, the EES, the VIS, Eurodac, ECRIS-TCN, Europol data and in the Interpol SLTD [Stolen and Lost Travel Document] and TDAWN [Travel Documents Associated with Notices] databases”.Footnote 85 In other words, ETIAS automatically checks whether, for example, a person has been denied entry under the EES or is subject to an SIS alert, in which case there is a match. If the identity of the applicant is confirmed by the Central Unit, the National Unit makes a sovereign assessment of the security and/or illegal immigration risk(s) that the applicant could pose and issues or refuses the travel authorisation accordingly.Footnote 86

Through the second operation, ETIAS “compares” the applicant’s data with the ETIAS watchlist,Footnote 87 which “shall consist of data [entered by Europol or the Member States] related to persons who are suspected of having committed or taken part in a terrorist offence or other serious criminal offence or persons regarding whom there are factual indications or reasonable grounds, based on an overall assessment of the person, to believe that they will commit a terrorist offence or other serious criminal offence”.Footnote 88 This list, for which the framework has not yet been adopted,Footnote 89 is based on a logic of pre-emption, whereby persons who are only suspected of committing an offence in the future will potentially be denied authorisation to cross external borders. If the applicant’s data are included in the watchlist and their identity is confirmed by the Central Unit, the National Unit makes a sovereign assessment of the security risk(s) posed by the applicant and issues or refuses the travel authorisation accordingly.Footnote 90

Last but not least, the third operation has the purpose of assessing the risks posed by the applicant in the light of “screening rules”Footnote 91 (ie a profiling algorithm laid down in Article 33 of ETIAS Regulation). If the applicant’s data match “specific risk indicators” and their identity is confirmed by the Central Unit, the National Unit makes a sovereign assessment of the applicant’s security, illegal immigration and/or high epidemic risk(s) and, accordingly, issues or refuses the travel authorisation. Significantly, the National Unit is required to conduct an individual assessment and “in no circumstances” can make an “automatic” decision solely on “the basis of a hit based on specific risk indicators”.Footnote 92 However, it is doubtful that this obligation of autonomous human assessment will be satisfied in practice, considering the well-documented automation bias.Footnote 93

The first operation is past-orientated, since it seeks to cross-check someone’s data with the pre-existing information stored in other databases.Footnote 94 The second operation is hybrid: it works in the same way, except that the watchlist, as we have said, will also be composed of information about persons who are merely deemed likely to commit a serious criminal offence in the future. The third operation is fully future-orientated, since the screening rules aim at singling out “persons who are unknown” but “assumed of interest for irregular, security or public health purposes due to the fact they display particular category traits”.Footnote 95 This process of profiling is examined in the next section.
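The three operations can be read as three distinct sources of hits. The sketch below is purely illustrative: the actual matching logic, field names and data structures used by ETIAS are not public, and everything here is invented for the sake of exposition.

```python
def automated_assessment(applicant: dict, databases: dict,
                         watchlist: set, risk_indicators: list) -> list:
    """Hypothetical sketch of the three automated operations: database
    cross-checks, watchlist comparison and screening-rule matching."""
    hits = []
    # 1. Past-orientated: cross-check against pre-existing records
    #    (SIS, EES, VIS, Eurodac, ECRIS-TCN, Europol, Interpol databases)
    for system, known_documents in databases.items():
        if applicant["travel_document"] in known_documents:
            hits.append(("database", system))
    # 2. Hybrid: comparison with the ETIAS watchlist of persons suspected
    #    of past, or deemed likely to commit future, serious offences
    if (applicant["name"], applicant["date_of_birth"]) in watchlist:
        hits.append(("watchlist", applicant["name"]))
    # 3. Future-orientated: the screening rules match abstract risk
    #    indicators (category traits), not the individual's past behaviour
    for indicator in risk_indicators:
        if all(applicant.get(field) == value for field, value in indicator.items()):
            hits.append(("screening_rules", indicator))
    return hits
```

Note the asymmetry the code makes explicit: only the first operation requires that something already be known about the applicant; the third can flag a person about whom no record exists at all.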

III. The algorithmic profiling system set up by ETIAS

1. The profiling algorithm introduced by Article 33 of ETIAS Regulation

a. How does profiling work?

Schematically, profiling refers to making a prediction about a hidden variable of interest (also called a target variable) on the basis of one or more characteristics of a group of subjects (human or otherwise) that are highly correlated with this variable. Frederick F. Schauer offers an instructive example. Consider a rule prohibiting certain breeds of dogs such as pit bulls from entering parks for security reasons. This constitutes a form of profiling, where the hidden variable of interest is the dog’s dangerousness and the correlated criterion is the breed.Footnote 96 Such a rule is justified by the fact that a pit bull is statistically more likely to behave dangerously than other dog breeds. This measure, based on generalisation, therefore aims to reduce the risk of accidents in parks (even though many pit bulls are harmless). Profiling is thus a rudimentary technique present in our everyday lives. Regardless of the technologies used, its essence lies in the detection of correlations and patterns: a dog, simply because one of its physiological traits is correlated with dangerousness, will be denied access to parks, even though nothing is known of its past individual behaviour.
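The dog-park rule above can be rendered as a minimal sketch, in which the decision rests entirely on a group-level statistic and never on the individual subject. The breeds, rates and threshold are invented for illustration only.

```python
# Hypothetical sketch of profiling: predicting a hidden target variable
# ("dangerousness") from a correlated observable trait ("breed").
# All figures below are fictional.

# Historical accident rates per breed (invented for illustration)
ACCIDENT_RATE_BY_BREED = {
    "pit bull": 0.08,
    "labrador": 0.01,
    "poodle": 0.005,
}

RISK_THRESHOLD = 0.05  # arbitrary cut-off for park access


def deny_park_access(breed: str) -> bool:
    """Deny access when the breed-level accident rate exceeds the threshold.

    The decision uses only the group statistic: nothing about the
    individual dog's past behaviour is taken into account.
    """
    return ACCIDENT_RATE_BY_BREED.get(breed, 0.0) > RISK_THRESHOLD
```

A harmless pit bull is denied entry purely because of its breed, while a poodle with an entirely unknown history is admitted: the generalisation, not the individual, drives the outcome.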

While profiling is a long-standing practice, it has been the subject of much scrutiny for several years, mainly due to the latest developments in the IT sector. On the one hand, the growing influence of digital technology – notably through the Internet of Things and the multiplication of sensors in both private and public spaces – has led to the production of an unprecedented amount and diversity of data. On the other hand, this myriad of data produced daily has been successfully processed and analysed thanks to the considerable progress made in computer storage and computing capacities. The development of data mining and machine learning techniques has thus given rise to new forms of profiling: digital footprints and online behaviour (eg on social networks), public transport journeys or credit card purchases are all pieces of information that allow algorithms to make inferences about our future behaviour.Footnote 97 Whereas, for instance, traditional credit scoring models take salary and household composition as the main relevant variables, alternative forms of credit scoring now consider data such as one’s network of friends on Facebook or the time spent reading terms and conditions.Footnote 98 These “profiles 2.0” are very far from the “dangerous dog” profile described above, but the principle remains of making predictions based on variables that are supposed to be statistically good indicators of the information sought.

Alongside marketing, which is the field par excellence in which data mining techniques were experimented with, the field of security was also one of the first to be affected, particularly in the aftermath of 9/11.Footnote 99 It fostered the emergence of a pre-emptive rationality illustrating “the efforts of the policymakers to capture the future and fold it back into the present” to render it “actionable”.Footnote 100 This has been done by developing risk analysis tools to assess a probable, yet unknown, threat that a human could pose.

b. Trends of profiling at the EU level

As for the EU, digital technologies play a major role, as we have seen, in the management of external borders. However, although EU databases and information systems have contributed to the development of a massive infrastructure of surveillance, they have had little recourse to predictive and profiling methods so far. A first step in that direction was taken with the establishment of SIS II mentioned earlier. Whereas the first version of SIS (SIS I) was merely an alert system operating through the collection of information, its latest version (SIS II) makes it possible to assess the risk of a person on the basis of not only the data that are directly related to that person, but also links that would connect them to someone (or something) else listed in the system.Footnote 101 A second step was initiated with the Passenger Name Record (PNR) Directive, adopted on 27 April 2016, which introduced the collection of passengers’ data to prevent and detect serious crime and terrorism. This Directive requires each Member State to set up “Passenger Information Units” responsible for processing and analysing this data, which involves profiling techniques.Footnote 102

However, there are several reasons to argue that ETIAS is not a mere additional step in the trend initiated by the two systems we have just described. Firstly, ETIAS is the first European information system to use profiling for extremely broad objectives, far from being limited to the field of terrorism. This ambition appears unprecedented. Secondly, the profiling system will be managed by the EU, with Member States playing only a secondary role. By contrast, in the PNR Directive, Member States are responsible for the collection and processing of PNR data. As for SIS II, it is an EU database that takes the form of an investigation tool allowing for interlinking different alerts, but it does not use algorithmic profiling. Thirdly, ETIAS’s profiling system will be a compulsory step for all visa-exempt third-country nationals. This does not mean that the risk represented by individual visa-exempt travellers has not been assessed so far. Border guards have already had the power to deny access to foreigners at external borders – a prerogative that they will keep after ETIAS’s implementation, as we have already explained. What is new here is the implementation of a systematic and generalised digital procedure: all visa-exempt foreigners will necessarily be assessed against the screening rules (ie profiled).

c. How does ETIAS’s profiling algorithm work?

At this stage, as mentioned in Section I, the exact content of the screening rules is not yet known. The only official document available is the Regulation. Article 33 contains the main elements, which will serve as the basis for the following analysis. This provision specifies that the screening rules shall take the form of an algorithm enabling profiling as defined in point 4 of Article 4 of the General Data Protection Regulation (GDPR).Footnote 103 The algorithm will determine whether data collected from the applicant matches “specific risk indicators … pointing to security, illegal immigration or high epidemic risks”. ETIAS Regulation does not precisely define the nature of the algorithm, and its terminology is (to say the least) vague, referring alternately to “risks”, “specific risks” and “specific risk indicators”. The Commission has recently adopted a delegated decision to further define what is referred to as the “risks” (ETIAS Regulation, Article 33(2))Footnote 104 and has yet to adopt an implementing act to determine the “specific risks” (ETIAS Regulation, Article 33(3)).

This is why, in order to better understand how the screening rules will work and what kinds of algorithmic techniques will be used, we contacted several relevant EU institutions and agencies. First, eu-LISA, which is responsible for ETIAS’s technical infrastructure,Footnote 105 denied access to information because “ETIAS is currently under design and development, and it is not in operation. Therefore, the disclosure of the requested information could have a substantial impact on the decision-making process of the Agency since with access to such information the public can create an unnecessary pressure on the ongoing process and jeopardize the design and development of the system”.Footnote 106 Then, the Commission offered some insights but replied that it would be more appropriate to ask Frontex since, pursuant to Article 75(1)(c) of ETIAS Regulation, this Agency is responsible for the screening rules.Footnote 107 Finally, the latter answered that, as regards the screening rules, “[i]t is probably more appropriate to talk about filtering queries than complex algorythms [sic]. … no sophisticated analysis methods … and no machine learning, or any other form of AI is involved”.Footnote 108

Despite these statements, there are various indications that machine learning techniques (classification or clustering algorithms) could be used, either at launch or at a later stage. In its 2020 report “Artificial Intelligence in the Operational Management of Large-Scale IT Systems”, eu-LISA considers, for instance, that “an additional level of automation or analytics based on AI or machine learning could be introduced” to assist national authorities in dealing with applications that have triggered a hit.Footnote 109 In the same vein, the report “Opportunities and Challenges for the Use of Artificial Intelligence in Border Control, Migration and Security”, written by Deloitte for the Commission,Footnote 110 states in relation to ETIAS’s screening rules that “AI could support in selecting these indicators and possibly adapting them over time/depending on the applicant”. The report further adds: “Based on this risk level, applications can be classified for individual review (‘triaging’) by an appropriate case worker. Additionally, a suggested decision (e.g. similar cases resulted in the following outcome) could be provided by the AI model based on historical data (and thus similar applications). This would not only ensure a fair assessment, but also improve consistency across Member States”.

Both these reports, which perfectly illustrate the current trend towards the growing mobilisation of AI for migration management, suggest that Frontex’s response should not be taken too literally. This seems all the more true when one considers that tens of millions of applications will be processed each year by ETIAS.Footnote 111 Furthermore, given the puzzling history of EU information systems in the Area of Freedom, Security and Justice, it cannot be excluded that the items of information required to shape the screening rules will be supplemented by data coming from other EU or international databases and/or new categories of data. Last but not least, as will be seen below, the Commission insists on the need to collect data on a quasi-continuous basis in order to constantly identify the specific groups of travellers who pose a risk. This demonstrates the willingness to follow the continuous evolution of risks and to use the most recent data to feed the risk profiles so that they stick as closely as possible to the ever-changing reality. This goal of making models evolve in response to new data as quickly as possible can be achieved more effectively by machine learning algorithms than by humans.

Thus, despite the indeterminacy resulting from the lack of transparency in the algorithmic design by the European institutions, we may formulate several strong assumptions on the functioning of the screening rules based on the reading of Article 33.Footnote 112 Rather than talking about a single algorithm, it seems more appropriate to describe the screening rules as forming a complex algorithmic decision-making system that will be composed of various algorithms, including expert-system algorithms and presumably machine learning techniques.Footnote 113 Two aspects should be distinguished: on the one hand, the rules themselves, which will determine whether a given applicant poses a risk by comparing their data with the “specific risk indicators” (ie risk profiles thus triggering a hit); and on the other hand, the way in which these profiles of risk will be defined and the data that will be taken into account for doing so. The latter will be established by the ETIAS Central Unit (which is part of Frontex) on the basis of one or more of the following items of information: (1) age range, sex and nationality; (2) country and city of residence; (3) level of education (primary, secondary or higher education or none); and/or (4) current occupation (multiple-choice answer).
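To make the first of these two aspects concrete, the following minimal expert-system sketch illustrates how an application might be compared against “specific risk indicators” understood as partial profiles built from the attributes listed above. The profile contents, attribute names and matching logic are entirely hypothetical: the actual indicators are defined by the ETIAS Central Unit and are not public.

```python
# Hypothetical sketch of screening rules as rule matching (Article 33).
# A "risk indicator" is modelled as a partial profile: a mapping from an
# attribute to a required value. An application triggers a hit when all
# attributes of at least one profile match. Everything below is an
# assumption for illustration, not the real ETIAS design.

from dataclasses import dataclass


@dataclass
class Application:
    age_range: str
    sex: str
    nationality: str
    residence_country: str  # simplification of "country and city of residence"
    education: str
    occupation: str


# Invented profiles; real indicators are confidential.
HYPOTHETICAL_RISK_INDICATORS = [
    {"nationality": "X", "age_range": "18-25", "occupation": "none"},
    {"residence_country": "Y", "education": "none"},
]


def triggers_hit(app: Application) -> bool:
    """Return True when the application matches at least one risk profile."""
    return any(
        all(getattr(app, attr) == value for attr, value in profile.items())
        for profile in HYPOTHETICAL_RISK_INDICATORS
    )
```

Note that in the actual system a hit does not by itself refuse the authorisation: as explained above, the file is passed to a National Unit for individual assessment.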

But how and on what basis will the risk profiles be defined? What will justify some combinations of attributes being considered risky and others not? Article 33(2) and the Delegated Decision that further defines the risks list three sources of information that will be taken into account: (1) data provided by the World Health Organization (WHO) and the European Centre for Disease Prevention and Control (ECDC); (2) data transmitted by Member States; and (3) statistics produced by EES and ETIAS itself.

Firstly, the data transmitted by the WHO and the ECDC will help to identify high epidemic risks. Frontex, in its answer, stated that “the ETIAS Central Unit will monitor the Outbreak Disease Notices published by WHO. Frontex will also cooperate closely with the ECDC to have an accurate and up to date situational awareness on the diseases that might represent a high epidemic risk relevant for ETIAS”.Footnote 114 In practical terms, if the WHO declares that an outbreak of a certain disease has occurred in a given region of the world, the ETIAS Central Unit could modify the screening rules so that a hit would be triggered for anyone (or certain categories of people) travelling from that region. While the ETIAS Regulation was adopted before the COVID-19 pandemic, it is not difficult today to gauge the significance of such a provision.

Secondly, Member States will provide information that relates to data on “abnormal” rates of overstaying and refusals of entry, “specific” security risks and high epidemic risks. The scope is thus extremely broad. Articles 4 and 5 of the Delegated Decision state, in terms of security risks and abnormal rates of overstaying and refusals of entry, that “Member States shall review their analysis at least every 6 months or where new information emerges that makes it necessary to modify the analysis”.Footnote 115 They must describe the risks identified and provide evidence and known facts related to them, as well as the sets of characteristics of specific groups of travellers associated with the security or immigration risk. As each Member State will be a producer of knowledge in this respect, harmonisation is needed so that these data can be compared. Without this, any data analysis is impossible.

In its answer, Frontex told us that “Member States will transmit their periodic reports to Frontex via the regular channels established for risk analysis purposes”.Footnote 116 Indeed, Member States have already been required to provide Frontex with a certain amount of data on a regular basis, in accordance with the “Common Integrated Analysis Model” (CIRAM), through which Frontex monitors the situation at the borders and assesses the capacity of each Member State to deal with migration flows.Footnote 117 Scholars have shown that risk analysis is one of the most important, if not the most important, tasks of Frontex: the results of risk analyses allow for the comparison of Members States’ capacity to manage their borders and serve as a basis for key decisions such as the allocation of financial support to Member States.Footnote 118 The implementation of ETIAS will therefore increase the data reporting obligations of Member States and incidentally the influence that Frontex has acquired in the field of migration management. More broadly, Frontex’s risk analysis and ETIAS should be seen as part of the same approach towards migration: this phenomenon is seen as a threat that needs to be managed in a rational way (via risk assessment) through the collection and processing of large amounts of data.

Last but not least, there are the reports and statistical data produced by EES and ETIAS. Importantly, these data will only be collected once the two information systems are operational. This means that at the start of ETIAS’s functioning the screening rules will only be shaped by the data given by the Member States, the WHO and the ECDC. The (statistical) data provided by EES and ETIAS are biographical and relate to travel information. Pursuant to Article 84(2) of ETIAS Regulation, the information is stored in an interoperability component, the CRRS.Footnote 119 For EES, this specifically covers “abnormal” rates of overstaying and refusals of entry into the EU. For ETIAS, this constitutes statistics pointing to “abnormal” rates of refusals of travel authorisations. Statistics produced by EES and ETIAS shall also be combined to find correlations between information collected through ETIAS’s application form and abnormal rates of overstaying or refusals of entry.Footnote 120 In other words, the aim is to detect profiles of “irregular” migrants, such as by analysing, among the travellers who will appear in EES statistics as overstayers, whether specific groups of travellers sharing common sets of characteristics can be identified.Footnote 121
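The statistical exercise just described can be sketched as follows: travellers are grouped by shared attributes and a group is flagged when its overstay rate is “abnormal” relative to the overall rate. The records, the attributes retained and the abnormality factor are all invented for illustration; the Regulation does not specify how “abnormal” is to be quantified.

```python
# Hypothetical sketch of detecting "abnormal" overstay rates for groups
# of travellers sharing common characteristics. All data and the
# abnormality factor are fabricated assumptions.

from collections import defaultdict

# Each record: (nationality, age_range, overstayed?)
records = [
    ("A", "18-25", True), ("A", "18-25", True), ("A", "18-25", False),
    ("B", "26-40", False), ("B", "26-40", False), ("B", "26-40", True),
    ("C", "41-65", False), ("C", "41-65", False), ("C", "41-65", False),
]

ABNORMALITY_FACTOR = 1.5  # arbitrary: "abnormal" = more than 1.5x the overall rate


def abnormal_groups(records):
    """Return the (nationality, age_range) groups whose overstay rate
    exceeds the overall rate by the abnormality factor."""
    overall = sum(r[2] for r in records) / len(records)
    by_group = defaultdict(list)
    for nationality, age_range, overstayed in records:
        by_group[(nationality, age_range)].append(overstayed)
    return {
        group
        for group, flags in by_group.items()
        if sum(flags) / len(flags) > ABNORMALITY_FACTOR * overall
    }
```

Even this toy version shows how sensitive the outcome is to design choices: which attributes define a “group”, and where the abnormality threshold is set, entirely determine who ends up associated with an “illegal immigration risk”.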

These specific groups of travellers are anything but static: the Delegated Decision explicitly mentions that the data used to determine the risk profiles must be collected “in a manner that makes it possible to continuously identify sets of characteristics of specific groups of travellers” who exhibit a so-called illegal immigration risk.Footnote 122 In this sense, the Commission, in a Delegated Regulation detailing the rules on the operation of the CRRS, provides for the daily collection, by way of extraction, of anonymised data on overstays, refusals of entry and refusals of travel authorisation.Footnote 123 The Central Unit may then obtain statistical data and reports (which may be customised) “to further define the risks of overstaying, refusal of entry or refusal of travel authorisation”.

This ever-changing nature of ETIAS’s screening rules may well conflict with the notion of a “rule” itself, which presupposes clearly defined criteria. A similar critique was raised by the authors of the report to the LIBE Committee, according to whom it is doubtful that the reasons why a hit will be triggered will be intelligible and publicly accessible.Footnote 124 Talking about “screening rules” is consequently somewhat misleading. More broadly, this reflects the problem of opacity that characterises many automated decision-making systems, as frequently noted by the Commission itselfFootnote 125: National Units will not be able to understand the risk score attributed by the algorithm and therefore will not have the means to justify their decisions. This lack of interpretability makes it difficult to state the reasons for a decision.

That being said, the ETIAS profiling system – as currently described in Article 33 – is not exactly what we tend to call a “black box”. Indeed, one element regularly pointed out as being characteristic of Big Data is the variety of such data (ie the fact that, for the same person, extremely numerous and diverse categories of data are taken into account). In the case of ETIAS, while a significant amount of data will be collected and processed overall (given that millions of travellers will have to go through the process every year), only a relatively limited number (ie six) of the applicants’ attributes – corresponding to well-established sociodemographic categories – will be taken into account. Furthermore, Article 33(6) of ETIAS Regulation provides that “[t]he specific risk indicators shall be defined, established, assessed ex ante, implemented, evaluated ex post, revised and deleted by the ETIAS Central Unit after consultation of the ETIAS Screening Board”.Footnote 126 The Regulation does not elucidate how this provision will be implemented, but it implies that, even if machine learning is likely to be used, a human role in the establishment of the specific risk indicators is clearly foreseen, albeit in unspecified terms.

In any case, there is no doubt that the way in which the screening rules will be applied requires close scrutiny. Under EU law, Article 24 of the EU institutions data protection Regulation, which mirrors Article 22 GDPR, states as a general principle (subject to exceptions) that each data subject “shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her”. Although the Court of Justice of the European Union (CJEU) has not yet ruled on the interpretation of the scope of those provisions, the screening rules do not seem to fall within their scope, especially because the final decision rests with the National Units, which make a sovereign assessment of the risk(s) posed by the applicant.Footnote 127

Moreover, there are several reasons to believe that the ETIAS screening rules will not meet the safeguards set up by the CJEU in its Opinion 1/15 on the EU–Canada PNR Agreement and in its La Quadrature du Net and Others judgment (both related to the field of security and the fight against terrorism),Footnote 128 in which it ruled that the models and pre-established criteria used to conduct automatic data analysis must be specific and reliable, limited to that which is strictly necessary, not based on sensitive data in isolation and non-discriminatory.Footnote 129 Indeed, ETIAS is bound to produce unfair and/or wrong results, as we argue in the next section.

2. Non-discriminatory profiling of foreigners?

Article 14 of ETIAS Regulation states as a general principle that

[p]rocessing of personal data within the ETIAS Information System by any user shall not result in discrimination against third-country nationals on the grounds of sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation. It shall fully respect human dignity and integrity and fundamental rights, including the right to respect for one’s private life and to the protection of personal data. Particular attention shall be paid to children, the elderly and persons with a disability. The best interests of the child shall be a primary consideration.Footnote 130

This provision unequivocally embodies the principle of equality and non-discrimination, which is also a core value of the “human-centric AI” promoted by the Commission and is mentioned in its 2021 proposed Regulation on AI.Footnote 131

Yet, in a report drafted during the legislative procedure of the ETIAS Regulation and submitted to the LIBE Committee, the EU Agency for Fundamental Rights (FRA) warns against the risks of discrimination that the screening rules may entail. It states that ETIAS “may in practice unduly complicate the obtaining of a travel authorisation for some persons in comparison with other applicants”.Footnote 132 Although the FRA does not question the Commission’s distinction between bona fide travellers and potential risk-bearers, it points out that the former could be put in a disadvantageous position if they are associated with a risk group, even in the absence of “individual reasons” for concluding that they embody a risk of illegal migration.Footnote 133 The FRA further cautions that combining ETIAS, EES and Member States’ data with risk profiles based on age, gender, nationality, place of residence, education and occupation could lead to “indirect discrimination”. In conclusion, it advises European legislators to postpone the enactment of the screening rules until a testing phase demonstrates that they are “necessary, proportionate and do not result in discriminatory profiling”.Footnote 134 Unsurprisingly, this advice was not followed.

But what exactly should we understand by non-discriminatory profiling? Can we take seriously the guarantees contained in Article 14 and ask for their effective implementation? This question is all the more salient in the field of migration law,Footnote 135 which is itself a form of institutionalisation of direct discrimination on the ground of nationality.Footnote 136 Non-EU citizens wanting to travel to Europe need a visa or an entry authorisation precisely because of their foreign nationality. It is therefore appropriate to start our analysis with some clarifications on the particular meaning of discrimination in the algorithmic profiling of foreigners in the context of our case study.

Kasper Lippert-Rasmussen notes that the term “discrimination” has a double meaning.Footnote 137 According to the first, neutral meaning, “to discriminate” is simply to distinguish or differentiate. Discrimination in this sense is a primary task of an algorithm: in our case, it is to distinguish between categories of travellers on the basis of their characteristics. As Barocas and Selbst observe, “[b]y definition, data mining is always a form of statistical (and therefore seemingly rational) discrimination”.Footnote 138 The second, morally blameworthy sense of the term “discrimination” automatically entails a negative evaluation of the act or practice in question. Accordingly, if the effects of an algorithmic distinction in a particular political and social context contradict our sense of justice, one may be compelled to denounce discrimination in this second meaning. However, for such a claim to materialise before a court, one ought to prove a discrimination in a third, legal sense (ie a differentiated treatment that negatively affects protected groups, pursues an illegitimate aim and/or is disproportionate).

Algorithmic profiling may well lead to adverse impacts on different groups of foreigners. This is because profiling, or the assignment of a given person to a migratory profile, can have far-reaching consequences in terms of their rights: “In an age in which rights, or in this case lack of rights, are coupled to identities through registration, a network of databases aimed at registering (potential) irregular migrants is bound to make life inside the Member States of the EU harder”.Footnote 139 Of course, this also holds for non-EU citizens who would never have the chance to find themselves “inside” the Schengen boundaries because of ETIAS and other instruments of the borders’ externalisation.

Even a superficial reading of the ETIAS Regulation suggests that profiling based on nationality, sex, age range, country or city of residence, level of education and current occupation can hardly be neutral with regard to these grounds, which, moreover, correspond to the protected groups under anti-discrimination law.Footnote 140 In addition, the algorithmic profiling of foreigners intervenes at several stages of the procedure, so that the potentially detrimental effect cannot be attributed to a single element of the system or a particular decision: “[A]s the European migration case illustrates, structural biases at every moment of ‘calculation’ – data gathering, organization, aggregation, algorithmic design, interpretation, prediction and visualization – serve to construct legitimized difference by reproducing existing inequalities across individuals as data subjects”.Footnote 141

In a seminal paper entitled “Big Data’s Disparate Impact”, Barocas and Selbst distinguish five ways in which Big Data can “unduly discount members of legally protected classes and put them at systematic relative disadvantage” at different stages of the design of (machine learning) algorithms: problem specification, training data, feature selection, choice of proxies and masking discriminatory intentions.Footnote 142

In the following subsections, we use Barocas and Selbst’s theoretical framework as an analytical tool to argue that ETIAS’s screening rules can pave the way to disproportionally adverse outcomes on groups of third-country nationals and may well provoke unfair and/or wrong results. Their analysis appears relevant to our case study since, as we have shown, there are indications that machine learning is likely to be used at some stage in ETIAS. In addition, much of their reasoning is relevant more generally for applications that resort to probabilities and statistics. Therefore, even if ETIAS’s risk profiles are not established through machine learning, our argument remains valid because these risk profiles will be inherently statistical.

Importantly, our ambition is not to demonstrate that ETIAS’s profiling algorithm is strictly discriminatory in the third, legal sense stated above.Footnote 143 Rather, we intend merely to show that its encoded biases can lead to prima facie discriminatory disparities (ie instances of suspicious discriminations).Footnote 144 By relying on the principle of the shared burden of proof of EU equality law,Footnote 145 our aim is to put forward facts from which it may be presumed that discrimination is at stake, as an alleged victim of discriminatory practices, such as a non-EU citizen whose travel authorisation has been denied due to ETIAS’s screening rules, would do in courts.Footnote 146 We thus leave aside the questions of legitimate aim and proportionality that require a contextual analysis in concreto, which will not be possible as long as ETIAS’s information system is not operational. Similarly, we do not tackle the thorny question of whether there are cases of prima facie direct or indirect discriminations (or both), not only because it is a difficult exercise to perform in abstracto, but also due to the blurring of the conceptual distinction between direct discrimination and indirect discrimination in the algorithmic society.Footnote 147

a. Defining “target variable” and “class labels”: problem specification in ETIAS

The “art of data mining” consists of translating a complex social problem into a formal question about the state or value of a target variable – a process that is called “problem specification”. As we have seen, in the case of ETIAS, the target variable – or otherwise what the data minerFootnote 148 is looking for – is a security, illegal immigration or high epidemic risk, as stated in Article 33 of the Regulation. The different values that the target variable can take are called “class labels”. For our case study, these values are divided into a binary dichotomy: being a foreign traveller who poses a risk or not.

Barocas and Selbst argue that discrimination may result from decisions taken at this early stage of the algorithmic design,Footnote 149 notably when defining the target variable involves the creation of new classes. A “good migrant”/“bad migrant” detector, such as ETIAS, serves as a perfect example of this phenomenon. The “risk of illegal migration”, which ETIAS is supposed to detect, is actually an “artifact of the problem definition itself”.Footnote 150 Firstly, at the more general level, this is because ETIAS is a tool of the legal production of migrant illegality.Footnote 151 Unlike earthquakes, which other data-mining algorithms seek to detect, the risk of undocumented migration does not exist outside of the legal and technical infrastructure (of irregularisation) of which ETIAS is a part. In this way, ETIAS contributes to creating the illegal migration that it is designed to combat. Secondly, the algorithmic definition of irregular migration and the related risks performatively create the sociolegal condition of the irregular migrant. As will be explained in more detail below, what is meant by “irregular migration risk” is subject to a changingFootnote 152 and quite arbitrary definition. Thus, ETIAS defines the irregular migration risk that it purports to prevent proactively.
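The point that class labels are artifacts of the problem definition can be made in a few lines. In the following sketch, the binary label attached to the target variable is produced by a legal threshold rather than observed in the world; the 90-day figure echoes the overstay rule discussed below, while the function and label names are our own illustration.

```python
# Minimal sketch of "problem specification": the class labels of the
# target variable are the product of a legal definition, not a fact
# about the traveller. The names and the default threshold below are
# illustrative assumptions.

def class_label(days_stayed: int, authorised_days: int = 90) -> str:
    """Assign a binary class label ('risk' / 'no_risk') to the target variable."""
    return "risk" if days_stayed > authorised_days else "no_risk"
```

Changing the legal threshold changes the “ground truth” that any downstream model is trained on: the same traveller staying 120 days is labelled “risk” under a 90-day rule but “no_risk” if the authorised stay were 180 days.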

b. Training data

Any data-mining algorithm learns from examples to find relevant correlations. This means that ETIAS, whose aim is to detect potentially dangerous travellers, will learn from data regarding persons who have been “labelled” dangerous and thus stopped at the border in the past. But what happens if the data on the border checks and border detention reflect a strong bias due to the discriminatory nature of these practices or systematic errors in the data collection? As Barocas and Selbst explain, “biased training data leads to discriminatory models”.Footnote 153 This bias may intervene, according to the authors, in two ways. Firstly, if bias has played some role in qualifying valid examples for learning, it will be reproduced by the algorithm. Secondly, if data mining draws conclusions from a biased sample of the population, any decision based on these conclusions may systematically disadvantage those who are under- or over-represented in the data set.Footnote 154

A closer reading of ETIAS Regulation gives a clue as to the potentially adverse impact of biased labelling examples. Under Article 33, paragraph 2(c), the security, irregular immigration or high epidemic risks should be defined by the Commission on the basis, among others, of the “information substantiated by factual and evidence-based elements provided by Member States concerning abnormal rates of overstaying and refusals of entry for a specific group of travellers for that Member State”. If these “abnormal rates of refusals of entry” are themselves shaped by the discriminatory practices of border guards,Footnote 155 ETIAS is bound to reproduce or even amplify this historical bias. Anyone who has visited an international airport may confirm that the criteria used to select targets for border checks are far from colour-blind. Some 79% of border guards consider ethnicity a helpful indicator for effectively recognising persons attempting to enter the country in an irregular manner before speaking to them, according to a FRA study.Footnote 156 This figure reaches 100% in some of the European airports examined in the study. This practice is very likely to be ruled discriminatory, as a judgment of a Higher Administrative Court of Rhineland-Palatinate in Germany shows.Footnote 157 ETIAS’s algorithmic design is thus liable to reproduce the discriminatory and potentially racist practices of border guards.
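The distorting effect of biased data collection can be sketched with a toy calculation. The figures and the function below are entirely invented for illustration and do not describe ETIAS or any real border statistics; the point is only that when one group is checked far more often, the recorded refusals suggest a large difference in risk even where the underlying behaviour is identical.

```python
# Hypothetical illustration of biased data collection: all figures are
# invented and describe no real border statistics.

def recorded_refusals(travellers: int, check_rate: float,
                      irregular_rate: float = 0.01) -> float:
    """Refusals that end up in the statistics: only travellers who are
    actually checked can ever be recorded as refused entry."""
    return travellers * check_rate * irregular_rate

# Two groups of equal size with the same true rate of irregular entry;
# only the intensity of checking differs.
group_a = recorded_refusals(travellers=100_000, check_rate=0.9)  # heavily profiled
group_b = recorded_refusals(travellers=100_000, check_rate=0.1)  # rarely checked

apparent_risk_ratio = group_a / group_b  # 9.0, despite identical behaviour
```

Any model trained on these recorded refusals would “learn” that group A is nine times riskier than group B, when the only real difference lies in how the data were collected.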

Secondly, as noted above, decisions drawn from a biased sample of the population may systematically disadvantage those who are under- or over-represented in the data set. A more detailed reading of ETIAS Regulation gives, once again, a perfect example of this mechanism. Apart from the commonplace errors in administrative databases on third-country nationals,Footnote 158 structural biases may actually be encoded in the program. According to Article 33, paragraphs 2(a) and (c), the statistics generated by ETIAS shall be correlated with the EES statistics to explore the characteristics of the specific groups of travellers that tend to overstay. This may seem to be a perfectly rational measure that aims to detect the “illegal immigration risk”. However, according to EES Regulation (which refers to the Schengen Code),Footnote 159 any person who has lodged an application for asylum or for a residence permit during their authorised stay will be considered an overstayer from the ninety-first day of their stay and thus registered as such. From a legal point of view, such an applicant is in a precarious but regular situation. Nevertheless, they will appear in the EES statistics as an “overstayer” until their application is successfully processed (or until their return in the case of rejection). For asylum or residence permit applications, this can take several years. During this period, travellers having characteristics (nationality, place of residence, sex, etc.) similar to persons who have lodged a residence or asylum application will be considered by ETIAS as representing an “illegal migration risk”.

Consider the example of Venezuelans, whose asylum applications have grown considerably over the past few years.Footnote 160 If these trends continue after the EES’s entry into force, Venezuelans will be overrepresented in the overstayers statistics, even if, for the most part, they are legally residing in a given Member State while waiting for their asylum application to be examined. As a consequence, new (Venezuelan) candidates for travel to Europe will be more likely to see their applications for travel authorisation refused by the National Units on the basis of a high irregular migration risk, as defined by ETIAS. The growing number of refusals of travel authorisation may have two consequences. The first is a potential rise in attempts at unauthorised border crossings, logically leading to a higher number of refusals of entry at the border, and the second is a decreasing number of authorised entries registered by ETIAS, leading to the underrepresentation of Venezuelans in the ETIAS statistics. Both of these consequences will produce a self-reinforcing effect, confirming and amplifying the effects of ETIAS’s predictions.Footnote 161 Therefore, one might go so far as to conclude that the aim of the system is not so much to fight against irregular immigration but instead to combat any form of immigration altogether.
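The self-reinforcing dynamic just described can be captured in a minimal toy simulation. Every parameter below is invented (the screening rule is a made-up formula, not ETIAS’s actual risk model); the sketch shows only the mechanism: a fixed stock of recorded “overstayers” weighs ever more heavily as refusals shrink the pool of authorised entries, so the refusal rate climbs round after round with no new overstaying at all.

```python
# Toy model of the statistical feedback loop; not ETIAS's actual rules.

def refusal_rate(overstay_share: float, base: float = 0.05,
                 weight: float = 0.5) -> float:
    """Hypothetical rule: the refusal rate rises with the share of
    recorded overstayers in past statistics."""
    return min(0.95, base + weight * overstay_share)

def simulate(entries: float = 1000.0, recorded_overstayers: float = 100.0,
             rounds: int = 5) -> list:
    rates = []
    for _ in range(rounds):
        share = recorded_overstayers / entries
        r = refusal_rate(share)
        rates.append(round(r, 3))
        # Refusals reduce authorised entries, so the same fixed number of
        # recorded overstayers forms a larger share of next round's statistics.
        entries *= (1.0 - r)
    return rates
```

Running `simulate()` yields a strictly increasing sequence of refusal rates even though the number of recorded overstayers never changes, mirroring the self-fulfilling prediction discussed above.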

Although this view can be considered rather pessimistic, there are arguably ways to prevent this issue to some extent. Indeed, while establishing the specific risk indicators, the Central Unit, after consulting the ETIAS Screening Board (which are both human components), could or should take into account the rise in applications for international protection and therefore neutralise – for example – the criterion based on nationality or any other parameters it believes relevant to avoid potential biases. Therefore, human oversight, which is part of the EU’s framework on trustworthy AI, might in some respects be an answer to the issue we have just identified. However, this is not to say that human intervention is, by itself, sufficient. This will depend, among many other things, on whether the Central Unit will actually pay attention to such potential harm, and this cannot be taken for granted, especially when one knows that the groups of people who will be targeted by the adverse treatments will have little or no opportunity to make their voices heard as they are not EU citizens or on EU territory.

c. Feature selection

Bias may also occur at the stage of feature selection; that is, the process of selecting a subset of relevant features (variables, predictors) for use in model construction. Data are only a simplified form of representation of complex social phenomena. The selection of the attributes that should and should not be taken into account in the analysis is never a neutral decision, notably because “members of protected classes may find that they are subject to systematically less accurate classifications or predictions because the details necessary to achieve equally accurate determinations reside at a level of granularity and coverage that the selected features fail to achieve”.Footnote 162 Frederick Schauer explains that decision-makers may be “simultaneously rational and unfair” if they rely on “statistically sound but nonuniversal generalisations”.Footnote 163

Certainly, the six basic sociodemographic categories considered by ETIAS cannot reflect (nor accurately predict) the complexity of migration phenomena. As stated above, the profiling performed by ETIAS does not really involve advanced Big Data techniques, since it is limited to a relatively small number of attributes. Thus, the conclusions drawn by ETIAS screening rules may be statistically accurate but still highly approximate.Footnote 164 For example, ETIAS is blind to factors such as the applicant’s family situation, heritage, knowledge of the language and disabilities or the presence of their family members in the country of destination, even though all of these may have a significant impact on the “risk of irregular migration”. This data scarcity logically implies highly approximate results. Such results may disadvantage groups of travellers bearing attributes that diminish the risk of irregular migration (eg paternity), since those travellers may be associated with a given “risk profile” because of their age, nationality and level of education, notwithstanding these additional characteristics. Of course, by bringing to light the risk of bias induced by a relatively small number of attributes in feature selection, we are not calling for the addition of further types of data to the questionnaire. Our aim here is merely to put into question the objectivity of the results obtained by the screening rules.
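Schauer’s point about “statistically sound but nonuniversal generalisations” can be illustrated with a minimal sketch. The profile tuple, the risk value and the two applicants below are all hypothetical: the sketch shows only that, with coarse features, a lookup-style screening rule must assign the same score to applicants whose unobserved circumstances differ radically.

```python
# Illustrative only: the profile, applicants and risk value are invented.

# The coarse observed features (age range, sex, nationality, residence,
# education, occupation) - the only information such a model can see.
observed_profile = ("30-35", "F", "nationality_X", "city_Y",
                    "secondary_education", "nurse")

# Two applicants identical on every observed feature, very different otherwise.
applicant_1 = {"profile": observed_profile,
               "family_in_destination": True, "children": 2}
applicant_2 = {"profile": observed_profile,
               "family_in_destination": False, "children": 0}

risk_table = {observed_profile: 0.7}  # invented group-level risk estimate

def risk_score(applicant: dict) -> float:
    # Unobserved attributes (family ties, parenthood) cannot affect the lookup.
    return risk_table[applicant["profile"]]
```

Both applicants receive the score 0.7: the generalisation may be statistically sound for the group, yet unfair to the individual whose risk-diminishing circumstances fall outside the selected features.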

Apart from this, the choice of attributes/features itself can produce unfair outcomes for specific groups of travellers. The “current occupation” serves as a telling example. The Central Unit might consider it relevant to establish profiles of (say, security) risk in which particular job categories are qualified as “suspect” for reasons that remain unknown to us. Journalists or human rights activists might then be prevented from crossing the EU’s external borders based (solely or partially) on their occupation. ETIAS Regulation explicitly encourages this in Article 33(4). It leaves the door ajar to the arbitrary selection of jobs that are perfectly suited to the EU’s political, economic and social strategies (while excluding those that are not), without the possibility of clearly understanding the reasons why such choices are made by the Central Unit.Footnote 165

d. Proxy

“Proxy discrimination” is currently a common term in the literature on law and AI. In brief, it refers to the fact that algorithmic data processing can produce adverse effects on specific groups even when protected characteristics are excluded from the input variables. This often occurs when a model “may easily detect other variables and ‘neutral’ data points that are closely related to those characteristics”.Footnote 166 This kind of indirect discrimination may be non-intentional when the examined attributes that “are genuinely relevant in making rational and well-informed decisions also happen to serve as reliable proxies for class membership”.Footnote 167 The notion of proxy discrimination seems particularly relevant to grasping the risk of bias posed by ETIAS’s screening rules.

Article 33(5) of the Regulation, which mirrors Article 14, guarantees that specific risk indicators “shall in no circumstances be based on information revealing a person’s colour, race, ethnic or social origin, genetic features, language, political or any other opinion, religion or philosophical belief, trade union membership, membership of a national minority, property, birth, disability, or sexual orientation”. Nevertheless, as the notion of proxy discrimination indicates, blinding the screening rules to these “sensitive social categories” does not mean that unfavourable treatment based on these characteristics may not occur. This is for two reasons. Firstly, membership in a protected class may be encoded in other data. Thus, “race” or ethnic origin may be “deduced” from other, non-suspicious data, such as nationality, place of residence and education level. Moreover, there is no definition of the scope and limits of the prohibited grounds, so the degree of overlap with proxies required for the protection to apply is unclear.Footnote 168 Secondly, and more generally, even a perfectly accurate model of irregular immigration risk will be discriminatory on the ground of “race”, on account of the racialised (or racist) reality that it describes. A great majority of undocumented third-country nationals residing in Europe belong to racial or ethnic minorities. Consequently, ETIAS is compelled to encode their race or ethnic origin when processing their data. This conclusion is valid for most immigration law enforcement measures, be they digital or not. As stated by the United Nations Special Rapporteur on contemporary forms of racism: “States regularly engage in racial discrimination in access to citizenship, nationality or immigration status through policies and rhetoric that make no reference to race, ethnicity or national origin, and that are wrongly presumed to apply equally to all”.Footnote 169
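The first mechanism – protected class membership encoded in other data – can be illustrated on entirely synthetic records. In the sketch below (invented data, invented “model”), a simple refusal-frequency rule that is never shown ethnicity still produces unequal predicted risk across ethnic groups, because nationality encodes that information.

```python
from collections import defaultdict

# Synthetic records: (nationality, ethnicity, past_refusal).
# Ethnicity is never made available to the "model" below.
records = [
    ("A", "minority", 1), ("A", "minority", 1), ("A", "majority", 1),
    ("B", "majority", 0), ("B", "majority", 0), ("B", "minority", 0),
]

# "Training": historical refusal frequency per nationality, the only
# permitted feature.
totals, refused = defaultdict(int), defaultdict(int)
for nat, _, outcome in records:
    totals[nat] += 1
    refused[nat] += outcome
model = {nat: refused[nat] / totals[nat] for nat in totals}

# Audit by the protected attribute the model never used:
scores = defaultdict(list)
for nat, eth, _ in records:
    scores[eth].append(model[nat])
disparity = {eth: sum(v) / len(v) for eth, v in scores.items()}
# The minority group receives a higher average predicted risk than the
# majority group, although ethnicity was formally excluded.
```

Formally blinding the screening rules to a characteristic, in other words, offers no guarantee that the outputs are independent of it.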

Unidimensional proxy discrimination is not the only type of discrimination that is relevant here. According to Article 33(4) of ETIAS Regulation, specific risk indicators will consist of “a combination of data including one or several of the following characteristics: age range, sex, nationality, country and city of residence, level of education, current occupation” (emphasis added). While it is guaranteed that specific risk indicators “shall in no circumstances be based solely on a person’s sex or age” (paragraph 5; emphasis added), meaning that ETIAS may not automatically deny an authorisation to all women or all persons in the age group between twenty and twenty-five, for example, it may very well identify women between twenty and twenty-five years of age as belonging to a risk profile. Though admittedly simplistic, this example gets the key point across. Thus, risk profiles may be located at the intersection of prohibited grounds of discrimination. This is why the notion of intersectional discrimination (ie that occurring at the intersection of more than one protected groundFootnote 170) seems particularly relevant to understanding the adverse impacts of ETIAS’s profiling. Because specific risk indicators combine attributes constituting protected grounds (sex and age) with other, seemingly non-suspicious grounds (nationality, residence, level of education and occupation), which may serve as proxies for other protected grounds (“race” or ethnic origin), they are intersectional by design.

As demonstrated in the seminal work of Kimberlé Crenshaw,Footnote 171 victims of intersectional discrimination experience specific difficulties in proving the discrimination that they suffer from. Consider the following example: an established risk profile points at highly educated women in their twenties coming from big towns in Serbia. Such travellers will fall short of establishing a prima facie case of (unidimensional) discrimination. If there is no hint in the ETIAS statistics that women in general are subject to higher rates of refusal than men, no discrimination on the ground of sex occurs. The same is true if the rate of refusal for the whole group of highly educated persons is similar to that of poorly educated ones, or if the rate of refusal for all persons in their twenties is the same as that for older people. Nor could they prove that they were treated unfavourably solely on the basis of their nationality. This is why the firmly established single-axis testing tools,Footnote 172 which look for a biased distribution of error rates according to a single variable only, such as race or sex, will not be sufficient to assess the risk of adverse impacts posed by ETIAS. Only intersectional bias auditsFootnote 173 may adequately do so.
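The evidentiary problem can be made concrete with invented counts, constructed so that every single-axis audit reports an identical 50% refusal rate while one intersectional subgroup faces 80%. The table and numbers below are purely illustrative, not drawn from any ETIAS statistics.

```python
# Synthetic counts, chosen so that marginal rates are equal by construction.
# (sex, age_group) -> (refused, total)
outcomes = {
    ("F", "20s"): (80, 100), ("M", "20s"): (20, 100),
    ("F", "40s"): (20, 100), ("M", "40s"): (80, 100),
}

def refusal_rate(select) -> float:
    """Aggregate refusal rate over the subgroups matched by `select`."""
    refused = sum(r for key, (r, n) in outcomes.items() if select(key))
    total = sum(n for key, (r, n) in outcomes.items() if select(key))
    return refused / total

# Single-axis audits see no disparity at all...
women = refusal_rate(lambda k: k[0] == "F")       # 0.5
men = refusal_rate(lambda k: k[0] == "M")         # 0.5
twenties = refusal_rate(lambda k: k[1] == "20s")  # 0.5

# ...while an intersectional audit exposes the disadvantaged subgroup.
young_women = refusal_rate(lambda k: k == ("F", "20s"))  # 0.8
```

An audit that only compares marginal rates by sex, or only by age, would certify this system as unbiased; only by crossing the axes does the disadvantage of young women become visible.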

e. Masking

This brings us to the last point brought up by Barocas and Selbst: data mining may also be intentionally used for discriminatory purposes. All discrimination begins with a distinction. The task of an algorithm is precisely to operationalise this distinction. ETIAS screening rules distinguish groups of travellers that were previously not (at least not systematically) distinguished by the hierarchical legal system of global mobility. To better understand this point, we now turn to the purpose of establishing travel authorisation for visa-exempt third-country nationals.

As explained in Section II of this article, ETIAS “fills the gap” presumed to exist in the EU surveillance of global mobility. As such, it is part of “the global mobility infrastructure”, which Thomas Spijkerboer defines as “physical structures, services and laws that enable some people to move across the globe with high speed, low risk, and low cost”.Footnote 174 This perspective shows that ETIAS is not merely a tool of exclusion or just another brick in the wall enclosing the European fortress. Rather, we propose to analyse it also as an instrument of selective and differentiated inclusion, which stimulates the cross-border mobility of some categories of people while restricting the rights of entry of other, “undesirable” travellers. By regulating access to the global mobility infrastructure through instruments such as visa systems, ETIAS and ESTA or carrier sanctions rather than controlling access to their sovereign territories, Global North countries can pursue their aim of increased human mobility while at the same time maintaining sovereign control over their territories and populations.Footnote 175

The EU visa system is undeniably a key element in the control over access to the global mobility infrastructure. It is enough to look at the map representing the countries whose nationals are subjected to visa requirements (Figure 3) in order to see that it approximately (with the exception of certain South American countries) reflects the Global North/Global South divide. Moreover, a great majority of visa-exempted countries are traditionally Christian. One could say that the visa obligations imposed on poor and non-Christian countries hosting racialised populations basically represent a presumption of undesirability in the Schengen Area.

Figure 3. The European Union’s (EU) visa policy. In dark blue: Schengen Area; in light blue: EU states and territories of EU states not part of Schengen and other exceptions; in green: no visa required to enter the EU; in light red: visa required to enter the EU; in dark red: visa and airport transit visa required to enter the EU. Source: European Commission (2021). Note that the Southern Hemisphere seems abnormally compressed.

Nevertheless, amid ongoing globalisation, there is more of the First World in the Third and the Third World in the First. This calls for an individualisation of travel restrictions. The visa system is an institutionalisation of such individualisation. In a critical vein, one could say that “[w]ithin the broader immigration system, visa allocation has always been an algorithm, and it is one that has been designed according to the political priorities of power holders. It is an algorithm that has long privileged whiteness, hetero- and cis-normativity, wealth, and higher socioeconomic status”.Footnote 176 Therefore, on the basis of the individualised class, race and gender criteria, some privileged nationals of Global South countries may gain a visa and thereby gain access to the global mobility infrastructure.

ETIAS performs an analogous type of individualisation for the nationals of visa-exempted, generally wealthier and whiter countries. In fact, it functions as a visa lite, notes Spijkerboer.Footnote 177 With ETIAS, the poor and racialised citizens of relatively wealthy countries may be refused authorisation to enter the Schengen Area for individual reasons, despite the general presumption of their membership of a group of “desirable travellers”. Therefore, the “information gap” that ETIAS is supposed to bridge seems to be the one separating class, race and nationality.

IV. Conclusion

By investigating the AI applications used by the EU to serve its own ends, this paper aimed to provide a more nuanced understanding of the EU’s narrative on human-centric AI. The choice of ETIAS as a case study is of relevance because of a generally known tendency: the invasive techniques first tested on foreigners are subsequently extended to the most precarious parts of the general population, before finally being deployed in society as a whole. Moreover, algorithmic profiling and risk assessment tools are becoming pervasive in areas falling within the public sector, such as social security and criminal justice. ETIAS is only a small piece of a larger and elusive digital fabric, the building process for which is currently underway.

Our investigation reveals two main findings. On the one hand, ETIAS constitutes another piece in the massive infrastructure of the surveillance of third-country nationals that EU institutions have been building for years. As such, it is in line with the process of the digitalisation of the EU’s external border management. On the other hand, ETIAS will serve as a tool of the individualisation of travel restrictions and differential exclusion. Based on a pre-emptive logic, it objectivises, systematises and institutionalises non-EU citizen sorting operations that constitute the core business of migration policies. Nevertheless, as our in-depth analysis of the Regulation and the algorithmic profiling system that it sets forth demonstrates, ETIAS’s data processing is likely to discriminate against some protected groups and produce biased results. Therefore, it fails to achieve its objective of rationalising migration management.

Far from falling outside the scope of the “ecosystem of trust” championed by the EU in AI matters, we argue that ETIAS is fully embedded within it. In its Communication “Fostering a European Approach to Artificial Intelligence”, issued on the same day as the publication of the proposal for a Regulation on AI, the Commission states that “AI can significantly contribute to the objectives of the EU Security Union strategy. It can be a strategic tool to counter current threats and to anticipate both future risks – including hybrid threats – and opportunities. AI can help to fight crime and terrorism, and enable law enforcement to keep pace with the fast developing technologies used by criminals and their cross-border activities”.Footnote 178 Because of the migration–security nexus, there is no doubt that this also applies to migration management: to build an “ecosystem of trust” (the benefits of which are limited to EU citizens), EU institutions promote the risk-based approach to potentially undesirable foreign populations.

One could conclude that there exists a double standard characterising the European approach towards AI. Indeed, while the proposed Regulation on AI lists the “systems intended to be used by competent public authorities to assess a risk, including a security risk, a risk of irregular immigration, or a health risk” among high-risk systems subject to strict obligations before they can be put on the market,Footnote 179 Article 83 of the same proposal – as already underlined – excludes ETIAS from the scope of application of the Regulation.Footnote 180 The EU seems to deliberately evade its own standards in the domain. Consequently, the notion of “trustworthy AI” has clearly delimited borders: its meaning changes dramatically depending on the population it is applied to.

Nevertheless, we should not naïvely conclude that risk-based digital migration management is simply incompatible with the “trustworthy AI” paradigm promoted by the EU in other areas. Michel Foucault stated in Security, Territory, Population that “freedom is nothing but the correlative development of apparatuses of security”. Today, one could say that trust is nothing but the correlative development of systematic risk evaluation. In this sense, ETIAS contributes to “building trust” in the AI-shaped environment.


Acknowledgements

The authors wish to express their gratitude to Benoit Frydman, Robin Medard Inghilterra and Arnaud Van Waeyenberge for their precious comments on previous versions of this paper. Many thanks to Raphaël Gyory, Théo Verhelst and Adam Goliński for their technical expertise.

Competing interests

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.


1 Commission, “Building Trust in Human-Centric Artificial Intelligence” (Communication) COM (2019) 168 final 1.

2 Laid down in a prior Communication and in the coordinated plan on AI: Commission, “Artificial Intelligence for Europe” (Communication) COM (2018) 237 final; Commission, “Coordinated Plan on Artificial Intelligence” (Communication) COM (2018) 795 final.

3 High-Level Expert Group on Artificial Intelligence, Ethics Guidelines for Trustworthy AI, 9 April 2019.

4 Commission, “White Paper on Artificial Intelligence – A European Approach to Excellence and Trust” COM (2020) 65 final.

5 ibid, 10.

6 Commission, “Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts” COM (2021) 206 final (proposed Regulation on AI).

7 Accordingly, high-risk AI systems are submitted to specific requirements.

8 Commission, “Fostering a European Approach to Artificial Intelligence” (Communication) COM (2021) 205 final 4.

9 Area of EU competence (Article 3(2) of the Treaty on European Union and Articles 67 and 77 of the Treaty on the Functioning of the European Union).

10 D Bigo, “The Socio-Genesis of a Guild of 'Digital Technologies’ Justifying Transnational Interoperable Databases in the Name of Security and Border Purposes: A Reframing of the Field of Security Professionals” (2020) 6 International Journal of Migration and Border Studies 74.

11 Commission, “Smart Borders – Options and the Way Ahead” (Communication) COM (2011) 680 final 3.

12 European Parliament and Council Regulation (EU) 2018/1240 of 12 September 2018 establishing a European Travel Information and Authorisation System (ETIAS) and amending Regulations (EU) 1077/2011, (EU) 515/2014, (EU) 2016/399, (EU) 2016/1624 and (EU) 2017/2226 [2018] OJ L236/1 (ETIAS Regulation).

13 European Parliament and Council Regulation (EU) 2016/399 of 9 March 2016 on a Union Code on the rules governing the movement of persons across borders [2016] OJ L77/1 (Schengen Border Code), Art 6(1)(b).

14 According to the latest information publicly available: <> (last accessed 1 April 2022).

15 The list of visa-exempt countries concerned is set out in Annex II of European Parliament and Council Regulation (EU) 2018/1806 of 14 November 2018 listing the third countries whose nationals must be in possession of visas when crossing the external borders and those whose nationals are exempt from that requirement [2018] OJ L303/39. Family members of EU citizens who are exempt from the visa requirement and do not hold a residence card or permit fall, under certain conditions, within the scope of the ETIAS Regulation (see Art 24 of ETIAS Regulation for more details).

16 However, in July 2021, EU co-legislators adopted a Regulation that sets up screening rules for visa applications, which are also based on a profiling algorithm (Regulation (EU) 2021/1134 of the European Parliament and of the Council of 7 July 2021 amending Regulations (EC) No 767/2008, (EC) No 810/2009, (EU) 2016/399, (EU) 2017/2226, (EU) 2018/1240, (EU) 2018/1860, (EU) 2018/1861, (EU) 2019/817 and (EU) 2019/1896 of the European Parliament and of the Council and repealing Council Decisions 2004/512/EC and 2008/633/JHA, for the purpose of reforming the Visa Information System [2021] OJ L248/11, Art 1(11)). It mirrors ETIAS’s profiling algorithm. Although our contribution focuses on ETIAS, most of the analyses we develop here could be, pending further examination, applicable to visa applications’ screening rules. For more details, see N Vavoula, “Artificial Intelligence (AI) at Schengen Borders: Automated Processing, Algorithmic Profiling and Facial Recognition in the Era of Techno-Solutionism” (2021) European Journal of Migration and Law SSRN: <>.

17 Commission, “Delegated Decision of 23 November 2021 on further defining security, illegal immigration or high epidemic risks” C(2021) 4981 final (Delegated Decision). The Decision has not been published yet and is available at <> (last accessed 15 March 2022).

18 For a recently published book on this topic, see R Gelin, Dernières nouvelles de l’intelligence artificielle (Paris, Flammarion 2022).

19 See, among others, M Ebers, VRS Hoch, F Rosenkranz, H Ruschemeier and B Steinrötter, “The European Commission’s Proposal for an Artificial Intelligence Act – A Critical Assessment by Members of the Robotics and AI Law Society (RAILS)” (2021) 4 J 589; N Smuha, E Ahmed-Rengersb, A Harkens, W Li, J MacLaren, R Pisellif and K Yeung, “How the EU Can Achieve Legally Trustworthy AI: A Response to the European Commission’s Proposal for an Artificial Intelligence Act” (2021) SSRN: <>.

20 Art 83(1) of the Commission’s proposal for a Regulation on AI states: “This Regulation shall not apply to the AI systems which are components of the large-scale IT systems established by the legal acts listed in Annex IX that have been placed on the market or put into service before [12 months after the date of application of this Regulation referred to in Art 85(2)], unless the replacement or amendment of those legal acts leads to a significant change in the design or intended purpose of the AI system or AI systems concerned”. ETIAS Regulation is listed in Annex IX.

21 S Barocas and AD Selbst, “Big Data’s Disparate Impact” (2016) 104 California Law Review 671.

22 Especially after the entry into force of the 1985 Schengen Agreement (OJ [2000] L239/13) and the 1990 Convention implementing the Schengen Agreement (OJ [2000] L239/19).

23 N Vavoula, “The ‘Puzzle’ of EU Large-Scale Information Systems for Third-Country Nationals: Surveillance of Movement and Its Challenges for Privacy and Personal Data Protection” (2020) 45 European Law Review 348, 6.

24 See Report of the Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance, ET Achiume, “Racial Discrimination and Emerging Digital Technologies: A Human Rights Analysis”, A/HRC/44/57, Human Rights Council, Forty-fourth session, 15 June–3 July 2020.

25 D Broeders, “The New Digital Borders of Europe: EU Databases and the Surveillance of Irregular Migrants” (2007) 22 International Sociology 71, 89; N Gäckle, “Taming Future Mobilities: Biopolitics and Data Behaviourism in the European Travel Information and Authorisation System (ETIAS)” (2020) 15 Mobilities 257, 262.

26 European Parliament and Council Regulation (EC) 1987/2006 of 20 December 2006 on the establishment, operation and use of the second-generation Schengen Information System (SIS II) [2006] OJ L381/4 (SIS II Regulation).

27 European Parliament and Council Regulation (EU) 604/2013 of 26 June 2013 establishing the criteria and mechanisms for determining the Member State responsible for examining an application for international protection lodged in one of the Member States by a third-country national or a stateless person [2013] OJ L180/31 (Dublin III Regulation), which “lays down the criteria and mechanisms for determining the Member State responsible for examining an application for international protection lodged in one of the Member States by a third-country national or a stateless person” (Art 1).

28 European Parliament and Council Regulation (EU) 603/2013 of 26 June 2013 on the establishment of Eurodac for the comparison of fingerprints for the effective application of Regulation (EU) 604/2013 establishing the criteria and mechanisms for determining the Member State responsible for examining an application for international protection lodged in one of the Member States by a third-country national or a stateless person and on requests for the comparison with Eurodac data by Member States’ law enforcement authorities and Europol for law enforcement purposes, and amending Regulation (EU) 1077/2011 establishing a European Agency for the operational management of large-scale IT systems in the area of freedom, security and justice (recast) [2013] OJ L180/1 (Eurodac Regulation). Eurodac has been operational since 2003. See N Vavoula, “Transforming Eurodac from 2016 to the New Pact: From the Dublin System’s Sidekick to a Database in Support of EU Policies on Asylum, Resettlement and Irregular Migration” (2020) ECRE Working Paper <> (last accessed 14 July 2021).

29 European Parliament and Council Regulation (EC) 767/2008 of 9 July 2008 concerning the Visa Information System (VIS) and the exchange of data between Member States on short-stay visas [2008] OJ L218/60 (VIS Regulation), lastly amended by Regulation (EU) 2021/1134 of the European Parliament and of the Council of 7 July 2021 which extends the scope of VIS to long stay visa and residence permits (Art 1(2)). However, pursuant to Art 12 of the newly adopted Regulation, the Commission shall take a decision setting its date of application. This decision has not been adopted so far.

30 European Parliament and Council Regulation (EU) 2017/2226 of 30 November 2017 establishing an Entry/Exit System (EES) [2017] OJ L327/20 (EES Regulation). This IT system is expected to become operational by the end of September 2022 (according to the latest information publicly available: <> (last accessed 1 April 2022)).

31 European Parliament and Council Regulation (EU) 2019/816 of 17 April 2019 establishing a centralised system for the identification of Member States holding conviction information on third-country nationals and stateless persons (ECRIS-TCN) [2019] OJ L135/1 (ECRIS-TCN Regulation). This IT system is expected to become operational in 2022 (according to the latest information publicly available: <> (last accessed 1 April 2022)).

32 S Preuss-Laussinotte, “L’élargissement problématique de l’accès aux bases de données européennes en matière de sécurité” (2009) 74 Cultures & Conflicts 81.

33 Commission, “Proposal for a Regulation of the European Parliament and of the Council establishing a framework for the interoperability of EU information systems (borders and visas) and amending Council Decision 2004/512/EC, Regulation (EC) 767/2008, Council Decision 2008/633/JHA, Regulation (EU) 2016/399 and Regulation (EU) 2017/2226” COM (2017) 793 final.

34 ISO/TC 184/SC 5.

35 European Parliament and Council Regulation (EU) 2019/817 of 20 May 2019 on establishing a framework for interoperability between EU information systems in the field of borders and visa [2019] OJ L135/27 and European Parliament and Council Regulation (EU) 2019/818 of 20 May 2019 on establishing a framework for interoperability between EU information systems in the field of police and judicial cooperation, asylum and migration and amending Regulations [2019] OJ L135/85 (Interoperability Regulations).

36 For a detailed explanation of the functioning of interoperability, including illustrative examples, see the Commission’s impact assessment: COM(2017) 793 final; SWD(2017) 474 final.

37 It has the purpose “of facilitating the fast, seamless, efficient, systematic and controlled access of Member State authorities and Union agencies to the EU information systems, to Europol data and to the Interpol databases for the performance of their tasks and in accordance with their access rights and the objectives and purposes of the EES, VIS, ETIAS, Eurodac, SIS and ECRIS-TCN” (Interoperability Regulations, Art 6(1)).

38 “Storing biometric templates obtained from the biometric data … that are stored in the CIR and SIS and enabling querying with biometric data across several EU information systems”, this system “is established for the purposes of supporting the CIR and the MID and the objectives of the EES, VIS, Eurodac, SIS and ECRIS-TCN” (Interoperability Regulations, Art 12).

39 “Creating and storing identity confirmation files … containing links between data in the EU information systems included in the CIR and SIS and allowing detection of multiple identities, with the dual purpose of facilitating identity checks and combating identity fraud”, this system “is established for the purpose of supporting the functioning of the CIR and the objectives of the EES, VIS, ETIAS, Eurodac, SIS and ECRIS-TCN” (Interoperability Regulations, Art 25).

40 Interoperability Regulations, Art 17.

41 Interoperability Regulations, Art 39. See also ETIAS Regulation, Art 84.

42 See M Leese, “Fixing State Vision: Interoperability, Biometrics, and Identity Management in the EU” (2020) 27(1) Geopolitics 113. The author argues that the EU interoperability legal framework will lead to a shift from identity production to identity management “that aspires to simultaneously verify and cross-validate identity records across multiple domains [to] then form[s] a new, allegedly truthful basis for knowledge production and government” (at 127).

43 Vavoula, supra, note 23, 24.

44 M Leese, “Exploring the Security/Facilitation Nexus: Foucault at the ‘Smart’ Border” (2016) 30 Global Society 412.

45 Commission, “Preparing the Next Steps in Border Management in the European Union” (Communication) COM (2008) 69 final.

46 The European Data Protection Supervisor delivered an opinion on its own initiative (European Data Protection Supervisor, “Preliminary Observations on the Commission’s Communications COM (2008) 67, 68 and 69 Final” (2008) 2).

47 European Parliament Resolution 2008/2181(INI) of 10 March 2009 on the next steps in border management in the EU and similar experiences in third countries, para 19.

48 Justice and Home Affairs Council, “Internal Security Strategy for the European Union: Towards a European Security Model” (Publications Office of the European Union 2010) 22, 27–28.

49 ibid.

50 PricewaterhouseCoopers, “Policy Study on an EU Electronic System for Travel Authorization (EU-ESTA)” (2011) 292.

51 COM (2011) 680 final 8.

52 Commission, “Stronger and Smarter Information Systems for Borders and Security” (Communication) COM (2016) 205 final.

53 ibid, 3. See also Commission, “The European Agenda on Security” (Communication) COM (2015) 185 final; Commission, “A European Agenda on Migration” (Communication) COM (2015) 240 final 13–14.

54 Travellers crossing external land borders do not generate upstream information, unlike when travelling via air and sea, for which personal data are collected as part of the measures taken by Member States to implement, on the one hand, Council Directive 2004/82/EC of 29 April 2004 on the obligation of carriers to communicate passenger data [2004] OJ L261/24 and, on the other hand, European Parliament and Council Directive (EU) 2016/681 of 27 April 2016 on the use of passenger name record (PNR) data for the prevention, detection, investigation and prosecution of terrorist offences and serious crime [2016] OJ L119/132 (PNR Directive).

55 COM (2016) 205 final 13.

56 European Parliament Resolution 2016/2773(RSP) of 6 July 2016 on the strategic priorities for the Commission Work Programme 2017, para 29.

57 Commission, “Proposal for a Regulation of the European Parliament and of the Council Establishing a European Travel Information and Authorisation System (ETIAS) and Amending Regulations (EU) 515/2014, (EU) 2016/399, (EU) 2016/794 and (EU) 2016/1624” COM (2016) 731 final.

58 PricewaterhouseCoopers, “Feasibility Study for a European Travel Information and Authorisation System (ETIAS)” (2016).

59 Interinstitutional Agreement between the European Parliament, the Council of the European Union and the European Commission on Better Law-Making of 13 April 2016 [2016] OJ L123/1, para 13. See S Alegre, J Jeandesboz and N Vavoula, “European Travel Information and Authorisation System (ETIAS): Border Management, Fundamental Rights and Data Protection” (European Parliament 2017) 28. See also European Data Protection Supervisor, “Opinion 3/2017 on the Proposal for a Regulation Establishing a European Travel Information and Authorisation System (ETIAS)” (2017); European Union Agency for Fundamental Rights, “Opinion 2/2017 on the Impact on Fundamental Rights of the Proposal for a Regulation Establishing a European Travel Information and Authorisation System (ETIAS)” (2017).

60 In addition to the feasibility studies on EU-ESTA of February 2011 and ETIAS of November 2016, PwC was also requested by the Commission to carry out a technical study on smart borders (PricewaterhouseCoopers, “Technical Study on Smart Borders” (2014)), which explored the different technical options available to the Commission in the context of the proposals it tabled under the cover of the 2013 “Smart Borders Package” and that aimed in particular at creating an entry/exit system (COM (2013) 96 final; proposal aborted) and a passenger registration programme (COM (2013) 97 final; proposal withdrawn).

61 M Akkerman, “Financing Border Wars. The Border Industry, Its Financiers and Human Rights” (Transnational Institute and Stop Wapenhandel 2021); C Rodier, Xénophobie business. À quoi servent les contrôles migratoires? (Paris, La Découverte 2012).

62 The LIBE Committee commissioned a study to assess the necessity, implications in relation to interoperability and impact in terms of fundamental rights of the ETIAS proposal: see Alegre et al, supra, note 59.

63 The European Parliament adopted a common position on 5 July 2018 (European Parliament Resolution P8_TA(2018)0307 of 5 July 2018 on the proposal for a Regulation of the European Parliament and of the Council establishing a European Travel Information and Authorisation System (ETIAS) and amending Regulations (EU) 515/2014, (EU) 2016/399 and (EU) 2016/1624), which the Council unanimously approved on 5 September 2018 (Council Vote No 11890/18 of 6 September 2018).

64 ETIAS Regulation, Art 1(1). Definitions of these concepts are specified in Art 3 and the concrete objectives are detailed in Art 4.

65 ETIAS Regulation, Arts 1 and 4.

66 ETIAS Regulation, Art 5.

67 ETIAS Regulation, Art 6.

68 ETIAS Regulation, Arts 7 and 8.

69 ETIAS Regulation, Art 17.

70 Primary, secondary or higher level or none.

71 Under “job groups”, the applicant chooses from a predetermined (long) list laid down by the Commission in the form of delegated acts. See Commission Delegated Regulation (EU) 2021/916 of 12 March 2021 supplementing Regulation (EU) 2018/1240 of the European Parliament and of the Council establishing a European Travel Information and Authorisation System (ETIAS) as regards the predetermined list of job groups used in the application form [2021] OJ L201/1.

72 Or of third countries whose nationals are exempt from the visa requirement (short stay).

73 ETIAS Regulation, Arts 17(8) and 19(3)(d).

74 ETIAS Regulation, Art 54(1).

75 ETIAS Regulation, Art 20(1).

76 ETIAS Regulation, Arts 21(1) and 36.

77 ETIAS Regulation, Arts 21(2) and 22.

78 ETIAS Regulation, Art 21(3). Art 25 states the criteria to determine the Member State responsible.

79 ETIAS Regulation, Art 26(1–2).

80 ETIAS Regulation, Art 37(3). Moreover, contact details of the relevant European and national data protection authorities shall be given to applicants (ETIAS Regulation, Art 38(2)(e)). See also Art 64.

81 The National Unit specifies grounds for refusal. However, on a reading of the ETIAS Regulation, it appears that the Unit is not obliged to explain those grounds: a mere mention seems sufficient. In our view, this is likely to undermine the applicant’s right to an effective judicial remedy.

82 ETIAS Regulation, Art 45.

83 ETIAS Regulation, Art 47.

84 ETIAS Regulation, Art 80. Verification by border guards seems in principle to be a formality (either there is a travel authorisation or there is not). However, according to Recital 37 of the ETIAS Regulation, “if there is a valid travel authorisation, the decision to authorise or refuse entry should be taken by the border guard”. Indeed, border guards must still determine whether the entry conditions laid down in Art 6 of the Schengen Border Code are satisfied. Moreover, flags can be attached to a travel authorisation allowing border guards to proceed to second-line checks (further checks carried out in a special location at the external borders) or to access the ETIAS Central Unit to obtain additional information (ETIAS Regulation, Art 47(4)). Finally, when the period of validity of a travel authorisation expires, access to the EU territory is denied (ETIAS Regulation, Art 47(2)(a)).

85 ETIAS Regulation, Art 20(2).

86 If the applicant’s travel document is reported as being lost, stolen, misappropriated or invalidated in SIS or where the applicant is subject to a refusal of entry and a stay alert is entered into SIS, the National Unit shall refuse the travel authorisation without any discretion (ETIAS Regulation, Art 26(3)(a)).

87 ETIAS Regulation, Art 20(4).

88 ETIAS Regulation, Art 34(1) (emphasis added). Following Art 34(2), “the ETIAS watchlist shall be established on the basis of information related to terrorist offences or other serious criminal offences”. Art 34(3) specifies that the information shall be provided by Europol and/or Member States, but there is no further indication of the criteria upon which Europol and the Member States should base their decision to put a person on the watchlist. Art 34(4) enumerates specific personal data fields of which the ETIAS watchlist is composed. It does not contain data about the nature of the offence (terrorist or other serious crimes), which seems prima facie to remain solely in the hands of Europol and/or the Member States.

89 ETIAS Regulation, Art 35(7). It states that the Commission “shall, by means of implementing acts, establish the technical specifications of the ETIAS watchlist” – a step that, to date, has not even been “planned”.

90 ETIAS Regulation, Art 26(5).

91 ETIAS Regulation, Art 20(5).

92 ETIAS Regulation, Art 26(6).

93 “Automation bias” refers to the tendency of human beings to align themselves with the outcomes of algorithms, notably because of their perceived authority. Humans are therefore less inclined to decide differently from algorithms. See J Gerards and R Xenidis, “Algorithmic Discrimination in Europe” (European Commission 2020) 42.

94 Gäckle, supra, note 25, 267–68.

95 Alegre et al, supra, note 59, 23.

96 FF Schauer, Profiles, Probabilities and Stereotypes (Cambridge, MA, Harvard University Press 2006). On profiling, see also M Hildebrandt and S Gutwirth (eds.), Profiling the European Citizen: Cross-Disciplinary Perspectives (Berlin, Springer 2008).

97 See also M Leese, “The New Profiling: Algorithms, Black Boxes, and the Failure of Anti-Discriminatory Safeguards in the European Union” (2014) 45 Security Dialogue 494.

98 Robinson + Yu, “Knowing the Score: New Data, Underwriting, and Marketing in the Consumer Credit Marketplace. A Guide for Financial Inclusion Stakeholders” (2014) <>.

99 This is well described in L Amoore, The Politics of Possibility: Risk and Security beyond Probability (Durham, NC, Duke University Press 2013).

100 Leese, supra, note 97, 497.

101 VIS Regulation, Art 37 and Recital 13. See European Data Protection Supervisor, “Opinion of the European Data Protection Supervisor on the Proposal for a Council Decision on the Establishment, Operation and Use of the Second Generation Schengen Information System” (2006); Vavoula, supra, note 23.

102 PNR Directive, Arts 6(3)(b) and (4): when carrying out an assessment of passengers, Passenger Information Units may “process PNR data against pre-determined criteria”, which should be carried out in a non-discriminatory manner. See N Vavoula, “Prevention, Surveillance, and the Transformation of Citizenship in the ‘Security Union’: The Case of Foreign Terrorist Fighters” (2018) Queen Mary School of Law Legal Studies Research Paper 19, SSRN: <>; Leese, supra, note 97.

103 Art 4(4) of the GDPR provides that “profiling” means any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work or their economic situation, health, personal preferences, interests, reliability, behaviour, location or movements. Note that while the ETIAS Regulation mentions the GDPR, it is also subject to the European Parliament and Council Regulation 2018/1725 of 23 October 2018 on the protection of natural persons with regard to the processing of personal data by the Union institutions, bodies, offices and agencies and on the free movement of such data, and repealing Regulation (EC) 45/2001 and Decision 1247/2002/EC [2018] OJ L295/29 (EU institutions data protection Regulation). However, it is not the purpose of this paper to address specific issues of data protection.

104 C(2021) 4981 final.

105 ETIAS Regulation, Arts 73 and 74.

106 Email from eu-LISA, 12 May 2021, request No 5/2021 (emphasis added).

107 Email from the European Direct Contact Centre, 26 May 2021, request No 186751. In its reply, the Centre says it consulted the Directorate-General for Migration and Home Affairs of the EU Commission.

108 Email from Frontex, 9 September 2021, PAD-2021-00228.

109 Eu-LISA, “Artificial Intelligence in the Operational Management of Large-Scale IT Systems” (2020) 30.

110 Deloitte, “Opportunities and Challenges for the Use of Artificial Intelligence in Border Control, Migration and Security” (2020) 90.

111 In its proposal of the ETIAS Regulation, the Commission states the following: “The number of visa-exempt third country nationals to the Schengen countries will continue to grow, with an expected increase of over 30% in the number of visa-exempt third country nationals crossing the Schengen borders by 2020, from 30 million in 2014 to 39 million in 2020” (COM (2016) 731 final 2).

112 These assumptions are notably based on interviews that we conducted with two AI experts.

113 For a general account on algorithmic decision-making systems, see DR Amariles, “Algorithmic Decision Systems: Automation and Machine Learning in the Public Administration” in W Barfield (ed.), The Cambridge Handbook of the Law of Algorithms (Cambridge, Cambridge University Press 2020) p 273.

114 Email from Frontex, 9 September 2021. See also Art 6 of the Delegated Decision.

115 Regarding the high epidemic risks, Art 6(1)(a) of the Delegated Decision states that Member States shall provide information “through the epidemiological surveillance and control of communicable diseases network and the Early Warning and Response System in accordance with Articles 6, 8 and 9 of Decision No 1082/2013/EU”.

116 Email from Frontex, 9 September 2021.

118 See R Paul, “Risk Analysis as a Governance Tool in European Border Control” in A Weinar, S Bonjour and L Zhyznomirska (eds.), The Routledge Handbook of the Politics of Migration in Europe (London, Routledge 2020); S Horii, “The Effect of Frontex’s Risk Analysis on the European Border Controls” (2016) 17 European Politics and Society 242.

119 Art 84(2) of ETIAS Regulation provides that eu-LISA shall store the following data in the CRRS: application status information; nationalities, sex and year of birth of the applicant; the country of residence; education (primary, secondary, higher or none); current occupation (job group); the type of the travel document and three-letter code of the issuing country; the type of travel authorisation; the validity period of the travel authorisation; and the grounds for refusing, revoking or annulling a travel authorisation. It further states that “cross-system statistical data and analytical reporting shall allow [Member States, the Commission, eu-LISA and the ETIAS Central Unit] … to support the implementation of the ETIAS” (emphasis added). Pursuant to Art 39 of the Interoperability Regulations, which gives eu-LISA the mandate to establish the CRRS, the data are anonymised.

120 All of this shows how ETIAS and the EES are interlinked. The EES will be a valuable information resource for ETIAS’s algorithm.

121 This is confirmed by Art 3 of the Delegated Decision.

122 Delegated Decision, Recital 4 (emphasis added).

123 Commission Delegated Regulation (EU) 2021/2223 of 30 September 2021 supplementing Regulation (EU) 2019/817 of the European Parliament and of the Council with detailed rules on the operation of the central repository for reporting and statistics [2021] OJ L448/7. Art 84(4) subpara 2 of ETIAS Regulation is also of relevance and states that “the daily statistics shall be stored in the central repository for reporting and statistics referred to in Article 39 of Regulation (EU) 2019/817” (emphasis added).

124 Alegre et al, supra, note 59.

125 Eg see COM (2020) 65 final 12.

126 The ETIAS Screening Board is established within Frontex and mainly has an advisory role. It is composed of one representative of each ETIAS National Unit, one representative of Frontex and one representative of Europol (ETIAS Regulation, Art 9(1)). Following an amendment proposed by the EU Parliament (see the Draft Report on the proposal for a regulation of the European Parliament and of the Council establishing a European Travel Information and Authorisation System (ETIAS) and amending Regulations (EU) 515/2014, (EU) 2016/399 and (EU) 2016/1624, 4 October 2017, PE605.985v02-00, 11-12), the Regulation also creates an ETIAS Fundamental Rights Guidance Board, with an advisory and appraisal function. According to Art 10(2), it “shall perform regular appraisals and issue recommendations to the ETIAS Screening Board on the impact on fundamental rights of the processing of applications and of the implementation of Article 33”. Art 9(3) states that “when issuing recommendations, the ETIAS Screening Board shall take into consideration the recommendations issued by the ETIAS Fundamental Rights Guidance Board”.

127 However, following Binns and Veale, the question of whether Art 22 GDPR and Art 24 of the EU institutions data protection Regulation are applicable to these types of algorithmic decision-making systems may not be as straightforward as it seems. Indeed, the action performed by the ETIAS screening rules is typically what they describe as triaging, ie “determining which cases get to a human decision-maker or are passed to another automated process”: if a hit is not triggered, the travel authorisation is issued automatically. Therefore, the triggering of a hit could be considered as a decision producing a significant effect since without the intervention of the ETIAS algorithm the refusal decision would not have been issued (for more details, see R Binns and M Veale, “Is That Your Final Decision? Multi-Stage Profiling, Selective Effects, and Article 22 of the GDPR” (2021) International Data Privacy Law 1). That being said, a further discussion on the applicability of Art 22 GDPR and Art 24 of the EU institutions data protection Regulation is beyond the scope of this paper.

128 Opinion 1/15 ECLI:EU:C:2017:592; Joined Cases C-511/18, C-512/18 and C-520/18 La Quadrature du Net and Others v Premier Ministre and Others [2020] ECLI:EU:C:2020:791. The opinion and the ruling of the CJEU involve automatic data processing in the fight against terrorism.

129 For more details, see Vavoula, supra, note 16.

130 Emphasis added. Equivalent provisions are contained in the Interoperability Regulations (Art 5, which is an open-ended, non-discriminatory clause), the EES Regulation (Art 10, which states broad fundamental rights guarantees, and Recital 19, which is a non-discriminatory clause with an exhaustive list of protected grounds) and the VIS Regulation (Art 7, which is a non-discriminatory clause limited to protected grounds of secondary EU equality law).

131 Eg see Recitals 10, 13, 15, 17, 33, 35 to 39, 44, 45 and 47 of the proposal.

132 European Union Agency for Fundamental Rights, supra, note 59, 26.

133 ibid, 28.

134 ibid, 29. See also European Data Protection Supervisor, supra, note 59, 11–14.

135 Which we take as our main analysis angle in the following pages to tackle issues raised by ETIAS’s profiling algorithm.

136 T Spijkerboer, “The Global Mobility Infrastructure: Reconceptualising the Externalisation of Migration Control” (2018) 20 European Journal of Migration and Law 452, 467; M-B Dembour, When Humans Become Migrants: Study of the European Court of Human Rights with an Inter-American Counterpoint (1st edition, Oxford, Oxford University Press 2015).

137 K Lippert-Rasmussen, “The Badness of Discrimination” (2006) 9 Ethical Theory and Moral Practice 167.

138 Barocas and Selbst, supra, note 21 (emphasis added). See also R Xenidis and L Senden, “EU Non-Discrimination Law in the Era of Artificial Intelligence: Mapping the Challenges of Algorithmic Discrimination” in U Bernitz et al (eds.), General Principles of EU Law and the EU Digital Order (Alphen aan den Rijn, Kluwer Law International BV 2020) p 156.

139 Broeders, supra, note 25, 88.

140 Indeed, nationality, sex and age range can lead to direct discrimination. Country or city of residence, level of education and current occupation can serve as proxies for protected grounds such as race and ethnic origin, thereby leading to indirect discrimination. We will return to this later.

141 K Leurs and T Shepherd, “Datafication & Discrimination” in MT Schäfer and K van Es (eds), The Datafied Society (Amsterdam, Amsterdam University Press 2017) p 220.

142 Barocas and Selbst, supra, note 21, 677. Systematisation of sources of algorithmic biases and the terminology used vary in the legal scholarship, but the substance is quite similar. See FJZ Borgesius, “Discrimination, Intelligence Artificielle et Décisions Algorithmiques” (Conseil de l’Europe 2018); P Hacker, “Teaching Fairness to Artificial Intelligence: Existing and Novel Strategies Against Algorithmic Discrimination Under EU Law” (2018) 55 Common Market Law Review 1143; J Kleinberg et al, “Discrimination in the Age of Algorithms” (Social Science Research Network 2019) SSRN Scholarly Paper ID 3329669 <> (last accessed 30 October 2021); Xenidis and Senden, supra, note 138; FJZ Borgesius, “Strengthening Legal Protection against Discrimination by Algorithms and Artificial Intelligence” (2020) 24 International Journal of Human Rights 1572.

143 In the same vein, see Barocas and Selbst, supra, note 21, 673. As the authors specify, their research subject is broader than discrimination in its strict legal sense and concerns “disproportionately adverse outcomes concentrated within historically disadvantaged groups in ways that look a lot like discrimination”.

144 J Ringelheim, “The Burden of Proof in Anti-Discrimination Proceedings. A Focus on Belgium, France and Ireland” (European Network of Legal Experts in Gender Equality and Non-Discrimination 2019) 2, 51. See also F Palmer, “Re-Dressing the Balance of Power in Discrimination Cases: The Shift in the Burden of Proof” (European Network of Legal Experts in the Non-Discrimination Field (EU Commission) 2006); I Rorive, “Proving Discrimination Cases – The Role of Situation Testing” (Migration Policy Group – Swedish Centre for Equal Rights 2009); L Farkas and O O’Farrell, “Reversing the Burden of Proof: Practical Dilemmas at the European and National Level” (European Network of Legal Experts in the Non-Discrimination Field (EU Commission) 2014); A Baele, “Proving Discrimination: The Shifting Burden of Proof and Access to Evidence” (Cloisters 2016).

145 The CJEU posed the principle of the shared burden of proof for the first time in the Danfoss case (Case 109/88, Handels- og Kontorfunktionærernes Forbund I Danmark v Dansk Arbejdsgiverforening, acting on behalf of Danfoss [1989] ECLI:EU:C:1989:383), thereby guaranteeing the effectiveness of the equality provisions. Its case law has remained constant since (eg Case 83/14 CHEZ Razpredelenie Bulgaria AD v Komisia za zashtita ot diskriminatsia [2015] ECLI:EU:C:2015:480, paras 77–85) and EU secondary law embodies this principle (eg Council Directive 2000/43/EC of 29 June 2000 implementing the principle of equal treatment between persons irrespective of racial or ethnic origin OJ [2000] L180/22). The CJEU has alleviated the standard rule of burden of proof, under which it is for the claimant to establish the facts they allege. Indeed, discrimination can leave no material trace and the standard rule reinforces the unequal distribution of power that exists between the parties. It is now well established that when an alleged victim of discrimination brings evidence of facts from which it may be presumed there is, on the face of things, a discrimination (a prima facie case), the onus shifts to the defendant, who will escape liability only by showing that there is no discrimination.

146 We contend that the principle of the shared burden of proof applies here. Frontex is responsible for ETIAS’s screening rules (ETIAS Regulation, Art 75(1)(c)) and the processing of applicants’ personal data (ETIAS Regulation, Art 57(1)). The Agency is bound to (theoretically) apply the principle of equality and non-discrimination by virtue of Art 14 of the Regulation. This provision partly mirrors Arts 20 (equality before the law) and 21 (non-discrimination clause) of the EU Charter of Fundamental Rights, to which Frontex is subject pursuant to Art 51(1) of the Charter. Although to our knowledge the CJEU has not explicitly recognised that Arts 20 and 21 of the Charter standing alone entail the principle of the shared burden of proof – it has only done so by combining these provisions with those of EU secondary law (see Case 83/14, supra, note 145) – there are no reasons for the Court to decide otherwise given the significance it devotes to the effectiveness of equality and non-discrimination as core values of EU law.

147 R Xenidis, “Two Round Holes and a Square Peg: An Alternative Test for Algorithmic Discrimination in EU Equality Law” (on file with the authors).

148 In this context and according to our understanding, the data miner is Frontex (more specifically, the Central Unit that will be created within Frontex’s existing structure) acting as the authority responsible for the screening rules (ETIAS Regulation, Art 75(1)(c)).

149 See also Kleinberg et al, supra, note 142, 139–40 (according to whom “what outcome to predict, or how to weight together different outcomes, is one of the critical choices that must be made” in the building process of a machine learning algorithm); Xenidis and Senden, supra, note 138 (who observe that “stereotyping can influence the framing of the problem posed, and of the output looked for”).

150 Barocas and Selbst, supra, note 21, 679.

151 NP De Genova, “Migrant 'Illegality’ and Deportability in Everyday Life” (2002) 31 Annual Review of Anthropology 419; Y Jansen, R Celikates and J de Bloois, The Irregularization of Migration in Contemporary Europe: Detention, Deportation, Drowning (Lanham, MD, Rowman & Littlefield International 2015).

152 According to Art 33(3) subpara 2 of ETIAS Regulation, specific risks shall be reviewed at least every six months.

153 Barocas and Selbst, supra, note 21, 680.

154 Hacker follows the same path and identifies, among the main causes of algorithmic bias, biased training data, which covers two subcases. The first is “incorrect handling of data”, ie incorrect labelling stemming from implicit bias or from sampling bias (misrepresentation of the population in the data set). The second is “historical bias” in the training data. See Hacker, supra, note 142.

155 On the discriminatory border checks see R Bright, “Beware the Border Patrol: The Nasty History of Airport Discrimination” (The Conversation, 14 August 2017) <> (last accessed 29 June 2021); Y Vázquez, “Race and Border Control: Is There a Relationship?” (Oxford Law Faculty, 6 April 2015) <> (last accessed 29 June 2021).

156 European Union Agency for Fundamental Rights, “Fundamental Rights at Airports: Border Checks at Five International Airports in the European Union” (2014) 45.

157 Oberverwaltungsgericht Rheinland-Pfalz (2012). The Court overruled a decision of the Administrative Court of Koblenz that had justified the triggering of further checks based merely on foreign looks, ruling instead that any form of ethnic profiling is inconsistent with Art 3 of the German Basic Law.

158 See eu-Lisa, “2019 Annual Report on Eurodac” (2020) 20. This Report gives a good account of the proportion of erroneous entries in the database. In 2019, 79,595 transactions were rejected due to errors, representing 6.3% of such transactions. Transaction error rates stood at 8% in 2018 and 5.7% in 2017.

159 See EES Regulation, Art 2(3)(c): “This Regulation does not apply to: … holders of residence permits referred to in point 16 of Article 2 of Regulation (EU) 2016/399 other than those covered by points (a) and (b) of this paragraph”. Art 2 of Regulation (EU) 2016/399 (Schengen Border Code) states that: “For the purposes of this Regulation the following definitions apply: … (16) ‘residence permit’ means: … (b) all other documents issued by a Member State to third-country nationals authorising a stay on its territory that have been the subject of a notification and subsequent publication in accordance with Article 39, with the exception of: (i) temporary permits issued pending examination of a first application for a residence permit as referred to in point (a) or an application for asylum” (emphasis added).

160 European Commission and Joint Research Centre, Migration Profile Venezuela (2019) <> (last accessed 15 July 2021).

161 Therefore producing “feedback loops”. See Gerards and Xenidis, supra, note 93, 43.

162 Barocas and Selbst, supra, note 21, 688. In the same vein, Kleinberg et al argue that unfavourable treatment “can also be introduced through decisions about what candidate predictors to collect, construct and give to the … algorithm to consider for possible inclusion in the final statistical model”: see Kleinberg et al, supra, note 142, 140–41.

163 Schauer, supra, note 96, 3–7 (cited in Barocas and Selbst, supra, note 21, 688).

164 See Xenidis and Senden, supra, note 138. According to them, “in the absence of perfect information or more granular data and in front of the cost of obtaining such data, stereotypes and generalizations regarding certain groups of population might be relied on as a way to approximate reality”.

165 Art 33(4) of the ETIAS Regulation does not provide clear guidance as to how the Central Unit will establish the specific risk indicators or the criteria it will use to balance the six attributes. There are some procedural guarantees, such as the consultation of the ETIAS Screening Board, but the Central Unit seems to have (on our reading of the Regulation) a wide margin of appreciation, making the decision-making process opaque.

166 Gerards and Xenidis, supra, note 93, 44.

167 Barocas and Selbst, supra, note 21, 691. See also AER Prince and D Schwarcz, “Proxy Discrimination in the Age of Artificial Intelligence and Big Data” (2020) 105 Iowa Law Review 1260–61.

168 For a similar approach, see R Xenidis, “Tuning EU Equality Law to Algorithmic Discrimination: Three Pathways to Resilience” (2020) 27 Maastricht Journal of European and Comparative Law 736.

169 OHCHR, “Thematic Report on Racial Discrimination in the Context of Citizenship, Nationality and Immigration” <> (last accessed 2 July 2021).

170 S Fredman, “Intersectional Discrimination in EU Gender Equality and Non-Discrimination Law” (European Commission, 2016) 27. For an account of intersectional disadvantage in the field of algorithmic discrimination, see, among others, Xenidis, supra, note 168.

171 K Crenshaw, “Demarginalizing the Intersection of Race and Sex: A Black Feminist Critique of Antidiscrimination Doctrine, Feminist Theory and Antiracist Politics” (1989) University of Chicago Legal Forum 1.

172 Note that, in Parris, the CJEU rejected a claim based on intersectional disadvantage. It ruled that “while discrimination may indeed be based on several of the grounds … there is, however, no new category of discrimination resulting from the combination of more than one of those grounds, such as sexual orientation and age, that may be found to exist where discrimination on the basis of those grounds taken in isolation has not been established” (Case C-443/15 David L Parris v Trinity College Dublin [2016] ECLI:EU:C:2016:897, para 80). For the time being, therefore, alleged victims of discrimination cannot contend from the outset that they underwent unfavourable treatment based on a combination of protected characteristics.

173 A Agrawal, “Removing Bias in AI Isn’t Enough, It Must Take Intersectionality into Account” (Medium, 23 April 2019) <> (last accessed 9 July 2021).

174 Spijkerboer, supra, note 136.

175 ibid.

176 S Costanza-Chock, “Design Justice, AI, and Escape from the Matrix of Domination” (2018) Journal of Design and Science <> (last accessed 22 March 2021).

177 Spijkerboer, supra, note 136, 461.

178 COM (2021) 205 final 3.

179 Annex 3, para 7(b) of the Proposal for a Regulation on AI.

180 Interestingly, in its 2021 Opinion, the European Economic and Social Committee (EESC) “strongly recommends widening the scope of the AIA [Artificial Intelligence Act] so as to include ‘legacy AI systems’, i.e. systems that are already in use or are deployed prior to the coming into force of the AIA, in order to avoid deployers fast tracking any prohibited, high- and medium-risk AI to avoid compliance requirements. Moreover, the EESC strongly recommends not to exclude AI that is a component of large-scale IT systems in the area of freedom, security and justice as listed in Annex IX from the scope of the AIA” ([2021] OJ C517/61).

Figure 1. Overview of the European Union (EU) information systems (our illustration). BMS = biometric matching service; CIR = common identity repository; ECRIS-TCN = European Criminal Records Information System for Third-Country Nationals; EES = Entry/Exit System; ESP = European search portal; ETIAS = European Travel Information and Authorisation System; IT = information technology; MID = multiple-identity detector; SIS II = Schengen Information System; VIS = Visa Information System.

Figure 2. Overview of the European Travel Information and Authorisation System (ETIAS) information system (our illustration). ECRIS-TCN = European Criminal Records Information System for Third-Country Nationals; EES = Entry/Exit System; EU = European Union; eu-LISA = European Agency for the Operational Management of Large-Scale IT Systems in the Area of Freedom, Security and Justice; SCI = secure communication infrastructure (fourth-generation Trans European Services for Telematics between Administrations – TESTA-ng); SIS II = Schengen Information System; SLTD = Interpol Stolen and Lost Travel Document database; SR = ETIAS screening rules; TDAWN = Interpol Travel Documents Associated with Notices database; VIS = Visa Information System; WL = ETIAS watchlist.

Figure 3. The European Union’s (EU) visa policy. In dark blue: Schengen Area; in light blue: EU states and territories of EU states not part of Schengen and other exceptions; in green: no visa required to enter the EU; in light red: visa required to enter the EU; in dark red: visa and airport transit visa required to enter the EU. Source: European Commission (2021). Note that the Southern Hemisphere seems abnormally compressed.
