1. Introduction
We live in a world that depends on artificial intelligence (‘AI’). The quest to unravel ‘what is going on’ in an AI system, and what the consequences of its use are, is frequently referred to as ‘transparency’. Transparency is in reality often more an aspiration than a fact, for human decision making no less than for AI.Footnote 1 AI systems are rather known for their intrinsic opacity.Footnote 2 Barriers to understanding how AI systems work run the gamut from technical obstacles – for example, not being able to access their bits and components (eg, the source code, training data, output data) – to epistemic challenges, such as the inability of humans to provide an explanation for the outcome of an AI system. The ‘state of not knowing’Footnote 3 AI systems depends on multiple factors. A particularly significant one is that AI systems are developed and utilised behind closed doors. Design choices and processes operate within, and are nurtured by, a legal architecture that constructs and maintains secrecy on multiple levels. In outline, secrecy is both ‘intentional’ and ‘legal’. It is ‘intentional’ because actors involved in AI governance, whether private or public, choose to rely on, or at least find themselves operating in, a regime enabling secrecy. It is ‘legal’ because it is fortified by and grounded in a plethora of legal frameworks that enable an information holder to keep information about AI (its components and, even more fundamentally, the very fact of its development) secret.Footnote 4
A secrecy-infused legal patchwork affects the possibility for actors other than those in control of an AI system to access its informational components (such as training data, algorithms, technical documentation and so forth). Yet, secrecy is an often-untold story when looking into public sector applications of AI systems. Of course, these tools offer great potential for making operations more efficient and effective. Public bodies increasingly use AI systems even in areas hugely sensitive for citizens and their lives,Footnote 5 including experimentation at national level in the fields of migration, asylum, criminal justice, social welfare, education and taxation.Footnote 6 Governmental agencies and public bodies increasingly classify AI development and research, especially in domains with military or security implications, in order to safeguard AI-related information and maintain a strategic edge.Footnote 7 We will call these sources of AI political secrecy.Footnote 8 We prefer the term political secrecy to the more descriptive public secrecy in order to highlight the politically motivated aspects of secrecy in a public governance context.Footnote 9 When involved in developing AI, private developers can rely upon other secrecy regimes (such as trade secrets) which make algorithmic systems all the more inaccessible to the public. We can refer to these as forms of AI private secrecy.Footnote 10 Stated concisely, different strata of concealment, with different rationales and pertaining to different branches of law, apply to AI and operate in tandem.
A striking illustration of these coexisting secrecy trends is border governance in the EU. Migration has always been a site of intertwinement between public needs and private interests. The increasing algorithmisation of border control has turned such coalescing into a path dependency for EU agencies such as Frontex and eu-LISA, tying them to often large private vendors providing and maintaining state-of-the-art AI-driven technology. To a very significant extent, European research funding gives academics and private consortia safe spaces to construct and develop often far-reaching systems of border control (for example, iBorderCtrl).Footnote 11 AI systems, though amply funded by public money, are thus often not developed in house by European officials. The subsequently applied AI systems are frequently procured from private vendors, often large conglomerates of the tech industry.Footnote 12 As AI development is thus often outsourced, private contractors may (and do) keep their AI systems opaque to bolster, in accordance with existing legal frameworks, their (ongoing) competitive advantage over other firms. That motive is then imported into public governance, the consequence being that officials may share or conceal bits of information internally in a whole host of ways.
The growing enmeshment of private and public actors in developing and employing AI systems makes it more difficult to consider one set of secrecy enablers separately from the others, albeit underpinned by different motivations. The cross-cutting approach of the European AIAFootnote 13 further reinforces this coming together of public and private forms of secrecy,Footnote 14 and underscores the importance of considering both dimensions side by side. The compresence of secrecy across public and private domains is unprecedented in terms of scale and application across potentially all policy domains.Footnote 15 The secret development of AI ends up thwarting public values and normative foundations running the gamut from public accountability to transparency.
Our paper examines the intertwined range of public and private legal drivers of secrecy covering procured AI technology in the field of migration governance in Europe, illustrating how they come together to the detriment of affected persons (such as third country nationals), watchdogs, pressure groups and researchers. The paper proceeds in four consecutive parts. Paragraph 2 reveals AI secrecy as an institutional framework that consists of both public law and private law forms of secrecy and is informed by varying degrees and layers of opacity (a spectrum of deep and shallow secrets). Paragraph 3 explores the sources of what we call ‘political secrecy’ of AI systems, variously manifesting in forms of secrecy operated by migration and security authorities in ways that tend to shield the existence and use of AI systems from public awareness. Paragraph 4 turns to analyse the legal frameworks that shape and maintain AI vendors’ private forms of secrecy that are highly relevant in the context of migration governance. In analysing the various sources of political and private secrecy, we also show how the AIA barely makes a dent in the intertwined legal framework of AI secrecy. We then close by condensing the core takeaways and some countervailing strategies in Paragraph 5. The conclusion looks more widely at the general implications of AI secrecy beyond migration governance, including national security, defence and law enforcement.
2. Conjoining secrecy in AI law and governance
A. How and why private and political secrecy converge in public-sector AI
Inquiries into AI secrecy frequently come in silos. Conventional analyses tend to compartmentalise the private and public legal sources of secrecy, treating them as separate, self-contained domains. Examples of these approaches abound. For instance, intellectual property law scholars are inclined to look into the extent to which trade secrecy and other IP rights cover AI systemsFootnote 16 and how public interest limitations come into play.Footnote 17 Viewing private secrecy as an obstacle to public accountability, other authors in the same camp seek legal doctrines (such as ‘information publicity’ in the US)Footnote 18 or exceptions and limitationsFootnote 19 that can serve the purpose of disclosing corporate forms of secrecy. Not nearly as much ink has been spilled on public law forms of secrecy (such as state secrecy, security and classified information), which enable governments and public bodies to keep the AI systems they utilise under wraps.Footnote 20 The existing scholarship, more or less implicitly, tends to (over-)stress an analytical demarcation between the AI systems held secret by corporations and the secrecy regimes applied by public administrations to keep the functioning of AI systems secret in turn.
Yet, such a separation is particularly odd because current trends and practices in AI deployment in public governance frequently take an intertwined course via public procurement. These trends more broadly shape the dynamics of private–public relationships across various areas of interest. For one thing, the commingling of public and private actors in the employment of AI-driven technology for public functions responds to a narrow view of government efficiency, which ends up favouring procurement from the private sector over other arrangements.Footnote 21 Many of these applications rely on AI technology that is privately developed and made available to public actors. In Europe, both law and political science scholars have signalled the emergence of out-and-out ‘migration markets’ in the realm of European border and asylum governance.Footnote 22 The realisation of Europe-wide databases for governing the entry of so-called third country nationals has traditionally involved large private vendors through large framework contracts. For example, in 2019 the implementation and maintenance of the Entry–Exit System (EES) was awarded to a consortium of IBM Belgium BVBA, Atos Belgium NV and Leonardo S.p.a. for 142 million euros.Footnote 23 The upkeep of SIS II and Eurodac has been procured from Atos, Accenture and HP, and from Bull Atos Technologies, Sopra Steria and Gemalto, respectively.Footnote 24 In 2020, Frontex and eu-LISA jointly commissioned a range of contractors (such as Leonardo and Unisys Belgium SA) under a sweeping Transversal Engineering Framework (‘TEF’) worth 181 million euros to design, support, maintain and test core business systems as well as interoperability components and infrastructure for new EU-wide systems.Footnote 25
The vast majority (if not all) of these IT systems are (very probably) being enhanced with AI technology to further automate their operations.Footnote 26 This potentially hands private contractors vast technical and political clout about which we know very little overall. The first lot of the TEF (allotted to Unisys Belgium NV/SA, Unisystems Luxembourg SARL and Wavestone SA) involves the study, development and implementation of AI techniques to infer patterns from travellers’ data stored in the Central Repository for Reporting and Statistics (‘CRRS’).Footnote 27 Further, in late 2024 Frontex concluded several contracts for the supply of remotely piloted aerial devices for operations related to its tasks and mandate, with vast sums of (public) money involved.Footnote 28 The tender to equip the agency with a medium-altitude, long-endurance remotely piloted aerial system is worth 184.3 million euros.Footnote 29 In fact, private companies have typically furnished a wider range of military and security equipment, including biometric identification systems.Footnote 30 Even though the procurement documentation is publicly available,Footnote 31 there is in any event a glaring absence of easily accessible information on whether AI is used, or on how the AI systems are being made and implemented.Footnote 32 Infantino speaks of ‘a “black box” within the “black box” of policy implementation’ in the field of migration governance in Europe.Footnote 33 Bigo highlights the wider (political) context of corporate actors who fundamentally shape the European IT architecture, their role being that of ‘visionaries of a future in the making’.Footnote 34 Yet that critical part is entirely hidden from public view and scrutiny.Footnote 35
In these environments, the procurement of AI-driven migration technology from private actors challenges any clear formal separation between private and public (political) forms of secrecy. This makes migration a field where secrecy thrives more widely than in other settings. Since both private agents (contractors or vendors) and deploying public agencies are involved in migration, both can leverage secrecy regimes and adopt secrecy-enhancing strategies. Broadly speaking, secrecy claims mutually reinforce one another. This does not mean that they respond to the same needs and interests, nor that they always align. Take the example of an individual (such as a migrant) or an entity such as an NGO making an access request. Even where a public administration intends to provide access to AI-related information on transparency grounds, the private vendor may in turn refuse any access to safeguard its own economic interests.Footnote 36 The public authority, for its part, may also be unwilling to share AI-related information with an applicant on grounds of public security.Footnote 37 Migration and security authorities may keep ongoing experimentation with AI use under wraps in such a way that the public is not even aware of its occurrence. If a leak takes place, then journalists and researchers may seek more information and make the public aware of some details.Footnote 38
The compresence of private and political secrecy over AI systems calls their demarcation into question because, in the final analysis, they equally contribute to undermining public transparency and accountability.Footnote 39 More specifically, two reasons substantiate why we should analyse private and political forms of secrecy together. First, those ultimately affected are the people on the receiving end: individuals who ought to be served by public bodies acting in the public interest.Footnote 40 The accentuated vulnerability of the persons affected by public and private choices in border control (third-country nationals) makes border control a particularly sensitive sector requiring careful design, implementation and management of algorithmic systems.Footnote 41 To this end, watchdogs and pressure groups are at the forefront of filing freedom of information requests to have the secrets disclosed to the public. A holistic understanding and conceptualisation of the various sources of secrecy, together with the actors leveraging them, therefore proves essential to keep up with the reality on the ground.
Second, restricted reason giving is the final outcome of both private and political forms of secrecy. On the one hand, governments and public administrations rely on political secrecy in areas of (national) security, arguing that classified decisions or information, or unclassified but restricted information, need to remain internal and not be shared with the public or other audiences. Private actors, on the other hand, rely on variants of secrecy to keep their own commercial interests over AI systems away from their competitors, but also from the public more generally, and even from government officials. So, the very outcome of both political and private secrecy remains largely the same, much as the undergirding reasons and rationales for adopting secrecy regimes may differ.
B. AI secrecy in migration governance: unearthing deep and shallow secrets
Placing political and private secrecy on the same analytical plane warrants their examination through a unitary conceptual lens. More fundamentally, focusing on the structure of a secret and its strength or weakness shifts debates on AI secrecy onto another level: that of positioning the sources of secrecy within the social practices, relational networks and power disparities that the governmental use of procured AI systems triggers. Pozen’s seminal analysis of ‘deep secrecy’ in the governmental sector, which draws and expands on previous research conducted by Scheppele in the realm of contract law,Footnote 42 is a helpful benchmark for better understanding private and public (AI) secrecy and its implications for migration governance.
Pozen distinguishes deep secrets from shallow secrets. Secrecy is deep ‘if a small group of similarly situated officials conceals its existence from the public and from other officials, such that the outsiders’ ignorance precludes them from learning about, checking, or influencing the keepers’ use of the information’.Footnote 43 National security decision making has traditionally been considered a deep secret protected by extensive and interlocking classification systems of documents and information.Footnote 44 Systems of information security (classified information) protect the executive power both as a matter of internal security and of external security and make it very difficult to hold the executive to account.Footnote 45 Conversely, secrets are shallow ‘if ordinary citizens understand they are being denied relevant information and have some ability to estimate its content’.Footnote 46 Shallow secrets are capable of being known, in the sense that some reference is made to them. Further details on shallow secrets may, for example, be sought through freedom of information requests (and may subsequently be denied on the basis of a plethora of legal exceptions: trade secrecy, privacy or public security). By contrast, deep secrets typically emerge into public view only if there is leaking or whistleblowing from within the security administration.
This structural conceptualisation pivoting on depth is particularly apt for describing governmental secrets because it visualises them according to the degree to which they are unearthed by inside or outside forces. The bottom line is that there is no rigid dichotomy. Secrets are not set in stone. They rather flow along a continuum and are more or less deep (or more or less shallow) based on the degree to which four indices come into play. First, how many people know: the more members of a community know, the shallower the secret becomes.Footnote 47 Second, what sort of people know, or more precisely, what kinds of public officials and administrations are aware of the concealed information.Footnote 48 Third, it is important to consider how much they know: the less one is aware of the processes and the ‘functional understandings’ that lead to certain outcomes, the deeper the secret.Footnote 49 Fourth, secrecy depth (or shallowness) is affected by the passage of time. A secret can be extremely deep at inception, but the information, its content and workings can progressively trickle out from its keeper.Footnote 50 Secrecy may thus become more translucent over time and grow shallower. Taken together, the four indices of secrecy point to a dynamic and contextual notion of secrecy. Thus, it is key to ask ‘how deep or shallow [a secrecy] is to society as a whole’,Footnote 51 as well as to consider how even deeper secrets are being or may be unearthed and brought into (shallower) light, including in the context of AI.
Pozen’s conceptualisation may usefully be employed to describe the secrecy of AI use by public bodies for migration governance. Depending on the level of concealment, both private and public forms of secrecy relate to the public in fundamentally two different ways. Some secrets around AI are deeper than others. This is the case when the publics (in the realm of border control: third country nationals, EU citizens, researchers and watchdogs) are, if you will, ‘in the dark about the fact that they are being kept in the dark’ about AI. For instance, the deepest forms of secrecy occur when the publics do not have the slightest idea that a public body or agency has been using an AI system that affects their position in some way or another. Lacking this knowledge, they are not in a position to trigger the legal protection afforded to them. As Paragraph 3 shows, cases vary, with researchers and pressure groups often suspecting the use of AI systems and therefore being driven to make access requests. AI-related deep secrecy does not have quite the same pedigree in the AI governance literature as its conceptual opposite, shallow secrecy. If a secret is shallow, the party from which information about AI is withheld is conscious of the AI system existing and operating. They might well not appreciate how it operates and functions, but they might know that they are exposed to an AI system. They remain in the state of ‘not knowing’ for the reasons we have referred to in the Introduction, that is, because AI is in and of itself opaque,Footnote 52 even where information about the AI system has meanwhile come to public attention.
With this framing in mind, the following sections probe deeper into how deep or shallow political and private forms of secrecy are when AI systems are utilised for migration governance.
3. Political secrecy
Political secrecy is traditionally linked to struggles over the distribution and preservation of political power.Footnote 53 For an important part, it was constructed around fortresses designed to prevent secrets from being unearthed by anyone other than those with (strict) security clearances. As such, it comes with powerful ‘privileges’, also for the democratic state,Footnote 54 traditionally nestled within core government administration, mainly in matters of security and defence. Today, it has come to include ever widening circles of officials and private actors involved in the broader business of governing.Footnote 55
Nowadays a much broader securitisation environmentFootnote 56 provides a fertile breeding ground for AI-related deeper secrets to thrive, and Frontex has been in the vanguard of policing its secrets, often on very general grounds. Further, political decisions may be hidden from view in datafication processes, including (big) databases, and their subsequent algorithmisation.Footnote 57 There are several examples in the EU where it is possible to discern political power in technical implementation, but not that many (yet) where it is clear whether AI components are being used.Footnote 58 As Musco Eklund aptly observes, ‘recognising the political in the technical’ is key where technology has regulating power and inherent normative force.Footnote 59
To map further the legal terrain of political secrecy in European migration governance, Paragraph 3 is divided into three subparagraphs that each highlight a different legal regime regulating different aspects of political secrecy. In the Access RegulationFootnote 60 we observe how exceptions of public security, formulated almost three decades ago, can largely trump public access to AI-related information held by migration authorities in the digital era (Paragraph 3.A). In Paragraph 3.B the focus is on the evolving roles of the European Border and Coast Guard Agency (‘Frontex’) and eu-LISA, also in relation to the ETIAS data system, whose operationalisation has been delayed to 2026.Footnote 61 Lastly, Paragraph 3.C looks into how the AIA leaves political secrecy untouched.
A. Seeing the political domination of ‘public’ security claims
Executive power is reinforced in practice and incrementally at the supranational level in a non-centralised and non-unitary way, in particular in European agencies. Frontex, the European Border and Coast Guard Agency, is a case in point. Frontex has for some years now emerged as an unseen (sub-)political actor operating below the constitutional radar, its overtly technical governance being at the same time substantively political. It is ‘a hub for surveillance and data sharing’.Footnote 62 Some see in the way Frontex exercises its political discretion today evidence that it is in reality ‘more akin to a national security or intelligence agency, rather than a civilian border management agency’.Footnote 63 Frontex is in any event considered at the forefront of quasi-militarised, highly securitised enforcement practices in Europe,Footnote 64 and is thus readily able to resist disclosure of many of its documents, and the documents it holds of others, on grounds of public security.
The mandatory public security exception in the Access Regulation, in effect, furnishes the ideal legal framework to build secrecy around AI useFootnote 65 by Frontex. The existence of AI systems being used by security and migration authorities is surrounded by secrecy leaning towards the deep camp. In fact, the public is generally not aware of their ongoing existence and experimentation. This is not only a matter of original political design, but of ongoing and intentional concealment by politicised bureaucracies. Twenty-five years into the Access Regulation’s regime, this is all the more striking since the context has radically changed and the necessity of concealing automated decision making that affects vulnerable individuals is not a given ‘in the public interest’. If Frontex wishes to keep the public in the dark as to its tools and methods and obfuscate reality, then the Access Regulation positively assists this deliberate concealment.Footnote 66 This is so even when the same public authority has been actively exploring the utility of using AI in certain contexts for some years.Footnote 67 Researchers and NGOs trying, in their perception of the public interest, to unearth some of the facts, to get even a sense of the existence of AI tools being piloted or developed, are overall stopped in their tracks, as the Access Regulation allows EU institutions to rely on secrecy where the institution or agency in question claims ‘public security in the public interest’.Footnote 68 In fact, secret-keeping on ‘public’ security (for whatever reason) is considered ‘in the public interest’. Hence the exact use of AI remains a deep secret. Interestingly, public interest considerations also inform migration authorities’ conduct in their capacity as public buyers in procurement procedures. When deciding on the contract award and the information to provide to candidates and tenderers, contracting authorities may, among other things, ‘decide to withhold certain information where its release would impede law enforcement’ or ‘would be contrary to the public interest’.Footnote 69 Even more fundamentally, the contracting authority may decide to declare a contract secret.Footnote 70 This ends up embedding deep secrecy in the fabric of the private–public relationship.
If some partial access is given, then some vistas of shallowness emerge, with the prospect of more information in the future. Given the explicit terminology in the Access Regulation, however, even where a negative decision is appealed to a court or the Ombudsman, little can be done other than to ask the administration to double-check the documents requested to see if some (partial) access can be given, or to confirm (having seen the documents in question, which, in order to be requested at all, must already be a relatively shallow secret) that the risk to ‘public security’ is not hypothetical.Footnote 71
By and large, there are several still relatively shallow furrows being ploughed by civil society applicants in search of even a little more precision on whether (when suspected) and how automated techniques like AI are (already) being used. Thus, applicants seek information concerning Frontex’s use of publicly available data from social media platforms (scraping profiles of migrants and asylum seekers) relating to migration routes,Footnote 72 because such data are often used to train algorithms and to develop AI solutions.Footnote 73 Such information is used for the agency’s risk analysis according to Article 29 of the EBCG Regulation,Footnote 74 and subsequently for the conception and formulation of operational responses. But whether and how the data are actually used for such purposes may remain a deep secret even after appeals to the European Ombudsman. Furthermore, it is hard to imagine that not just citizens and researchers, but also public officials beyond those directly involved in migration governance, are in the know about AI technologies being employed for migration governance. Yet, Frontex already argued some years back in favour of the use of more publicly available data from social media in the framework of the democratisation of AI.Footnote 75
The contemporary role of Frontex also includes more overtly political tasks, such as carrying out high-scale, AI-driven surveillance using procured technologies like drones.Footnote 76 On the website of Frontex a private contractor elaborates: ‘[o]ur drone detection software uses artificial intelligence and detection algorithms to look for unique patterns defining drone/RC protocols of communications, efficiently discriminating drones from other objects’.Footnote 77 That drones are being used emerges also through access to documents cases, but Frontex steadfastly refuses to give any access to documents or information, not even to an investigation report into the crash of a Frontex drone.Footnote 78 Interestingly, this latter decision shows that the private contractor was consulted about the request to access its documents and that it refused any disclosure.Footnote 79 Frontex also argued that disclosing the document would affect its current and future ability to negotiate contracts of a similar kind. It relied on its wide margin of discretion in assessing whether disclosing certain information could pose a risk to the public interest as regards public security.
B. Technical and operational secrecy by political design: keeping ETIAS under wraps
Frontex will also run the European Travel Information and Authorisation System (ETIAS), which will grant travel authorisations to third country and non-EU travellers (TCN travellers) once it is finally launched in 2026. It has been under preparation (and its technical and design details under wraps) for almost a decade. ETIAS travel authorisations (comparable to ‘light visas’)Footnote 80 will be based on automated risk assessments as well as the social media scraping of migrants and asylum seekers. These automated risk assessments will be carried out by a profiling algorithm, called ‘screening rules’ in the Regulation’s nomenclature.Footnote 81 Moreover, ETIAS’s automated risk assessment may also draw on AI-driven management of the CRRS, which pools and renders interoperable statistical and historical data providing evidence for migration governance policymaking.Footnote 82
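To illustrate what is at stake in the distinction between hand-written ‘screening rules’ and AI-based profiling (a distinction Frontex itself invokes, as discussed below), consider a deliberately simplified sketch. All field names, indicators, weights and thresholds are our own assumptions for exposition only; nothing here reflects ETIAS’s actual, non-public specifications:

```python
# Purely hypothetical sketch: none of the fields, indicators or weights
# below reflect ETIAS's actual, undisclosed design.
from dataclasses import dataclass

@dataclass
class Application:
    nationality: str
    age: int
    prior_overstay: bool

# Rule-based "screening rules": transparent, human-authored comparisons
# of specific risk indicators against the application file.
def screen_by_rules(app: Application) -> bool:
    flagged_nationalities = {"XX"}  # placeholder indicator set
    return app.nationality in flagged_nationalities or app.prior_overstay

# Learned (AI) risk model: the decision boundary is inferred from
# historical data, so even its operator may be unable to state the rule.
def screen_by_model(app: Application, weights: dict) -> bool:
    score = (weights.get("age", 0.0) * app.age
             + weights.get("overstay", 0.0) * float(app.prior_overstay))
    return score > 0.5  # arbitrary placeholder threshold

if __name__ == "__main__":
    app = Application(nationality="XX", age=34, prior_overstay=False)
    print(screen_by_rules(app))                 # True: a rule matches
    print(screen_by_model(app, {"age": 0.01}))  # False: score 0.34
```

The legal salience of the difference is that in the first variant the decisive criteria are, at least in principle, statable and reviewable, whereas in the second they are induced from data and may elude even the deploying authority.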
Overall, competent authorities appear intent on concealing (or at best downplaying) their use of AI in public discourse. In the words of Derave and others, ‘it is an open question whether ETIAS screening rules and their underlying profiling algorithms will be effectively based on AI’.Footnote 83 As Vavoula observes, ‘recourse to the term “algorithm” may be seen as either a means of avoiding the loaded term of “AI” or as leaving this issue open for introducing AI in the future’.Footnote 84 For now, although the potential use of AI in ETIAS risk assessment ‘is hidden into the different technical specifications of the systems’,Footnote 85 there is a clear (public) intention to use it in future. This makes efforts to obtain more information more feasible. It is no deep secret that AI is contemplated quite actively to augment ETIAS. An eu-LISA report from 2020 indicated that ETIAS can employ AI tools in several different ways,Footnote 86 and this aligns well with a detailed report by Deloitte on the same theme prepared for the Commission in 2020.Footnote 87 Thus, automated risk assessments under the ETIAS Regulation amount to AI secrets positioned within the shallow camp.
Yet, Frontex has actively challenged the politics around the use of (the term) AI, and has in effect denied its use even in complaints. For example, when a group of researchers at the Université Libre de Bruxelles sought public access from Frontex to the details of ETIAS’s risk assessment algorithm, Frontex insisted that it would be more appropriate to talk about filtering queries than algorithms and responded that no sophisticated analysis methods or any form of AI are involved in the risk assessment.Footnote 88 That was a few years ago now, but the explicit use of AI in this context remains a shallow secret even though this far-reaching border entry system is scheduled to enter into force late in 2026.Footnote 89
Frontex is not the only EU agency involved in ETIAS. eu-LISA is the agency responsible for its technical development and management.Footnote 90 The ETIAS Regulation leaves considerable technical discretion to eu-LISA on how the algorithm will enable profiling by comparing specific risk indicators with data in the application file.Footnote 91 How this is in fact to be done by eu-LISA, and with what software specifics, is nevertheless not public and is kept secret on grounds of public security.Footnote 92 eu-LISA thus possesses significant discretion in relation to how the software that will enable the ETIAS system is being developed and designed for imminent application. eu-LISA is already using algorithms for biometric matchingFootnote 93 in three existing systems, all of which are based on fingerprints and facial images (Schengen Information System (SIS II), Visa Information System (VIS), European Dactyloscopy (Eurodac)).Footnote 94 It is not excluded that AI-related information may be shared with other executive actors, such as Europol, which have access to EU databases such as ETIAS.
C. Hiding in plain sight: fencing digital borders off from the AIA
The AIA constitutes the first attempt to regulate AI in migration and border management,Footnote 95 albeit in the broader context of horizontal legislation applying to all AI systems placed on the market or used in the Union that fall under its scope. For a variety of reasons, the regulation reinforces the wide margin of discretion that competent authorities already possess to keep the AI systems they use under wraps in some way or another. The following subparagraphs illustrate how this plays out.
Public privilege through material and temporal exclusion
Competent authorities in migration governance are subject to a material and a temporal exemption, both placing them outside the scope of transparency requirements for high-risk systems provided by the AIA.Footnote 96
The AIA builds on the assumption that ‘migration, asylum and border control management’ AI systems are being used inter alia by Union institutions, bodies, offices or agencies. The regulation labels some migration applications of AI systems as ‘high-risk’. Paragraph 7 of Annex III lists four specific use cases of AI systems in this area as falling within its scope as high-risk systems: use as ‘polygraphs or similar tools’,Footnote 97 use to ‘assess a risk, including a security risk, a risk of irregular migration, or a health risk, posed by a natural person who intends to enter or who has entered into the territory of a Member State’,Footnote 98 use to support authorities in assessing migration applications and evidence,Footnote 99 and use to identify individuals, excluding travel document checks.Footnote 100 While leaving the competences of the Member States regarding national security untouched,Footnote 101 the AIA does not apply to systems employed for military, defence or national security purposes.Footnote 102 As the boundaries of ‘national security’ are highly contested, ‘some Member States might be eager to expand the protective veil of national security upon other situations of public security or even border management’.Footnote 103
Secondly, the regulation provides a sweeping temporal exclusion. Article 111(2) of the AIA stipulates that ‘[i]n any case, the providers and deployers of high-risk AI systems intended to be used by public authorities shall take the necessary steps to comply with the requirements and obligations of this Regulation by 2 August 2030’. Therefore, competent authorities in migration enjoy a substantial grace period before they become bound by the requirements for high-risk AI.Footnote 104 By the same token, Article 111(1) stipulates, as a transitional matter, that ‘AI systems which are components of the large-scale IT systems’ in the AFSJ (established by the legal acts listed in Annex X) that have been placed on the market before 2 August 2027 only need to be brought into compliance with the AIA by 31 December 2030. This includes, by specific mention, ETIAS, as well as virtually all the other databases in border control and immigration (such as the Schengen Information System (SIS), the Visa Information System (VIS) and Eurodac),Footnote 105 and grants the institutions and agencies until the end of 2030 to comply with any of the provisions of the AIA. This demonstrates in our view that the AIA itself assumes that ETIAS is likely to feature AI systems within the meaning of the AIA, as otherwise there would be no need to exclude it or make a temporal derogation.Footnote 106 Further, the delayed conformity with the AIA creates a buffer period in which public authorities have vast leeway in terms of keeping AI deployment under wraps in the ways we have analysed above.Footnote 107
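The transition regime just described can be condensed into a short illustrative sketch. This is our simplified reading of Article 111 for exposition, not a substitute for statutory interpretation:

```python
from datetime import date

# Hedged sketch of the AIA transition deadlines discussed above.
PUBLIC_DEPLOYER_DEADLINE = date(2030, 8, 2)   # Art. 111(2) AIA
ANNEX_X_CUTOFF = date(2027, 8, 2)             # Art. 111(1): placed on market before this date
ANNEX_X_DEADLINE = date(2030, 12, 31)         # Art. 111(1) compliance date

def compliance_deadline(is_annex_x_component: bool, placed_on_market: date) -> date:
    """AIA compliance date for a high-risk system used by a public
    authority, on the simplified reading of Article 111 set out above."""
    if is_annex_x_component and placed_on_market < ANNEX_X_CUTOFF:
        return ANNEX_X_DEADLINE  # eg, ETIAS and the other AFSJ databases
    return PUBLIC_DEPLOYER_DEADLINE

# Example: an AI component of an Annex X database fielded in 2026
print(compliance_deadline(True, date(2026, 6, 1)))  # 2030-12-31
```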
As a matter of fact, migration and security actors enjoy a safe harbour that keeps the implementation of AI in ETIAS and other databases away from public scrutiny. What is striking is that the vast material and temporal exemptions in the AIA effectively sanction opaque practices involving both public and private actors in migration. However, even if the AIA were to apply to high-risk AI systems under Paragraph 7 of Annex III, it would only cover cases in which the use of AI for border control directly impacts a natural person.Footnote 108 By invoking the derogation in Article 6(3) of the AIA, competent authorities may in fact classify AI systems as non-high-risk, thus sidestepping the Act’s key accountability and transparency obligations under Chapter III. Likewise, a risk assessment AI tool ‘used for strategic intelligence’Footnote 109 that orients the general operations of the competent authorities in migration may fall outside the scope of high-risk AIs. Absent stringent requirements for non-high-risk AIs, authorities are effectively granted a licence to keep their design and implementation under wraps.
In the AI register, but behind a non-public wall
In line with all other high-risk AI systems, AI border and migration systems must be registered in a centralised and (in principle) publicly accessible database managed by the Commission.Footnote 110 According to the Commission, registration in this database is supposed to ‘increase public transparency, and oversight and strengthen ex-post supervision by competent authorities’.Footnote 111 However, Article 49(4) AIA provides that: ‘[f]or high-risk AI systems referred to in points 1, 6 and 7 of Annex III, in the areas of law enforcement, migration, asylum and border control management, the registration referred to in paragraphs 1, 2 and 3 of this Article shall be in a secure non-public section of the EU database referred to in Article 71 and shall include only the following information, as applicable (…)’.Footnote 112 This means that only the Commission and the EDPS, as the designated supervisor for EU systems and agencies, will have access to the registered details and will know of their existence. Once again, the logics of (deeper) secrecy inform this provision, reflecting ‘the fears of Member States that revealing information could compromise national security’.Footnote 113 In a bid to increase transparency, the European Parliament had proposed scrapping the exception, originally put forward by the Commission, to the registration of high-risk AI systems used in migration governance.Footnote 114 The exemption eventually made its way into the final text under pressure from the Council, as provisions furthering transparency were viewed as ‘harmful to law enforcement and migration’.Footnote 115
For the public and civil society seeking to use public access laws, the use of high-risk AI systems in the areas of law enforcement, migration, asylum and border control will thus likely remain a deep secret, unless some leaking takes place or partial access is granted. Even if the institution or agency concerned pleads the public security exception, it is not excluded that the European Ombudsman, in particular after its own inspection of the documents in question, comes to a negotiated arrangement with the EU deployer to release at least some information publicly. But this all remains highly contingent. The European Ombudsman may indeed be more reluctant to intervene in the future where there is a specific exclusion in the AIA and the EDPS is the designated supervisor. The publication of the EDPS audits of the AI components of EU IT systems could also support transparency in this field, even if not as directly as access to documents would.
Ubiquitous secrecy: Article 78 AIA
Article 78 of the AIA is arguably very far reaching both in terms of what is covered and who is covered. It is one of the most important provisions in the legislation as it places market surveillance authorities and – quite broadly – ‘any other natural or legal person involved in the application of this Regulation’ under an absolute legal obligation to respect the confidentiality of information and data they obtain as they perform their duties (Article 78(1) AIA). This confidentiality requirement does not create formal limits to the information-gathering powers conferred on the market surveillance authorities (‘MSAs’), and in fact it is provided elsewhere that they can have full access from providers.Footnote 116 Yet, it does prevent these supervisory authorities from disclosing information about AI systems after it has been given to them. It is in fact a striking illustration of how political secrecy spreads beyond the executive or administration deploying the AI system to its supervision, and how in that context supervisors who are at a clear remove from the deployment must keep all the secrets, whether of a public or private nature. This is admittedly not a novel idea, but simply parallels political secrecy rules that have previously been applied to accountability forums such as courts, parliaments and the European Ombudsman.Footnote 117 Article 78 moreover gives explicit special protection to both public and private interests, such as national and public securityFootnote 118 and intellectual property rights and trade secrets,Footnote 119 which often clash with the public interests promoted by transparency in AI systems and elsewhere.Footnote 120
Article 78 thus reinforces the sense that political secrecy is the subterranean narrative of the AIA, one that in practice dominates transparency however much the Act is unpicked and reformulated. There are two main implications of this. For one thing, direct access to the relevant documentation is only possible for staff members of the MSAs holding the appropriate level of security clearance. In fact, MSAs can be drawn from various sectors. A consumer watchdog or a telecoms regulator acting as MSA might struggle to develop the kind of internal structures needed to deal with information classification.Footnote 121 On the other hand, any mediated public disclosure of information by the MSA requires the user of the AI system, namely the public body deploying the system,Footnote 122 to be consulted before disclosure is agreed. So, providers and certain types of users (and particularly public sector users of high-risk AI systems for law enforcement and asylum, migration, and border control management) can effectively maintain secrecy.Footnote 123
The public(s)’ limited agency to counter political secrecy in the AIA
Under the AIA, individuals and organisations are provided with limited countervailing rights to resist secrecy. On the one hand, natural and legal persons have a right to lodge a complaint with market surveillance authorities if the AIA has been infringed.Footnote 124 In migration governance, the limited application of high-risk AI requirements to AI systems used in the field makes it all too easy to bypass this provision.Footnote 125
Relatedly and more restrictively, affected persons enjoy a right to explanation of individual decision-making ‘taken by the deployer on the basis of the output from a high-risk AI system listed in Annex III’.Footnote 126 The regulation nevertheless only applies to affected persons ‘that are located in the Union’.Footnote 127 Consequently, third country nationals and other individuals who are not in EU territory are barred from exercising the right to explanation.Footnote 128 Moreover, it is not clear how civil society organisations and pressure groups can gain access to the relevant information by raising this point.Footnote 129 In light of the non-public section of the AI register, it is hard to see how NGOs, watchdogs and so forth can seek public access under the Access Regulation, or even file an adequate complaint, if there is in effect no information.
4. Private secrecy
AI procurement in migration governance, we have seen above, gets private actors involved in developing the AI systems deployed by migration and security authorities. The crux of the problem is that vendors can leverage harbours of private secrecy to prevent disclosure of AI systems, absent mandates in public contracts to the contrary.Footnote 130 Private secrets are, in essence, forms of shallow secrecy.Footnote 131 What is not known is not the existence or the use of the (procured) AI system per se, but the content and the workings of such AI system. A claimant, in fact, needs to be aware of the existence of an AI system being used by authorities before private secrecy can be invoked against them. The public body has then acknowledged the system in some way or another and, when asked to open up information on it (eg, training datasets, algorithms, information about its workings and functioning, and so forth), turns to the vendor. Interestingly, the vendor may in principle assert secrecy claims even against the deploying public body and is therefore able to exert effective control over the deployed AI system. Thus, not only are NGOs, affected persons (such as third country nationals) and researchers impacted by private secrecy, but potentially migration agencies and other public bodies as well. That said, contractual practice regarding AI technology in migration governance seems to put contracting authorities in the position of controlling the AI-related data and information.
Industry players in migration governance carve out secrecy in a variety of ways. The two primary tools are trade secrecy and contractual measures. The former provides legally safeguarded exclusivity that in principle enables providers to eschew public scrutiny over AI systems (Paragraph 4.A).Footnote 132 Even where trade secret claims cannot stretch that far, contractors and public buyers can rely on contractual arrangements erecting factual secrecy (Paragraph 4.B). Persons affected by AI systems, as well as watchdogs and NGOs, are in turn denied access to AI-related information (Paragraphs 4.C and 4.D).
A. EU trade secrecy law shields AI systems from public authorities’ oversight
More than questions as to its proprietary nature, the crux of trade secrecy protectionFootnote 133 in the AI realm is that firms can leverage its flexibility to shield AI systems from access requests by public authorities.
For one thing, the protection requirementsFootnote 134 are broad enough to cover the vast majority of informational assets and components of AI systems, such as algorithms and data, but also potentially crucial technical documentation.Footnote 135 Trade secrecy revolves around the concept of information. The Trade Secret Directive provides three prerequisites for information to qualify for protection: its secrecy, its commercial value owing to its secrecy, and its being subject to reasonable steps to keep it secret.Footnote 136 It has been observed that information coming from publicly accessible sources or collected ‘in plain sight’ does not meet the secrecy requirement.Footnote 137 The Court of Justice of the EU seems to have endorsed this view in Pari Pharma, a case concerning pharmaceuticals.Footnote 138 In this case, the EMA had received a request for access to documentation illustrating the clinical results of a novel drug for treating respiratory diseases. This documentation formed part of reports drawn up by the Committee for Medicinal Products for Human Use (‘CHMP’). Pari Pharma, the applicant, claimed that ‘information contained in the CHMP reports (namely raw data generated on its behalf or the compilation and analysis of publicly-accessible data) constitutes a trade secret’,Footnote 139 and therefore should not be disclosed. The court found that information in the CHMP reports does not meet the ‘secrecy’ requirement, since these are mostly made up of information that is assembled in compliance with specific ‘regulatory requirements’ and is directed ‘by the EMA’s precise questions’.Footnote 140 Moreover, the information, specifically relating to a niche subject matter and ‘coming from bodies and associations well known to pharmaceutical undertakings, could be obtained without difficulty and without any particular inventiveness’.Footnote 141 Therefore, a more nuanced qualification can be drawn ‘between the publicly-accessible information and that which falls within the scope of the market survey and the applicant’s own conclusions’.Footnote 142 Public accessibility of information is the strongest element of resistance to recognising secrecy in unprocessed data observed from reality. Datasets used to train AI systems that are made up of easily accessible information available to the public (such as data scraped from social media by Frontex)Footnote 143 would thus not enjoy trade secret protection. The same arguably holds true where public authorities procure AI systems from private providers while furnishing them with detailed instructions on what data needs to be injected into the system. However, algorithms and model architecture may well be trade secrets, as may the source code and the technical documentation used for model development.
Secondly and more importantly, trade secrecy operates upon assertion by its holder.Footnote 144 No public or regulatory authority is involved in the initial allocation of trade secret status to information. (Putative) trade secret holders are thus prompted to (over)claim protection even if the legal prerequisites are plainly not fulfilled at inception.Footnote 145 So, what makes trade secrecy a highly malleable tool is the wide leeway firms (including AI vendors in migration governance) are afforded to invoke the protection over informational assets in the first place. In a sense, it radically departs from the logic of state-made concession which characterises other IP standards, such as patents. It more closely resembles self-coronation: a corporate conglomerate confers it upon itself. The idea of a laissez-faire prerogative embedded in a private liberty sphere informs it.Footnote 146 Trade secret holders are entitled to carry out the initial self-allocation without fearing public oversight. (Alleged) trade secret protection starts out as a de facto, self-enforced act of assertion over information (whether in the form of algorithms or data) that reflects the logic of possession in private law.Footnote 147
Of course, corporate (over-)claiming does not necessarily amount to legal protection sanctioned by trade secret law. A court or a regulatory authority can always redefine the ambit of protection when asked to do so in a misappropriation case.Footnote 148 In fact, the Court of Justice has delineated the boundaries of protection in Dun & Bradstreet Austria,Footnote 149 which concerned the balancing of trade secrecy against the right to access information on automated decision making under Article 15(1)(h) GDPR.Footnote 150 However, what matters is the sort of ‘first mover advantage’ that a firm can secure by asserting trade secrecy over AI systems to prevent access to begin with. When developing an AI system, firms put in place reasonable measures to implement secrecy inside and outside the organisation.Footnote 151 These bestow secret status on the results of their practices to prevent misappropriation and diffusion of the informational apparatus of AI systems.Footnote 152 At worst, a public agency or a judicial authority re-determines the correct purview of trade secrecy after the fact, when it has the opportunity to do so, and with the possibility of the AI system being deactivated or replaced in the meantime. Even though the CJEU case law concerning pharmaceuticals and access to documentsFootnote 153 and, more recently, credit scoringFootnote 154 opposes the practice of claiming broadly before regulators, the trend persists and may well stretch to migration governance. For example, contractors may well claim trade secrecy over informational assets that are not specifically addressed within the contractual arrangement with the public counterparty.Footnote 155
Yet, trade secrecy is not unlimited. Some scope restrictions and exceptions to it move in the direction of countering trade secret claims over AI systems. Article 1(2)(b) and (c) of the Trade Secret Directive respectively provide that the directive does not affect disclosure obligations in the public interest or the exercise of supervisory powers by European and domestic public authorities. These provisions enable public bodies to request access to documents and the disclosure of information for transparency in administrative proceedings.Footnote 156 Yet, the unclear confines of the ‘public interest’ say little about when it prevails over private interests.Footnote 157 It would typically cover information falling under the Access Regulation and environmental information.Footnote 158 On paper, the broad formulation of Article 1(2)(b) and (c) serves as the ideal basis for national data protection authorities to request disclosure of trade secrets,Footnote 159 as well as for the competent authorities within the AIA.Footnote 160 However, its lack of operationalisation limits its practical salience in other fields, such as migration governance and security more broadly.
Article 5(d) provides an exception to trade secrecy for disclosures ‘for the purpose of protecting a legitimate interest recognised by Union or national law’. The perimeter of this exception is contested.Footnote 161 It is not clear what qualifies as a ‘legitimate interest’,Footnote 162 and therefore how to balance it against trade secrecy. The notion of legitimate interest may well include the transparency obligations that the AIA stipulates.Footnote 163 Article 5(d) thus holds potential for tackling private secrecy, but its role remains contested as it has yet to be seen in action.
B. Building secrecy in procurement contracts on migration AI technology
Firms are inclined to utilise contractual measures (such as confidentiality or non-disclosure agreements) to protect the confidentiality of information they hold.Footnote 164 Interestingly, the role of contracts in erecting factual exclusivity is even more salient in the eyes of companies. Empirical evidence flips the frequently presumed order of importance between the two protection forms. Trade secrecy is often considered an ‘additional safety net’ on top of contractual measures to avoid data and information misappropriation.Footnote 165 Hence the extensive practice (frequently informed by skilful lawyers and spanning manifold industries)Footnote 166 of relying upon contractual arrangements to prop up factual secrecy in data and algorithms.
In contracts, parties are furnished with extensive leeway to regulate their own affairs. One may submit that the trade-off for such extensive flexibility is that only the parties to such contracts are legally bound by their clauses. AI providers relying on contract alone cannot, in fact, seek relief against infringing or misappropriating third parties as they could under the Trade Secret Directive.Footnote 167 However, this does not mean that contractual arrangements to keep factual secrecy are ineffective in practice. For one thing, claiming secrecy broadly through contracts has become the ‘preferred approach’ to keep AI systems away from the public, or even from employees.Footnote 168 On the other hand, if the trade secrecy regime does not apply, the related exceptions and limitations to trade secrecy (including those for the public interest) are not applicable either.Footnote 169 This ends up reinforcing private interests and needs vis-à-vis public accessibility.
Factual secrecy features prominently in AI procurement networks in migration governance. The preceding section has cautioned that firms can leverage trade secrecy overclaiming to eschew oversight from public authorities. There are reasons to be a bit more optimistic in this respect if one looks at contractual practice in EU migration governance. The agreements between contracting authorities and contractors concerning AI migration technology raise a high bar of confidentiality that binds both parties. For example, the framework service contracts on the use of drones by Frontex,Footnote 170 on the provision of services for business content development related to the data management and analytics software used by FrontexFootnote 171 and on the implementation and maintenance of the biometric systems by eu-LISAFootnote 172 all feature a confidentiality clause that concerns ‘any information or documents, in any format, disclosed in writing or orally, relating to the implementation of the FWC and identified in writing as confidential’. The first of these also puts Frontex in control of the data processed by drones,Footnote 173 and bars the contractor from processing any data using its systems unless Frontex has approved the technical design of the solution.Footnote 174 Moreover, the contractor must sign a non-disclosure agreement,Footnote 175 which binds all its duly authorised personnel.
Not only does confidentiality characterise public contracts, but it also captures the entire procurement procedure. Contractors enjoy a secrecy harbour based on the concept of ‘business secrets’, which partially overlaps with trade secrecy and antedates ‘trade secrecy’ in the EU acquis (the Trade Secret Directive).Footnote 176 Business secrets encompass a broad range of confidential information that firms may (be required to) provide to public administrations. Article 41(2)(b) of the CFREU associates them with the right to good administration.Footnote 177 Thus conceptualised, they form a corporate sphere of interest that public authorities are required to preserve while fulfilling tasks involving access to privately held information. A secondary law incarnation in migration governance is Regulation 2024/2509, which adapts the procurement rules applicable to EU agencies and institutions to the general framework of the Public Procurement Directive. In particular, Annex I of Regulation 2024/2509 extends the confidentiality rules of the directiveFootnote 178 to various phases of migration procurement.Footnote 179 Confidentiality covers information that is not necessarily a trade secret,Footnote 180 but is considered confidential upon the unilateral decision of the firm.Footnote 181 In the Antea Polska judgement, the CJEU espoused a broad interpretation of confidential information in procurement: confidentiality rules in procurement cover not only trade secrets, but also other business-related information whose disclosure might thwart competition and the mutual trust between tenderers and the contracting authority.Footnote 182 Information qualifying as a business secret thus has the potential to encompass various components of AI systems (data, algorithms) provided by vendors in procurement and deployed by migration and security authorities. The relevant case law has in fact adopted quite an expansive approach. The CJEU has embraced the following definition: ‘information of which not only disclosure to the public but also mere transmission to a person other than the one that provided the information may seriously harm the latter’s interest’.Footnote 183 The Court has also spelled out three conditions to be met: (a) the information must be restricted to a limited group of individuals; (b) its revelation could seriously harm either its provider or third parties; and (c) the interests that could suffer from its disclosure must be objectively worthy of protection.Footnote 184
C. Commercial interests, public detriments
At this juncture, one may well wonder whether affected persons, civil society groups and NGOs can rely on transparency avenues vis-à-vis private secrecy. The most evident one is the Access Regulation, which governs citizens’ access to documents held by public bodies. Giving effect to Article 42 CFREU and Article 15 TFEU, Article 2 of the Access Regulation establishes a citizens’ right of access to documents held by EU institutions,Footnote 185 and a corresponding obligation for the latter to grant access to such documents.Footnote 186 On paper, this provides the basis for access by private citizens, on public interest grounds, to algorithmic systems employed by public authorities. The notion of ‘document’,Footnote 187 in fact, is broad enough to encompass AI systems (more specifically, AI algorithms).Footnote 188
Yet, citizens’ or researchers’ demands can be frustrated if the imperatives of private secrecy end up prevailing just as those of political secrecy do.Footnote 189 Article 4(2) of the Access Regulation restricts access to public documents if disclosure would undermine the ‘commercial interests of a natural or legal person, including intellectual property’. However, the exception can be resisted in case of ‘an overriding public interest in disclosure’ (the ‘counter-exception’). The case law on Article 4(2) has cast light on how the access rule under Article 2(1) and the corresponding exception and counter-exception function.
The CJEU has limited the potential for an overly sweeping scope of private interests. Since the latter form exceptions to the ‘widest possible’ access to public documents, they must be interpreted strictly.Footnote 190 Courts must carry out a balancing test between opposing public and private interests,Footnote 191 and the interest prevailing in the circumstances of the case determines whether the document is to be disclosed.Footnote 192 Moreover, the recipient of the access request bears the burden of explaining (even by means of general presumptions applying to several categories of documents, as seen above) the extent to which access to documents would jeopardise the (commercial) interest protected by the exception relied on.Footnote 193 More fundamentally, they must prove that the risk of compromising such interest is reasonably foreseeable and not merely hypothetical.Footnote 194 The individual invoking the overriding public interest must conversely prove that it gains salience in the specific circumstances of the case.Footnote 195
As ‘commercial interests’ are defined neither in the legislation nor in the CJEU’s judgements,Footnote 196 this concept has the potential to capture a larger pool of private entitlements and interests than trade secrecy alone. The notion of ‘commercial interests’ seems to align with that of ‘business secrets’ encountered in the previous section. This does not mean, nonetheless, that the corporate inclinations to (over-)claim trade secrecy or to use contractual measures are to be blindly ratified in all instances. Some EU agencies (such as the European Medicines Agency, ‘EMA’) have admittedly proven quite permissive in letting firms themselves define the ambit of documentary access,Footnote 197 even though applying confidentiality presumptions to prevent access to commercially valuable information ‘is always optional for the EU institution, body, office or agency to which such a request is addressed’.Footnote 198 At the same time, firms need to ‘specifically and precisely’ single out what information, once disclosed, would effectively undermine their commercial interests.Footnote 199 Information should thus be treated as confidential only if the business priorities are tangible enough to call for secrecy.Footnote 200
The Court of Justice has taken various approaches to construing the ‘overriding public interest’. Turco fundamentally opened the way to a broad understanding of the public interest as a product of ‘increased openness’.Footnote 201 Later, the Court raised the bar for demonstrating an overriding public interest in disclosure: LPN stressed that the need for transparency invoked must be ‘pressing’.Footnote 202 The CJEU has recently provided further guidance on the ‘overriding public interest’ in data-driven applications. Although it endorsed quite an extensive interpretation of the notion in Bayer CropScience,Footnote 203 it later adopted a more restrictive perspective in the Breyer judgement, which concerns access to an AI system designed to detect lies in the context of border control. The automated system was developed for the EU Horizon 2020-funded iBorderCtrl pilot project, which employed AI lie-detector technology for automated border control security. The activist and politician Patrick Breyer requested access to the documentation of the Horizon project from the European Research Executive Agency (‘REA’). In early 2019, the REA granted access to some documentation related to the project, while denying access to other information. The refusal was based on the commercial interests exception to the right of access to public documents under Article 4(2) of the Access Regulation. On appeal, the CJEU found that the applicant had claimed too broad an overriding public interest to be relevant:Footnote 204 reliance on a general ground of public interest in accessing documents developed for an EU-funded research project ‘is not sufficient to establish that that interest must necessarily prevail over the reasons justifying the refusal to disclose those documents’.Footnote 205 Therefore, although an applicant may well rely on such grounds, these need to be confined to proving their concrete relevance to the case. Moreover, the court clarified that, while the legal framework on the dissemination of research outcomes from Horizon 2020 projectsFootnote 206 ensures a high level of protection for fundamental rights, this does not imply that those rights have not been violated, nor that there is no overriding public interest in disclosing the documents.Footnote 207 In the specific case, the fact that the iBorderCtrl project was ‘merely a research project under development, the sole purpose of which was to test technologies’Footnote 208 tipped the scales against mandating broader opening of the technical documentation about the algorithmic system. The court might well have pursued a different course had the implementation setting been less experimental.
Yet, the restrictive approach that the Court of Justice adopted in Breyer risks leaving crucial technical information about the algorithmic system out of scope. More specifically, information about its technical configuration, such as its design, its accuracy (including false positives/negatives), its potential for discrimination and the steps taken to address these risks, has been kept under wraps.Footnote 209 Taken together with LPN, the message that the Court hammers home is that applicants need to go to great lengths to demonstrate that transparency reasons are ‘pressing’. This arguably sets the bar too high for applicants seeking access to the informational components of AI systems.
D. Recouping transparency or paying lip service to it? Back to the GDPR and the AIA
In several recent cases involving algorithmic decision-making, the CJEU has balanced trade secrecy against the right to obtain an explanation of automated decisions under Article 15(1)(h) GDPR, read in conjunction with Article 22 GDPR.Footnote 210 The result is that applicants may find their access requests granted when shallow secrets held by private contractors are involved. For example, in SCHUFA,Footnote 211 the court found that a credit agency providing credit scoring reports to other parties (such as credit institutions and banks) must put data subjects in the position of accessing the ‘specific information’Footnote 212 on the existence of automated decision-making, on the logic involved and on the significance and consequences of the data processing for the data subject.Footnote 213 The agency SCHUFA had in fact refused to provide detailed information on the various components utilised to calculate the score, based on (alleged) trade secret protection.Footnote 214 In his Opinion, AG Pikamäe stressed that trade secrecy ‘cannot under any circumstances justify an absolute refusal to provide information, a fortiori where there are appropriate means of communication which aid understanding while guaranteeing a degree of confidentiality’.Footnote 215 Moreover, even though Article 15(1)(h) GDPR ‘exclude[s] any obligation to disclose the algorithm, given its complexity’,Footnote 216
the obligation to provide ‘meaningful information about the logic involved’ must be understood to include sufficiently detailed explanations of the method used to calculate the score and the reasons for a certain result. In general, the controller should provide the data subject with general information, notably on factors taken into account for the decision-making process and on their respective weight on an aggregate level, which is also useful for him or her to challenge any ‘decision’ within the meaning of Article 22(1) of the GDPR.Footnote 217
The CJEU has more recently confirmed this view in Dun & Bradstreet Austria.Footnote 218 In particular, controllers are mandated to provide supervisory authorities or courts with information ‘allegedly protected’ by trade secrecy, in order for the latter to assess the extent of the information to be disclosed under Article 15(1)(h) GDPR.Footnote 219 This puts public authorities back in control of deciding the boundaries of trade secret protection, limiting the possibility for contractors to (over-)claim. Affected individuals (as long as they qualify as data subjects) may thus obtain meaningful information on the logic of the AI systems involved. According to the data processing agreements and clauses attached to the contracts reviewed for present purposes, contracting authorities usually act as controllers, whereas contractors act as processors under data protection law.Footnote 220 Therefore, applicants would need to submit data access requests to the contracting authorities, which will then rely upon the contractors to obtain the relevant information (unless they can fulfil such requests on their own). Having said that, legal persons such as NGOs and pressure groups are left out, since they do not qualify as data subjects.
One may wonder whether the transparency obligations under the AIA may help legal persons access secret AI-related data and information held or controlled by a contractor. The AIA imposes transparency requirements on high-risk AI providers that might well open up public access to the informational components of AI systems. The providersFootnote 221 in this context would be the contractors. Under the AIA, they are primarily responsible for specifying the technical properties of a given AI system.Footnote 222 In light of the concise definition provided in recital 27,Footnote 223 transparency is understood primarily in relation to the deployer (which would typically be the migration agency),Footnote 224 who can then set up systems of traceability and possess enough information to communicate explanations. Yet, the relevant requirements are underspecified, and it seems that the deployer will simply have to install whatever systems of traceability they can on the basis of the information they are given. Moreover, what counts as ‘enough information’ to enable deployers to ‘communicate explanations’ remains an open question; it must in any event be enough to support ‘informed decision making by them’. Providers are urged to communicate information to deployers ‘in a clear and easily understandable way, free of misunderstandings or misleading statement’.Footnote 225 The very wording used to break the principle of transparency into operative, effective rules, however, implies that providers may take very different approaches in giving effect to it.
The rationale behind recital 27 radiates through the transparency obligations of the AIA. Under Article 13(1), any high-risk AI system must be transparent enough to allow deployers to interpret its outputs and use them ‘appropriately’. The implication is that transparency has to be ensured throughout the design and development of the AI system. How this is to be done is yet another open question, although it necessarily implies that providers are required to adopt technical approaches and measures of one kind or another that can foster the requisite level of transparency towards the deployer.Footnote 226 Similarly, Article 13(2) AIA requires that any high-risk AI system be accompanied by instructions to deployers. Yet, the caveats ‘when appropriate’ and ‘where applicable’ suggest that the provider is required to disclose rather little information to the deployer about the inner workings of the system.Footnote 227 The provider thus potentially retains full control over the information released about the crucial technical aspects of the AI system.Footnote 228
In sum, the transparency requirements under the AIA end up favouring the competent authorities deploying AI in migration. They effectively reinforce the secrecy practices mandated by procurement contracts, which put public authorities in control of the informational components of AI technology.Footnote 229 Actors other than the deployer, such as affected persons, NGOs and watchdogs, are however left out of the picture.
5. Public transparency and accountability under challenge from AI secrecy
A. (Re-)conceptualising AI secrecy: main takeaways from EU migration law and governance
For different reasons and motivations, both migration authorities and AI vendors leverage legal regimes of AI secrecy, which grant them broad leeway to keep the public (including citizens and researchers) in the dark, either about the very existence and operation of an AI system (deep secrecy) or about how it functions (shallow secrecy). In this guise, AI secrecy in migration governance dovetails with what Deeks has recently described as the ‘double black box’: the exacerbation of already opaque government decision-making by even more layers of concealment due to the corporate-designed features of AI systems.
AI secrecy stretches well beyond the conventional notion of hiding information or algorithms, as it has frequently been framed in the literature. It begins much earlier: with the decision to keep an AI system’s development under wraps. Embedded in law and migration practice, that deliberate choice alone can make it nearly impossible for large parts of the public (individuals, civil society) even to imagine that such systems exist or are being used. This is what we have referred to as deep AI secrecy. This kind of secrecy is ingrained in the way migration authorities (fail to) address public access requests,Footnote 230 and is further reinforced by the broad protections granted to commercial interests and confidentiality in procurement processes.Footnote 231 Frontex and eu-LISA exist in a largely public access-free zone, even after the entry into force of the AIA. For instance, the deployment of AI in ETIAS remains hidden, despite public indications and reports suggesting that AI tools will be employed in the future.Footnote 232 By the same token, eu-LISA is able to keep details about the system’s design, including the profiling algorithms, secret under the guise of public security concerns.Footnote 233
Although there exists at least a suspicion that AI systems are being used, information about them is withheld from public access applicants.Footnote 234 Even if the public becomes aware that AI is being used and implemented in migration governance, layers of shallow secrecy obscure any understanding of its functioning. Shallow AI secrecy characterises the tendency of migration authorities not to disclose technical information on the systems they use. The private conglomerates from which migration governance technology is procured enjoy various protection harbours thwarting access to the AI systems in use. These regimes of shallow secrecy prevent affected persons and NGOs from accessing the AI-related information (datasets and algorithms) that vendors build, even where the public authorities managing migration governance technology would agree to open up access to such information. Trade secretsFootnote 235 put (putative) secret holders in a position to invoke protection broadly, given the legal prerequisites for obtaining trade secrecy.Footnote 236 Exceptions and limitations to trade secrecy are, for their part, poorly equipped to enable access.Footnote 237 Recent case law has nevertheless sought to place the drawing of trade secrecy’s boundaries back in the hands of public authorities.Footnote 238 What is more, there is the business secrecy haven that companies invoke in procurement processes.Footnote 239 Whereas contracting authorities retain control over AI systems in procurement by means of clauses that build in confidentiality,Footnote 240 affected persons and civil society are equally side-lined in access requests, as vendors can readily invoke the commercial interests exception under Article 4(2) of the Access Regulation, as occurred in Breyer with regard to an AI system for migration control.Footnote 241
While the recent case law on the GDPR’s right of access to an AI explanation holds promise of increased transparency for data subjects,Footnote 242 the AIA barely makes a dent in the secrecy apparatus governing access to information about AI (and about its actual operation, in the case of deeper forms of secrecy). For one thing, the way transparency is formulated in the AIA means that it is mediated by the (private) providers involved, and is consonant with communication rather than public access as such.Footnote 243 Even more fundamentally, the AIA allows AI technology implemented in the context of border control and migration databases (which largely affect very vulnerable groups of people) to continue to be developed and applied in the dark, as such high-risk systems are not subject to the AIA’s governance apparatus for a long buffer period (until 2030).Footnote 244 Secrecy moreover permeates the entire regulation through Article 78, which embeds confidentiality into nearly every aspect of governance under the regulation.Footnote 245 In sum, public sector deployers and private vendors of high-risk AI systems for migration and border control management can effectively maintain secrecy in a seemingly infinite variety of ways.
B. Breaking free of AI secrecy: tackling the public-private secrecy interstices
Broadly speaking, the picture that this paper has painted is quite grim. Yet, it is worth considering several countervailing strategies and frameworks that offer more promising avenues for tackling secrecy. Examining each and every avenue lies outside the scope of this paper. A natural first impulse might be to call for sweeping, consistent reforms of a relatively new legal and regulatory framework. In light of the comprehensive approach to AI secrecy that we take, however, it is worth focusing on the very operations that allow both political and private secrecy practices and decisions to take shape and flourish in migration governance, impacting affected persons and civil society actors the most.
A striking illustration is the context in which migration authorities operate. Frontex’s working and operational environment is prone to a high level of opacity. Its working methods in the context of its Consultative Forum, in which many NGOs participate, have also come under scrutiny.Footnote 246 Here too, Frontex appears to manage the narrative: by dint of a very strict confidentiality clause, which entails risks of criminal liability if sensitive or non-public information is shared even with membership organisations, the working methods laid down by Frontex do not encourage meaningful input. For this reason, some membership organisations, such as PICUM (representing more than 160 NGOs), have withdrawn from participation in the Consultative Forum.Footnote 247 A new working document, published online in 2024, contains extensive rules on confidentiality and secrecy, framed in the very terms objected to by PICUM.Footnote 248 As a result, the Consultative Forum struggles to function as a direct link to civil society, given the far-reaching duty of confidentiality binding its members.Footnote 249 This exemplifies the nested structures of secrecy and confidentiality, and how they spread beyond core government to also capture supervisors and civil society. Dismantling the secrecy-infused environment of the Consultative Forum would empower civil society and pressure groups to gain a (better) understanding of Frontex’s policies.
Another driver of secrecy at the operational level is the context in which contracting authorities and vendors interact and decisions are formed. Frontex organises ‘Industry Days’, limited-entry events during which senior public officials and top executives from contractors come together to discuss technological solutions. Attendance at these events is restricted to selected industry players, private providers and representatives of Member State authorities. Although Frontex Industry Days may not always be strictly ‘closed-door’,Footnote 250 the requirement to register online to participateFootnote 251 and the careful vetting of participants indicate restricted public permeability. Frontex has admittedly sought to increase openness by releasing informative materials about the eventsFootnote 252 and by keeping a transparency register dedicated to lobbying. In its words,
[a]ll meetings and contacts of the Executive Director, Deputy Executive Directors and Heads of Divisions in matters concerning procurement procedures and tenders for services, equipment or outsourced projects and studies are registered in this tool.
However, the register does not display information on the content of the meetings. Injecting more public participation into the Industry Days has the potential to hold both private and public actors more accountable for the decisions made.
6. Conclusion
This paper has told a poignant story of how legal regimes, often originally intended for very different technological contexts, impinge on access to, and the transparency of, today’s AI technology in a critical area of public governance: migration control. Procured AI technology in this sensitive domain rests on pervasive regimes of secrecy that shield both the development and the deployment of AI from public scrutiny. The double ‘black box’ that we have contributed to describing has revealed the intensity with which political secrecy is being conjoined with private secrecy. Both public and private actors employ a certain amount of self-serving reasoning, and there is often little recourse for supervisory authorities to overturn these tendencies, given the specific derogations in existing legal regimes adopted in a different era.
This public–private double black box raises concerns not only about transparency and accountability in migration governance, but also about the much broader implications of private corporations holding influential roles in areas traditionally governed within the public sphere. It mirrors recent developments whereby large chunks of US security infrastructure are controlled and managed by private actors, as well as new moves at the European level to instigate far-reaching and novel forms of defence and security cooperation, seemingly with private actors and procurement processes at their core.Footnote 253 The time is ripe for a new conversation that moves beyond the status quo of allowing secrecy to dominate as if it were consonant with both the public interest and the private interest, simply because aged legal regimes say so. Reclaiming both the necessity for secrecy and its limits will instead focus attention on ways to bring secrecy itself under control in the wider public interest and to foster more ambient forms of accountability. Our aim is not to disallow real and adapted needs for AI secrecy, but to breathe new life and vigour into what more transparency can and should mean in our contemporary, ever-evolving democratic systems. As AI technology becomes ever more integrated, establishing meaningful countermeasures and practices that resist the secrecy inherent in private–public interactions becomes more critical than ever. One countervailing driver may be promoting AI literacy initiatives that meaningfully (re-)equip the public(s) with the tools necessary to challenge opaque decisions and the prevailing narratives behind AI deployment.Footnote 254
Acknowledgements
The authors gratefully acknowledge (in alphabetical order) Valerie Albus, Marco Almada, Maurizio Borghi, Mark Bovens, Evelien Brouwer, Gráinne de Búrca, Madalina Busuioc, Mateus Correia De Carvalho, Saverio Della Corte, Megan Donaldson, Michèle Finck, Martijn Hesselink, Margot Kaminski, Alexandra Karaiskou, Estela Maria Lopes, Albert Meijer, Lola Montero, Anna Morandini, Ludovica Paseri, Jennifer Raso, Marco Ricolfi, Ludivine Sarah Stewart and Thomas Streinz for their valuable comments and suggestions on earlier drafts of this paper. All remaining errors and omissions are the authors’ sole responsibility. Tommaso Fia’s research was carried out as part of the Machine Learning Cluster of Excellence, EXC number 2064/1 – Project number 390727645, funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation). The authors acknowledge support by the Open Access Publishing Fund of the University of Tübingen.
Competing interests
The authors have no conflicts of interest to declare.