
Part III - Safeguarding Privacy and Other Users’ Rights in the Age of Big Data

Published online by Cambridge University Press:  09 July 2021

Mira Burri, University of Lucerne

This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC-ND 4.0 https://creativecommons.org/licenses/by-nc-nd/4.0/

9 Futuring Digital Privacy: Reimagining the Law/Tech Interplay

Urs Gasser Footnote *
A Introduction

The history of privacy is deeply intertwined with the history of technology. A wealth of scholarly literature tracks and demonstrates how privacy as a normative concept has evolved in light of new information and communication technologies since the early modern period, when face-to-face interactions were challenged by urbanization and the rise of mass communication.Footnote 1 Towards the end of the nineteenth century, a combination of societal changes, institutional developments, and technological advancements gave birth to a series of new threats to privacy. At the time, innovative technologies – such as telegraph communications and portable cameras – were among the key drivers (interacting with other factors, such as increased literacy rates) that led to growing concerns about privacy protection. These developments also set the stage for Samuel Warren and Louis Brandeis’s highly influential 1890 article The Right to Privacy,Footnote 2 which was written, in large part, in response to the combined negative effects of the rise of the ‘yellow press’ and the adaptation of ‘instantaneous photography’ as privacy-invading practices and technologies.Footnote 3 Similarly, advancements in information and communication technologies in the twentieth century, combined with other developments, such as the rise of the welfare state, challenged existing notions of information privacy and led to renegotiations of the boundaries between the private and public spheres.

Later in the twentieth century, the development, adaptation, and use of innovative technologies that enabled increased collection and use of personal information were also among the key drivers that led to the birth of modern information privacy law in the early 1970s. Starting in the United States and then extending to Europe, the increased use of computers for information processing and storage by government agencies was an important factor that led to the first generation of modern information privacy and data protection laws.Footnote 4 Anchored in a set of fair information practices,Footnote 5 many of these laws were expanded, adjusted, and supplemented over the following decades in light of evolving technologies and changing institutional practices, which – together with other factors – resulted in an ever-growing cascade of privacy concerns. In the 1990s, for instance, the widespread adoption of Internet technology as a global information and communication medium and the rise of the database industry led to a wave of legislative and regulatory interventions aimed at dealing with emerging privacy problems. More recent and more ambitious information privacy reforms, such as the revision of the influential OECD Privacy Guidelines at the international level,Footnote 6 the General Data Protection Regulation (GDPR) in the EU,Footnote 7 the proposed Consumer Privacy Bill of Rights Act,Footnote 8 or the California Consumer Privacy ActFootnote 9 in the United States seek to update existing or introduce new information privacy norms for the digital age – again driven, in part, by new technologies and applications such as cloud computing, big data, and artificial intelligence, among others.

Reflecting across centuries and geographies, one common thread emerges: advancements in information and communication technologies have largely been perceived as threats to privacy and have often led policymakers to seek, and citizens and consumers to demand, additional privacy safeguards in the legal and regulatory arenas. This perspective on technology as a challenge to existing notions of and safeguards for information privacy is also reflective of the mindset of contemporary law and policymaking. Whether considering the implications of big data technologies, sensor networks and the Internet of Things (IoT), facial recognition technology, always-on wearable technologies with voice and video interfaces, virtual and augmented reality, or artificial intelligence (AI), information privacy and data protection challenges have surfaced among the most pressing concerns in recent policy reports and regulatory analyses.Footnote 10

But over the decades, the development and adoption of new technologies across varying socio-economic contexts has periodically culminated in critical inflection points that offered individuals and society opportunities to re-examine and advance the notion of privacy itself.Footnote 11 Arguably, the current wave of privacy-invasive technologies marks another such inflection point. The scale and pace of society’s digital transformation suggest that what is unfolding are not just gradual technological changes, but rather seismic shifts in the information ecosystem that call for a deeper rethinking of privacy.Footnote 12 The magnitude of this historical moment is reflected in an array of trends: the rise of data colonialismFootnote 13 and surveillance capitalism,Footnote 14 increased privacy-awareness post Facebook’s Cambridge Analytica scandal,Footnote 15 AI’s ability to amplify privacy risks,Footnote 16 and many more.

Some current developments already indicate or suggest shifts and innovations within privacy and data protection regimes in response to the latest changes in the socio-technological environment. For example, basic ideas of how privacy should be defined have already begun to change. At a fundamental level, for instance, some scholars propose to (re-)conceptualize privacy as trust.Footnote 17 At a more granular level, scholars have argued for a movement away from understanding privacy as attached to the individual towards a notion of group privacy.Footnote 18 In the context of genomics, for example, this idea is particularly important – the exposure of one individual’s DNA data directly impacts the privacy rights of that individual’s entire extended family. Similarly, privacy risks are no longer generated only by exposure of private data; rather, they can also be triggered by inferences made through analytics.Footnote 19 Thus, privacy advocates have called for regulation that protects individuals in not only the inputs but also outputs of data processing.Footnote 20

As legal and regulatory frameworks gradually adapt to these and other facets of privacy, data-holding entities also face the challenge of figuring out the precise contours of their responsibilities to the individuals whose data they collect and process. The development of new accountability frameworks, for instance in the context of data-processing algorithms, as well as novel mechanisms to delineate the responsibilities of these entities, such as the idea of information fiduciaries,Footnote 21 also signal a potential paradigm shift in the ways information privacy and data protection are approached.

This chapter is interested in one specific cross-cutting dimension of what might be labelled as the rethinking privacy discourse. It asks whether and how the interplay between technology and privacy law – both systems that govern information flows – can be reimagined and organized in mutually productive ways. The chapter proceeds in four steps: (i) explaining some of the dynamics that motivate a rethinking of privacy in the modern moment; (ii) developing a historical understanding of the dominant patterns connecting the evolutions of law and technology; (iii) examining a potential way to reimagine the dynamic between these elements moving forward; and (iv) sketching elements of a pathway towards ‘re-coding’ privacy law.

B The Modern Moment in Technology

The culmination of multiple factors at the intersection of digital technologies, market paradigms, social norms, professional practices, and traditional privacy laws has made the need to rethink privacy and data protection in the current moment urgent. Among the most important drivers behind the intensified debates about the future of digital privacy as broadly defined are increasingly visible shifts in traditional power structures – more specifically, towards governments with unprecedented surveillance capabilities as well as large technology companies that amass digital tracking technologies and large pools of data to develop the corresponding analytical capability to shape people’s lives.Footnote 22

From a historical perspective, it is worth remembering that it was also power shifts that triggered the emergence of the modern information privacy and data protection laws in the 1970s, when the adoption of new technologies in the form of mainframe computers created an imbalance in power between different branches of government.Footnote 23 Somewhat similarly, contemporary power struggles among governments, technology companies, and citizens/users might mark another milestone with the potential to affect the political economy of privacy in the longer term. In the United States, the significance of these changes is reflected in a backlash: a variety of developments, ranging from increased activity among lawmakers and regulatorsFootnote 24 to critique by leaders of tech companies themselves,Footnote 25 suggest that the ‘data-industrial complex’ (understood traditionally as the symbiosis between the technology companies of Silicon Valley and the US government) has eroded in the aftermath of the Snowden revelations and in light of the Facebook/Cambridge Analytica scandal, which have demonstrated how profound the effects of such power shifts can be. The ensuing flurry of proposals for privacy legislation at the local, state, and national levels can be understood as attempts to course-correct and address some of the previously less visible power shifts between public and private actors.Footnote 26

Different manifestations and perceptions of such power shifts also fuel international and regional debates that point out the urgent need to address the privacy crisis of the digital age. This crisis has inspired the enactment of the GDPR in Europe and similar legislative efforts in other parts of the world,Footnote 27 as well as intensified global debates about ‘data sovereignty’, which can be understood as an immune system response triggered by the power shifts associated with the unprecedented surveillance capabilities of foreign governments and technology companies.Footnote 28

In addition to tectonic power shifts, technology-induced changes also motivate the need to rethink privacy from within the field. A series of conceptual and definitional questions are illustrative in this respect. For example, is ‘personally identifiable information’ in a big data environment still a meaningful classification to trigger privacy laws?Footnote 29 What about the traditional privacy-protecting techniques, such as anonymization? In a world where volumes of ostensibly innocuous data are available on most individuals, composition effects make re-identification of individuals and reconstruction of databases possible, and even likely, in many cases.Footnote 30 How should privacy harms be defined when traditional legal standards do not easily apply to the new types of cumulative, often long-term, and immaterial effects of privacy invasions?Footnote 31 These examples are indicative of the need to revisit some of the conventional terms and concepts privacy laws have long relied upon now that they are challenged by technological advances and the socio-economic practices they enable.
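To make the composition problem concrete, the following is a minimal sketch of the classic linkage attack, written in Python with entirely hypothetical records: an ‘anonymized’ dataset that retains quasi-identifiers (ZIP code, birthdate, sex) can be joined against a public dataset, such as a voter roll, to re-attach names.

```python
# A minimal linkage ("composition") attack: two individually innocuous
# datasets are joined on shared quasi-identifiers to re-identify people.
# All records are hypothetical.
import pandas as pd

# An "anonymized" medical dataset: names removed, quasi-identifiers kept.
medical = pd.DataFrame({
    "zip":       ["02139", "02139", "90210"],
    "birthdate": ["1970-07-31", "1985-01-02", "1970-07-31"],
    "sex":       ["F", "M", "F"],
    "diagnosis": ["hypertension", "asthma", "diabetes"],
})

# A public voter roll that includes names.
voters = pd.DataFrame({
    "name":      ["A. Smith", "B. Jones"],
    "zip":       ["02139", "02139"],
    "birthdate": ["1970-07-31", "1985-01-02"],
    "sex":       ["F", "M"],
})

# Joining on the quasi-identifiers links names back to diagnoses.
reidentified = voters.merge(medical, on=["zip", "birthdate", "sex"])
print(reidentified[["name", "diagnosis"]])
# A. Smith -> hypertension; B. Jones -> asthma
```

The more auxiliary datasets accumulate, the more combinations of seemingly harmless attributes become uniquely identifying – which is why static classifications such as ‘personally identifiable information’ sit uneasily with big data.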

Finally, in an increasingly digitally connected environment, privacy has become a complex right that requires re-evaluating the trade-offs inherent to the idea of ‘privacy’. Privacy is, of course, not an absolute right; it is subject to limits and frequently stands in tension with other values. Although a concept deeply shaped by technology, it is also directly linked to shifting social norms and normative expectations.Footnote 32 In the age of big data, the balancing act of navigating trade-offs between normative values becomes increasingly important and difficult. For example, the right to be forgotten, by prioritizing privacy interests, necessarily reduces freedom of expression and commercial interests in the data market.Footnote 33 The real challenge of privacy has now become figuring out how to balance trade-offs in a scalable manner – whether that requires developing decision trees or balancing tests – that is not merely a post hoc rationalization for a particular outcome. As the design and processes of modern technology become more sophisticated, and as big societal challenges, such as climate change or public health, increasingly rely on the collection and analysis of large amounts of data, these trade-offs will only become more pervasive and more difficult.Footnote 34

Taken together, the modern era of digital technology has arguably pushed the need to rethink ‘privacy’ to become something more fundamental – a need to re-examine and potentially renegotiate the very concepts and values that society cares about in privacy. Both in terms of problem description and possible pathways forward, this may require, for example, reaching outside the frame of privacy and data protection law altogether to other areas of law and policy writ large. The interplay between technology and society and law is extraordinarily nuanced, and there is a wide variety of levers and instruments available to help shape the societal effects of technologies in the human context.Footnote 35 More narrowly, and simplifying for the purposes of this chapter, it might be helpful to examine some archetypical response patterns from when law has responded to technology-induced information privacy concerns in the past.

C Historical Patterns of Interaction between Law and Technology

In considering the fundamentally defensive stance that privacy law has taken historically with regard to technology, it is important to note that law in the broader context of information and communication technology has often transcended its familiar role as a constraint on behaviour acting through the imposition of sanctions.Footnote 36 In areas such as intellectual property and antitrust, law has sought to engage with technology in a more nuanced way by enabling or in some cases levelling desired innovative or disruptive activity.Footnote 37 With this understanding of law as a functionally differentiated response system, and acknowledging that legal responses to technological innovation should not be understood as a simple stimulus-response mechanism, it is possible to identify a series of historical response patterns that characterize the evolution of privacy and data protection law vis-à-vis technological change. At a general level, three analytically distinct, but in practice often overlapping, response modes can be identified.Footnote 38

  1. When dealing with innovative technologies, the legal system – including privacy and data protection law – by default often seeks to apply the old rules to the (new) problem resulting from new technology and its uses (subsumption). One illustration of this default response mode is US courts’ application of privacy torts, for instance, to address complaints about improper collection, use, or disclosure of data by digital businesses, such as Google and Facebook, because these analyses largely rely on tort conceptions of privacy advanced in the late nineteenth century.Footnote 39

  2. Where subsumption is considered insufficient due to the novelty of the issues raised by a new technology, the legal system might resort instead to innovation within its own system. One version of this response mode is to ‘upgrade’ existing (privacy) norms gradually, typically by setting new precedent or by adjusting and complementing current norms (gradual innovation). Proposals to introduce a tort for the misuse of personal information by data traders,Footnote 40 to provide legal recognition of data harms by extending developments from other areas of the law, such as torts and contracts,Footnote 41 to enact a Consumer Privacy Bill of Rights Act,Footnote 42 and to expand consumers’ rights to access their data records within reasonable timeframes,Footnote 43 are all examples of gradual legal innovations that leave core elements of the current regulatory approach unchanged.

  3. A more radical, paradigm-shifting approach is deeper-layered law reform where not only are individual norms updated, but also entire approaches or instruments are changed. In addition to the proposals already mentioned in the introduction, examples in this category include efforts to reimagine privacy regimes based on models that emerged in the field of environmental law,Footnote 44 to reformulate the current crisis as data pollution and develop social instruments that address the external harms associated with the collection and misuse of personal data,Footnote 45 to create an alternative dispute resolution scheme, such as a ‘cyber court’ system to deal with large-scale privacy threats in the digital age,Footnote 46 or to introduce a ‘Digital Millennium Privacy Act’ that would provide immunity for those companies willing to subscribe to a set of information fiduciary duties,Footnote 47 to name just a few illustrations.

Perhaps the most interesting, and arguably the most promising, approach to reprogramming information privacy and data protection law in a more fundamental sense stems from such a paradigm-shifting approach: to embrace the multi-faceted, functional role of law and reframe technology, as broadly defined, no longer (only) as a threat to privacy, but as part of the solution space.

Precursors of such a potential shift date back to the 1970s, when researchers under the header of ‘Privacy-Enhancing Technologies’ (PETs) started to develop technical mechanisms in response to privacy challenges associated with new information and communication technologies.Footnote 48 Originally focused on identity protection and technical means to minimize data collection and processing without losing a system’s functionality, the scope of PETs and similar instruments has broadened over time to include encryption tools, privacy-preserving analysis techniques, data management tools, and other techniques that cover the entire lifecycle of personal data. Starting in the 1990s, PETs, one instrument in a toolbox of many more, were put into a larger context by the introduction of privacy by design, a ‘systematic approach to designing any technology that embeds privacy into [both] the underlying specification or architecture’Footnote 49 and, one might add, business practices. Although still a somewhat amorphous and evolving concept that seeks to integrate legal and technical perspectives, privacy by design can be understood as an important movement that promotes a holistic approach to managing the privacy challenges that result from a wide range of emerging technologies across their life cycles and within their contexts of application. The concept has been endorsed by privacy regulators from across the globeFootnote 50 and adopted on both sides of the Atlantic, with the GDPR among the most prominent recent examples.Footnote 51 In addition to research efforts and scholarly contributions that deepen, advance, and critically examine the privacy by design concept, a range of implementation guidelines and methodologies have been issued by regulatory authorities, standards organizations, and other sources to help operationalize typically abstract privacy-by-design requirements.Footnote 52 Despite all the progress made, careful examinations of the approach have highlighted both conceptual questionsFootnote 53 and implementation challenges,Footnote 54 including economic obstacles, interoperability barriers, and usability and design issues.Footnote 55 Consequently, additional work is also required to close privacy law’s ‘design gap’, at least in practice.Footnote 56

D Reimagining the Relationship of Law and Technology

Law’s relatively recent ‘discovery’ of technology as an approach to addressing the very privacy challenges it (co-)creates has potential. The more technical dimensions of regulating information privacy have been the focus of intense study by computer scientists and have resulted in a rich theoretical literature and numerous practical tools for protecting privacy. Yet, in the past, such discussion has by and large occurred in a space separate from the sphere of legal norms, regulations, policies, ethics codes, and best practices. In addition to the larger shifts mentioned earlier in this chapter, a number of specific trends now make it more important as well as urgent to foster knowledge sharing and integration between the two spheres and to embrace technological approaches to support legal privacy across a number of different functions.

First, technological advances enable sophisticated attacks that were unforeseen at the time when many of the still-applicable legal standards for privacy protection were drafted. Computer scientists now need to develop approaches that are robust not only against new modes of attack, but also against unknown future attacks, in order to address challenges posed by next-generation privacy threats.Footnote 57 For example, database reconstruction attacks have already demonstrated that large collections of data such as the United States Census – although ostensibly confidential – are now vulnerable to discovery of a particular individual’s personal, private characteristics, so new means of protection for these datasets are required.Footnote 58 Similarly, the omnipresence of predictive analytics makes it difficult for individuals to understand and control the usage of their own data, rendering traditional regulatory control paradigms increasingly ineffective against developments in technology.Footnote 59
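A toy version of such a reconstruction attack can be sketched in a few lines. The published statistics below are hypothetical, but the mechanics mirror the demonstrations run against census-style tabulations: a brute-force search over all record sets consistent with the released aggregates often leaves exactly one candidate – the confidential database itself.

```python
# Toy database reconstruction from published aggregates. Hypothetical block:
# 3 residents; 2 females with mean age 25; 1 male with mean age 40;
# overall mean age 30; overall median age 30. Ages assumed to lie in 0-100.
from itertools import combinations_with_replacement
from statistics import median

AGES = range(101)
solutions = []
for f1, f2 in combinations_with_replacement(AGES, 2):  # candidate female ages
    for m in AGES:                                      # candidate male age
        if ((f1 + f2) / 2 == 25 and m == 40
                and (f1 + f2 + m) / 3 == 30
                and median([f1, f2, m]) == 30):
            solutions.append((f1, f2, m))

print(solutions)  # [(20, 30, 40)] -- a single consistent database; records recovered exactly
```

Real attacks scale this logic to millions of records using constraint solvers, which is part of what pushed the Census Bureau towards the formally private release mechanisms discussed below.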

Furthermore, patchworks of privacy laws, the lack of interoperability among them, and different interpretations of their requirements can all result in wide variations in the treatment and protection of data across contexts and geographies, depending on the jurisdictions, industry sectors, actors, and categories of information involved. More robust frameworks for evaluating privacy threats, based on integrated legal and scientific standards for privacy protection, are required to provide more comprehensive, consistent, and reliable information privacy protection, thereby furthering the end goals of the law.

Finally, traditional legal approaches for protecting privacy while transferring data, making data-release decisions, or drafting data-sharing agreements, among other activities, are time-intensive and not readily scalable to big data contexts at a time when some of the biggest global challenges urgently require more, not less, privacy-respecting data sharing. Technological approaches need to be designed with compliance with legal standards and practices in mind in order to help automate data-sharing decisions and ensure consistent privacy protection at a massive scale.Footnote 60 For example, personalization of the conventional means of ensuring privacy, such as disclosure mandates, could help incorporate more granular legal norms and requirements into an individual’s privacy in a scalable fashion.Footnote 61

These reasons already indicate that the need for enhanced interoperability between technological and legal approaches to privacy is not limited to the mechanical level of individual privacy-preserving techniques and tools and goes beyond efforts to require companies to protect privacy by embedding it into the design of technologies and business practices. Rather, the scale of the challenge of reimagining the relationship between technology and privacy – as well as the potential benefits of increased levels of interoperability between the two – becomes visible when considering the variety of interrelated functional perspectives that such an approach situated at the law/technology interface would open up when dealing with the privacy challenges of the digital age. The following questions can be raised in this context.

  1. How can technological and legal perspectives be integrated more closely to enable more robust problem descriptions and analyses? Approaches like privacy by design signal a departure from binary notions of privacy and ad hoc balancing tests of competing interests toward more holistic and rigorous privacy risk assessment models that rely both on modeling approaches from information security and on an understanding of privacy informed by recent theoretical advances across different disciplines. Technical research, for example, may better quantify the privacy risks associated with more traditional privacy-protection techniques like anonymizationFootnote 62 and thus help establish a legal framework that articulates which privacy risks should be considered ‘unacceptable’. Similarly, using both computational and sociological measures could establish a more empirical evidence base about consumers’ attitudes and expectations towards privacy.Footnote 63 A growing body of interdisciplinary research demonstrates the theoretical and practical promise of such modern privacy analyses that are based in holistic analytical frameworks incorporating recent research from fields ranging from computer science and statistics to law and the social sciences.Footnote 64 Indeed, such frameworks are increasingly recognized by expert recommendations and standards.Footnote 65

  2. How can legal and technological tools be combined in order to enable more effective, scalable, and accountable solutions to privacy problems, including the need for trustworthy data sharing? A wealth of research and practical examples show how emerging technical privacy solutions, including sophisticated tools for data storage, access control, analysis, and release, can act in concert with legal, organizational, and other safeguards to better manage privacy risks across the different stages of the lifecycle of data.Footnote 66 Consider, for instance, the important role encryption plays in securing access to and storage of data,Footnote 67 the technological development of a personal data store that enables individuals to exercise fine-grained control over where information about them is stored and how it is accessed,Footnote 68 the movement in AI towards transparent and explainable automated decision-making that makes technology more accountable,Footnote 69 or the development of technical ways to implement the right to be forgotten by deleting an individual’s records from machine learning models efficiently.Footnote 70 Formal mathematical guarantees of privacy can also reliably lower privacy risks. Differential privacy is one such example of a mathematical framework that manages the privacy challenges associated with the statistical analysis of information maintained in databases.Footnote 71 Secure multiparty computation, to add another example, is a methodology that enables parties to carry out a joint computation over their data in such a way that no single entity needs to hand a dataset to any other explicitly.Footnote 72 (Minimal sketches of both techniques follow this list.) While some of these technologies are still in development, others have been tested in practice and are already recommended as best practices in selected fields of application. Real-world examples include the implementation of differential privacy in the United States Census,Footnote 73 as well as the use of secure multiparty computation to investigate pay gapsFootnote 74 or maintain data on student outcomes in higher education.Footnote 75

  3. How can enhanced levels of interoperability between technological and legal approaches to privacy enable better matching of solutions to problems? The Harvard University Privacy Tools Project, for example, is a multidisciplinary effort to develop technical tools to address specific, identified policy needs.Footnote 76 Among other contributions, the project demonstrates, for certain categories of use cases, including data sharing in research contexts, how interdisciplinary approaches can guide actors to engage in more robust privacy risk assessments and then select the best solution from a set of integrated privacy tools, such as tiered access models, that combine both legal and technical approaches to privacy protection.Footnote 77 As another example, the LINDDUN approach, developed at Leuven University, creates a taxonomy of mitigation strategies to address privacy threats in a given high-level system and identifies effective, targeted PETs by creating data flow diagrams, mapping privacy threats, and performing risk analyses on these privacy threats.Footnote 78

  4. How can a closer integration of technical and legal concepts and applications aimed at protecting privacy make it easier to demonstrate compliance and ‘measure progress’ over time? Again, differential privacy is a key example of using a highly technical conception of ‘privacy’ to give the vague legal words used to define privacy in statutes and regulations more precision, which in turn increases the accuracy of assessment of compliance in individual cases and over time.Footnote 79 More generally, legal standards could adopt more technically robust descriptions of an intended privacy goal rather than simply endorsing traditional approaches like de-identification. This would provide a clearer basis for demonstrating whether new classes of emerging privacy technologies are sufficient to fulfil the requirements of these standards. These examples indicate how policymakers and technologists could seek to employ a hybrid of legal and technical reasoning to demonstrate a privacy solution’s compliance with legal standards for privacy protection.Footnote 80
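To make the two formal techniques mentioned in item 2 concrete, the following minimal sketches use hypothetical data and simplified parameters; they illustrate the underlying ideas rather than production-grade implementations. The first shows additive secret sharing, the basic building block of secure multiparty computation: three parties learn the sum of their private salaries (as in the pay-gap studies cited above) without any party disclosing its own input.

```python
# Toy additive secret sharing -- the core primitive of secure multiparty
# computation. All salary figures are hypothetical.
import random

Q = 2**61 - 1  # all arithmetic is done modulo a large prime

def share(secret: int, n_parties: int = 3):
    """Split `secret` into n random-looking shares that sum to it modulo Q."""
    shares = [random.randrange(Q) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % Q)
    return shares

salaries = [54_000, 61_000, 58_000]        # each party's private input
all_shares = [share(s) for s in salaries]  # each party secret-shares its input

# Party i receives the i-th share of every input and sums them locally;
# a single share reveals nothing about the salary it came from.
partial_sums = [sum(column) % Q for column in zip(*all_shares)]
print(sum(partial_sums) % Q)  # 173000 -- the joint sum, computed without pooling raw data
```

The second sketch shows the Laplace mechanism, the canonical construction behind differential privacy. A counting query has sensitivity 1 – adding or removing one person changes the count by at most 1 – so noise drawn from a Laplace distribution with scale 1/ε yields an ε-differentially private release.

```python
# A minimal epsilon-differentially private counting query (Laplace mechanism).
# The data and the choice of epsilon are hypothetical.
import numpy as np

def dp_count(records, predicate, epsilon: float) -> float:
    """Release the number of records matching `predicate` under epsilon-DP."""
    true_count = sum(1 for r in records if predicate(r))
    sensitivity = 1.0  # one record changes a count by at most 1
    return true_count + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

ages = [23, 35, 41, 29, 62, 57, 33, 48]                # hypothetical survey data
print(dp_count(ages, lambda a: a >= 40, epsilon=0.5))  # true count is 4; the output is noisy
```

Smaller values of ε give stronger privacy at the cost of noisier answers – precisely the kind of quantifiable guarantee that item 4 suggests legal standards could reference.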

Taken together, the integration of legal and technical approaches across different functional areas can help pave the way for a more strategic and systematic way to conceptualize and orchestrate the contemporary interplay between law and technology in the field of information privacy and data protection. The process of re-imagination through enhanced interoperability – here illustrated along four functional areas with the open-ended possibility of adding others – builds heavily upon the concept of privacy by design and is informed by related approaches such as privacy impact assessments. However, as already mentioned, this process is less focused on embedding privacy requirements into the design and architecture of individual technological systems and business practices. Rather, it is more broadly interested in finding ways to overcome the traditional interaction patterns between technology and law in order to offer new system-level opportunities to develop notions and manifestations of privacy that might only emerge after combining different substantive and methodological ‘lenses’. At a moment of rethinking privacy, such an exercise might inform the evolutionary path of privacy and data protection laws at both the conceptual and implementation levels by challenging their underlying assumptions, definitions, protection requirements, compliance mechanisms, and so on.

E Towards Recoding Privacy Law

Over time, enhanced interoperability between technological and legal approaches to privacy might ultimately culminate in a deeper-layered recoding of privacy law that transcends the traditional response patternsFootnote 81 discussed earlier in this chapter by leveraging the synergies between perspectives and instruments from both domains in order to cope with the complex privacy-relevant challenges of our future. The path towards such an outcome, however, is long and faces many obstacles given the economic, geopolitical, and other forces at play that were described earlier in this chapter.

As a precondition of any progress, such a strategy requires significant investments in interdisciplinary education, research, and collaboration.Footnote 82 Despite all the advancements made in recent years, there is much yet to be uncovered: development of novel systems of governance requires not only interdisciplinary mutual understandings but also deep inquiry into the most effective roles for law and legal governance in such a dynamic, fast-changing system. Programs designed to stimulate such collaboration and interdisciplinary learning have already started being developed at universities.Footnote 83 Furthermore, technology positions in government, such as the Chief Technologist position at the Federal Trade Commission and the President’s Council of Advisors on Science and Technology, to name two examples from the United States, recognize the need for experts in computer science who can inform privacy regulation and serve as models of cross-disciplinary communication and knowledge-sharing in policy circles.Footnote 84 Similarly, it is becoming increasingly important for technologists to understand legal and policy approaches to privacy protection, so that they can implement measures that advance the specific goals of such standards. Doing so will also likely require policymakers to develop mechanisms and resources for communicating their shared understanding of the interface between law and technology with privacy practitioners. Regulatory systems and institutions will also need to support additional research on policy reasoning, accountable systems, and computable policies for automating compliance with legal requirements and enforcement of privacy policies.Footnote 85

Reimagining the relationship between technology and privacy law in the digital age can be seen as a key component of a larger effort aimed at addressing the current digital privacy crisis holistically. Under contemporary conditions of complexity and uncertainty, the ‘solution space’ for the multifaceted privacy challenges of our time needs to do more than treat the symptoms of discrete privacy ills. It needs to combine approaches, strategies, and instruments that span all available modes of regulation in the digital space, including technology, markets, social norms and professional practices, and the law. If pursued diligently and collaboratively, and expanding upon concepts, such as privacy by design or privacy impact assessments, as written into modern privacy frameworks like the GDPR, such a turn toward coordinated privacy governance could result in a future-oriented privacy framework that spans a broad set of norms, control mechanisms, and actorsFootnote 86 – ‘a system of information privacy protection that is much larger, more complex and varied, and likely more effective, than individual information privacy rights’.Footnote 87 Through such nuanced intervention, the legal system (understood as more than merely a body of constraining laws) can more proactively play the leading role in directing and coordinating the various elements and actors in the blended governance regime, and – above all – in ensuring the transparency, accountability, and legitimacy that allow democratic governance to flourish.Footnote 88

10 The Algorithmic Learning Deficit: Artificial Intelligence, Data Protection and Trade

Svetlana Yakovleva and Joris van Hoboken Footnote *
A Introduction

Commercial use of personal and other data facilitates digital trade and generates economic growth at unprecedented levels. A dramatic shift in the composition of the top twenty companies by market capitalisation speaks vividly to this point. While, in 2009, 35 per cent of those companies were from the oil and gas sector, in 2018 – just nine years later – 56 per cent of those companies were from the technology and consumer services sectors.Footnote 1 Meanwhile, the share of oil and gas companies, a pillar among traditional industries, declined to just 7 per cent. The value of digitally deliverable services in global services exports more than doubled over thirteen years, increasing from USD 1.2 trillion in 2005 to USD 2.9 trillion in 2018.Footnote 2

Data also constitutes a crucial resource for the development, continuous refinement and application of artificial intelligence (AI). The availability of data and its free flow across borders are often viewed as pre-requisites for the development and flourishing of AI technology.Footnote 3 However, in the context of AI, it is not the data itself, but the knowledge and insights obtained with the help of AI algorithms from that data (in other words, the ‘fruits’ of the data) that constitute the main added value. Learning, or ‘digital intelligence’, in the words of UNCTAD, is crucial for the big data market. One of the upshots of this is that, without the necessary infrastructure and technologies, data concerning individual persons or even aggregated data cannot by itself generate value. It is the ‘learning’, and not raw data itself, that constitutes a valuable economic resource and can be used in targeted online advertising, the operation of electronic commerce platforms, the digitisation of traditional goods into rentable services and the renting out of cloud services.Footnote 4 For example, personalisation, which is an important component in the production, marketing and distribution of online services, uses AI systems to transform individuals’ online behaviour, preferences, likes, moods and opinions (all of which constitute personal data, at least in the European Union) into commercially valuable insights.Footnote 5 Focusing solely on data in the context of regulatory conversations on AI – both in domestic and international trade contexts – may therefore be misguided.

AI development is at the top of the domestic and international policy agendas in many countries around the world. Just in the last couple of years, more than thirty countries and several international and regional stakeholders, including the European Union (EU), the G20 and the Nordic-Baltic Region, adopted AI policy documentsFootnote 6 revealing their ambitions to compete for dominance in AI. Digital trade provisions, including rules governing cross-border data flows, access to proprietary algorithms and technology transfers and access to open government data, have taken centre stage in bilateral, regional and international trade negotiations.Footnote 7

Different levels of advancement in digital technologies in general, and in AI specifically, as well as the concentration of data in the hands of a few countries, make international negotiations on digital trade challenging. To illustrate the point, according to the 2019 UNCTAD Digital Economy Report, China and the United States account for 90 per cent of the market capitalisation value of the world’s seventy largest digital platform companies and ‘are set to reap the largest economic gains from AI’.Footnote 8 In contrast, the EU accounts for only 3.6 per cent of this market capitalisation.Footnote 9 The report further demonstrates that China, the United States and Japan together account for 78 per cent of all AI patent filings in the world.Footnote 10 Data – one of the key components of data analytics – is highly concentrated in Asia Pacific and the United States: 70 per cent of all traffic between 2017 and 2022 is expected to be attributable to these two regions.Footnote 11 The United States is the market leader in global e-commerce, representing 87 per cent of B2B e-commerce, while China is the leader in B2C e-commerce, followed by the United States.Footnote 12 As a result, the economic value derived from data is captured by the countries where the companies having control over the storage and processing of data reside.Footnote 13

The high concentration of control over AI technologies, digital platforms and data in specific parts of the world raises concerns about ‘digital sovereignty’, relating to control over, access to and rights in data, and to the appropriation of the value generated by its monetisation.Footnote 14 This issue is not limited to the dynamics of negotiations between developed and developing countries. For example, the new European Commission’s Digital Strategy is strongly anchored in the principles of digital sovereignty and shaping technology in a way that respects European values.Footnote 15 Public policy interests implicated by international data governance and data flows, indispensable for the global governance of AI, stretch far beyond issues of economic growth and development. They also involve a broader set of national and regional priorities, such as national security, fundamental rights protection (such as the rights to privacy and to protection of personal data) and cultural values, to name just a few. Differences in the relative weight accorded to each such priority when contrasted with the economic and political gains from cross-border data flows have resulted in a diversity of domestic rules governing cross-border flows of information, especially when it relates to personal data, and a diversity of approaches to govern the use of AI in both private and public law contexts.

Against this backdrop, this chapter’s aim is twofold. First, it provides an overview of the state of the art in international trade agreements and negotiations on issues related to AI, in particular, the governance of cross-border data flows. In doing so it juxtaposes the EU and the US approaches and demonstrates that the key public policy interests behind the dynamics of digital trade negotiations on the EU’s side are privacy and data protection. Second, building on the divergent EU and US approaches to governing cross-border data flows, and the EU policy priorities in this respect in international trade negotiations, this chapter argues that the set of EU public policy objectives weighted against the benefits of digital trade in international trade negotiations, especially with a view to AI, should be broader than just privacy and data protection. It also argues that an individual rights approach has limitations in governing data flows in the context of AI and should be expanded to factor in a clearer understanding of who wins and who loses from unrestricted cross-border data flows in an age of data-driven services and services production.

The chapter proceeds as follows. The next section maps out the recent developments on digital trade on the international trade law landscape. The third section discusses, from an EU perspective, the limits of data protection in regulating AI domestically and as a catch-all public policy interest counterbalancing international trade commitments on cross-border data flows. The fourth section contains a brief conclusion.

B Cross-Border Digital Trade and Artificial Intelligence

The immense potential of data to generate economic value has given rise to a so-called ‘digital trade discourse’, which, on the one hand, views the freedom of cross-border data flows as one of the pre-requisites of international digital trade and AI-driven innovation and, on the other hand, predicts that restrictions on data flows will hamper economic growth and undermine innovation.Footnote 16 This discourse is advanced not only by the United States, which has a strong competitive advantage in digital technologies, and the big tech companies, which invest millions of dollars in lobbying activities on digital trade, but also by the EU.Footnote 17

Policy debates in international trade negotiations on digital trade, relevant in the AI context, revolve around the liberalisation of cross-border data flows in order to enable accumulation of large data sets to train AI systems and restrictions on those data flows in the public interest. The following subsections provide an overview of recent developments in this area.

Countries have not yet achieved a multilateral consensus on the design and scope of digital trade provisions, which have so far only appeared in bilateral and regional trade agreements and have somewhat overshadowed the multilateral efforts of the WTO in this area.Footnote 18 Although proposals on electronic commerce in the WTO increasingly focus on barriers to digital trade and ‘digital protectionism’,Footnote 19 the WTO has not yet made any tangible progress on this issue.Footnote 20 The discussions continue, however. In early 2019, seventy-six WTO members, including Canada, China, the EU, and the United States, started a new round of negotiations on electronic commerce at the WTO in order to create rules governing e-commerce and cross-border data flows.Footnote 21 It remains to be seen how these negotiations will play out. Despite a seemingly firm consensus on the use of the terms ‘digital trade’ and ‘digital protectionism’ – the axes around which the discourses governing international negotiations revolve – the value structures underlying these discourses diverge,Footnote 22 as the US and the EU examples below will illustrate. The next section on international trade law governance of cross-border data flows then explicates how trade provisions on cross-border data flows, advanced by the US and the EU, mirror this divergence.

In the spirit of its ‘digital agenda’, the United States has been a pioneer in including provisions on free cross-border data flows in international trade agreements.Footnote 23 The United States has successfully managed to advance broad and binding horizontal obligations enabling unrestricted data flows in the digital trade (or electronic commerce) chapters of its recent trade agreements. The Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP) (where the US led digital trade discussions before its withdrawal from the TPP agreementFootnote 24), the United States–Mexico–Canada Agreement (USMCA) and the Digital Trade Agreement with Japan are examples of trade agreements that contain a binding provision requiring each party to allow (or not to restrict) the cross-border transfer of information by electronic means, including personal information, when this activity is for the conduct of the business of a covered person.Footnote 25 The US proposal for the ongoing e-commerce talks at the WTO replicates this ‘gold standard’ provision on digital trade.Footnote 26 All of the earlier-mentioned free trade agreements (FTAs) also contain an exception which allows the parties to adopt or maintain measures inconsistent with this obligation to achieve a legitimate public policy objective, provided that the measure (i) is not applied in a manner which would constitute a means of arbitrary or unjustifiable discrimination or a disguised restriction on trade; and (ii) does not impose restrictions on transfers of information greater than are required (necessary, in the USMCA and the US–Japan Digital Trade Agreement) to achieve the objective.Footnote 27

The exception closely resembles the general exception under Article XIV(c)(ii) of the General Agreement on Trade in Services (GATS),Footnote 28 a threshold which has been particularly hard to meet in the past.Footnote 29 Similar to the general exception clause, the FTA text requires that a measure prima facie inconsistent with the data flow obligation be subject to a two-level assessment. First, it should pass the so-called ‘necessity test’, where the necessity of the contested measure is assessed by trade adjudicators based on an objective standard of ‘necessity’. Second, its application should not amount to arbitrary or unjustifiable discrimination or a disguised restriction on trade (pursuant to the chapeau of the general exception provision). Under WTO case law, the ‘necessity test’ requires that a WTO law–inconsistent measure be the least trade-restrictive of all reasonably available alternatives capable of achieving the same level of protection of the public interest invoked in a dispute.Footnote 30 In short, just like the GATS general exception, the FTA exception sets a high threshold for justifying a domestic measure inconsistent with relevant trade disciplines. An important difference between the earlier-quoted FTA exception and the GATS general exception, however, is that the former does not specify the public policy objectives that may be invoked to justify a restriction on free cross-border data flows. In this sense, the exception is more ‘future-proof’, as it can rest on any public policy interest that may be implicated by the cross-border data flow obligation in the future, such as cybersecurity or even technological sovereignty (not mentioned in the Article XIV GATS exception), provided of course that the measure passes the two-level assessment of the exception.

In addition, the digital trade (electronic commerce) chapters of the earlier mentioned agreements contain an article on the protection of personal information (the term used to refer to personal data in the United States), which contains a mixture of binding and aspirational provisions on the protection of privacy by the parties to the agreements.Footnote 31

The EU largely shares the ‘digital trade’ discourse on the benefits of cross-border data flows for global economic growth with the United States and, in principle, supports the idea of regulating cross-border data flows in international trade agreements.Footnote 32 Largely but not completely, because there is one important point on which the EU approach diverges very significantly from that of the United States: namely, with regard to the protection of the rights to privacy and personal data. It is for this reason that the EU has until recently been cautious in including provisions on cross-border data flows in its trade agreements.Footnote 33 Understanding the EU’s domestic framework on the protection of personal data and, in particular, its approach to transfers of personal data outside the European Economic Area (EEA), is essential for explaining its trade policy in the domain of cross-border data flows. Therefore, before delving into the EU’s proposed provisions on the latter topic, let us first briefly discuss the EU’s domestic regime for transfers of personal data outside the EEA.

The rights to privacy and the protection of personal data are protected as binding fundamental rights in the EU.Footnote 34 From an EU data protection law perspective, personal data is distinct from other types of information because of its inextricable link to the data source: individuals. One of the pillars of this protection, as the CJEU has ruled,Footnote 35 is the restriction on transfers of personal data outside the EEA in order to ensure that the level of protection guaranteed in the EU by the General Data Protection Regulation (GDPR)Footnote 36 is not undermined or circumvented as personal data crosses EEA borders.Footnote 37 As a consequence of the broad definition of ‘personal data’, EU restrictions on transfers of personal data apply to a broad range of data that can be essential for the development, fine-tuning and application of AI systems. Furthermore, the restrictions also apply to mixed data sets, in which personal and non-personal data are ‘inextricably linked’ – which, as mentioned earlier, fall under the scope of the GDPR.Footnote 38 The restrictions do not apply to non-personal data, including non-personal data in mixed data sets, provided that it can be separated from the personal data. At the same time, the distinction between personal and non-personal data is not set in stone. If, due to technological developments, anonymised data can be re-identified, it will become ‘personal’ and the GDPR restrictions will again apply.Footnote 39 Some scholars argue that these restrictions limit the cross-border aggregation of data and thus stifle the development of AI.Footnote 40

The GDPR’s restrictions on transfers of personal data apply when personal data is transferred or is accessed from outside the EEA, including when this is done for training AI systems, and in the phase of fine-tuning or cross-border application of already existing AI systems located outside the EEA to individuals located in the EEA.Footnote 41 This is because feeding an EEA individual’s data to the non-EEA AI system will most likely constitute a transfer of personal data.

Turning to the intersection of the GDPR with international trade law, only one FTA to which the EU is a party includes a provision on cross-border data flows. The 2019 Economic Partnership Agreement with Japan (Japan–EU EPA), where a binding provision was initially proposed by Japan, merely includes a review clause allowing the parties to revisit the issue three years after the agreement’s entry into force.Footnote 42 The EU and Japan have instead agreed to use a mutual adequacy decision, following the route for cross-border transfers of personal data laid down in the GDPR.Footnote 43 This caution was due to the inability of EU institutions to reach a common position on the breadth of the data flows provision and exceptions from it for the protection of privacy and personal data, following a strong push-back from academics and civil society against attempts to include such provisions in the – currently stalled – plurilateral Trade in Services Agreement (TiSA) and the Transatlantic Trade and Investment Partnership (TTIP) between the EU and the US.Footnote 44

In 2018, the European Commission reached a political agreement on the EU position on cross-border data flows. This position was expressed in the model clauses, which, in particular, include a model provision on cross-border data flows (Article A) and an exception for the protection of privacy and personal data (Article B).Footnote 45 The EU has included these model clauses in its proposals for digital trade chapters in the currently negotiated trade agreements with Australia, Indonesia, New Zealand and Tunisia,Footnote 46 as well as in the EU proposal for the WTO rules on electronic commerce,Footnote 47 and they are intended to co-exist with the general exception for privacy and data protection modelled after Article XIV(c)(ii) GATS included in the same agreements.Footnote 48 The 2021 EU–UK Trade and Cooperation Agreement (TCA), however, contains provisions that differ from, and arguably award less regulatory autonomy to protect privacy and personal data than, the above-mentioned model clauses.Footnote 49 It is unclear whether the TCA provisions are merely outliers or represent a new model approach of the EU. Given that the model clauses have not been amended following the TCA and still represent the EU position in multiple ongoing trade negotiations, including those at the WTO, this chapter assumes that they still reflect the EU’s mainstream approach; the discussion below therefore focuses solely on these clauses.

Model Article A provides for an exhaustive list of prohibited restrictions on cross-border data flows. Model Article B on the protection of personal data and privacy states that the protection of personal data and privacy is a fundamental right and includes an exception from the provision on cross-border data flows. The model clauses, on their face, safeguard the EU’s broad regulatory autonomy, much more so than the general exception for privacy and data protection in existing trade agreements. This is made manifest in five different ways. First, as compared to the US model provision on cross-border data flows, the prohibition of restrictions on cross-border data flows in Article A is formulated more narrowly, in that it specifically names the types of restrictions that are outlawed by this provision. Second, the provisions of Article B(1) assert that the normative rationale for the protection of personal data and privacy is the protection of fundamental rights. This rationale – as opposed to economic reasons for protecting privacy and personal data – signals a higher level of protection and, therefore, arguably requires a broader autonomy to regulate vis-à-vis international trade commitments.Footnote 50 This provision is likely to be interpreted as a part of the digital trade exception for privacy and data protection in Article B(2) of the proposal. Third, the proposed exception for privacy and the protection of personal data establishes a significantly more lenient threshold – ‘it deems appropriate’ – than the ‘necessity test’ of the general exception under the GATS. Drawing a parallel with the threshold in the GATS national security exception – ‘it considers necessary’Footnote 51 – one can argue that the proposed exception affords an almost unlimited autonomy to adopt measures inconsistent with Article A in order to protect privacy and personal data.Footnote 52 Fourth, the exception in Article B(2) explicitly recognises the adoption and application of rules for cross-border transfers of personal data – the gist of the EU’s framework for transfers of personal data – as one of the measures that a party may deem appropriate to protect personal data and privacy, in spite of its international trade commitments. Fifth and finally, the provision of Article B(2) protects the safeguards afforded by a party to personal data and privacy from being affected by any other provision of the trade agreement.

At the same time, despite these apparent strengths of the EU proposal in view of privacy and data protection, Article B suffers from at least four clear weaknesses. First, declaring that the protection of privacy and personal data is a fundamental right is EU-centric and does not leave the EU’s trading partners any autonomy to choose another level of protection of these public policy interests that they might see fit for their own legal and cultural traditions. Given that, as things stand now at least, the fundamental rights protection of privacy and personal data is essentially a European phenomenon, EU trading partners may be reluctant to commit to this level of protection in a trade agreement. Second, the exception for privacy and data protection in Article B(2) of the EU’s proposal is designed for digital trade chapters and fails to clarify its relationship with the general exception for data protection, which remains intact – at least in the available drafts of the trade agreements in which the EU has included the proposed model clauses.Footnote 53 Third, modelling an exception for privacy and data protection after the national security exception essentially creates an almost unconditional escape valve from virtually any trade commitment, as long as there is at least a remote nexus to the protection of privacy and personal data. Although this may seem justified at first glance, given that privacy and data protection are fundamental rights in the EU, it creates a precedent for claiming such a wide margin for a variety of public policy interests (other than national security), which may undermine the global rules-based trading system. Fourth, and most relevant in the context of this chapter’s discussion, the public policy interests that can justify a violation of Article A under Article B(2) are limited to the protection of privacy and personal data. Although this underscores the relative importance of the rights to data protection and privacy as opposed to the goal of digital trade liberalisation on the values scale, the limitation of the exception to these particular rights may have negative effects. Given that the threshold for other important public policy interests, such as public morals, safety, and human, animal or plant life, in the general exception clause is stricter than the threshold in model Article B(2), the regulatory autonomy to protect personal data and privacy ends up being much broader than the autonomy to protect other rights that are also recognised under the EU Charter of Fundamental Rights.Footnote 54 This elevates privacy and the protection of personal data above other rights that are equally protectedFootnote 55 and may even create an incentive to – artificially – frame other public policy interests, especially those not mentioned in the GATS general exception, as the protection of privacy and personal data. In the context of AI, this could steer domestic AI regulation in the EU deeper into the realm of data protection as opposed to creating a separate regulatory framework – an issue currently discussed in the EU institutions.Footnote 56 Industrial policy,Footnote 57 cybersecurityFootnote 58 and digital sovereigntyFootnote 59 are among the public policy interests cited as potentially requiring restrictions on digital trade in general or on data flows in particular.
The first of these, industrial policy, is especially relevant for developing countries, for which free data flows essentially mean ‘one-way flows’, as these countries’ ability to benefit from data flows is constrained by the limited availability of digital technologies and of the skills necessary to produce digital intelligence from data.Footnote 60 This issue, as already mentioned, has gained prominence in the European Commission’s 2020 digital strategy. In its European Strategy for Data, the European Commission stated:

The functioning of the European data space will depend on the capacity of the EU to invest in next-generation technologies and infrastructures as well as in digital competences like data literacy. This in turn will increase Europe’s technological sovereignty in key enabling technologies and infrastructures for the data economy. The infrastructures should support the creation of European data pools enabling Big Data analytics and machine learning, in a manner compliant with data protection legislation and competition law, allowing the emergence of data-driven ecosystems.Footnote 61

Turning to cybersecurity, such interests may require restrictions on data flows, data localisation or restrictions on the import of certain information technology products.Footnote 62 These interests are relevant for both developing and developed countries. The blurring boundary between the public and private spheres in the surveillance context – where governments increasingly rely on private actors for access to data for surveillance purposes – explains why cross-border data flows may raise sovereignty concerns as well.Footnote 63

To sum up, although the regulation of cross-border data flows, especially in the context of AI, implicates a variety of public policy interests, the EU trade policy on this topic has focused solely on one of them – namely privacy and the protection of personal data. This arguably has something to do with the dynamics between the EU institutions. However, it may not be sustainable either within the EU or in a multilateral context, such as the electronic commerce negotiations at the WTO. According to UNCTAD, the early meetings of the group on data flows at the WTO have, so far, mainly reflected the views of proponents of the free flow of data.Footnote 64 However, for these negotiations to result in concrete WTO legal norms, members will have to reach a consensus on how to balance the economic gains of free data flows with multiple competing interests, which include not only the protection of privacy and personal data – the main point of contention for the EU – but also other fundamental rights, as well as the industrial policy, cybersecurity and economic development interests of other countries involved in the negotiations.Footnote 65

In contrast to the position taken by both the United States and the EU that data flows should be free (unless a restriction can be justified under an exception), the position on the protection of source code – and of the algorithms expressed in that source code, which incorporate the learning derived from processing data – is the exact opposite. As explained in the introduction, learning, or digital intelligence, is where the real economic value of personal and other data lies. Thus, while data and data flows are viewed as ‘free’, the value obtained from data is up for grabs by whomever possesses the infrastructure and resources necessary to process that data. At this juncture, these entities are concentrated in the United States and China. Two recent US-led FTAs, namely the USMCA and the US–Japan Digital Trade Agreement (DTA), contain specific provisions on the protection of source code and algorithms.Footnote 66 The EU’s proposal for the WTO negotiations on e-commerce also contains a prohibition on access to and forced transfer of the source code of software owned by a natural or juridical person of other members.Footnote 67 Similar provisions are included in the EU proposals for digital trade chapters of currently negotiated FTAs, such as with Mexico,Footnote 68 AustraliaFootnote 69 and New Zealand.Footnote 70

C The Limits of Personal Data Protection in Trade Law Policy on Cross-Border Data Flows in the Context of AI

The earlier discussion demonstrates that the only public policy interests fully accounted for in the exception from the proposed provision on the free cross-border flow of data in draft EU trade agreements are privacy and the protection of personal data. In the context of AI, this mirrors the currently prevailing approach in the EU of regulating AI through the governance structure of the GDPR. This section focuses on two limitations of this approach. First, the approach rests on a distinction between personal and non-personal data, because only data that qualifies as personal falls under the EU data protection framework. This distinction is increasingly hard to make, especially in the context of AI. Second, EU privacy and personal data protection relies on an individual rights framework that does not account for the value produced from data or for the impact of applying the learning derived from AI to larger societal groups or populations.

I Thin Borderline between Personal and Non-personal Data in the AI Context

EU law maintains a rigid distinction between personal and non-personal data,Footnote 71 in the sense that each is governed by a different legal framework. While cross-border transfers of personal data are subject to a ‘border control’Footnote 72 regime, as discussed earlier, transfers of non-personal data outside the EEA are unrestricted. This distinction is increasingly unworkable in practice, as it is becoming ever more difficult to draw a line between personal and non-personal (or anonymous) data, especially in the AI context.Footnote 73

Schwartz and Solove succinctly summarise four main problems with the distinction. First, ‘built-in identifiability’ in cyberspace makes anonymity online a ‘myth’, as essentially all online data can be linked to some identifier.Footnote 74 Second, non-personal information can be transformed into personal data over time.Footnote 75 Third, the distinction between personal and non-personal data is dynamic in nature, as the line between the two depends on technological developments. Fourth and finally, the borderline between personal and non-personal data is not firm but contextual, as many types of data are neither identifiable nor non-identifiable in the abstract.Footnote 76

The EU regulation on a framework for the flow of non-personal data illustrates a number of those points. It specifically mentions that examples of non-personal data include ‘aggregate and anonymised datasets used for big data analytics, data on precision farming that can help to monitor and optimise the use of pesticides and water, or data on maintenance needs for industrial machines’.Footnote 77 The regulation also notes, however, that ‘[i]f technological developments make it possible to turn anonymised data into personal data, such data are to be treated as personal data, and [the GDPR] is to apply accordingly’.Footnote 78 As can be seen, although the very existence of this regulation is grounded in the possibility of separating the notions of personal and non-personal data, the regulation itself suggests that such a distinction is not clear-cut and requires constant reassessment.
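How quickly an ‘anonymised’ dataset can revert to personal data is easy to demonstrate. The following minimal sketch – with all records and quasi-identifiers invented for illustration – links a dataset stripped of names to a public register via shared attributes, the classic linkage attack described in the re-identification literature:

```python
# Illustrative linkage attack on an "anonymised" dataset. All records are
# invented; the quasi-identifiers (postcode, birth year, sex) are chosen
# purely for demonstration.

anonymised_records = [
    {"postcode": "8001", "birth_year": 1980, "sex": "F", "diagnosis": "asthma"},
    {"postcode": "8002", "birth_year": 1975, "sex": "M", "diagnosis": "diabetes"},
]

public_register = [
    {"name": "A. Muster", "postcode": "8001", "birth_year": 1980, "sex": "F"},
    {"name": "B. Voorbeeld", "postcode": "8002", "birth_year": 1975, "sex": "M"},
]

QUASI_IDENTIFIERS = ("postcode", "birth_year", "sex")

def reidentify(records, register):
    """Link records to named register entries sharing all quasi-identifiers."""
    linked = []
    for record in records:
        key = tuple(record[q] for q in QUASI_IDENTIFIERS)
        matches = [p for p in register
                   if tuple(p[q] for q in QUASI_IDENTIFIERS) == key]
        if len(matches) == 1:  # a unique match re-identifies the record
            linked.append({"name": matches[0]["name"],
                           "diagnosis": record["diagnosis"]})
    return linked

print(reidentify(anonymised_records, public_register))
# [{'name': 'A. Muster', 'diagnosis': 'asthma'}, ...] – personal data after all
```

The dataset was ‘non-personal’ only for as long as no suitable auxiliary data existed, which is precisely the contextual and dynamic quality of the borderline described above.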

Another limitation of a data protection approach to restrictions on cross-border data flows in the AI context is that its scope is limited to data that qualifies as personal data. However, it is not the data fed into an AI system itself, but the knowledge derived from that data through learning, that carries the value of big data into different organisational processes. Training an AI system transforms personal data into an aggregate representation of such data, which may no longer qualify as personal data – a transformation sketched in code further below. Interestingly, some scholars have argued in this context that AI models vulnerable to inversion attacks can still be considered personal data.Footnote 79 Moreover, it is not only personal but also non-personal – machine-generated – data that is extremely useful and valuable in the AI context. As the European Commission rightly noted in its 2020 White Paper on AI:

AI is one of the most important applications of the data economy. Today most data are related to consumers and are stored and processed on central cloud-based infrastructure. By contrast a large share of tomorrow’s far more abundant data will come from industry, business and the public sector, and will be stored on a variety of systems, notably on computing devices working at the edge of the network.Footnote 80

Although cross-border flows of non-personal data and of the learning produced from it may not have implications for the individual rights to privacy and the protection of personal data, they may present risks for other policy objectives, such as cybersecurity or digital sovereignty. The argument in this chapter is not that cross-border flows of non-personal data should be restricted, although the possibility of such restrictions already features in the European Commission’s proposal for a Data Governance Act.Footnote 81 Neither does it suggest that a strong exception for domestic privacy and data protection rules is inappropriate. Rather, it underscores the importance of assessing the implications of cross-border data flows in the context of AI against a broader set of public policy interests that matter for the EU and its trading partners in the long term. For example, Gürses and van Hoboken doubt that, in the context of digital services produced in an agile way where users also act as producers of such services, privacy law, traditionally centred on regulating information flows, is able to tackle the implications of such agile production for individuals.Footnote 82 They argue that such problems should not all be framed as questions of information flows and data protection, but should instead be addressed by other, or complementary, regulatory tools, such as consumer protection, software regulation or the treatment of certain services as new types of utility providers.Footnote 83
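To make the aggregation point concrete, the following deliberately simplified sketch – with invented figures and a toy ‘model’ standing in for any real training procedure – shows how fitting even a trivial model reduces individual records to aggregate parameters:

```python
# Minimal illustration that training aggregates personal data. The records
# and the "model" (per-feature means of the positive class) are invented
# stand-ins for real training data and real learning algorithms.

# Each row: (age, income) of a fictional applicant; label: credit granted?
records = [(25, 30_000), (40, 55_000), (33, 48_000), (51, 72_000)]
labels = [0, 1, 1, 1]

positives = [x for x, granted in zip(records, labels) if granted]

# The fitted "parameters" are aggregate statistics across individuals.
params = tuple(sum(feature) / len(feature) for feature in zip(*positives))

print(params)  # e.g. (41.33..., 58333.33...): summaries, not records
```

The parameters summarise, but no longer enumerate, the individuals in the training set – which is why the learning derived from personal data can travel across borders largely untouched by a framework whose scope is keyed to personal data, unless, as noted above, the model itself remains vulnerable to inversion.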

II Individual Rights Framework Does Not Factor in the Value of Knowledge Derived from Data

In the digital trade discourse, where unrestricted cross-border data flows are viewed as a source of tremendous – aggregated – value gains, not every country participating in data flows ‘wins’ from them. Yet the question of who wins and who loses from unrestricted data flows is typically not raised in this discourse. As mentioned earlier, only countries that possess the necessary infrastructure and skills to refine data and extract value from the large corpora of data generated in the course of the provision of online services will really benefit from the free flow of data. Countries that lack these resources merely supply primary goods, which are worth much less than the learning that can be derived from them – just as countries that produce raw materials are rarely the largest winners compared to countries where those materials are transformed. As with raw materials, the real value in AI lies in the processing of the data. Against this backdrop, focusing on data instead of on the learning derived from data misses the point.

This brings us to the second limitation of the data protection framework being central to the cross-border provision of AI, especially as designed in the EU, where personal data is primarily viewed as the subject matter of a fundamental right rather than as an economic asset. This is manifested, for example, in the regulatory choice to avoid recognising personal data as consideration for online services (in other words, as a form of currency) in the 2019 Digital Content Directive.Footnote 84 In its opinion on the draft of this directive, the European Data Protection Supervisor (EDPS) underscored that ‘personal data cannot be considered as a mere commodity’.Footnote 85 Although the fact that personal data cannot be considered a ‘mere’ commodity does not mean that it cannot have economic value, viewing the protection of personal data as a fundamental right could be one of the reasons why the EU has been restrained about putting a price tag on personal data in trade negotiations on cross-border data flows.

UNCTAD stresses that the harnessing, by platforms based in only a few countries, of data generated by individuals, businesses and organisations of other countries raises concerns about ‘digital sovereignty’, in view of the control, access and rights with respect to the data and the appropriation of the value generated from monetising the data.Footnote 86 UNCTAD explains that the economic value derived from data is captured by the developed countries in which the companies controlling the storage and processing of data reside.Footnote 87 It follows that ‘[t]he only way for developing countries to exercise effective economic “ownership” of and control over the data generated in their territories may be to restrict cross-border flows of important personal and community data’.Footnote 88 Although this particular report makes its argument in the context of the imbalance between developed and developing countries, given the high concentration of digital technologies in a very few developed countries, it could also be relevant to relations between those few and other developed countries. It should be emphasised that restricting the outgoing flows of personal data does not mean that the countries imposing such restrictions will have the means to process and generate value from such data within their borders. It may be about sovereignty, but it is not necessarily about endogenous economic development unless measures to ensure this development accompany the data flow restrictions.

In a similar vein, Couldry and Mejias speak about ‘data colonialism’, by which they mean that big data processing practices make human relations and social life overall ‘an “open” resource for extraction’.Footnote 89 They compare big data to the appropriation or extraction of resourcesFootnote 90 – another parallel between data and oil. Global data flows, they argue, ‘are as expansive as historic colonialism’s appropriation of land, resources, and bodies, although the epicentre has somewhat shifted’.Footnote 91 In their view, the transformation into value of human actors and social relations formalised as data leads to a fundamental power imbalance (colonial power and colonised subjects).Footnote 92 Relatedly, Zuboff has famously labelled the business of accumulating and monetising data ‘surveillance capitalism’, which leads to the accumulation not only of capital, but also of individual rights.Footnote 93

There is some movement in the governance of data reflecting those concerns. A 2019 Opinion of the German Data Ethics Commission shows a tendency towards expanding the scope of individual rights in data beyond the non-economic rights to privacy and personal data protection. According to the commission, under certain circumstances individuals should be granted data-specific rights, which include a right to an economic share in profits derived with the help of their data.Footnote 94 The potential design of a legal framework for distributing the economic gains from the use of data is addressed in a growing body of scholarly and policy research. This research explores frameworks or organisations acting as intermediaries between individuals and the entities wishing to use (and profit from) their data, such as data trusts or forms of collective data ownership (such as data funds).Footnote 95 Data trusts are viewed as an attractive tool to facilitate access to large sets of aggregated data for the purposes of developing and applying AI, to generate trust around the use of data by various stakeholders, and as mechanisms for paying back to individuals a fair share of the benefits from the use of their data.Footnote 96 There is, however, little clarity regarding the structure that data trusts should take and the method for sharing the value derived from the commercial use of personal data.Footnote 97 The German Ministry of Economic Affairs and the Dutch Government are investigating the possibility of setting up data trusts in their respective countries.Footnote 98 Research on data funds views personal data as a public resource, drawing a parallel with natural resources that constitute a country’s wealth. From this perspective, data collected within a certain jurisdiction should ‘belong’ to that jurisdiction.Footnote 99 Data funds are viewed as a form of collective data ownership, allowing individuals to exercise control over which data is collected about them and how it is used, as well as to receive payment for commercial access to the data in the fund.Footnote 100

These economic rights are unlikely to become part of the EU data protection framework precisely because of their economic nature. At the same time, they could interfere with international trade disciplines that aim to facilitate unrestricted cross-border data flows. This is why they, alongside the fundamental rights to privacy and the protection of personal data, should form part of a nuanced rebalancing of the EU’s trade policy on this issue.

D Conclusion

The analysis in this chapter of recent developments in the governance of cross-border data flows in international trade law showed that the main public policy interests discussed in the context of EU trade policy on this issue are the protection of the fundamental rights to privacy and personal data. This chapter argued that other policy objectives, such as cybersecurity and digital sovereignty – which have recently become anchors of the EU’s internal AI policy – should also be considered. The chapter has also shown that the individual rights–centred data protection framework has limits in governing AI, both domestically and in international trade policy.

11 Panta Rhei: A European Perspective on Ensuring a High Level of Protection of Human Rights in a World in Which Everything Flows

Kristina Irion
Footnote *
A Introduction

Panta rhei (‘everything flows’) turns out to be a very fitting metaphor for how terabytes of digital data rush through the network of networks. Attributed to the philosopher Heraclitus, panta rhei connotes that change is the fundamental essence of the universe.Footnote 1 Data flows are the undercurrent of a digital globalization that transforms our societies. How data flows will likely underpin digital services in a not so distant future is vividly described in Anupam Chander’s contribution (Chapter 5) in this volume. Data’s liquidity tends to undermine outdated regulatory formations and erode the paradigms that used to underpin a society’s conventional right to self-governance.Footnote 2 Everything is in flux.

Human rights do, however, remain valid currency in how we approach planetary-scale computation and the accompanying data flows. As we enter ‘the age of digital interdependence’, a UN expert panel urges ‘new forms of digital cooperation to ensure that digital technologies are built on a foundation of respect for human rights and provide meaningful opportunity for all people and nations’.Footnote 3 Today’s system of human rights protection, however, is highly dependent on domestic legal institutions, which unravel faster than fitting transnational governance institutions can be constructed. The transnational protection of data privacy is a case in point: it required legal reforms in order not to fall into the cracks between different domestic legal systems. Furthermore, the transnational provision of artificial intelligence (AI) is going to have a bearing on the conditions of human freedom, prompting calls for a human rights–based approach to AI governance.Footnote 4

From the contributions in this volume it emerges that international trade law has successfully co-opted cross-border data flows as a desirable baseline for digital trade. This raises the question of how the inclusion of the free flow of data in international trade law would affect the prospects for the transnational protection of human rights. As a stand-alone commitment, the free flow of data lacks any normative underpinning; only through the interplay with domestic legal frameworks do human rights become recognized.

In my contribution I argue that the inclusion of cross-border data flows as a new trade law discipline would be opportunistic in light of the moral imperative to protect human rights online. International trade law, which has been criticized for the ‘economization of human rights’,Footnote 5 would subtly reinforce the transformative power of data flows, leaving human rights enforcement to domestic institutions that have themselves been found inadequate to deal with the issues at hand. In other words, the opportunity structures offered by international trade law will not advance the construction of a global information civilization that is founded on respect for human rights. Rather, multilevel economic governance should provide for constitutional pluralism and a sufficient margin for experimentation with novel strategies to give effect to human rights in the online context.Footnote 6 I conclude with a plea for a new quid pro quo in digital trade, in which the liberalization of cross-border data flows better recognizes the enhanced need for human rights accountability. This contribution intersects human rights law with international economic law, borrowing liberally from transnational legal theory and the Internet governance literature. It advances its arguments through a combination of doctrinal and critical legal research, with a certain predisposition to European legal thinking.

This chapter proceeds as follows: after the backdrop has been set, the following section takes a critical look at the construction of the data flow metaphor as a policy concept inside international trade law. The subsequent section explores how the respect for human rights ties in with national constitutionalism that becomes increasingly challenged by the transnational dynamic of digital era transactions. The last section turns to international trade law and why its ambitions to govern cross-border data flows will likely not advance efforts to generate respect for human rights. In the conclusion, the different arguments are linked together to advocate for a re-balancing act that recognizes human rights inside international trade law.

B Data Flow as a Policy Metaphor

Data is the building block of today’s digital economy. As a virtual unit, data can represent any type of digital infrastructure, platform or system, and it undergirds an infinite range of virtual goods, services, transactions and expressions. Digital supply and value chains are ultimately representations of data which are assembled to perform varying functionalities.Footnote 7 Besides, data is generated exponentially from any human and machine activity, and is in turn a key input for machine learning and algorithmic decision-making. Everything that can be expressed in data is inherently liquid because it can be de-assembled, moved across space and re-assembled again.
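This liquidity is easy to demonstrate in a few lines of code. The following minimal sketch – illustrative only, with an invented payload – splits a digital object into packets, lets them travel out of order, and reconstructs the original bit-for-bit at the destination:

```python
import random

# Any digital object is ultimately a sequence of bytes (the payload is invented).
payload = "Any digital good, service or expression reduces to bytes.".encode()

# De-assemble the object into small, independently movable packets.
CHUNK = 8
packets = list(enumerate(payload[i:i + CHUNK]
                         for i in range(0, len(payload), CHUNK)))

# The packets may take different routes and arrive in any order...
random.shuffle(packets)

# ...yet the object is re-assembled perfectly wherever the packets land.
reassembled = b"".join(chunk for _, chunk in sorted(packets))
assert reassembled == payload
```

Nothing about the payload’s meaning, origin or legal status survives this decomposition, which is precisely why data’s liquidity sits so uneasily with regulatory formations keyed to territory.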

In social theory, ‘flow’, ‘fluidity’ or ‘liquidity’ are used as metaphors to connote how circulation and velocity forge a new kind of information or network society.Footnote 8 According to Castells, contemporary society is constructed around flows: ‘flows of capital, flows of information, flows of technology, flows of organizational interaction, flows of images, sounds, and symbols. Flows are not just one element of the social organization: they are the expression of processes dominating our economic, political, and symbolic life’.Footnote 9 Sociologist Deborah Lupton, by contrast, criticizes writers on digital technologies for relying on liquid concepts when discussing the circulation of digital data.Footnote 10 For Lupton, ‘[t]he apparent liquidity of data, its tendency to flow freely, can also constitute its threatening aspect, its potential to create chaos and loss of control’.Footnote 11 Lupton nevertheless resolves that such conceptions can help make sense of the phenomenon. It must be conceded that recourse to the data flow metaphor should not divert attention from analyzing the actors, epistemology and affordances of concrete sociotechnical systems.Footnote 12 Globalization researchers, however, consistently use the cross-border movement or flows of persons, capital, goods and services as a conceptual lens, which is currently being complemented by data flows. It is precisely the circulation of data which underpins the processes that lead to the reconfiguration of the spatial organization of social relations and transactions that characterizes globalization.Footnote 13 A 2016 report by the McKinsey Global Institute proclaimed that globalization had entered ‘a new era defined by data flows’.Footnote 14

A powerful coalition of international and intergovernmental organizations, including the G7 and the G20, the Organisation for Economic Cooperation and Development (OECD) and the World Economic Forum (WEF), among others, has intensified its work on promoting cross-border data flows as an international economic policy principle. For instance, following the initiative of Japan’s government on ‘Data Free Flow with Trust’, the 2019 G20 Osaka Leaders’ Declaration states:

Cross-border flow of data, information, ideas and knowledge generates higher productivity, greater innovation, and improved sustainable development, while raising challenges related to privacy, data protection, intellectual property rights, and security. By continuing to address these challenges, we can further facilitate data free flow and strengthen consumer and business trust. In this respect, it is necessary that legal frameworks, both domestic and international, should be respected. Such data free flow with trust will harness the opportunities of the digital economy.Footnote 15

While the statement correctly reflects the unabated tension between cross-border data flows and domestic legal frameworks, it falls short of identifying common strategies that would mitigate this tension and thereby forge trust in legitimate cross-border data flows. The endorsement of ‘Data Free Flow with Trust’ perfectly encapsulates the influential narrative of innovation, growth and development associated with cross-border data flows while leaving the intricacies of protecting human rights and societal values to domestic institutions that are themselves increasingly contested in an interdependent world. From the perspective of domestic public policy, the cross-border flow of data more fittingly compares to a maelstrom that potentially erodes constitutionally guaranteed rights and societal values.

C Human Rights Do Not Flow Easily across Borders

Adopted over seventy years ago, the Universal Declaration of Human Rights (UDHR) protects a canon of universal and indivisible human rights, the interpretation of which evolves with time.Footnote 16 The UN Human Rights Council has twice affirmed that human rights must be protected offline and online, regardless of frontiers.Footnote 17 International human rights law is addressed to states, which are bound to respect and uphold its obligations in their domestic legal systems. Whereas international human rights law can take different levels of commitment, from non-binding to binding, its enforcement overwhelmingly takes place at the domestic level.Footnote 18 ‘The multilevel human rights constitution’, as Ernst-Ulrich Petersmann explains, ‘remains embedded into national constitutionalism as protected by national and regional courts’.Footnote 19 Human rights thus derive universal protection from their geopolitically fragmented implementation by states. This construction has largely been workable in an offline and static world where different jurisdictions could coexist by the intuitive demarcations of territoriality. In the age of digital interdependence, however, interferences with human rights frequently take on a transnational dynamic. According to Julie Cohen, domestic protections for human rights that are built on outdated regulatory formations have begun to fail comprehensively.Footnote 20 Different trends, such as the intermediation of human transactions by digital platforms, and strategies designed to outsmart national legal frameworks have been held responsible for this sad state of affairs.Footnote 21 From the outset, the Internet-mediated sphere has attracted much libertarianism and utopianism,Footnote 22 but, in hindsight, too little concern about the impending policy and regulatory challenges online.

I Who Should Be in Charge of the Internet?

In its infancy, the Internet attracted utopian ideas of a free and borderless cyberspace, a human-made global commons in the service of an international community of users. Famously, John Perry Barlow in his ‘Declaration of the Independence of Cyberspace’ called on the governments of the world to leave the Internet and its users alone.Footnote 23 Another proposal was to transform cyberspace into an international commons and to root Internet governance in international agreements. Analogies to Hugo Grotius’ 1609 dissertation ‘Mare Liberum’Footnote 24 have been offered in support of extending to the Internet a regime similar to those practised today in international maritime law and space law. Despite gigantic efforts to nourish international multi-stakeholder Internet governance, this approach has up to this point never gained sufficient authority to actually deliver tangible outcomes.Footnote 25 The upshot is that the protection of individuals’ human rights online has never been uploaded to a supranational level.

Simultaneously, the Westphalian nation state, which derives sovereignty and jurisdiction from territory, has been contested as ‘an ordering device for the borderless Internet’.Footnote 26 Cedric Ryngaert and Mark Zoetekouw look at ‘community-based systems’ as jurisdictional alternatives to territory, which would better respond to the peculiar nature of the Internet as a ‘borderless, prima facie, non-territorial phenomenon’.Footnote 27 Correspondingly, Francesca Bignami and Giorgio Resta expect that ‘the social interactions fostered by borderless digital communications should give rise to a common set of moral commitments that will gradually replace those of the nation-state’.Footnote 28 This somewhat resonates with how large user-backed digital platforms frequently invoke their community in matters that affect platform governance.Footnote 29 Lee Bygrave highlights the peculiar contribution of contract law to managing large numbers of users across countries and legal systems via terms of service, for example.Footnote 30 Whereas transnational private law could achieve private platform governance from the inside, it does not compare to an external human rights–based governance framework.

II Reactive Jurisdictional Claims

Legal thinking, moreover, diverges over the question of whether online activities and Internet transactions should be treated as distinct from jurisdictional claims based on geographical location.Footnote 31 To Hannah Buxbaum, conflicts about jurisdiction are a strategy in which ‘claims of authority, or of resistance to authority’ are made by actors to advance a particular interest.Footnote 32 The beneficiaries of a global reach, for that matter, reflexively push back against jurisdictional claims from the countries where the recipients of an online service are based. Frequently, technology-based arguments are invoked to deny the existence of a sufficient nexus for jurisdiction and the applicability of rules interdicting certain behaviour.Footnote 33 Joel Reidenberg intriguingly warns that this, in turn, would prevent states from effectively protecting their citizens online.Footnote 34

Not being set in stone, domestic legal institutions are reactive to the very context in which they are embedded. The transnational protection of data privacy is a case in point illustrating the crucial role of domestic legal frameworks in upholding human rights. When it became apparent that the regulation of domestic businesses no longer sufficed to govern cross-border data transactions, legislators as well as courts resorted to the external application of domestic laws. The European Union’s General Data Protection Regulation (GDPR)Footnote 35 is a prominent example of this legal technique, which refocuses the territorial scope of application onto organizations that are not established in the Union as long as they collect and use personal data of individuals who are inside the Union.Footnote 36 Likewise, the California Consumer Privacy Act (CCPA) applies to businesses around the whole world as long as they reach out to California residents.Footnote 37 This is how, after some lag, domestic legal institutions tweak jurisdictional concepts in their quest to assert domestic rules that would still resonate with public international law.Footnote 38
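The logic of this refocused territorial scope can be rendered as a deliberately simplified decision rule. The sketch below compresses the triggers of Article 3 GDPR into three boolean predicates; it is an illustration of the legal technique, not a statement of the law’s full complexity:

```python
from dataclasses import dataclass

@dataclass
class Controller:
    """Simplified profile of an organization processing personal data."""
    established_in_eu: bool       # cf. Art. 3(1): establishment in the Union
    offers_to_people_in_eu: bool  # cf. Art. 3(2)(a): offering goods/services
    monitors_people_in_eu: bool   # cf. Art. 3(2)(b): monitoring behaviour

def gdpr_scope_engaged(c: Controller) -> bool:
    # Territorial scope no longer turns on establishment alone: targeting or
    # monitoring individuals inside the Union suffices.
    return (c.established_in_eu
            or c.offers_to_people_in_eu
            or c.monitors_people_in_eu)

# A platform with no EU establishment that profiles users inside the Union:
print(gdpr_scope_engaged(Controller(False, False, True)))  # True
```

The point of the sketch is the last two predicates: by keying applicability to the location of the individuals rather than of the organization, the rule reaches conduct that a purely territorial trigger would miss.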

Predictably, such reactions are bound to run into an impasse over their effectiveness or legitimacy, depending on the perspective from which one looks at a particular issue. As a result, the international order now faces additional challenges, such as overlapping claims of authority and the transnational export of rules. Inquiries from the field of transnational data privacy have also shown that the extraterritorial reach of domestic rules may be overly formalistic and not matched with corresponding enforcement powers.Footnote 39 In their quest to overcome this enforcement gap, domestic authorities are increasingly turning to governance by platforms, deputizing ‘multinational corporate data intermediaries to carry out and enforce their orders’.Footnote 40 Yet asserting domestic human rights regardless of jurisdiction, citizenship and location of data with the help of powerful digital platforms further entrenches the power of private economic interests over the conditions of human freedom.Footnote 41

D International Trade Law Laying Claim to Free Data Flows

The flow of data crucially undergirds the organization of international production, trade and investment into global value chains (GVCs).Footnote 42 Activating international trade law for cross-border digital trade issues can be seen as ‘forum shopping in global governance’,Footnote 43 where trade venues are traditionally more conducive to economic interests than, for that matter, the multi-stakeholder Internet governance fora.Footnote 44 What is more, since trade rules on e-commerce could not advance under the auspices of the World Trade Organization (WTO), a number of countries have turned to preferential trade agreements instead, be they bilateral, regional or plurilateral.Footnote 45

The United States has been the key force behind efforts to proliferate its digital trade agenda through international trade law, albeit with a mixed record.Footnote 46 On the one hand, a new generation of mega-regional trade agreements negotiated between the United States and like-minded countries incorporates a new set of digital trade rules that introduce horizontal provisions on the free flow of data, such as the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP)Footnote 47 and the United States–Mexico–Canada Agreement (USMCA).Footnote 48 On the other hand, the liberalization of the cross-border flow of data was controversial in the negotiations for the EU–US Transatlantic Trade and Investment Partnership (TTIP) and for a multilateral Trade in Services Agreement (TiSA), both of which stalled in 2017 amid uncertainty over the stance of the incoming US administration under President Trump.

Repeated efforts to multilateralize digital trade rules through the WTO have so far not yielded tangible outcomes.Footnote 49 Initiated in 1998, the WTO Work Programme on Electronic Commerce had stalled until, in early 2019, seventy-six WTO members agreed to launch negotiations on trade-related aspects of electronic commerce.Footnote 50 The resurrection of the e-commerce negotiations, however, takes place during a rather dire crisis of the WTO as a multilateral forum, which has left the Appellate Body, a part of its dispute settlement system, incapacitated.Footnote 51 The very capacity to adjudicate disputes has oftentimes been referred to as the ‘jewel in the crown’ of the WTO that made it the centre of the rule-based international trading system. The timing of the negotiations seems to support Jane Kelsey’s argument that e-commerce has turned into a ‘proxy battleground for the future of the WTO’.Footnote 52

Absent a broad international consensus in key areas of public interest regulation, the General Agreement on Trade in Services (GATS)Footnote 53 already curtails a member’s regulatory autonomy by subjecting public interest regulation to certain trade-conforming conditions.Footnote 54 The GATS preamble explicitly recognizes the right of a member state to regulate in order to pursue its national policy objectives.Footnote 55 This right to regulate is, however, confined as follows: a member may adopt a measure that is from the outset not inconsistent with its GATS commitments or, in the case of a GATS-inconsistent measure, must justify the measure under one of the general exceptions.Footnote 56 Even though the deregulation of services is not the objective of the GATS,Footnote 57 a member’s behind-the-border regulations that aim to afford a high level of protection of human rights run the risk of being deemed protectionist under international trade rules. The EU’s regulatory framework on personal data protection makes for a well-researched example. We have concluded elsewhere that ‘unreservedly committing to free cross-border data flows likely collides with [the EU’s] approach of affording a high level of protection of personal data as is called for by Article 8 of the Charter and as implemented by the GDPR’.Footnote 58

With imminent cross-border trade in AI, the individual and societal implications can be critically larger and more pervasive.Footnote 59 The circulation of AI raises the stakes for human rights–based governance, given that the technology can be deployed in a fairly location-independent manner.Footnote 60 Not only can data and machine learning code be moved across today’s digital ecosystem, but the predictive outcomes of an AI system can also be applied at a distance.Footnote 61 Societies have diverse set-ups of rights, freedoms and, indeed, ethics. Take facial recognition systems, for example, which are state policy in China but have prompted calls for strict regulation in Western democracies.Footnote 62 Chander rightly notes in this volume that transnational transplants of AI might prove problematic if they do not correspond to the social and legal contexts of the societies they interact with.

The prospect that the first binding framework for the international governance of AI might be international trade law can be frightening, unless WTO members retain a sufficient margin for experimentation with novel strategies to give effect to human rights in the cross-border context. Susan Aaronson points to the disconnect between efforts to promote the free flow of data and efforts to promote digital human rights at the national and international levels.Footnote 63 As trade agreements have gone beyond import tariffs and quotas into regulatory rules and harmonization, Kelsey has criticized the new e-commerce rules for imposing ‘significant constraints on the regulatory authority of governments, irrespective of their levels of development’, and for including ‘matters that belong more to Internet governance, than to trade’.Footnote 64

E Conclusion

Everything is in flux. Cross-border data flows are pervasive and a defining characteristic of the age of digital interdependence. So far, our global information civilization is not founded on a shared commitment to protect human rights regardless of jurisdiction, citizenship and location of data. Engendering respect for human rights remains, for the foreseeable future, a paramount function of domestic legal institutions, which must be reactive in responding to the challenges of cross-border data flows.Footnote 65 We are also beginning to grasp that the challenges for the multi-level governance of human rights are not just about overlapping claims of authority and the transnational export of rules but go to the core of the conditions of human freedom and the democratic constitution of societies.Footnote 66

International trade law is laying claim to the governance of cross-border digital trade and the liberalization of the cross-border flow of data. From the domestic protection of data privacy, and from the ways data privacy rules may conflict with international trade law, we can draw lessons for the emerging multi-level governance of AI. With respect to AI governance, the EU’s fundamental rights approach holds unique value in an international context where the other major players, like the United States and China, move ahead without paying much attention to these underlying human values. It will be important to critically assess the impact of the WTO e-commerce negotiations on the human rights–based governance of AI before the ‘free trade leviathan’Footnote 67 further restricts the policy choices not only of individual states but also of the EU itself.Footnote 68

Where international trade rules prevail, they should provide for constitutional pluralism and a sufficient margin for domestic experimentation with novel strategies to give effect to human rights in the online context.Footnote 69 This should not be construed as an argument in favour of a uniform interpretation or even a mandate for the positive harmonization of (digital) human rights through international (trade) law.Footnote 70 Yet trade law should not move ahead in setting the rules for cross-border trade in the era of big data and AI without recognizing the members’ responsibility to take appropriate measures to ensure that artificial intelligence and overall data governance are fully accountable to domestic human rights frameworks. Identifying strategies and approaches that effectively ground individual interests and societal values in transnational algorithmic systems requires striking a balance between the rule of law and the innovation policy that together undergird a robust information civilization.

Footnotes

9 Futuring Digital Privacy Reimaging the Law/Tech Interplay

* Urs Gasser is Professor of Practice and Executive Director of the Berkman Klein Center for Internet and Society, Harvard Law School. Contact: ugasser@law.harvard.edu.

1 See, e.g., C. J. Bennett, Regulating Privacy: Data Protection and Public Policy in Europe and the United States (Ithaca: Cornell University Press, 1992); P. M. Regan, Legislating Privacy: Technology, Social Values, and Curiosity from Plymouth Rock to the Internet (Chapel Hill: The University of North Carolina Press, 1995); R. E. Smith, Ben Franklin’s Web Site: Privacy and Curiosity from Plymouth Rock to the Internet (Providence: Privacy Journal, 2000); D. J. Solove and P. M. Schwartz, Information Privacy Law, 5th edn (New York: Wolters Kluwer 2015); D. J. Solove, ‘The Origins and Growth of Information Privacy Law’, in J. B. Kennedy, P. M. Schwartz, and F. Gilbert (eds), Fourth Annual Institute on Privacy Law: Protecting Your Client in a Security-Conscious World (New York: Practising Law Institute, 2003), 2983; D. Vincent, Privacy: A Short History (Cambridge: Polity Press, 2016); A. F. Westin, Privacy and Freedom (New York: Atheneum, 1967); I. R. Kramer, ‘The Birth of Privacy Law: A Century Since Warren and Brandeis’, Catholic University Law Review 39 (1990), 703724; W. L. Prosser, ‘Privacy [a Legal Analysis]’, in F. D. Schoeman (ed), Philosophical Dimensions of Privacy: An Anthology (Cambridge: Cambridge University Press, 1984), 104155; D. J. Solove, ‘A Brief History of Information Privacy Law’, in C. Wolf (ed), Proskauer on Privacy (New York: Practising Law Institute, 2006), 146.

2 S. D. Warren and L. D. Brandeis, ‘The Right to Privacy’, Harvard Law Review 4 (1890), 193220. The article had a profound impact on the development of state tort law and privacy-related causes of action. See, e.g., W. L. Prosser, ‘Privacy’, California Law Review 48 (1960), 386423; see also D. Solove, ‘Does Scholarship Really Have an Impact? The Article that Revolutionized Privacy Law’, TeachPrivacy, 30 March 2015.

3 See A. Busch, ‘Privacy, Technology, and Regulation: Why One Size Is Unlikely to Fit All’, in B. Roessler and D. Mokrosinska (eds), Social Dimensions of Privacy: Interdisciplinary Perspectives (Cambridge: Cambridge University Press, 2015), 303323.

4 See US Department of Health, Education, and Welfare, Records, Computers, and the Rights of Citizens: Report of the Secretary’s Advisory Committee on Automated Personal Data Systems (Cambridge, MA: MIT Press, 1973).

5 In essence, Fair Information Principles ‘are a set of internationally recognized practices for addressing the privacy of information about individuals’. R. Gellman, ‘Fair Information Practices: A Basic History’, unpublished manuscript, 17 June 2016, available at http://bobgellman.com/rg-docs/rg-FIPShistory.pdf.

6 OECD, The OECD Privacy Framework: Supplementary Explanatory Memorandum to the Revised OECD Privacy Guidelines (Paris: OECD, 2013).

7 Regulation 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation), OJ L [2016] 119/1.

8 The White House, Administration Discussion Draft: Consumer Privacy Bill of Rights Act, 2015, available at https://obamawhitehouse.archives.gov/sites/default/files/omb/legislative/letters/cpbr-act-of-2015-discussion-draft.pdf.

9 State of California Department of Justice, California Consumer Privacy Act (CCPA), 2020, available at https://oag.ca.gov/privacy/ccpa.

10 See, e.g., Executive Office of the President, Big Data: Seizing Opportunities, Preserving Values (Washington, DC: The White House, 2014); US Federal Trade Commission, Internet of Things: Privacy and Security in a Connected World (Washington, DC: Federal Trade Commission, 2015); Independent High-Level Expert Group on Artificial Intelligence, Ethics Guidelines for Trustworthy AI (Brussels: The European Commission, 2019); OECD, Recommendation of the Council on Artificial Intelligence, OECD/LEGAL/0449, 21 May 2019.

11 See, e.g., S. Rodotà, ‘Data Protection as a Fundamental Right’, in S. Gutwirth et al. (eds), Reinventing Data Protection? (New York: Springer, 2009).

12 W. Hartzog and N. M. Richards, ‘Privacy’s Constitutional Moment and the Limits of Data Protection’, Boston College Law Review 61 (2020), 16871761.

13 N. Couldry and U. A. Mejias, The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism (Stanford: Stanford University Press, 2019).

14 S. Zuboff, Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (New York: Hachette Book Group, 2019).

15 I. Lapowsky, ‘How Cambridge Analytica Sparked the Great Privacy Awakening’, Wired, 17 March 2019.

16 K. Manheim and L. Kaplan, ‘Artificial Intelligence: Risks to Privacy and Democracy’, Yale Journal of Law and Technology 21 (2019), 106188.

17 See, e.g., A. E. Waldman, Privacy as Trust: Information Privacy for an Information Age (Cambridge: University Printing House, 2018).

18 B. Mittelstadt, ‘From Individual to Group Privacy in Big Data Analytics’, Philosophy and Technology 30 (2017), 475494.

19 S. Wachter and B. Mittelstadt, ‘A Right to Reasonable Inferences: Re-thinking Data Protection Law in the Age of Big Data and AI’, Oxford Business Law Blog, 9 October 2018.

21 J. M. Balkin, ‘Information Fiduciaries and the First Amendment’, UC Davis Law Review 49 (2016), 11831234.

22 Similar shifts triggered a ‘rethinking’ exercise about a decade ago; see H. Burkert, ‘Towards Next Generation of Data Protection Legislation’, in S. Gutwirth et al. (eds), Reinventing Data Protection? (New York: Springer, 2009).

23 H. Burkert, ‘Theories of Information in Law’, Journal of Law and Information Science 1 (1982), 120130.

24 See, e.g., K. Chen, ‘Yanging and Hanging onto Our Own Data’, Berkeley Technology Law Journal Blog, 30 December 2019; M. Cantwell, ‘Cantwell, Senate Democrats Unveil Strong Online Privacy Rights’, Press Release, 26 November 2019, available at www.cantwell.senate.gov/news/press-releases/cantwell-senate-democrats-unveil-strong-online-privacy-rights; ‘Senator Wicker Circulates Draft Privacy Bill’, Hunton Andrews Kurth, 3 December 2019, available at www.huntonprivacyblog.com/2019/12/03/senator-wicker-circulates-draft-privacy-bill/; D. Shepardson, ‘Trump Administration Working on Consumer Data Privacy Policy’, Reuters, 27 July 2018.

25 N. Lomas, ‘Apple’s Tim Cook Makes Blistering Attack on the “Data Industrial Complex”’, TechCrunch, 24 October 2018.

26 See, e.g., ‘US 50-State Statutory and Legislative Charts’, IAPP, available at https://iapp.org/resources/article/us-50-state-statutory-and-legislative-charts/.

27 See, e.g., ‘Data Protection Laws of the World: Full Handbook’, IAPP, 6 March 2019, available at https://iapp.org/media/pdf/resource_center/Data-Protection-Full.pdf.

28 See, e.g., S. Couture and S. Toupin, ‘What Does the Concept of “Sovereignty” Mean in Digital, Network, and Technological Sovereignty?’, GigaNet Annual Symposium, 2017.

29 See, e.g., P. M. Schwartz and D. J. Solove, ‘The PII Problem: Privacy and a New Concept of Personally Identifiable Information’, New York University Law Review 86 (2011), 18141894; E. Ramirez, ‘Protecting Consumer Privacy in the Digital Age: Reaffirming the Role of Consumer Control’, Keynote Address of FTC Chairwoman Edith Ramirez Technology Policy Institute Aspen Forum, Aspen, 22 August 2016.

30 See, e.g., C. Dwork et al., ‘Exposed! A Survey of Attacks on Private Data’, Annual Review of Statistics and Its Application, 2017, 61–84; A. Fluitt et al., ‘Data Protection’s Composition Problem’, European Data Protection Law Review 5 (2019), 285292, at 285–286.

31 See D. J. Solove and D. Citron, ‘Risk and Anxiety: A Theory of Data Breach Harms’, Texas Law Review 96 (2018), 737786, at 745–746; ‘In re U.S. Office of Personnel Management Data Security Breach Litigation’, Harvard Law Review 133 (2020), 1095–1102.

32 K. Nissim and A. Wood, ‘Is Privacy Privacy?’, Philosophical Transactions of the Royal Society A 376 (2018), 119.

33 ‘Tradeoffs in the Right to Be Forgotten’, Harvard Civil Rights–Civil Liberties Law Review, 26 February 2012, available at https://harvardcrcl.org/tradeoffs-in-the-right-to-be-forgotten/.

34 See, e.g., I. Graef and J. Prüfer, ‘Mandated Data Sharing Is a Necessity in Specific Sectors’, Economomisch Statistische Berichten 103 (2018), 298301; C. L. Borgman, ‘The Conundrum of Sharing Research Data’, Journal of the American Society for Information Science and Technology 63 (2012), 10591078.

35 See generally Y. Benkler, ‘The Role of Technology in Political Economy: Part I’, Law and Political Economy, 25 July 2018, available at https://lpeblog.org/2018/07/25/the-role-of-technology-in-political-economy-part-1/; Y. Benkler, ‘The Role of Technology in Political Economy: Part II’, Law and Political Economy, 26 July 2018, available at https://lpeblog.org/2018/07/26/the-role-of-technology-in-political-economy-part-2/; Y. Benkler, ‘The Role of Technology in Political Economy: Part 3’, Law and Political Economy, 27 July 2018, available at https://lpeblog.org/2018/07/27/the-role-of-technology-in-political-economy-part-3/.

36 J. E. Cohen, Between Truth and Power: The Legal Constructions of Informational Capitalism (Oxford/New York: Oxford University Press, 2019).

37 See U. Gasser, ‘Perspectives on the Future of Digital Privacy’, Zeitschrift für Schweizerisches Recht 134 (2015), 338448, at 368–369. On the innovation-enabling function of law, see also A. Chander, ‘How Law Made Silicon Valley’, Emory Law Journal 63 (2014), 639694.

38 Footnote Ibid., at 368–369.

39 See, e.g., Boring v. Google Inc., 362 Fed. App’x 273, 278–80 (3d Cir. 2010).

40 See S. Ludington, ‘Reining in the Data Traders: A Tort for the Misuse of Personal Information’, Maryland Law Review 66 (2006), 140193, at 173.

41 See Solove and Citron, Footnote note 31.

42 See The White House, Footnote note 8.

43 M. Korolov, ‘California Consumer Privacy Act (CCPA): What You Need to Know to Be Compliant’, CSO, 4 October 2019, available at www.csoonline.com/article/3292578/california-consumer-privacy-act-what-you-need-to-know-to-be-compliant.html.

44 See D. D. Hirsch, ‘Protecting the Inner Environment: What Privacy Regulation Can Learn from Environmental Law’, Georgia Law Review 41 (2006), 163.

45 O. Ben-Shahar, ‘Data Pollution’, Coase-Sandor Working Paper in Law and Economics No 854 (2018).

46 See L. M. Ponte, ‘The Michigan Cyber Court: A Bold Experiment in the Development of the First Public Virtual Courthouse’, North Carolina Journal of Law and Technology 4 (2002), 5191.

47 J. M. Balkin and J. Zittrain, ‘A Grand Bargain to Make Tech Companies Trustworthy’, The Atlantic, 3 October 2016.

48 See G. W. van Blarkom, J. J. Borking, and J. G. E. Olk (eds), Handbook of Privacy and Privacy-Enhancing Technologies: The Case of Intelligent Software Agents (The Hague: CBP, 2003).

49 I. S. Rubinstein, ‘Regulating Privacy by Design’, Berkeley Technology Law Journal 26 (2011), 1409–1456, at 1411–1412.

50 See ‘Resolution on Privacy by Design’, 32nd International Conference of Data Protection and Privacy Commissioners, Jerusalem, 27–29 October 2010; also C. Perera et al., ‘Designing Privacy-Aware Internet of Things Applications’, Information Sciences 512 (2020), 238–257; M. Veale, R. Binns, and J. Ausloos, ‘When Data Protection by Design and Data Subject Rights Clash’, International Data Privacy Law 8 (2018), 105–123; A. Romanou, ‘The Necessity of the Implementation of Privacy by Design in Sectors Where Data Protection Concerns Arise’, Computer Law and Security Review 34 (2018), 99–110.

51 Specifically, Article 25 GDPR requires that data controllers, in order to protect the rights of data subjects, implement appropriate technical and organizational measures designed to both embed data protection principles and integrate safeguards into data processing. See, e.g., L. A. Bygrave, ‘Data Protection by Design and by Default: Deciphering the EU’s Legislative Requirements’, Oslo Law Review 4 (2017), 105–120.

52 See, e.g., G. Danezis et al., Privacy and Data Protection by Design – From Policy to Engineering (Heraklion: ENISA, 2014); European Data Protection Board, Guidelines 4/2019 on Article 25: Data Protection by Design and by Default, 13 November 2019.

53 See, e.g., D. K. Mulligan and K. A. Bamberger, ‘Saving Governance-by-Design’, California Law Review 106 (2018), 697–784.

54 S. Spiekermann-Hoff, ‘The Challenges of Privacy by Design’, Communications of the ACM 55 (2012), 38–40.

55 A. Tamò-Larrieux, Designing for Privacy and Its Legal Framework: Data Protection by Design and Default for the Internet of Things (Berlin: Springer, 2018).

56 See W. Hartzog, Privacy’s Blueprint: The Battle to Control the Design of New Technologies (Cambridge, MA: Harvard University Press, 2018).

57 A. Wood et al., ‘Differential Privacy: A Primer for a Non-technical Audience’, Vanderbilt Journal of Entertainment and Technology Law 21 (2018), 209–276.

58 S. Garfinkel, J. M. Abowd, and C. Martindale, ‘Understanding Database Reconstruction Attacks on Public Data’, ACM Queue 16 (2018), 1–26, at 5–7.

59 D. D. Hirsch, ‘From Individual Control to Social Protection: New Paradigms for Privacy Law in the Age of Predictive Analytics’, Ohio State Public Law Working Paper No 506 (2019).

60 M. Altman, S. Chong, and A. Wood, ‘Formalizing Privacy Laws for License Generation and Data Repository Decision Automation’, Proceedings on Privacy Enhancing Technologies 2 (2020), 1–19.

61 C. Busch, ‘Implementing Personalized Law: Personalized Disclosures in Consumer Law and Data Privacy Law’, University of Chicago Law Review 86 (2019), 309–331, at 312.

62 W. H. Lee et al., ‘Quantification of De-anonymization Risks in Social Networks’, ICISSP 2017 – Proceedings of the 3rd International Conference on Information Systems Security and Privacy, 1 January 2017.

63 S. Barth and M. D. T. de Jong, ‘The Privacy Paradox – Investigating Discrepancies between Expressed Privacy Concerns and Actual Online Behavior – A Systematic Literature Review’, Telematics and Informatics 34 (2017), 1038–1058.

64 For examples from research that illustrate the benefits of such a blended approach, see M. Altman et al., ‘Towards a Modern Approach to Privacy-Aware Government Data Releases’, Berkeley Technology Law Journal 30 (2015), 1967–2072; and I. S. Rubinstein and W. Hartzog, ‘Anonymization and Risk’, Washington Law Review 91 (2016), 703–760.

65 R. M. Groves and B. A. Harris-Kojetin (eds), Federal Statistics, Multiple Data Sources, and Privacy Protection: Next Steps (Washington, DC: The National Academies Press, 2017); S. L. Garfinkel, ‘De-identifying Government Datasets’, NIST Special Publication 800-188 (2016).

66 See, e.g., ‘Privacy Tools for Sharing Research Data’, Harvard University Privacy Tools Project, available at https://privacytools.seas.harvard.edu/project-description.

67 For a description of encryption standards for federal government information systems, see, for example, National Institute of Standards and Technology, Security Requirements for Cryptographic Modules: Federal Information Processing Standards, Federal Information Processing Standards Publication, FIPS PUB 140-2, 25 May 2001.

68 See T. Kirkham et al., ‘The Personal Data Store Approach to Personal Data Security’, IEEE Security and Privacy 11 (2013), 12–19, at 12–13.

69 S. Wachter, B. Mittelstadt, and L. Floridi, ‘Transparent, Explainable, and Accountable AI for Robotics’, Science Robotics 2 (2017).

70 M. Hutson, ‘Researchers Can Make AI Forget You’, IEEE Spectrum, 15 January 2020.

71 See C. Dwork, ‘Differential Privacy’, in H. C. A. van Tilborg and S. Jajodia (eds), Encyclopedia of Cryptography and Security, 2nd edn (New York: Springer, 2011), 338–340.

72 See Y. Lindell and B. Pinkas, ‘Secure Multiparty Computation for Privacy-Preserving Data Mining’, Journal of Privacy and Confidentiality 1 (2009), 59–98, at 60.

73 United States Census Bureau, ‘Disclosure Avoidance and the 2020 Census’, 19 December 2019, available at www.census.gov/about/policies/privacy/statistical_safeguards/disclosure-avoidance-2020-census.html.

74 R. Barlow, ‘Computational Thinking Breaks a Logjam’, Boston University, 27 April 2015, available at www.bu.edu/articles/2015/computational-thinking-breaks-a-logjam.

75 M. R. Warner, ‘Warner, Rubio, Wyden Reintroduce “Student Right to Know before You Go Act”’, Press Release, 7 March 2019, available at www.warner.senate.gov/public/index.cfm/2019/3/warner-rubio-wyden-reintroduce-student-right-to-know-before-you-go-act.

76 Harvard University Privacy Tools Project, note 66.

77 See, e.g., Altman et al., note 64.

78 ‘LINDDUN Privacy Engineering’, LINDDUN: Privacy Threat Modeling, available at www.linddun.org/.

79 K. Nissim et al., ‘Bridging the Gap between Computer Science and Legal Approaches to Privacy’, Harvard Journal of Law and Technology 31 (2018), 687–780.

80 Ibid.; Nissim and Wood, note 32; A. Cohen and K. Nissim, ‘Towards Formalizing the GDPR’s Notion of Singling Out’, arXiv:1904.06009, 12 April 2019, available at https://arxiv.org/abs/1904.06009.

81 See also H. Burkert, ‘Changing Patterns: Supplementary Approaches to Improving Data Protection’, Presentation at CIAJ 2005 Annual Conference on Technology, Privacy and Justice, Toronto, 2005.

82 See, e.g., US National Science and Technology Council, ‘National Privacy Research Strategy’, White House, June 2016, available at https://obamawhitehouse.archives.gov/sites/default/files/nprs_nstc_review_final.pdf.

83 Examples in the field of research are initiatives such as the Privacy Tools for Sharing Research Data at Harvard University mentioned earlier, which brings together computer scientists, statisticians, legal scholars, and social scientists to tackle difficult problems at the intersection of privacy and technology, or the efforts by the Center on Privacy and Technology at Georgetown University Law Center, which aims to build interdisciplinary bridges between law and computer science with respect to privacy. Interdisciplinary courses in privacy at Princeton, CMU, MIT, and Harvard serve as possible sources of inspiration in the educational realm. See, e.g., Massachusetts Institute of Technology, Course: Privacy Legislation: Law and Technology, available at https://groups.csail.mit.edu/mac/classes/6.S978; Harvard Law School, Course: Comparative Online Privacy, available at http://hls.harvard.edu/academics/curriculum/catalog/default.aspx?o=69463; Carnegie Mellon University, Course: Privacy Policy, Law, and Technology, available at https://cups.cs.cmu.edu/courses/pplt-fa16; A. Narayanan, Privacy Technologies: An Annotated Syllabus, Princeton University, available at www.cs.princeton.edu/~arvindn/publications/privacyseminar.pdf.

84 See L. Sweeney, ‘Technology Science’, Federal Trade Commission Blog, 2 May 2014, available at www.ftc.gov/news-events/blogs/techftc/2014/05/technology-science.

85 See, e.g., D. J. Weitzner et al., Computer Science and Artificial Intelligence Laboratory Technical Report: Information Accountability (Cambridge, MA: MIT Press, 2007); L. Kagal and J. Pato, ‘Preserving Privacy Based on Semantic Policy Tools’, IEEE Security and Privacy 8 (2010), 25–30; H. DeYoung et al., ‘Experiences in the Logical Specification of the HIPAA and GLBA Privacy Laws’, in Proceedings of the 9th Annual ACM Workshop on Privacy in the Electronic Society (New York: ACM, 2010), 73–82; US National Academies of Sciences, Engineering, and Medicine, Innovations in Federal Statistics: Combining Data Sources While Protecting Privacy (Washington, DC: The National Academies Press, 2017); Groves and Harris-Kojetin, note 65.

86 See C. J. Bennett and C. Raab, The Governance of Privacy: Policy Instruments in Global Perspective (Cambridge, MA: MIT Press, 2006).

87 V. Mayer-Schönberger, ‘Beyond Privacy, Beyond Rights – Toward a “Systems” Theory of Information Governance’, California Law Review 98 (2010), 1853–1885, at 1883 (emphasis in the original).

88 See M. Hildebrandt, Smart Technologies and the End(s) of Law: Novel Entanglements of Law and Technology (Cheltenham: Edward Elgar, 2015).

10 The Algorithmic Learning Deficit: Artificial Intelligence, Data Protection and Trade

* Svetlana Yakovleva is a Postdoctoral Researcher at the Institute for Information Law (IViR), University of Amsterdam and Senior Legal Adviser at De Brauw Blackstone Westbroek, Amsterdam. Contact: mail@svyakovleva.com. Joris van Hoboken is Associate Professor at the Institute for Information Law (IViR), University of Amsterdam and Professor of Law at the Interdisciplinary Research Group on Law Science Technology & Society (LSTS), Vrije Universiteit Brussel. Contact: j.v.j.vanhoboken@uva.nl.

1 UNCTAD, Digital Economy Report 2019: Value Creation and Capture: Implications for Developing Countries (New York/Geneva: United Nations Publications, 2019), at 17.

2 Ibid., at 48.

3 See, e.g., S. A. Aaronson, ‘Data Minefield? How AI Is Prodding Governments to Rethink Trade in Data’, in CIGI (ed), Special Report: Data Governance in the Digital Age (Waterloo: CIGI, 2018).

4 UNCTAD, note 1, at 24 et seqq.

5 J. Crémer, Y.-A. de Montjoye, and H. Schweitzer, Competition Policy for the Digital Era (Luxembourg: Publications Office of the European Union, 2019), at 73.

6 For an overview, see OECD, AI Initiatives Worldwide, available at www.oecd.org/going-digital/ai/initiatives-worldwide/.

7 S. Azmeh and C. Foster, ‘The TPP and the Digital Trade Agenda: Digital Industrial Policy and Silicon Valley’s Influence on New Trade Agreements’, LSE Working Paper No 16-175 (2016); J.-A. Monteiro and R. Teh, ‘Provisions on Electronic Commerce in Regional Trade Agreements’, WTO Working Paper No ERSD-2017-11 (2017). See also Chapter 1 in this volume.

8 UNCTAD, note 1, at 8–9.

10 Ibid., at 8–9, 21.

11 Ibid., at 11.

12 Ibid., at 15.

13 Ibid., at 89.

15 European Commission, White Paper on Artificial Intelligence – A European Approach to Excellence and Trust, COM(2020) 65 final, 19 February 2020 [hereinafter: White Paper on Artificial Intelligence].

16 UNCTAD, note 1, at 91. For an overview and discussion, see S. Yakovleva, ‘Privacy Protection(ism): The Latest Wave of Trade Constraints on Regulatory Autonomy’, University of Miami Law Review 74 (2020), 416–519, at 469 et seqq. See also Chapter 3 in this volume.

17 Yakovleva, note 16, at 473, 482; UNCTAD, note 1, at 88–89.

18 M. Burri, ‘The Regulation of Data Flows through Trade Agreements’, Georgetown Journal of International Law 48 (2017), 407–448, at 417.

19 A. D. Mitchell and N. Mishra, ‘Data at the Docks: Modernizing International Trade Law for the Digital Economy’, Vanderbilt Journal of Entertainment and Technology Law 20 (2018), 1073–1134, at 1111.

20 See Chapter 1 in this volume.

21 European Commission, ‘76 WTO Partners Launch Talks on E-Commerce’, News Archive, 26 January 2019, available at http://trade.ec.europa.eu/doclib/press/index.cfm?id=1974.

22 Yakovleva, note 16, at 469 et seqq. See also Chapter 12 in this volume, in particular with regard to the position of China.

23 M. Burri, ‘The Governance of Data and Data Flows in Trade Agreements: The Pitfalls of Legal Adaptation’, UC Davis Law Review 51 (2017), 65–132, at 99; S. A. Aaronson, ‘Redefining Protectionism: The New Challenge in the Digital Age’, IIEP Working Paper No 30 (2016), at 59; M. Geist, ‘Data Rules in Modern Trade Agreements: Toward Reconciling an Open Internet with Privacy and Security Safeguards’, in CIGI (ed), Special Report: Data Governance in the Digital Age (Waterloo: CIGI, 2018).

24 This provision was included in CPTPP before the US withdrawal from the agreement. The version of the agreement with the United States as a party was known as the Transpacific Partnership Agreement (TPP). See Executive Office of the President, Office of the United States Trade Representative, Letter to the TPP Depository, 30 January 2017.

25 Article 14.11(2) CPTPP and Article 19.11(1) USMCA. For other agreements containing a similar rule, see Chapter 1 in this volume.

26 I. Manak, ‘US WTO E-Commerce Proposal Reads Like USMCA’, International Economic Law and Policy Blog, 8 May 2019, available at https://worldtradelaw.typepad.com/ielpblog/2019/05/us-wto-e-commerce-proposal-reads-like-usmca.html.

27 Article 14.11(3) CPTPP, Article 19.11(2) USMCA and Article 11 US–Japan DTA contain an almost identical provision. Emphasis added.

28 General Agreement on Trade in Services, 1869 U.N.T.S. 183; 33 I.L.M. 1167 (1994), entered into force 1 January 1995 [hereinafter: GATS].

29 P. Delimatsis, ‘Protecting Public Morals in a Digital Age: Revisiting the WTO Rulings on US – Gambling and China – Publications and Audiovisual Products’, Journal of International Economic Law 14 (2011), 1–37; I. Venzke, ‘Making General Exceptions: The Spell of Precedents in Developing Article XX GATT into Standards for Domestic Regulatory Policy’, German Law Journal 12 (2011), 1111–1140, at 1118–1119.

30 For more references, discussion and critique in the privacy and data protection context, see S. Yakovleva, ‘Should Fundamental Rights to Privacy and Data Protection Be a Part of the EU’s International Trade “Deals”?’, World Trade Review 17 (2018), 477–508; S. Yakovleva, ‘Personal Data Transfers in International Trade and EU Law: A Tale of Two “Necessities”’, Journal of World Investment and Trade 21 (2020), 881–919.

31 Article 14.8 of CPTPP and Article 19.8 of USMCA. These articles are discussed in more detail in S. Yakovleva, ‘Privacy and Data Protection in the EU- and US-led Post-WTO Free Trade Agreements’, in R. Hoffmann and M. Krajewski (eds), European Yearbook of International Economic Law (Berlin: Springer, 2020), 95–115.

32 For an elaborate discussion of the US and EU digital trade discourses, see Yakovleva, note 16, at 469 et seqq.

33 For more details on the reasons for this, see Yakovleva, note 16, at 492–493. For the first time, the EU included binding provisions on cross-border data flows in Article DIGIT 6 of the 2021 EU–UK Trade and Cooperation Agreement.

34 Respectively, Articles 7 and 8 of the Charter of Fundamental Rights of the European Union (2000/C 364/01), OJ C [2000] 364/1.

35 C-362/14, Maximilian Schrems v. Data Protection Commissioner and Digital Rights Ireland Ltd. [2015], ECLI:EU:C:2015:650 [hereinafter: Schrems], at para. 72. This goal is now explicitly incorporated in Article 44 GDPR.

36 Regulation 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation, GDPR), OJ L [2016] 119/1.

37 Article 44 GDPR; Schrems, note 35, para. 72. See also G. González Fuster, ‘Un-Mapping Personal Data Transfers’, European Data Protection Law Review 2 (2016), 160–168, at 168. Restrictions are provided for in chapter V, GDPR. For an overview of restrictions, see Yakovleva, note 31.

38 Article 2(2) Regulation 2018/1807 of the European Parliament and of the Council on a Framework for the Free Flow of Non-personal Data in the European Union, OJ L [2018] 303/59, 28 November 2018 [hereinafter: EU Regulation 2018/1807]; European Commission, Guidance on the Regulation on a Framework for the Free Flow of Non-personal Data in the European Union, COM(2019) 250 final, 29 May 2019, at para. 2.2.

39 EU Regulation 2018/1807, note 38, Recital 9.

40 A. Chander and U. P. Lê, ‘Breaking the Web: Data Localization vs. the Global Internet’, UC Davis Legal Studies Research Paper No 378, at 40; A. Goldfarb and D. Trefler, ‘AI and International Trade’, NBER Working Paper No 24254 (2018), at 20–22.

41 The notion of ‘transfer’ of personal data is not clearly defined in the GDPR or in the guidance of the Data Protection Authorities. It can, however, be inferred from the existing guidance on the mechanisms for transfers of personal data that a ‘transfer’ is understood broadly, as it also captures continuous cross-border access to EEA personal data from abroad. See European Data Protection Board, Guidelines 2/2018 on Derogations of Article 49 under Regulation 2016/679, 25 May 2018.

42 Article 8.81 of the EU–Japan EPA. The same provision is also included in Article XX of chapter 16 of the draft EU–Mexico FTA, negotiated roughly at the same time as the EU–Japan EPA. See also B. Fortnam, ‘EU Punts on Data Flow Language in Japan Deal, Leaving Position Unresolved’, Inside US Trade, 7 June 2017.

43 European Commission, ‘European Commission Adopts Adequacy Decision on Japan, Creating the World’s Largest Area of Safe Data Flows’, Press Release, 23 January 2019.

44 K. Irion, S. Yakovleva, and M. Bartl, Trade and Privacy: Complicated Bedfellows? How to Achieve Data Protection-Proof Free Trade Agreements (Amsterdam: Institute for Information Law, 2016), at 44–45, 59–60; M. Fernández Pérez, ‘Corporarivacy Confusion in the EU on Trade and Data Protection’, EDRi, 12 October 2016; European Parliament, Resolution of 8 July 2015 Containing the European Parliament’s Recommendations to the European Commission on the Negotiations for the Transatlantic Trade and Investment Partnership (TTIP) (2014/2228(INI)); European Parliament, Resolution of 3 February 2016 Containing the European Parliament’s Recommendations to the Commission on the Negotiations for the Trade in Services Agreement (TiSA) (2015/2233(INI)).

45 European Commission, Horizontal Provisions for Cross-Border Data Flows and for Personal Data Protection in EU Trade and Investment Agreements, February 2018, available at https://trade.ec.europa.eu/doclib/docs/2018/may/tradoc_156884.pdf.

46 European Commission, EU’s Proposal for the Digital Trade Chapter of EU–New Zealand FTA, 25 September 2018 [hereinafter: EU Proposal Digital Trade Chapter EU–New Zealand FTA], available at http://trade.ec.europa.eu/doclib/docs/2018/december/tradoc_157581.pdf; European Commission, EU’s Proposal for the Digital Trade Chapter of EU–Australia FTA, 10 October 2018 [hereinafter: EU Proposal Digital Trade Chapter EU–Australia FTA], available at http://trade.ec.europa.eu/doclib/docs/2018/december/tradoc_157570.pdf; European Commission, EU’s Proposal for the Digital Trade Chapter of EU–Tunisia FTA, 9 November 2018, available at https://trade.ec.europa.eu/doclib/docs/2019/january/tradoc_157660.%20ALECA%202019%20-%20texte%20commerce%20numerique.pdf; European Commission, Report of the 5th Round of Negotiations for a Free Trade Agreement between the European Union and Indonesia, 9–13 July 2018, Brussels, available at http://trade.ec.europa.eu/doclib/docs/2018/july/tradoc_157137.pdf. The EU’s Proposal for Digital Trade Chapter for a Modernised EU–Chile Association Agreement only contains a placeholder for provisions on data flows (see EU–Chile FTA, 5 February 2018, available at https://trade.ec.europa.eu/doclib/docs/2018/february/tradoc_156582.pdf).

47 WTO, Joint Statement on Electronic Commerce: EU Proposal for WTO Disciplines and Commitments Relating to Electronic Commerce, Communication from the European Union, INF/ECOM/22, 26 April 2019 [hereinafter: EU Proposal Joint Statement Initiative].

48 See, e.g., Article X.1(2) of the EU proposal for Chapter X, ‘Exceptions’ of the EU–New Zealand FTA, 25 June 2019, available at https://trade.ec.europa.eu/doclib/docs/2019/july/tradoc_158278.pdf [hereinafter: Proposal for Exceptions]. This provision includes a general exception for privacy and data protection modelled after the general exception in Article XIV(c)(ii) GATS. EU proposals for ‘Exceptions’ chapters of other FTAs discussed in this chapter are not available as of the time of writing.

49 Articles DIGIT. 6 and DIGIT. 7 of the TCA; for a critical assessment from a data protection perspective, see Opinion 3/2021 of the European Data Protection Supervisor on the conclusion of the EU and UK trade agreement and the EU and UK exchange of classified information agreement.

50 For argumentation on this point, see Yakovleva, note 16, at 507–511.

51 Article XIV bis GATS.

52 The national security exception is the broadest of all the existing exceptions in international trade law. It is for this reason that it was labelled as ‘all-embracing and seemingly omnipotent’. See J. Yeong Yoo and D. Ahn, ‘Security Exceptions in the WTO System: Bridge or Bottle-Neck for Trade and Security?’, Journal of International Economic Law 19 (2016), 417–444, at 426.

53 Proposal for Exceptions, note 48.

54 See, for example, Articles 2 (right to life), 6 (right to liberty and security), 37 (environmental protection) EU Charter of Fundamental Rights.

55 K. Lenaerts, ‘Exploring the Limits of the EU Charter of Fundamental Rights’, European Constitutional Law Review 8 (2012), 375–403, at 392–393.

56 Compare White Paper on Artificial Intelligence (note 15) with EDPB Response to MEP Sophie in ’t Veld’s Letter on Unfair Algorithms, 29 January 2020, available at https://edpb.europa.eu/our-work-tools/our-documents/letters/edpb-response-mep-sophie-int-velds-letter-unfair-algorithms_en.

57 C. Foster and S. Azmeh, ‘Latecomer Economies and National Digital Policy: An Industrial Policy Perspective’, The Journal of Development Studies 56 (2020), 1–17.

58 Mitchell and Mishra, note 19, at 1079.

59 See European Commission, A New Industrial Strategy for Europe, COM(2020) 102 final, 10 March 2020; White Paper on Artificial Intelligence, note 15. See also K. Propp, ‘Waving the Flag of Digital Sovereignty’, Atlantic Council, 11 December 2019.

60 UNCTAD, note 1, at 91.

61 European Commission, A European Strategy for Data, COM(2020) 66 final, 19 February 2020, at 5 (emphasis added).

62 J. P. Meltzer and C. F. Kerry, ‘Cybersecurity and Digital Trade: Getting It Right’, Brookings, 18 September 2019.

63 R. D. Williams, ‘Reflections on TikTok and Data Privacy as National Security’, Lawfare, 15 November 2019.

64 UNCTAD, note 1, at 137.

65 S. Yakovleva and K. Irion, ‘Toward Compatibility of the EU Trade Policy with the General Data Protection Regulation’, AJIL Unbound 114 (2020), 10–14, at 14.

66 Article 19.16 USMCA; Article 17 US–Japan DTA.

67 EU Proposal Joint Statement Initiative, note 47, at para. 2.6.

68 Article 9 of the draft EU–Mexico FTA.

69 Article 11 EU Proposal Digital Trade Chapter EU–Australia FTA.

70 Article 11 EU Proposal Digital Trade Chapter EU–New Zealand FTA.

71 B.-J. Koops, ‘The Trouble with European Data Protection Law’, International Data Privacy Law 4 (2014), 250–261, at 257.

72 D. J. B. Svantesson, ‘The Regulation of Cross-Border Data Flows’, International Data Privacy Law 1 (2011), 180–198, at 184.

73 See, e.g., O. Tene and J. Polonetsky, ‘Big Data for All: Privacy and User Control in the Age of Analytics’, Northwestern Journal of Technology and Intellectual Property 11 (2013), 239–273; N. Purtova, ‘The Law of Everything: Broad Concept of Personal Data and Future of EU Data Protection Law’, Law, Innovation and Technology 10 (2018), 40–81; P. Ohm, ‘Broken Promises of Privacy’, UCLA Law Review 57 (2010), 1701–1777.

74 P. M. Schwartz and D. J. Solove, ‘The PII Problem: Privacy and a New Concept of Personally Identifiable Information’, New York University Law Review 86 (2011), 1814–1894, at 1836–1848.

77 EU Regulation 2018/1807, note 38, at Recital 9.

79 M. Veale, R. Binns, and L. Edwards, ‘Algorithms That Remember: Model Inversion Attacks and Data Protection Law’, Philosophical Transactions of the Royal Society A 376 (2018), 1–15.

80 White Paper on Artificial Intelligence, note 15, at 1.

81 See, e.g., Articles 5, 30 of the Proposal for a Regulation of the European Parliament and of the Council on European data governance (Data Governance Act), COM/2020/767 final.

82 S. Gürses and J. van Hoboken, ‘Privacy after the Agile Turn’, in E. Selinger, J. Polonetsky, and O. Tene (eds), The Cambridge Handbook of Consumer Privacy (Cambridge: Cambridge University Press, 2018), 597–601.

84 Directive 2019/770 of the European Parliament and of the Council on Certain Aspects Concerning Contracts for the Supply of Digital Content and Digital Services, OJ L [2019] 136/1, 22 May 2019. For discussion, see European Data Protection Supervisor (EDPS), Opinion 4/2017 on the Proposal for a Directive on Certain Aspects Concerning Contracts for the Supply of Digital Content, 14 March 2017.

85 Ibid., at 3 (emphasis added).

86 UNCTAD, note 1, at 89.

88 Ibid. (emphasis added).

89 N. Couldry and U. A. Mejias, ‘Data Colonialism: Rethinking Big Data’s Relation to the Contemporary Subject’, Television and New Media 20 (2019), 336–349, at 337.

90 Ibid., at 338.

91 Ibid., but see M. Mueller and K. Grindal, ‘Data Flows and the Digital Economy: Information as a Mobile Factor of Production’, Digital Policy, Regulation and Governance 21 (2019), 71–87, at 82, challenging this point of view.

92 Couldry and Mejias, note 89, at 337–338.

93 S. Zuboff, ‘Big Other: Surveillance Capitalism and the Prospects of an Information Civilization’, Journal of Information Technology 30 (2015), 75–89.

94 German Data Ethics Commission, Opinion of the Data Ethics Commission: Executive Summary (Berlin: Data Ethics Commission of the Federal Government, 2019), at 9–10.

95 For an overview, see UNCTAD, note 1, at 132–134.

96 J. Hardinges, ‘What Is a Data Trust? What’s the Definition and How Is One Applied?’, Open Data Institute, 10 July 2018; S. Delacroix and N. D. Lawrence, ‘Bottom-Up Data Trusts: Disturbing the “One Size Fits All” Approach to Data Governance’, International Data Privacy Law 9 (2019), 236–252; W. Hall and J. Pesenti, Growing the Artificial Intelligence Industry in the UK (London: Government of the United Kingdom, 2017).

97 Hall and Pesenti suggest that the trusts should take the form of a repeatable framework. Ibid.

98 Motie Buitenweg c.s. over vormgeving van data trusts in Nederland [Motion by Buitenweg et al. on the design of data trusts in the Netherlands] – Initiatiefnota van het lid Verhoeven over mededinging in de digitale economie [Policy paper by the member Verhoeven on competition in the digital economy], Tweede Kamer der Staten-Generaal, 35134 nr. 7, 18 December 2019, available at www.parlementairemonitor.nl/9353000/1/j9vvij5epmj1ey0/vl4jjboml8yr.

99 UNCTAD, note 1, at 132.

100 See, e.g., E. Morozov, ‘To Tackle Google’s Power, Regulators Have to Go after Its Ownership of Data’, The Guardian, 2 July 2017.

11 Panta Rhei: A European Perspective on Ensuring a High Level of Protection of Human Rights in a World in Which Everything Flows

* Kristina Irion is Associate Professor at the Institute for Information Law (IViR), University of Amsterdam. I would like to enthusiastically thank Dr Mira Burri and her team as well as the participants of the conference ‘Big Data and Global Trade Law’, 16–17 November 2018, Lucerne, Switzerland. Contact: k.irion@uva.nl.

1 The phrase is actually Plato’s interpretation, based on an aphorism from the Heraclitean River Fragments that reads in English translation: ‘On those stepping into rivers staying the same, other and other waters flow’. See D. W. Graham, ‘Heraclitus: Flux, Order, and Knowledge’, in P. Curd and D. W. Graham (eds), The Oxford Handbook of Presocratic Philosophy, Vol. 1 (Oxford: Oxford University Press, 2009), 167–188.

2 J. E. Cohen, Between Truth and Power (Oxford: Oxford University Press, 2019), at 200.

3 United Nations Secretary-General’s High-Level Panel on Digital Cooperation, Report of the UN Secretary-General’s High-Level Panel on Digital Cooperation: The Age of Digital Interdependence (New York/Geneva: United Nations Publications, 2019).

4 N. A. Smuha, ‘Beyond a Human Rights-Based Approach to AI Governance: Promise, Pitfalls, Plea’, Philosophy and Technology (2020).

5 C. Breining-Kaufmann, ‘The Legal Matrix of Human Rights and Trade Law: State Obligations versus Private Rights and Obligations’, in T. Cottier, J. Pauwelyn, and E. Bürgi (eds), Human Rights and International Trade (Oxford: Oxford University Press, 2005), 95–136, at 104.

6 E.-U. Petersmann, ‘Need for a New Philosophy of International Economic Law and Adjudication’, Journal of International Economic Law 17 (2014), 639–669, at 663.

7 Instructive on this: B. H. Bratton, The Stack: On Software and Sovereignty (Cambridge, MA: MIT Press, 2015).

8 T. Sutherland, ‘Liquid Networks and the Metaphysics of Flux: Ontologies of Flow in an Age of Speed and Mobility’, Theory, Culture and Society 30 (2013), 3–23.

9 M. Castells, The Rise of the Network Society, 2nd edn (Oxford: Blackwell, 2010), at 442.

10 D. Lupton, Digital Sociology (Abingdon: Routledge, 2014), at 106.

12 See, e.g., B. Bodo et al., ‘Tackling the Algorithmic Control Crisis – The Technical, Legal, and Ethical Challenges of Research into Algorithmic Agents’, Yale Journal of Law and Technology 19 (2017), 133–180; J.-C. Plantin et al., ‘Infrastructure Studies Meet Platform Studies in the Age of Google and Facebook’, New Media and Society 20 (2018), 293–310; A. Helmond, The Web as Platform (PhD thesis, University of Amsterdam, 2015).

13 D. Held et al., Global Transformations: Politics, Economics, and Culture (Stanford: Stanford University Press, 1999), at 16.

14 J. Manyika et al., Digital Globalization: The New Era of Global Flows (Washington, DC: McKinsey Global Institute, 2016).

15 G20, ‘G20 Osaka Leaders’ Declaration’, 2019, available at www.consilium.europa.eu/media/40124/final_g20_osaka_leaders_declaration.pdf, at para. 11.

16 General Assembly of the United Nations, Universal Declaration of Human Rights, 3rd Session, A/RES/217(III), adopted 10 December 1948.

17 United Nations Human Rights Council, The Promotion, Protection and Enjoyment of Human Rights on the Internet, A/HRC/RES/32/13, adopted on 18 July 2016; United Nations Human Rights Council, The Promotion, Protection and Enjoyment of Human Rights on the Internet, A/HRC/38/L.10/Rev.1, adopted 4 July 2018.

18 One exception is the European Convention on Human Rights, which is enforceable through the European Court of Human Rights for state members of the Council of Europe.

19 Petersmann, note 6, at 644.

20 Cohen, note 2, at 239.

21 Ibid.; J. van Dijck, T. Poell, and M. de Waal (eds), The Platform Society (Oxford: Oxford University Press, 2018).

22 F. Turner, From Counterculture to Cyberculture (Chicago: The University of Chicago Press, 2006); I. de Sola Pool, Technologies of Freedom (Cambridge, MA: Harvard University Press, 1983).

23 J. P. Barlow, ‘A Declaration of the Independence of Cyberspace’, 8 February 1996, available at http://homes.eff.org/~barlow/Declaration-Final.html.

24 H. Grotius, The Freedom of the Seas, trans. R. van Deman Magoffin, ed. J. Brown Scott (Oxford: Oxford University Press, 1916).

25 See L. DeNardis, ‘Hidden Levers of Internet Control: An Infrastructure-Based Theory of Internet Governance’, Information, Communication and Society 15 (2012), 720–738; J. Hofmann, C. Katzenbach, and K. Gollatz, ‘Between Coordination and Regulation: Finding the Governance in Internet Governance’, New Media and Society 19 (2017), 1406–1423.

26 F. Bignami and G. Resta, ‘Human Rights Extraterritoriality: The Right to Privacy and National Security Surveillance’, in E. Benvenisti and G. Nolte (eds), Community Interests Across International Law (Oxford: Oxford University Press, 2018), at 357.

27 C. Ryngaert and M. Zoetekouw, ‘The End of Territory? The Re-Emergence of Community as a Principle of Jurisdictional Order in the Internet Era’, in U. Kohl (ed), The Net and the Nation State: Multidisciplinary Perspectives on Internet Governance (Cambridge: Cambridge University Press, 2017).

28 Bignami and Resta, note 26.

29 R. MacKinnon, Consent of the Networked: The Worldwide Struggle for Internet Freedom (New York: Basic Books, 2012).

30 L. A. Bygrave, Internet Governance by Contract (Oxford: Oxford University Press, 2015).

31 Well recorded is the debate between the proponents of exceptionalism and its opponents, the non-exceptionalists, arguing over the source of authority that should regulate the Internet. See, e.g., D. R. Johnson and D. G. Post, ‘Law and Borders: The Rise of Law in Cyberspace’, Stanford Law Review 48 (1996), 1367–1402; J. Goldsmith and T. Wu, Who Controls the Internet: Illusions of a Borderless World (Oxford: Oxford University Press, 2006), for the opposing positions.

32 H. L. Buxbaum, ‘Territory, Territoriality, and the Resolution of Jurisdictional Conflict’, American Journal of Comparative Law 57 (2009), 631–675, at 635.

34 J. R. Reidenberg, ‘Technology and Internet Jurisdictions’, University of Pennsylvania Law Review 153 (2005), 1951–1974.

35 Regulation 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC, OJ L [2016] 119/1.

36 M. Gömann, ‘The New Territorial Scope of EU Data Protection Law: Deconstructing a Revolutionary Achievement’, Common Market Law Review 54 (2017), 567–590; C. Ryngaert and M. Taylor, ‘The GDPR as Global Data Protection Regulation?’, AJIL Unbound 114 (2020), 5–9.

37 The California Consumer Privacy Act of 2018, AB-375, 28 June 2018.

38 For details, see Ryngaert and Taylor, note 36.

39 D. J. B. Svantesson, ‘The Regulation of Cross-Border Data Flows’, International Data Privacy Law 1 (2011), 180–198; C. Kuner, ‘Reality and Illusion in EU Data Transfer Regulation Post Schrems’, German Law Journal 18 (2017), 881–918.

40 P. S. Berman, ‘Conflicts of Law and the Challenge of Transnational Data Flows’, in P. Zumbansen (ed), The Many Lives of Transnational Law: Critical Engagements with Jessup’s Bold Proposal (Cambridge: Cambridge University Press, 2020), 240–268.

41 J. Barry and E. Pollman, ‘Regulatory Entrepreneurship’, Southern California Law Review 90 (2016), 383–448; Cohen, note 2, at 329.

42 M. Burri, Current and Emerging Trends in Disruptive Technologies: Implications for the Present and Future of EU’s Trade Policy (Brussels: European Parliament, 2017), at 11; J. P. Meltzer, ‘Governing Digital Trade’, World Trade Review 18 (2019), 23–48.

43 See H. Murphy and A. Kellow, ‘Forum Shopping in Global Governance: Understanding States, Business and NGOs in Multiple Arenas’, Global Policy 4 (2013), 139–149.

44 For instance, the Internet Governance Forum (IGF), see www.intgovforum.org/multilingual/; the NetMundial initiative, see https://netmundial.org/; and RightsCon, see www.rightscon.org/.

45 M. Burri and T. Cottier, ‘Introduction’, in M. Burri and T. Cottier (eds), Trade Governance in the Digital Age (Cambridge: Cambridge University Press), 1–14, at 6. See also Chapter 1 in this volume.

46 See Chapter 2 in this volume.

47 The Comprehensive and Progressive Agreement for Transpacific Partnership, available at http://international.gc.ca/trade-commerce/trade-agreements-accords-commerciaux/agr-acc/cptpp-ptpgp/text-texte/index.aspx?lang=eng. The CPTPP incorporates by reference the original Trans-Pacific Partnership Agreement (TPP) signed in 2016 and later abandoned by the incoming US administration.

48 See Chapter 1 in this volume.

49 For an overview of the WTO work on e-commerce, see, e.g., S. Yakovleva and K. Irion, ‘Pitching Trade against Privacy: Reconciling EU Governance of Personal Data Flows with External Trade’, International Data Privacy Law 10 (2020), 1–21; J. Kelsey, ‘How a TPP-Style E-Commerce Outcome in the WTO Would Endanger the Development Dimension of the GATS Acquis (and Potentially the WTO)’, Journal of International Economic Law 21 (2018), 273–295; S. Wunsch-Vincent, ‘Trade Rules for the Digital Age’, in M. Panizzon, N. Pohl, and P. Sauvé (eds), GATS and the Regulation of International Trade in Services (Cambridge: Cambridge University Press, 2008), 497–529.

50 WTO, Joint Statement on Electronic Commerce, WT/L/1056, 25 January 2019.

51 C. D. Creamer, ‘From the WTO’s Crown Jewel to Its Crown of Thorns’, AJIL Unbound 113 (2019), 51–55.

52 Kelsey, note 49, at 275.

53 General Agreement on Trade in Services, 15 April 1994, Marrakesh Agreement Establishing the World Trade Organization, Annex 1B, 1869 U.N.T.S. 183 [hereinafter: GATS].

54 See, e.g., M. Krajewski, National Regulation and Trade Liberalization in Services: The Legal Impact of the General Agreement on Trade in Services (GATS) on National Regulatory Autonomy (The Hague: Kluwer Law International, 2003).

55 Recital 3, Preamble to the GATS.

56 WTO Appellate Body Report, Argentina – Measures Relating to Trade in Goods and Services, WT/DS453/AB/R, adopted 9 May 2016, at para. 6.115.

57 P. van den Bossche and W. Zdouc, The Law and Policy of the World Trade Organization, 3rd edn (Cambridge: Cambridge University Press, 2014), at 515.

58 Yakovleva and Irion, note 49, at 20; K. Irion and S. Yakovleva, ‘The Best of Both Worlds? Free Trade in Services and EU Law on Privacy and Data Protection’, European Data Protection Law Review 2 (2016), 191–208; S. Yakovleva and K. Irion, ‘Toward Compatibility of the EU Trade Policy with the General Data Protection Regulation’, AJIL Unbound 114 (2020), 10–14.

59 See, e.g., M. Brundage et al., The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, 2018, available at https://maliciousaireport.com/.

60 See Chapter 5 in this volume.

61 K. Irion and J. Williams, Prospective Policy Study on Artificial Intelligence and EU Trade Policy (Amsterdam: Institute for Information Law, 2020).

62 L. Stark, ‘Facial Recognition Is the Plutonium of AI’, XRDS: Crossroads, the ACM Magazine for Students 25 (2019), 50–55.

63 S. A. Aaronson, ‘Why Trade Agreements Are Not Setting Information Free: The Lost History and Reinvigorated Debate over Cross-Border Data Flows, Human Rights and National Security’, World Trade Review 14 (2015), 671–700.

64 Kelsey, note 49, at 256.

65 Cohen, note 2, at 238.

66 See, e.g., Reidenberg, note 34; H. Farrell and A. L. Newman, Of Privacy and Power: The Transatlantic Struggle over Freedom and Security (Princeton, NJ: Princeton University Press, 2019), at 27 et seqq.; P. P. Swire and R. E. Litan, ‘None of Your Business: World Data Flows, Electronic Commerce, and the European Privacy Directive’, Harvard Journal of Law and Technology 12 (1999), 683–702.

67 G. de Búrca and J. Scott, ‘The Impact of the WTO on EU Decision-Making’, in G. de Búrca and J. Scott (eds), The EU and the WTO: Legal and Constitutional Issues (Oxford: Hart Publishing, 2001).

68 Irion and Williams, note 61.

69 Petersmann, note 6, at 663.

70 P. Alston, ‘Resisting the Merger and Acquisition of Human Rights by Trade Law: A Reply to Petersmann’, European Journal of International Law 13 (2002), 815–844.
