Technological development has transformed the way societies function, creating unprecedented opportunities while giving rise to complex human rights issues. Digital technologies, from artificial intelligence and algorithmic decision-making to data collection and surveillance systems, have altered systems of governance and individual autonomy. While these technologies have the potential to enhance human rights – by improving access to information, supporting equality, and fostering innovation – they also risk undermining them through privacy violations, discrimination, and erosion of accountability. As the United Nations Secretary-General has emphasised, we need to ‘ensure that new technologies are anchored in the values of […] the Universal Declaration of Human Rights’.Footnote 1 Accordingly, this part of the volume examines how human rights law should adapt to the ever-evolving situation. Each chapter addresses the core question: How should human rights law respond to the challenges of digital technologies? By delving into specific issues such as online disinformation, privacy violations, public health surveillance, and the intersection of digitisation with foreign investment, the chapters collectively explore the tensions between innovation and human rights protection, offering insights into how legal frameworks can adapt to contemporary technological realities.
These topics reflect key challenges where digital technologies intersect with human rights. Microtargeted disinformation threatens democratic participation and free expression, requiring legal responses to preserve public trust. The case study of COVID-19 apps highlights the balance between public health and privacy, offering lessons for future crises. The rise of drones necessitates safeguards to prevent privacy violations and misuse. Finally, the interplay between foreign investment and digitisation shows how human rights can shape global economic and regulatory strategies. Together, these chapters illustrate the need for a legal framework that protects human rights while fostering responsible innovation.
Chapter 13. The Paradox of Digitalisation in the Case of the COVID-19 Apps: What Lessons Can We Learn from This Strange Experience?
Chapter 13 examines the role of digital technologies, particularly contact-tracing apps, during the COVID-19 pandemic and draws lessons from this experience that should inform responses to future crises. Paula Veiga explains how these technologies were developed and implemented as public health tools and examines the reasons for their widespread failure in Western countries, despite their technical feasibility. She provides insight into how digitalisation impacts public trust, governance, and human rights. Digital technologies have reshaped societal norms, created new types of data-centric goods, and introduced novel legal challenges, such as redefining rights and the role of public authorities. The chapter shows how the pandemic highlighted tensions between public health measures, privacy, and the rule of law. The exceptional circumstances required balancing individual freedoms with collective security. In relation to the core question of this part of the volume, the chapter refers to the following ways in which human rights law should adapt to address digital technologies: (a) in balancing human rights with other considerations (such as public health) during crises, the temporary measures taken need to be proportionate; (b) legal frameworks should include mechanisms to build and maintain public trust (e.g., transparency in data use, clear communication); (c) the focus should be on inclusivity, ensuring that digital technologies do not deepen existing inequalities; and (d) new human rights (e.g., the right to digital education, the right to neutrality on the internet) should be recognised and formally included in international human rights frameworks.
Chapter 14. Aerial Surveillance in the Digital Age: Drone-Related Privacy Concerns and the Protection of Other Human Rights
In Chapter 14, Skirgailė Žalimienė and Saulius Stonkus explore the challenges and opportunities presented by drone technology, focusing on the human rights implications of its widespread adoption. The chapter examines how drones, as platforms integrating advanced technologies (e.g., cameras, the Global Positioning System, artificial intelligence), enable extensive surveillance capabilities that pose significant risks to privacy and other human rights. It also addresses the regulatory landscape, particularly within the European Union (EU), and notes how current regulations often lag behind technological advancements, with unclear or inadequate provisions for drone registration, data minimisation, and cybersecurity. The chapter offers recommendations for striking a balance between enabling innovation and safeguarding fundamental rights. In response to this part’s core question – How should human rights law respond to the challenges of digital technologies? – the chapter notes the following: (a) human rights law should anticipate technological developments and establish forward-looking regulatory frameworks; (b) legal frameworks should require the integration of privacy-enhancing measures (e.g., automatic anonymisation) within technologies; (c) the principle of proportionality should be used as a guiding method, when the need arises to balance different human rights; and (d) a co-ordinated global approach is critical for ensuring consistency in legal standards.
Chapter 15. Online Disinformation, Microtargeting, and Freedom of Expression: Moving beyond Human Rights Law?
Chapter 15 studies the rise of microtargeted online disinformation (MOD) and its implications for human rights, democracy, and regulatory frameworks. Birgit Schippers looks at how microtargeting – a practice that uses data analytics to deliver tailored messages to specific audiences – amplifies the spread of disinformation, distorts public discourse, and impacts democratic processes. The chapter highlights the challenges posed by MOD to freedom of expression and privacy while evaluating the adequacy of current legal frameworks, particularly within the EU. It demonstrates how human rights law, particularly Article 10 of the European Convention on Human Rights, limits state interference with speech, making it difficult to regulate disinformation. Schippers answers this part’s core question by suggesting that (a) human rights law alone may not sufficiently address MOD (and other human rights issues in the digital environment), so complementary approaches (e.g., media literacy initiatives) are needed; (b) regulatory measures must balance the need to counter disinformation with protecting free speech to avoid disproportionate restrictions; and (c) international cooperation is necessary to address cross-border human rights challenges such as disinformation, harmonise regulatory standards, and counteract the influence of global tech platforms.
Chapter 16. Digital Boom: Current Issues from International Investment to Human Rights
Chapter 16 explores the relationship between international investments, digital technologies, and human rights. Cristina Elena Popa Tache, Cӑtӑlin-Silviu Sӑraru, and Sergio de Souza Salles demonstrate how digitalisation has transformed international investment practices and created new legal challenges, particularly regarding data protection, privacy, and equitable access to technology. The authors propose a human rights-centred approach to regulating digital investments, focusing on balancing innovation, investment protection, and fundamental rights.
In relation to the core question of this part of the volume, they note the following: (a) digitalisation amplifies the need for legal frameworks that protect human rights while fostering technological innovation and international investment; (b) legal systems should anticipate technological advancements by adopting forward-looking frameworks that address potential human rights implications before they arise; (c) effective governance should reconcile state sovereignty, international investment interests, and the protection of human rights, ensuring legal systems can respond to evolving digital challenges; and (d) a multi-stakeholder approach should be adopted (involving states, businesses, and international organisations) to harmonise global standards.
Shared Themes and Interconnections
The four chapters illustrate some of the ways in which human rights law can respond to the challenges posed by digital technologies, offering insights into areas ranging from health and surveillance to microtargeting and international investment. In the following, their common themes and approaches are summarised to highlight how they converge in addressing the question of this part: How should human rights law respond to the challenges of digital technologies?
A. Balancing Human Rights with Technological Innovation
Each chapter emphasises the need to strike a balance between safeguarding human rights and enabling technological development. The authors share the view that human rights law must evolve to navigate the tension between regulating technology to prevent harm and fostering innovation that supports societal progress:
Chapter 13 on COVID-19 apps explores how public health technologies must balance privacy with the right to health, highlighting the need for proportionality in emergencies.
Chapter 14 on drone surveillance argues that regulations must balance privacy with public safety and innovation in drone use.
Chapter 15 on microtargeted online disinformation emphasises that countering disinformation requires balancing freedom of expression with protections for privacy and democracy.
Chapter 16 on digital investments emphasises the necessity to reconcile the drive for international digital investments with the need to protect rights such as equitable access and privacy.
B. Proportionality as a Guiding Principle
All chapters recognise the importance of proportionality as a key legal principle and tool to be utilised in applying human rights law in the digital environment. Proportionality is seen as having the potential to ensure that regulatory responses to digital technologies are targeted, avoiding unnecessary or overly restrictive measures that may infringe on fundamental rights:
Chapter 13 discusses proportionality in using emergency measures to ensure they are temporary and necessary without becoming tools for long-term surveillance.
Chapter 14 advocates for proportionality in drone regulations to prevent overreach while addressing privacy concerns.
Chapter 15 highlights proportionality in regulating disinformation to avoid a chilling effect on free expression.
Chapter 16 states that proportional regulations are essential for balancing investment growth and human rights.
C. Trust, Transparency, and Accountability
The chapters highlight trust and transparency as critical components of effective digital governance. They note that trust in digital governance can be fostered through transparency, vigorous accountability mechanisms, and public engagement:
Chapter 13 stresses the importance of public trust in ensuring the success of health technologies, facilitated by transparency in data usage.
Chapter 14 emphasises accountability mechanisms to prevent the misuse of surveillance technologies.
Chapter 15 highlights the need for transparent content moderation and data usage practices to counter the effects of microtargeting.
Chapter 16 argues for transparent and inclusive governance in digital investments to prevent exploitation and exclusion.
Like the chapters in Part II, all four chapters in this part of the volume also call for international cooperation and harmonised legal standards. The global nature of digital technologies necessitates co-ordinated international responses to protect human rights effectively while promoting innovation and economic growth.
The chapters collectively emphasise that human rights law and its implementation must evolve to address the challenges posed by digital technologies. This involves balancing regulation with innovation, expanding the scope of human rights frameworks (or their interpretation), fostering transparency and trust, and adopting a globally harmonised, multi-stakeholder approach. Together, these insights (and others contained in the chapters) outline a strategy for ensuring that human rights are protected in the face of rapid digital transformation.
13.1 Introduction
The rise of the internet and digitalisation have profoundly changed social relations and extended their influence into the juridical field. There is nothing new about this realisation. But the failure of COVID-19 apps during the pandemic in the Western world indicates that the trend of social adherence to digitalisation is neither automatic nor without restraint.
The mainstream analysis of contact-tracing apps is commonly closely related to data protection standards and digital surveillance technologies, with important gains in view: preventing mass surveillance, and protecting human rights and the rule of law.Footnote 1 I stress ‘closely related’ because this new millennium has brought dramatic progress towards the recognition and enforcement of human rights, which is a positive development.
In this chapter, another perspective is sought – a broader, perhaps even more comprehensive one – based on the perplexity arising from our recent experience with contact-tracing apps, which embodies, as already said, a non-adherence to digital platforms in this particular case, unlike in other areas (online shopping contracts, networks…).Footnote 2 These areas also pose several juridical problems, namely the real threat of the ‘private’ profiling of citizens and the exploitation of their vulnerabilities for commercial purposes. Therefore, one cannot assess the contact-tracing app experience in light of this element alone. The scenario may be more complex than that.
Perhaps a broader analysis of this experience can answer how human rights law should develop in the face of the challenges of digital technologies.
In the first two decades of the twenty-first century, the rule of law scenario and its paradigms changed through digitalisation, the exceptionalism of the pandemic, and the need for the protection of human rights worldwide.Footnote 3 That is the background to the contact-tracing app experience, and we must take a deeper and more serious look at this apparently transitory phenomenon.
13.2 The Impact of Digitalisation: Some Highlights from the Public Law Perspective
Digitalisation is a hyper-complex phenomenon that uses new resources and new formats, and creates new problems in the juridical realm. The internet is not just an information platform, nor an ordinary channel of communication; it is the new centre of social communications. Almost all areas of our lives are reflected in cyberspace: economy, politics, trade, education, family, financial transactions, but also criminal activities, terrorism, and so on. This means that the issue is not only technological or economic, but also legal. Technological progress has changed cultural norms and social behaviour, which in turn changes legal behaviour.
Among the juridical problems, there is one that deserves to be specifically mentioned: digitalisation has created a new good, very different from the classical physical goods we were used to – that is, data. In the twenty-first century, data will be the most valuable asset, especially personal data, and, as far as we can see, policy will struggle to control the flow of data. Data demands a different approach to regulating juridical relations, as it is not intended for exclusive use as the classical physical goods were. This will inevitably be reflected in legal concepts: classical notions, such as ‘ownership’, may need to be modified to form new or alternative concepts. It is now clear that digitalisation will always be accompanied by the collection, storage, and processing of large amounts of data, so-called big data, that will differ in terms of shape, size, and speed. Before the digital age, data essentially meant information about a certain content, collected by identifiable people. This has changed with digitalisation. Data is now a container for information. As the historian Yuval Noah Harari says, in a rather pessimistic view, there is a new universal narrative that replaces religious authority and the humanist ideology with the authority of the algorithm and big data.Footnote 4
Another aspect that deserves special attention in light of public law is connected to more recent activities in the digital world. While in the early years of this century discussions focused on the opportunities and risks of the internet, namely the protection of privacy and data, we are now facing a new type of activity, and therefore new challenges – those associated with the globalisation of communication infrastructures and markets, artificial intelligence (AI), big data, and their consequences for the collective interest (e.g., health services, political elections). Considering this, some of the most important problems caused by digitalisation in public law concern the information and technology revolution and its connections with the humanist traditions on which constitutions are based: how to enhance the legitimacy of the new information order; the impact of digitalisation on democracy and will-formation processes; the impact of digitalisation on the courts and administrations, namely the limits of digital justice; and the impact of the digital revolution on data protection, privacy, and human rights.
At the constitutional level, since the constitutional state and constitutions were built around the principle of human dignity, it is inevitable that the domain of the machine and the emergence of AI, with its new paradigms (acceleration, instantaneous action, and connectivity), will bring changes to several pillars of constitutionalism, namely the pillar of rights, the institutional pillar, and the pillar of legitimacy.
In this field, the most immediate concerns of digitalisation are related to: (a) the protection of rights, (b) the birth of new rights (e.g., the right to be forgotten, the right to social access to the internet), (c) the new understanding of competences, since the internet goes beyond the jurisdiction of state or region (legal orders, both national and international, are based on the Westphalian concept of sovereignty and state-centred power), and (d) public discourse and the formation of political will.
This means a change in the constitutional system: the internet increases the ability of citizens to exercise their fundamental rights, but it also increases the risk of threats to those rights, and, in relation to the public sphere, it emphasises the special role of private actors. To express this succinctly, digitalisation can be a tool for both protecting and violating human rights, with direct implications for the cyber- and physical security of individuals.
Besides the pillar of rights, states must of course reorganise their classical functions, which were based on territoriality. Some authors argue that there is a trend to territorialise cyberspace (so-called ‘sovereignty fever’). I feel it is too soon to reach that conclusion. As already stated, space and territory have served as bases for the law for several centuries (in fact, territory is itself a political construction in the legal field). All this has changed with the internet and digitalisation.
The (new) public sphere is public–private, fragmentary, immediate, and egocentric.Footnote 5 This makes it difficult to distinguish between individual information and press information, and therefore to delineate the new notion of public opinion precisely. Still considering the pillar of rights, existing rights have gained a new dimension in the context of applying new technologies, and this necessitates a reinterpretation. We can also see the emergence of new digital rights.
Turning to the rights themselves, and considering both the categories of fundamental rights and human rights, digitalisation implies, above all, a redefinition of privacy and other (personal) rights related to free will, as well as of all rights connected with communication through the media (namely, freedom of expression and freedom of the press and media). In addition to the framework of rights already broadly affirmed, there is the legal consecration (at constitutional and international levels) of new rights (digital rights), such as the right to access the internet irrespective of economic condition, the right to digital education, the right to neutrality on the internet, the right of access to online data, innovations, creations, and knowledge generated by public funds, and the right to be forgotten.
There is (as yet) no international Bill of Digital Rights; if one is to be adopted, it should be under the auspices of the United Nations. But new configurations of human rights are arising in international law, especially the rights to freedom of expression and privacy. In this context, we can recall the UNESCO Recommendation concerning the Promotion and Use of Multilingualism and Universal Access to Cyberspace (2003), which affirms the need for the coexistence of the public and private sectors, as well as civil society, at the local, national, regional, and international levels, and the principle of universal access to the internet as a service of public interest.Footnote 6 The World Summits on the Information Society (Geneva, 2003; Tunis, 2005) should also not be forgotten.
In the European law context, and considering the Europeanisation of the protection of fundamental rights through technology, it is worth mentioning Directive 2009/136/EC of the European Parliament and the Council,Footnote 7 a process that was initiated with the first Directive on the protection of personal data.Footnote 8 We should also remember Regulation 2015/2120, which establishes measures dealing with access to the open internet and amends Directive 2002/22/EC on universal service, as well as Regulation 531/2012 on roaming on public mobile communications networks in the European Union (EU),Footnote 9 and, finally, the well-known Regulation on Data Protection (Regulation 2016/679), which repealed the aforementioned Directive 95/46/EC.Footnote 10
13.2.1 The Specific Case of Digitalisation among Health Norms
One of the areas crucial to digitalisation is that of public services, in which health services are usually included.Footnote 11 This implies the development of a digital citizenship – a citizenship that entails a new way of understanding relations between administrations and citizens where the recognition of rights is concerned, namely citizens’ confidence in electronic ways of working (e.g., regarding privacy and security). The EU is, of course, trying to develop a common vision of how e-government services will develop, and is committed to its implementation.
At the same time, digitalisation is favouring the development of a new ‘health market’, namely through the reorganisation of therapy and of the business models offered by digital platforms, such as tablets with computer chips, implants with sensors, fingerprints, fitness trackers, and medical apps. The use of digital technologies in healthcare poses special technical as well as philosophical and juridical problems, as it involves dealing with well-recognised sensitive data.Footnote 12 Nevertheless, doctors, pharmacists, and companies have banded together to develop applications that promise to redefine the way medicine is practised. There will likely be a need for new models, such as patient-oriented care and a more efficient system (perhaps an interconnected, smart healthcare system capable of solving complicated situations in the healthcare sector using digital tools). Insurance companies will require special attention, as they will be responsible for handling sensitive data.
The main framework is already enacted in the General Data Protection Regulation (GDPR) (in the health context, data processing can, under certain conditions, be carried out without the consent of the individual). This health issue is also well known from the pandemic: all over the world, and especially in Europe, the Council of Europe Member States moved forward in an attempt to make use of digital technology to slow down the spread of the virus.Footnote 13 The COVID-19 apps could, therefore, had they been successful in Europe, have served as an experiment for the digitalisation of healthcare. This was not, in general terms, the case (one exception, at least in the first period of the pandemic, was the Italian Immuni). One should never forget that smart management of healthcare (‘telemedicine’) includes treatments connected to the use of medical apps, through which independent owners collect data, including healthcare data on the person concerned, which can be used for different purposes.
13.3 COVID-19: Digitalisation Methods and Juridical Concerns
13.3.1 The Most Fundamental Constitutional Problems Posed by COVID-19
A constitutional and pluralist view of public law sees citizens and communities as subjects of legitimacy, and in this context the ability of the law to achieve political inclusion. That is why constitutionalism focuses above all on the impact of governance arrangements on human rights.
On 11 March 2020, the World Health Organization (WHO) recognised the spread of the disease as a pandemic, and states and their governments soon needed to take specific measures. Apart from disrupting international supply chains, adding popularity to anti-migrant policies, and weakening globalisation, the administration of the crisis led to restrictions on several human rights through measures such as quarantine, travel bans, and isolation, and forced states to resort to the use of public coercion and protective measures, including business closures and social distancing.Footnote 14
At the constitutional level, the pandemic implied a return to two classical and constitutionally protected juridical concepts that had to be balanced: freedom and security.Footnote 15 The complex accommodation between these two concepts – freedom and the collective interest – explains why COVID-19 primarily posed three constitutional issues among Western constitutional states: (a) the place of parliaments in epidemic (emergency) circumstances and the operationalisation of the ‘law of the crisis’, which signifies, as a rule, a rebalancing of the various powers, determining the centrality of the executive power in general, and of the government in particular (a stronger executive); (b) the adequacy of the legal basis of the measures adopted; and (c) the proportionality of the measures materially adopted.
In terms of the justice systems, it is worth noting that states responded almost automatically by suspending judicial services and procedural deadlines, which affected the universality of access to process and effective judicial protection. There were also several effects on the functioning of democracy, such as the postponement of elections, the restriction of freedoms, and limits on the right to assembly.
All rights are susceptible to being restricted under certain conditions. In light of the European Convention on Human Rights, such restrictions must not offend the legality principle, must pursue a legitimate aim, and must respect the proportionality principle. The Convention even contains a specific provision (Article 15) governing derogation from the Convention in times of emergency, within certain limits.Footnote 16
13.3.2 Apps during the Pandemic: Their Possible Significance and Role
The COVID-19 pandemic confronted us with many problems and contradictions.Footnote 17 One was its ‘cosmopolitanism’ – COVID-19 spread all over the world – on the one hand, and the national response to it on the other. Indeed, despite the fact that there is nowadays undeniably more international coordination than during previous pandemics (namely through the WHO), the fight against the virus continued to take place within the national framework, which reinforced the notion of sovereignty that was otherwise dissolving through the globalisation and digitalisation processes. At least in Europe, borders regained a new meaning with COVID-19.
When COVID-19 initially spread, proposals for monitoring the epidemic through technology soon emerged across the world, and in Europe in particular, organised both by the EU and by each national state. In general, the European proposals were concerned with defending the rule of law and two particular human rights – data protection and privacy. The key features were the aggregation of data (and subsequent anonymisation), the purpose limitation principle, and voluntariness.Footnote 18 The apps, according to European parameters, were designed to the highest standards of data privacy and data security. Their aim was not to track individuals or to hold personal information. But even in the EU, where there is sophisticated regional integration, the response to the pandemic was far from uniform and efficient.
Of course, an app does not replace the human side of constraining the disease, but it is generally accepted that it can help. Like a piece in a puzzle, it is a tool to facilitate the resolution of the problem. These systems would enable people who had tested positive for COVID-19 to share information about their recent contacts, so that those individuals could be contacted and given appropriate public health advice to help limit the spread of the virus.
The official responses from both the European Data Protection Board (EDPB) and the Council of Europe were issued in April 2020. The EDPB adopted Guidelines No. 4/2020 on the use of location data and contact-tracing tools in the context of COVID-19 on 21 April, with the Council of Europe issuing a document on 7 April. The European Commission also highlighted that contact tracing was just one instrument within the public health strategy, while making clear the advantages of creating a single app for mobile devices at the European level. However, the responses were fragmented and uncoordinated. Examples of national responses include the Italian Immuni, the German Corona Warn, the Irish COVID Tracker, and the Portuguese StayAway Covid.
In light of the rule of law and the right to privacy, there are five basic ideas to be kept in mind: (a) the pandemic was an exceptional scenario and called for exceptional measures; (b) the proportionality principle has a different meaning in normal times and in times of exception; (c) privacy is intrinsic to the idea of the rule of law; (d) the right to privacy is a fundamental right and a human right (protected in Europe by Article 8 of the European Convention on Human Rights); and (e) data protection and privacy are two different rights, but both are protected in the European context.
This immediately calls for a juridical framework governing the treatment of personal data, informed consent, and responsibility for data processing – all provided for in the GDPR.
Contact-tracing apps can be counted among the individual measures taken during the pandemic to avoid infection, alongside testing and vaccination – this implies responsibility, in addition to liberty.
Traditional contact tracing is performed by teams of trackers who have to reconstruct all the interactions a positive person had. This process consumes both time and resources, and the time needed to identify contacts is a critical issue. The stress on responsibility is very important in the case of this tool, with statements that invoked the idea of a shared responsibility between communities and governments. The implicit value of the app was, therefore, beyond its technology; it instilled in citizens a sense of responsibility to be part of the solution in addressing the spread of the COVID-19 virus. But it seems that beyond privacy concerns, other issues were at stake, namely efficacy, a lack of transparent communication, and the sacrifice of privacy when the sacrifice was not worthwhile (disbelief in the usefulness of the app).
All in all, while testing and vaccination were a relative success, contact-tracing apps were a complete failure in Western states. The main tool to combat the pandemic was, of course, vaccines, authorised from December 2020 and offered voluntarily and free of charge to all citizens.
It is also worth remembering that the success of contact-tracing apps depended on the willingness of citizens to install and continuously use them; in other words, on voluntariness. But, once again, vaccines were also voluntary. That is why the question remains: why did citizens refuse to adopt this tool when it came to the protection of public health, if they routinely download applications for other (minor) purposes, which also have legal consequences? What justifies this dual attitude towards technology in the light of the law?
We must keep in mind that technology should guard patient privacy, but equity and collective benefit are also issues of concern.
13.3.3 An Effort to Find a Justification …
The COVID-19 period was trying, and its consequences were far-reaching. A major proof of this is given by UN Security Council Resolution 2532 (2020), a symbolic milestone, which characterised COVID-19 as a threat to peace and international security, demanding, for the first time in history, a humanitarian pause in world conflicts.Footnote 19
The pandemic prompted the declaration of a state of emergency in most countries – in Portugal, in particular, for the first time under the 1976 Constitution since its entry into force (even though emergency powers have been in the Constitution from the outset). In the Portuguese case, the first patient was diagnosed on 2 March 2020 and the first death occurred fourteen days later.Footnote 20
States were faced with the legal projection of the effects of a pandemic on fundamental rights, a crisis framed by a normative framework whose contours are easily blurred. The combination of these components gives rise to several questions. Among them are the distinction between situations of normality and situations of exception, the central role played by the executive in the response to the crisis, and the subordination of exceptional measures to the rule of law.
At the same time, we should never forget that digitalisation and, with it, mass surveillance are omnipresent in society. Measures of surveillance (all at the heart of the digitalisation process) and privacy infringements are not always of the same degree and should be classified into categories identifying the severity of possible human rights infringements. Let us rank them along three possible degrees: highly intrusive responses, intrusive responses, and mildly intrusive responses.
That said, we must ask why contact-tracing apps were so ineffective in an era of digital applications, especially keeping in mind that the guiding principles of all action in times of a pandemic are freedom and responsibility, as already noted. In other words, within this balance between freedom and responsibility, it is worth asking why the contact-tracing apps were perceived as such a serious interference with informational self-determination. The first explanation that comes to mind is the fear of being segregated or marginalised in one's community once it became known that a citizen had tested positive. But the scenario can be more complex than that.
First, because if it is true that the Western legal system is not built to impose restrictions on personal freedom when there is no culpability or direct benefit, it is also worth remembering that in an emergency situation rights are under stress.Footnote 21 Second, the COVID-19 pandemic led to restrictions in a range of areas. Common examples are the right to freedom of movement and assembly, the right to a fair trial, the right to education, and the right to private and family life.
I believe that the first important lesson to learn from this strange experience is the urgent need to regulate the health market properly, based on the idea that digitalisation demands de-territorialisation, de-centralisation, and de-nationalisation. In technological terms, the system worked; the problem was that technology was not enough, and the new wave of techno-optimism was cast into doubt. I believe the main threats felt by citizens were: (a) the use of the data produced by such tools for disease modelling and epidemic dashboards; (b) the information on health decisions through technology-driven disease testing; and (c) the use of technology to counter health-related discrimination.
In addition, there are always contextual issues. Social, political, economic, and psychological factors can affect citizens when adopting and/or using technology. We can imagine how attitudes towards the pandemic, and the general attitude of the people before and during the pandemic, can change (from negative – dissatisfaction, unhappiness, complaints, and so on – to positive – satisfaction, appreciation, and so on).
In the Portuguese example (the one I know best), people were not sufficiently encouraged to start using the app. Such encouragement involves two simple steps: instructions on how to install the app and instructions on how to operate it on a smartphone. However, we must not forget that Portugal has many citizens without digital skills (especially the elderly, who may not be conversant with new technologies), and the developers of the app did not consider them.
There was not enough public awareness about the app, and information and knowledge on usage were limited. The main efforts came from the government. However, the media did not promote the use of the app, either in the press or on radio stations, unlike what happened in other countries (e.g., Germany, with the Corona-Warn-App).Footnote 22 Specialists at the time estimated that at least 60 per cent of the population would need to download and actively use the contact-tracing app for it to be useful.
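The arithmetic behind this threshold is worth spelling out. A contact between two people can only be registered if both of them run the app, so – under the simplifying assumption of independent, uniform adoption, an illustration of my own rather than a claim drawn from the specialists cited – the fraction of contacts the app can cover scales with the square of the adoption rate:

```python
# Illustrative sketch (simplifying assumption: app adoption is independent
# and uniform across the population). A pairwise contact is detectable only
# when BOTH parties run the app, so coverage equals the adoption rate squared.

def detectable_contact_fraction(adoption_rate: float) -> float:
    """Fraction of pairwise contacts a voluntary contact-tracing app can register."""
    return adoption_rate ** 2

for p in (0.2, 0.4, 0.6):
    print(f"adoption {p:.0%} -> contacts covered {detectable_contact_fraction(p):.0%}")
```

Even at the 60 per cent target, only roughly a third of contacts would be detectable, which helps explain why actual adoption far below that level rendered the apps close to useless.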
Let us go beyond the surface and utilise this experience to reflect on digitalisation and its implications in the juridical area. The digital transformation in the legal world can generate two paths: (a) the need for the right to ‘give in’ to new technologies, reducing the level of legal protection for those rights negatively affected by technologies; and (b) the call to update and reconceptualise the existing legal framework to accommodate new technological developments.
The real and critical challenge for public law and human rights protection in the digital age is finding and maintaining the appropriate balance between the advantages and disadvantages that the application of technology brings; that is, how to ensure that technological development moves within a framework that secures the well-being of human society.
13.4 The Lessons Learned
In contemporary democracies, trust in public and political institutions collapsed over the last decade of the twentieth century and the beginning of the twenty-first.Footnote 23 This is probably one substantial explanation for the non-adherence to the COVID-19 apps in Western states (first lesson). Added to that, the COVID-19 crisis highlighted the issue of trust in democracies, as governments had both to undertake (unprecedented) restrictive measures to manage the spread of COVID-19 and to rely on citizens’ willingness to adhere to these measures.
Cyberspace is a global network built primarily by and for private entities and institutions, yet it also hosts public institutions – which poses a fundamental question about the role of the state in this new world of goods and services.
A constitutional approach means a policy that deals with digital technologies from a perspective also aimed at protecting fundamental rights and democratic values, which includes framing the debate within the information society; the latter is increasingly subject to the power of public and private actors implementing automated decision-making technologies. As a first consequence, far from simply applying existing law in cyberspace, there is an urgent need for laws designed for the online world, such as the GDPR (second lesson). Especially in the healthcare field, we should amplify the call for building and strengthening stable global frameworks for healthcare data and technology governance, to support digital surveillance suitable for healthcare systems as a whole. An appropriate institutionalisation of a rights-based framework would enhance trust, as well as longer-term geographical equity and comprehensive health and care. From the COVID-19 app experience, we have learned that technology as mere embellishment does not work at all.
In that field, the Council of the EU has considerable experience, as it played a crucial role in consolidating the constitutional dimension of the right to privacy and data protection in Europe through the Data Protection Directive. It is therefore desirable that the institution focus on strengthening the governance of digital healthcare systems, with, at its heart, the concept that healthcare is a public good (rather than health data as a public good).
The notion of the rule of law is also at stake. Indeed, the crisis of the rule of law is the crisis of trust. To exemplify that, just remember two recent European situations: the non-enforcement of refugee laws in Europe and high-level corruption in public entities. These two quite different examples prove that rules are also not being obeyed in Europe, which generates a problem of trust.
Trust, in light of the rule of law, is not trust in persons but trust in institutions (e.g., courts) and systems (e.g., the EU). This problem of trust was reinforced by several measures in some states during the pandemic, as evidenced by people attending street events, criticisms of political leaders, and divided institutions.
The main problem of trusting institutions is related to the classical notion of the rule of law (the formal rule of law concept). One knows that there is an intrinsic ambiguity across legal traditions in this concept (there are differences between the English idea of ‘the rule of law’, the German Rechtsstaat, the French l’Etat de droit, the Italian Stato di diritto, etc.), but they all mean a relationship between state, constitution, governing, and law. In other words, this traditional concept is clearly related to notions of the separation of powers, general and public rules for all, and consistent and transparent regulation.
This system of checks and balances, along with the principle of the separation of powers, represents the common core of constitutionalism. The ideal of limited government, intrinsic to any form of constitutionalism, requires the adoption of a system of reciprocal control among different branches or decision-making centres of the state, and rejects the unwarranted concentration of power in the general constitutional design (third lesson).
This is the core of public law, even with digitalisation in progress. That is why we should identify a new notion of public authority, which includes acts, institutions, and relations of states, supranational institutions, and international bodies, and apply it in these normative dimensions of public law, both offline and online. Indeed, all public institutions must act according to the standards of democratic public law, no matter if they are acting offline or online.
A healthy suspicion of power gives democracy its vitality, but – and let us stress this point – democracy depends on trust. Besides, liberty in democracy is not only individual liberty, but also collective liberty.
We should continue to reflect widely on this lack of trust. Modern societies are creating particularised trust based on race, ethnicity, lifestyle, moral identity, or religion. I am not sure this scenario is helping the real standards of trust. What is clear is that public policy has to cope with diversity; whether trust can withstand the pressure that diversity poses remains to be seen. One thing is certain: in the end, a failure of trust is a failure for democracy.
Public entities must ensure trust in justice (the perception of the judiciary), remain aware of any (dis)satisfaction with public services, pay attention to corruption and its perception, and govern with transparency and accountability. Needless to say, good governance practices influence citizens’ attitudes and behaviours towards the government (fourth lesson).
In Europe, the problem of trust is a problem both for political institutions and the courts, not only in European institutions, but also in national institutions. This means a long path to develop a collaborative way forward, where trust is seen as a value by institutions.
Besides, ensuring anonymity is not enough. The social context in which the technology is used must also be considered; for example, when providing additional support to citizens who have experienced discrimination because they contracted the COVID-19 virus. Furthermore, the use of a smartphone application to trace the individuals with whom an infected person has been in contact, in order to prevent the spread of COVID-19, is not such a novel situation.
From a broader perspective, there are benefits in enabling voluntarism, solidarity, and public modes of association, even in political relations (fifth lesson). The simple fact that national decision-making has shifted from domestic policies to policies that result from participation in a global or transnational decision-making process alters the dynamics of all decision-making. This will mean limiting coercive organisation in favour of voluntary association, enabling broader participation. Not all rights, but certain kinds of rights – in particular, rights of association, speech, and political participation – must shift from a problem-solving, coercive mode towards participatory relations (in accordance with the all-affected principle). This principle says, roughly, that all those who are affected by a decision should have a right to participate in making it. The explanation is simple: it is only when strangers are no longer treated as bearers of malign intent that the possibilities of extensive trust can develop.
It is clear that globalisation and the proliferation of communicative platforms are taking people away from ‘vertical’ interactions, typical of representative politics, toward more distributed, flatter, or ‘horizontal’ modes of sociality, working, and organising. This poses special problems for democracy itself, as it leaves us in a ‘post-representative’ political moment: the advent of online communication has had both good and bad effects on the practice of democracy.
It is also worth recalling the normative concept developed by Jürgen Habermas – the public sphere. For Habermas, modernity was formed from the development of a division between state and society.Footnote 24 The public sphere became politicised and was transformed into a political public sphere, and it is this concept – the political public sphere – that is the most difficult and controversial issue in terms of the constitutional power posed by digitalisation. It suffers a profound alteration in the Habermasian sense; that is, as a sphere of communication in which individuals can discuss critical issues and gain knowledge of public affairs. In this way, political public opinion is formed – a public and shared space in which decisions are made through dialogue.
Indeed, communication, an essential element for the formation of will within a community, is now guided by a new paradigm. I even question if digital communication is still ‘only’ a way of exercising freedom of expression.
This new paradigm is more accelerated, more instantaneous, and more connected. The public sphere, as already stated, has become fragmentary, immediate, egocentric, and public–private. Besides, it comprises a great diversity of publics – political, cultural, and so on. Knowledge is generated and disseminated in a decentralised manner, and the reconstruction of meaning is carried out through acceptance and reproduction without any control. The protagonists are unknown; they are no longer ministers of religious cults, artists, or intellectuals. But will it be the will of leaders (political, business, etc.) or instead the will of the media? Or even none of these? Within the formation of the political will, new methodologies merge (informal information, fact-checking), where fact and opinion become mixed.
To sum up, all these changes represent a considerable challenge for the logic of constitutionalism in terms of legitimacy. This is why one may ask what, today, are the concrete manifestations of living in democratic spaces mediated by technology. Among the spaces of discourse, which can be characterised as spaces of civic interaction and which as spaces of political intervention? The control of access to resources and communications platforms indeed carries considerable power to (re)configure discourses, and the answers to these questions are crucial. We should not forget the unequal distribution of the potential associated with such technologies, and the specific state of the virtual public sphere also distorts intercultural dialogue: virtual social networks can limit and distort the dialogue between cultures because of the virtual public sphere they create. That is one of the reasons I am in favour of encouraging the public financing of online communications, so that they are not co-opted by commercial interests. Cultivating a will towards civic participation in society, and keeping non-profits involved so that access remains affordable, is essential to democracy and the protection of fundamental rights. A virtual self-government by ‘cyberians’ is highly problematic, since the real world, through the state, is the only institutional structure that seems able to fulfil the important and irreplaceable task of promoting the social and political integration that forms the collective identity (the formula that contains in itself the ideas of common interest and community). In this sense, there are several ongoing discussions, namely about ‘echo chambers’, specific platforms created by political parties, parliamentary decisions via AI, possible uses of AI for the realisation of social rights, and so on.
It is clear that the new regulations have to address one basic distinction: the exercise of democracy through the internet and the exercise of democracy on the internet. In the former, cyber-attacks, namely from authoritarian governments and non-state actors, pose a clear and increasing threat to democracies across the world, especially through interference in free and fair elections (we must keep in mind that some states exert influence over the internet and that international legal protection from such interference exists), and through the manipulation of information sources for political discourse and decision-making. In the latter scenario, besides considering the rule of law, a new premise for democracy itself arises as technology becomes an integral part of a truly democratic global society. Indeed, in this new scenario, democracy itself and its realisation involve monitoring strategies that deal with the asymmetry of information and imbalance of power. Of course, an appropriate approach to confronting and criticising government power, a logical and rational critique of political, economic, social, and cultural issues, bilateral dialogue, and open audiences for all with government officials can help.
Good governance theory advocates the responsible, accountable, and transparent management of human, financial, economic, and natural resources for the sustainable and equitable development of all institutions. First of all, this requires the government to be accountable for its actions, implying transparency and access to information for its citizens. Governments also need to be responsive to people’s needs, exhibiting responsiveness and safeguarding human rights, in order to achieve public trust. On the other hand, public authorities must place common sense at the core of their discourse, embedded in an overall narrative of togetherness. This will lead to mutual agreement, pragmatic rationality, and cooperative compliance from citizens. Building a common public culture is essential for trust in public affairs. Otherwise, we risk the division of society into parallel societies with little to no intergroup trust and the threat of mutual suspicion.
With the digitalisation of the public environment, it is not only the state and public authorities that can threaten our rights, but also private entities. When fundamental rights were born in the eighteenth century, the threat was thought to come from the state alone. Today, the threat can come from both sides, but that is a completely different question and is not addressed in this chapter.
This is, I believe, the core provided by the legal frameworks of public and private law. They follow different rationales. Private law allows actors to act solely in pursuit of their self-interest, whereas public law requires a higher standard, often referred to as the pursuit of a common interest. The public character of an act or behaviour thus derives from its relation to that common interest. It depends on the social sphere from which it originates. If the activity is part of the sphere where self-interest is a sufficient justification, the act is private; if it belongs to the sphere where common interests are predominant, it is public.
In times of crisis, such as the COVID-19 pandemic, citizen trust in the system is one of the central features ensuring citizen compliance and the functioning of a democratic society, which includes the role of democracy and how citizens assess its performance. That evaluation is also shaped by public discourse. We cannot forget that it was this public discourse that associated the use of apps with surveillance, questioning whether such measures were ‘typically’ European.
13.5 Conclusion
It is time to conclude. From all that has been written, it is clear that I believe that trust, regulation, a redefinition of the rule of law, and the role of the state are crucial factors in overcoming the general perception of a loss of rights. That perception has developed since the beginning of digitalisation and was particularly clear during the experience of contact tracing to control COVID-19. That is why I believe this represents a new set of challenges in the art of the law. It is important to keep in mind that we have online and offline rights, national and international orders, states that will not give up their roles, and, last but not least, citizens who have claims relating to privacy as well as to protection and security.
14.1 Introduction
Innovation and digitalisation are perceived as enablers of growth and catalysts for the development of modern aviation in Europe.Footnote 1 In the view of the European Commission, ‘drones are a technology that is already bringing about radical changes, by creating opportunities for new services and applications, as well as new challenges’.Footnote 2 As stated in the title of the Communication from the European Commission to the European Parliament and the Council, opening the aviation market to the civil use of drone technology marks ‘[a] new era for aviation’.Footnote 3 A similar view of drone technologies is shared by other countries.Footnote 4
Indeed, the commercial drone industry has flourished in recent years, and the technology, which a few decades ago was exclusively a part of modern military equipment, has become available to the general public. The number of drone operations in Europe alone has already come close to that of manned aviation.Footnote 5 Drones are widely used by various state authorities, as well as commercial entities and private persons, for purposes as diverse as policing, search and rescue, environment monitoring, film-making, mapping, agriculture, and entertainment.
Drones are undoubtedly very useful and represent tremendous opportunities. With evolving drone technologies various new business models emerge, such as parcel delivery by air, aerial photography, air taxis, and drone journalism. Drones offer new services and applications going far beyond traditional aviation and allow us to perform existing services in a more affordable and environmentally friendly way by increasing the efficiency of different activities. In addition, drones are hard to replace, especially in difficult situations; for example, when restoring communications or carrying out search and rescue missions after natural disasters. Even the pandemic caused by COVID-19, when physical contact was restricted, became an opportunity to demonstrate the extremely wide range of possible drone applications and to promote their use in everyday life, fostering a more positive public attitude towards drone technology.Footnote 6 In principle, the capabilities of drones are almost limitless, making them applicable in any field.
However, to meet safety requirements among other reasons, modern drones are as a rule equipped with high-end technologies that can capture, store, and upload – online or to other devices – huge amounts of data, including private data. Drones range in size from big enough to carry a human down to as small as a hummingbird, and they can be very quiet, making them extremely hard to notice. In addition, drones are piloted remotely or, in some cases, by advanced artificial intelligence (AI) technology able to develop flight patterns with the only human input being the destination,Footnote 7 which makes it very hard to trace the actual drone users.Footnote 8 Therefore, with the use of drones, private data can not only be easily accessed and collected in areas where people reasonably expect privacy, but this can also be done anonymously. Hence, along with all the new possibilities and benefits, the massive deployment of drones in public life also brings serious privacy issues, as drones, like any other technology, can be misused. According to Zuboff, in the absence of countervailing restrictions and sanctions, every digital application that can be used for surveillance and control will be used for surveillance and control, irrespective of its original intention.Footnote 9 Therefore, emerging drone technology requires an appropriate legal response, as it is impossible to disinvent the technology – drones are here to stay.
Privacy is a constitutional value recognised in the vast majority of countries. According to Privacy International, one of the world’s major watchdogs on surveillance and privacy, over 130 countries in every region of the world have constitutional statements regarding the protection of privacy.Footnote 10 The right to privacy is also enshrined in major international and regional human rights documents (conventions, declarations, charters, etc.).Footnote 11 Although legal scholars often acknowledge that drones pose a serious threat to privacy, in reaching this conclusion they simply presume the potential dangers, usually limiting themselves to a few examples, without discussing in more detail the privacy violations that the use of drone technology can cause.
A deep understanding of drone related threats to privacy, the diverse ways that drone technology can affect privacy, the way it can interfere with other human rights, the various restrictions to drone use that may be implicated, and the need to mitigate these tensions by maintaining the right balance together form the cornerstone to ensuring the successful integration of drones in modern society. Therefore, the aim of Section 14.2 is to present a broader discussion of the major privacy concerns arising from the mass introduction of drones into everyday life,Footnote 12 provide a more detailed description of the relevant threats, as well as to highlight possible clashes between privacy and other human rights invoked by the use of drone technology, emphasising the need to strike a fair balance between these conflicting values. In Section 14.3, the current regulatory developments on drone technologies in relation to identified human rights concerns are analysed, focusing primarily on the European context and seeking to determine the main shortcomings that must be rectified in order to effectively manage the threats associated with the use of drones.
14.2 Drones and Their Use: Major Privacy Concerns and Other Human Rights Issues
Article 7 of the Charter of Fundamental Rights of the European Union (the Charter) holds that everyone has the right to respect for his or her private and family life, home, and communications, while Article 8 of the Charter enshrines the protection of personal data, stating in its first paragraph that everyone has the right to the protection of personal data concerning him or her. When applying Articles 7 and 8 of the Charter, the Court of Justice of the European Union (CJEU) has noted on numerous occasions that Article 7 of the Charter, regarding the right to respect for private and family life, contains rights corresponding to those guaranteed in Article 8(1) of the Convention for the Protection of Human Rights and Fundamental Freedoms (ECHR), and that the protection of personal data is of fundamental importance to a person’s enjoyment of his or her right to respect for private and family life, as guaranteed by Article 8 of the ECHR. In accordance with Article 52(3) of the Charter, Article 7 of the Charter is thus to be given the same meaning and the same scope as Article 8(1) ECHR, as interpreted by the case law of the European Court of Human Rights (ECtHR).Footnote 13 The same is true for other rights protected by the Charter which correspond to rights guaranteed by the ECHR. Therefore, the CJEU often relies on the jurisprudence of the ECtHR when interpreting the meaning and the scope of the rights recognised by the Charter. Nevertheless, this provision does not prevent EU law from providing more extensive protection.
The ECtHR has emphasised in its case law that the concept of private life extends to aspects relating to personal identity, such as pictures of a person.Footnote 14 A person’s image constitutes one of the chief attributes of his or her personality, as it reveals the person’s unique characteristics and distinguishes the person from his or her peers. The right to the protection of one’s image is thus one of the essential components of personal development.Footnote 15 It primarily presupposes the individual’s right to control the use of that image, including the right to refuse its publication, which is also relevant for publications online.Footnote 16 In the light of drone technology specifically, the most evident threat to privacy arises from the use of a drone equipped with a camera, which usually comes as a standard element of drone equipment and is able to capture images (photographs and videos).
The right to one’s image is recognised virtually worldwide, either as part of the right to private life or as a separate right protected by a special provision in national laws; in either case, it is closely related to the right to respect for private life.Footnote 17 Therefore, the unlawful surveillance of a person and the recording, collecting, processing, or use of that data may lead to the violation of his or her right to privacy. As evident from the jurisprudence of the ECtHR, everyone, including people known to the public, has a legitimate expectation that his or her private life will be protected.Footnote 18 However, a person’s reasonable expectation of privacy is a significant though not necessarily conclusive factor, since there are occasions when people knowingly or intentionally involve themselves in activities that are or may be recorded or reported in a public manner.Footnote 19 Therefore, there are a number of elements relevant to the consideration of whether a person’s private life is concerned by measures effected outside a person’s home or private premises.Footnote 20 As a result, according to the ECtHR, it is relevant whether the surveillance exceeded the extent of exposure possible to a passer-by or to security observation, and a degree surpassing that which the individual could possibly have foreseen.Footnote 21
In this sense, the use of drones is quite problematic in the light of the right to privacy. Drones can be very small and quiet, and thus hard to detect, and the ever-decreasing size of various drone components constantly leads to less detectable devices. Owing to these features, people may often not be aware of being surveilled; among other issues, this creates opportunities for more frequent voyeuristic attacks. Drones are also very light and easy to carry; they can take off quickly and from almost anywhere, usually without lengthy preparation or special take-off and landing sites. In addition, they are relatively cheap. These are definite advantages of drones in comparison with conventional aircraft, as they make aerial surveillance, which was previously quite expensive and usually available only to state authorities, easily accessible to everyone. However, this poses a serious challenge to ensuring adequate protection of the right to privacy, because it may lead to systematic mass surveillance, which can in turn cause serious negative psychological consequences in society by making people feel less free and forcing a sort of self-censorship that restricts their behaviour.
Private life, in the ECtHR’s view, includes a person’s physical and psychological integrity; the guarantee afforded by Article 8 of the ECHR is primarily intended to ensure the development, without outside interference, of the personality of each individual in his relations with other human beings.Footnote 22 There is therefore a zone of interaction with others, even in public contexts, which may fall within the scope of private life.Footnote 23 The ECtHR, for example, has found that video surveillance of public places where the visual data are recorded, stored, and disclosed to the public falls under Article 8 of the ECHR.Footnote 24 According to the ECtHR, although monitoring the actions of an individual in a public place using photographic equipment that does not record the visual data does not, as such, give rise to an interference with the individual’s private life,Footnote 25 the recording of the data and the systematic or permanent nature of the record may give rise to such considerations, regardless of whether the surveillance is covert or overt.Footnote 26 The ECtHR has therefore concluded that the compilation of data by security services on particular individuals, even without the use of covert surveillance methods, constitutes an interference with the applicants’ private lives.Footnote 27 This is in line with the ‘mosaic theory’ developed in light of the Fourth Amendment to the US Constitution.Footnote 28 According to this theory, a certain amount of data as an aggregated whole can implicate reasonable expectations of privacy even though the separate constituent parts of such data do not.Footnote 29 Hence, it is evident that private life is a broad term ‘not susceptible to exhaustive definition’.Footnote 30
Threats posed by drones equipped only with sound recorders, with no ability to capture video or images, should also not be neglected, as such drones can secretly listen to and record private conversations. Moreover, modern technologies make it possible to perform a sort of human profiling based on timbre and other voice characteristics and/or to identify a particular person (i.e., a speaker) using voice recognition technology.Footnote 31 Accordingly, voice recordings, like images in the case of facial recognition, can be further processed into biometric data using advanced technologies, which makes it possible to relate real people to their profiles in the digital domain. Such unforeseen use of photographs, videos, and sound recordings can constitute an interference with the right to private life.Footnote 32 In the ECtHR’s view, given the rapid development of increasingly sophisticated techniques allowing, among other things, facial recognition and facial mapping to be applied to individuals’ photographs, the taking of such photographs and the storage and possible dissemination of the resulting data without a doubt amount to an interference with the right to private life within the meaning of Article 8 § 1 of the ECHR.Footnote 33 Therefore, recording and storing voices may also in itself constitute an interference with the right to private life. As the ECtHR ruled in P. G. and J. H. v. the United Kingdom (§§ 59–60), recordings taken for use as voice samples cannot be regarded as falling outside the scope of the protection afforded by Article 8 of the ECHR.
The risk of interference with the right to privacy may arise not only when photographing and/or filming individuals but also their private property.Footnote 34 Even when such images (e.g., of an enclosed courtyard) are not related to an identified or identifiable natural person, they may still infringe one’s privacy, as the right to respect for private life also includes the inviolability of the home. Since drones can easily overcome fences of any size and construction, or even fly inside buildings,Footnote 35 where persons reasonably expect to maintain their privacy, the inviolability of the home gains even more significance.Footnote 36 In addition, cameras used in modern drones have advanced optical and digital zoom capabilities, which make it possible to capture high-resolution images from a long distance.Footnote 37 Combined with advanced infra-red, radar, laser, holographic, computer, and other technologies and theoretical scientific knowledge, drones equipped with high-zoom cameras make it possible to form a detailed spatial (3D) projection from the captured data, revealing in detail the geometric, physical, and other properties of objects and their interrelationships.Footnote 38 Furthermore, this high-zoom capability also raises privacy concerns insofar as it helps to maintain the secrecy of surveillance by allowing a large distance from the subject of interest, in addition to the anonymity of the drone pilot owing to remote control. In turn, the individual’s lack of awareness of ongoing surveillance is one of the most significant privacy challenges posed by drones, especially as the actual transgressor (the pilot) may in any event remain anonymous.
Drones can capture and store a great variety of data – besides sound recordings and standard images (photos or videos), drones can also capture thermal images and geo-location and geo-spatial data, which pose no less of a threat to privacy than the former. For example, thermal imaging technology makes it possible to see through walls, obtaining an image of people and objects inside buildings without even entering.Footnote 39 In this respect, for example, the Supreme Court of the United States ruled in Kyllo v. United States (2001) that the use of such technology to capture the internal thermal image of a person’s home constitutes a search within the meaning of the Fourth Amendment to the United States Constitution.Footnote 40 It may also be concluded from ECtHR jurisprudence that sophisticated surveillance methods with enhanced video monitoring capabilities, such as thermal imaging, infra-red, or night vision, will likely interfere with Article 8 of the ECHR, as they surpass the ordinary surveillance measures available to the general public and thus exceed reasonable expectations of privacy in certain circumstances (exposing individuals in an unforeseen way).Footnote 41
Most basic modern drones also have Global Positioning System (GPS) functionality, accelerometers, inclinometers, and other sensors necessary for safe drone operation and for the functioning of such features as ‘return home’, ‘follow me’, and so on.Footnote 42 Meanwhile, recording the GPS data makes it possible to track a person’s movement and can interfere with the right to private life, especially in conjunction with other captured data (e.g., images).Footnote 43 In fact, GPS coordinates are often automatically assigned as metadata to images (photos and videos) taken by drones, making it possible to identify the specific location at which they were captured. This is a clear example of how new data (in a qualitative sense) can be created by combining various pieces of data, especially with the use of AI-driven data mining and data harvesting techniques, which make it possible to unearth interesting and unexpected patterns and relationships between existing, at first sight completely unrelated, pieces of data by discovering the missing details of the information and linking such data in a logical way, creating a detailed picture of the subject under study. As Gray and Citron have observed, ‘technological advances have made it possible for public and private actors to watch us and to know us in ways that once seemed like science fiction’.Footnote 44 In this regard, the aforementioned mosaic theory becomes significant. The CJEU also shares this view and emphasises that various data, taken as a whole, may allow very precise conclusions to be drawn concerning the private lives of the persons whose data has been retained, such as their everyday habits, permanent or temporary places of residence, daily or other movements, activities, social relationships, and the social environments they frequent.Footnote 45
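The point about GPS coordinates stored as image metadata can be illustrated with a short sketch. In the EXIF format used by most cameras (including those mounted on drones), the GPS tags store latitude and longitude as degrees/minutes/seconds values plus a hemisphere reference; converting them to decimal degrees is enough to pinpoint where a photograph was captured. This is a minimal illustration, and the sample coordinates are hypothetical.

```python
# Illustrative sketch: EXIF GPS metadata stores coordinates as
# degrees/minutes/seconds rationals plus a hemisphere reference
# (e.g. 'N'/'S', 'E'/'W'). Converting them to decimal degrees
# reveals the exact capture location of a drone photograph.

def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert an EXIF-style (deg, min, sec, hemisphere) tuple
    to signed decimal degrees."""
    value = degrees + minutes / 60.0 + seconds / 3600.0
    # Southern and western hemispheres are negative by convention.
    return -value if ref in ('S', 'W') else value

# Hypothetical GPSLatitude/GPSLongitude values as they might appear
# in a drone photo's EXIF block:
lat = dms_to_decimal(54, 41, 16.8, 'N')
lon = dms_to_decimal(25, 16, 48.0, 'E')
print(round(lat, 4), round(lon, 4))
```

Once extracted, such coordinates can be joined with timestamps and other images to reconstruct a person’s movements – precisely the aggregation the mosaic theory is concerned with.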
With advanced computer programs and AI technology, besides providing such functions as face recognition, identification of vehicle licence plate numbers, or autonomous tracking of a target, drones can also perform many other tasks: for example, they can intercept or block mobile communications,Footnote 46 recognise and scan radio-frequency identification (RFID) tags (i.e., the information stored in them), which form the basis of various identification methods (identity cards, pass cards, etc.),Footnote 47 and much more. However, drones can themselves be hacked and intercepted, making it possible not only to use them for the unlawful collection of private data by taking over their control, but also to retrieve the data already stored in the drone’s internal memory. Therefore, even when personal data is gathered by drones lawfully, it is necessary to subsequently ensure adequate protection of such data. However, the cybersecurity of drones is questionable,Footnote 48 especially if even state-of-the-art military drones can be hacked.Footnote 49 This low level of cybersecurity poses a threat to drone technology itself, as it can erode public confidence in such technology, and drones, which are now rapidly gaining popularity in modern society, may no longer look so attractive.Footnote 50
By using a well-established wireless (mobile) network, drones are able to transmit captured data directly to other devices (including other drones), upload the data online, or simply broadcast publicly over the internet, including through popular social networks and other platforms. Similarly, drones connected to a mobile network can download data from various databases and combine it with real-time surveillance data. At the same time, storing and processing personal data relating to the private life of an individual also falls within the right to privacy.Footnote 51 This raises a further issue regarding the cross-border exchange of data,Footnote 52 while at the same time ensuring adequate protection of personal data.Footnote 53
The ability of drones to communicate (interact) with each other makes it possible to form an interoperable drone swarm, enabling the monitoring of multiple targets at the same time and/or continuous long-term aerial surveillance using different drones in shifts. This kind of communication, as part of the Internet of Things, has led to the emergence of the terms Internet of Drones and Internet of Drone Things.Footnote 54 As the President of the European Commission, Ursula von der Leyen, stressed in her political guidelines for 2019–24, the Internet of Things (and thus also the Internet of Drones) is connecting the world in new ways, as physical devices and sensors are now linking up with each other so that huge and increasing amounts of data are being collected; although data and AI are ingredients for innovation, releasing that potential requires balancing the flow and wide use of data with the preservation of high privacy standards, as well as ensuring safety and security.Footnote 55 This is a serious issue because combining drones with huge databases and aggregation software controlled by private entities may lead to significant shifts in the distribution of power in society, creating powerful private entities that may tend to abuse the available data. The AI technology used to manage such data can also lead to discrimination by misinterpreting individual behaviour and creating prejudices.
Technological development in recent decades has been notably moving towards higher levels of automation in various areas, including air transport.Footnote 56 Automation plays an ever-increasing role in aviation, where many processes are in fact already entrusted to technologies capable, for example, of keeping an aircraft on course, identifying conflicting traffic, providing resolution advisories to avoid potential mid-air collisions, plotting and executing optimal descent profiles, and in some cases even controlling aircraft take-off or landing, with the pilot becoming a simple observer of these high-end systems.Footnote 57 Accordingly, aviation is increasingly focusing on the application of AI technology in the air transport sector,Footnote 58 including, for example, the use of passenger face recognition technology at airports to speed up the boarding process,Footnote 59 or the automation of air traffic control to increase aviation safety, inter alia by safely integrating drones into the non-segregated airspace.Footnote 60 As already mentioned, AI technology is widely used in drones themselves, with drones being regarded as high-risk AI systems.Footnote 61 This aspect of the digital transformation reflects the fourth industrial revolution, which is ‘characterized by a fusion of technologies that is blurring the lines between the physical, digital, and biological spheres’ and will change society in unpredictable ways.Footnote 62 In this regard the European Commission predicts that the use of drones is likely to grow significantly, as automation enables them to fly further, and declares that ‘European rules promote the sustainable growth of drone operations, paving the way for a digital future.’Footnote 63
Unmanned aerial vehicles are exceptional in the sense that they are able to integrate many different modern technologies into a single whole, acting like a platform (base) that additionally gives wings to these technologies. Drones are essentially flying robots capable of capturing and processing extremely large amounts of various types of data, and this process can be more or less automated, which makes them perfect surveillance tools. Despite the fact that the majority of the technologies used in drones (e.g., cameras, sound recorders, GPS sensors) are not so new, and that therefore they are quite well known (including the threats they pose to privacy), drone technology (specifically, their ability to fly and the remote control option) brings these to the next level of danger, making private data, which is so valuable and often regarded as the new currency, more vulnerable than ever.Footnote 64 Just as the emergence of instantaneous photographs in the gutter press was once seen as a game changer, requiring us to re-estimate the protection of the right to privacy in order to meet the demands of society,Footnote 65 today drones invoke the same necessity owing to the increased scope of aerial surveillance they afford.Footnote 66
Along with the mass introduction of drones in everyday life, sophisticated surveillance techniques emerge.Footnote 67 Traditionally, the state was seen as the source of such surveillance concerns,Footnote 68 but increased usage of modern technologies in public life (including the commercial drone industry) has created what Zuboff calls ‘surveillance capitalism’, which stems from the exploitation and control of human nature as private entities control most of the data.Footnote 69 The immense deployment of drones in public life may lead to the so-called chilling effect on the fundamental right to privacy, creating a Panopticon environment, where individuals feel less free and may resort to self-preservation (self-censorship) by restricting their behaviour to avoid being watched even when no drones are in operation.Footnote 70 This requires an appropriate legal response, especially in light of increasing public concerns regarding bulk interception.
However, some data is captured and processed by drones for safety and security reasons – modern drones inevitably must capture, store, and process certain data in order to ensure their safe and secure integration into the non-segregated airspace and our everyday life. For example, it would be hard (if not impossible) to safely use a long-distance drone without a camera, gyroscope, GPS, and other modern sensors, especially when performing beyond visual line of sight (BVLOS) flights.Footnote 71 Such sensors are also necessary for the proper functioning of Detect and Avoid technology, which automatically avoids obstacles (including humans) and therefore requires the drone to constantly monitor the surrounding environment and process that data in real time. GPS data is also crucial for the proper operation of the return home function, which can safely return a drone to its take-off or another pre-arranged location, for example in case of loss of the remote control connection or when the drone battery is running low. It is likewise necessary for the geo-fencing technology used in restricted areas for security reasons (e.g., above nuclear plants, military bases, government buildings, airports).Footnote 72 Therefore, aviation safety and security implications must also be taken into account when analysing the privacy concerns raised by the use of drones, because the collection, processing, storage, and use of certain (private) data may be inevitably necessary.
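The geo-fencing idea mentioned above can be sketched as a simple position check: before accepting a flight command, firmware compares the drone’s current GPS position against a restricted zone. This is a minimal illustration under assumed values (the zone centre and radius are hypothetical), not a description of any actual manufacturer’s implementation.

```python
# Minimal geo-fencing sketch: reject positions inside a circular
# no-fly zone defined by a centre coordinate and a radius in metres.
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def inside_no_fly_zone(pos, zone_centre, radius_m):
    """True if pos (lat, lon) lies within radius_m of the zone centre."""
    return haversine_m(pos[0], pos[1], zone_centre[0], zone_centre[1]) <= radius_m

# Hypothetical restricted zone: a 2 km radius around an airport.
airport = (54.6341, 25.2858)
print(inside_no_fly_zone((54.6350, 25.2870), airport, 2000))
```

In practice such a check would run continuously against an official database of restricted zones, blocking take-off or triggering an automatic return when a breach is detected.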
Moreover, although increasing drone usage primarily raises privacy concerns, at the same time, drone technology creates new opportunities for diverse applications and services. Therefore, the restrictions imposed on their use in order to protect the right to privacy may interfere with other human rights, necessitating a fair balance to be struck between these conflicting values.
As already stated, evolving drone technologies have led to the emergence of new business models, such as parcel delivery by air and aerial photography. Accordingly, drones are now widely used by members of various professions, including estate agents, photographers, cinematographers, and advertisers. In this regard, Articles 15 and 16 of the Charter enshrine that everyone has the right to engage in work and to pursue a freely chosen or accepted occupation, and that the freedom to conduct a business in accordance with the law is recognised. The freedom to choose an occupation and the right to engage in work are also recognised by other international human rights documents.Footnote 73 Therefore, any restrictions on drone usage that affect professions dependent on this technology may be regarded as interfering with the freedom to choose an occupation, the right to engage in work, and the freedom to conduct a business.
Such restrictions may also interfere with the right to property, as protected by Article 17 of the Charter, Article 1 of Protocol No. 1 of the ECHR, among others. This is not only the case in the meaning of the use and peaceful enjoyment of drones themselves as physical possessions, but also in a broader sense. For example, the ECtHR has concluded in its case law that the economic interests connected with running a business include ‘possessions’ for the purposes of Article 1 of Protocol No. 1 of the ECHR, and maintenance of the licence can be regarded as one of the principal conditions for carrying on a business; thus its withdrawal could constitute interference with the right to the ‘peaceful enjoyment of [one’s] possessions’.Footnote 74 This means that any legal regulation that may restrict the use of drones for business purposes in favour of the protection of the right to privacy, as a general (public) interest, must be reasonably proportionate to the aim sought. In other words, a fair balance must be struck between these conflicting values, and the requisite balance will not be found if the person or persons concerned have to bear an individual and excessive burden.
One profession that immediately took advantage of the emergence of drone technology is journalism, which is also closely related to the freedom of expression and the right to information.Footnote 75 Article 11 of the Charter, Article 10 of the ECHR, and other human rights instruments protect the right to receive information without interference by a public authority regardless of frontiers.Footnote 76 In light of this right, drone journalism becomes highly important, as drones allow reporters to access information in difficult and dangerous situations while maintaining a safe distance, such as in violent demonstrations, flooded areas, and the sites of other environmental disasters, where they could not otherwise be present or their presence would be of a very limited scope. Such an opportunity, besides helping to gather information, could also help promote human rights protection by documenting possible human rights violations (e.g., in war zones, during riots). Therefore, policies regarding the use of drones can be linked to the World Press Freedom Index and can be seen as a test of the freedom of expression, with top-ranked countries being the least restrictive about the use of drones and authoritarian countries completely prohibiting the journalistic use of this technology.Footnote 77
Drone technology, as an instrument for remote observation, may also be relevant to the integration of persons with disabilities. Article 26 of the Charter states that the EU recognises and respects the right of persons with disabilities to benefit from measures designed to ensure their independence, social and occupational integration, and participation in the life of the community. Article 27 § 1 of the Universal Declaration of Human Rights also recognises that everyone has the right freely to participate in the cultural life of the community and to share in scientific advancement and its benefits. Meanwhile, drones equipped with cameras, microphones, and speakers may in some cases be regarded as one way (if not the only way) for people with movement disabilities to engage in public life (at least remotely) and interact with others.
However, the intrusive nature of drone surveillance discussed in Section 14.1 could also affect the freedom of movement of others, protected by Article 45 of the Charter, Article 2 of Protocol No. 4 of the ECHR, and Article 13 of the Universal Declaration of Human Rights, among others. In this regard it must be noted that the constant surveillance caused by such digital interaction could damage freedom of movement in the sense that it may lead to a chilling effect, in which individuals feel less free and resort to a form of self-preservation (self-censorship) by restricting their behaviour and avoiding such drone-filled public places.
Another important issue is the right to life:Footnote 78 as already mentioned, drones inevitably must capture, store and process certain data owing to aviation safety and security considerations (see Section 14.1). The main aim of such aviation safety and security-based measures is not only to protect people in the air (e.g., crews and passengers of manned aircraft, against a collision with a drone), but also people on the ground (e.g., passers-by who may be injured by flying drones), as well as their property, which could be damaged or destroyed. A slightly more distant implication in this regard is related to drone usage during natural disasters and other difficult situations, when drone technology can be crucial in restoring communications and carrying out search and rescue missions, thereby saving lives.Footnote 79
14.3 Developments of Drone Regulations: Maintaining the Balance between Conflicting Human Rights from a European Perspective
In recent years, the rapid development of drone technology has led to an important evolution of legal regulation worldwide. However, standards set by individual countries could lead to a significant weakening of the protection of the right to privacy, given the possible diversity of views in relation to it. Hence, it is worth looking into EU drone regulation in more detail, as the EU is characterised by its integrative nature and declares the express aim of becoming a world leader in international aviation, a global model for the development of next-generation aviation technologies in full respect of fundamental human rights.Footnote 80 It is clear from the jurisprudence of the ECtHR that when it comes to balancing competing values in relation to the use of modern technologies (in this case, drones), any state claiming a pioneer role in the development of new technologies bears special responsibility for striking the right balance.Footnote 81 Accordingly, with drones considered to be the future of aviation,Footnote 82 there have been important developments in EU regulation in recent years.
In particular, the new Regulation (EU) 2018/1139 on common rules in the field of civil aviation (commonly referred to as the Basic Regulation) was adopted, thereby also establishing the European Union Aviation Safety Agency (EASA). It brought all aircraft, regardless of their operating mass, into EU competence. In other words, since the adoption of the Basic Regulation, all drones have fallen within the scope of EU regulation.Footnote 83 This is in line with the opinion of the European Commission, expressed earlier in its Communication to the European Parliament and the Council COM(2014)207 ‘A new era for aviation’, that rules allowing civil drone operations while at the same time guaranteeing the required high levels of privacy must be established at the European level, because such harmonised rules are seen as a necessary precondition for public (societal) acceptance of this disruptive technology.
Acknowledging public acceptance as key to the growth of drone services was also the standpoint of the European aviation community (which the European Parliament later agreed with and fully supported as one of the essential principles for future drone technology development),Footnote 84 which pointed out in the 2015 Riga declaration that in order to achieve this public acceptance the respect of citizens’ fundamental rights, such as the right to privacy and the protection of personal data, must be guaranteed. The aviation community confirmed the importance of joint European action and stressed the necessity for European regulators to ensure that all conditions are met for the safe and sustainable emergence of innovative drone services, but at the same time highlighted that regulations must help the industry to thrive and adequately deal with citizens’ concerns. This point of view reflects the importance of striking a fair balance between different competing values.
Following the adoption of the Basic Regulation, which provided a mandate for the European Commission to adopt legislation in relation to the operation of unmanned aircraft, as well as requirements for their production and certification, Commission delegated Regulation (EU) 2019/945 on unmanned aircraft systems and on third-country operators of unmanned aircraft systems and Commission implementing Regulation (EU) 2019/947 on the rules and procedures for the operation of unmanned aircraft were adopted and came into force, with the latter applied since 2021. In general, the Basic Regulation, together with the aforementioned two Commission regulations, brought in important changes regarding drone operations, especially in relation to the right to privacy. For example, an obligation was introduced to register drones and their users and to install direct remote identification systems in unmanned aircraft. It was also established that courses and exams for remote pilots should include subjects on the right to privacy and data protection, with examination certificates valid only for a limited period of time (currently five years), which means that remote pilots will have to periodically renew their knowledge on this matter.
As pointed out in recital 28 of the Basic Regulation,
The rules regarding unmanned aircraft should contribute to achieving compliance with relevant rights guaranteed under Union law, and in particular the right to respect for private and family life, set out in Article 7 of the Charter of Fundamental Rights of the European Union, and with the right to the protection of personal data, set out in Article 8 of that Charter and in Article 16 of the TFEU [Treaty on Functioning of European Union], and regulated by Regulation (EU) 2016/679 of the European Parliament and of the Council [General Data Protection Regulation (GDPR)].
Therefore, it is enshrined in Annex IX to Regulation (EU) 2018/1139 that unmanned aircraft and operations with unmanned aircraft must comply with relevant rights guaranteed under Union law. At the same time, it follows that ‘unmanned aircraft […] operations should be subject to rules that are proportionate to the risk of the particular operation or type of operation’.Footnote 85 The European legislator therefore seems to share the view of the aviation community, taking a risk-based approach towards drone regulation in the search for a fair balance between conflicting legal values. At first sight, however, this approach does not seem to be completely successful, operating to the detriment of the right to privacy and data protection, even though the EU promotes high standards of protection for these fundamental rights.
The analysis of the EU drone regulations concerned shows that there are many exceptions from such important mechanisms as the registration of drones and their users or the direct remote identification of unmanned aircraft. These exceptions hardly seem well founded and make it quite difficult to ensure the effectiveness of these measures in protecting the right to privacy and ensuring personal data protection. For example, the Basic Regulation states that operators of unmanned aircraft shall be registered in accordance with the acts adopted by the Commission when they operate unmanned aircraft whose operation presents risks to privacy or the protection of personal data, and that such unmanned aircraft shall be individually marked and identified.Footnote 86 It therefore seems that the obligations related to registration should apply whenever a drone with any sensor that allows the capture of private or personal data is used or is going to be used. Yet Regulation (EU) 2019/947 adds some ambiguity regarding registration: according to it, registration is mandatory only when operating a drone equipped with a sensor able to capture personal data, which covers only information related to an identified or identifiable natural person (data subject),Footnote 87 leaving aside data that strictly does not fall within the scope of the definition of personal data, even though the collection of such data could infringe the right to privacy. Moreover, the obligation to register does not apply when using drones that are considered to be toys within the meaning of Directive 2009/48/EC, although the latter can also be fitted with cameras, microphones, and various other sensors capable of capturing and storing both private and personal data.
Therefore, as the remote pilot of an unmanned aerial vehicle (UAV) and the pilot of a manned aircraft ultimately have the same responsibility for following the legal regulations when operating their aircraft,Footnote 88 the obligation to register is highly important when dealing with the issue of remote pilots’ anonymity, as it can ease traceability in cases of possible liability for failing to comply with those rules. This was also the standpoint of the European Parliament, which noted that, in line with a risk-based approach, all drones should be equipped with an ID chip and registered to ensure traceability, accountability, and the proper implementation of civil liability rules.Footnote 89
The same goes for the direct remote identification system,Footnote 90 which, despite the European Parliament’s expressed view that the question of identifying drones, of whatever size, is crucial,Footnote 91 does not apply to drones in the C0 class (i.e., drones with an operating mass of less than 250 g) or the C4 class (i.e., drones already made available on the market) within the meaning of delegated Regulation (EU) 2019/945.Footnote 92 Yet this system is essential for effectively addressing the anonymity of drone users and ensuring their traceability: remote ID is what enables individuals to take measures to protect their privacy from aerial surveillance by such drones. The grounds for this exception are hard to understand, because the obligation does not appear unsuitable, unnecessary, or disproportionate, even for drones already made available on the market, since under the same Regulation (EU) 2019/945 direct remote identification can be provided as a separate add-on, which users can retrofit on their drones themselves.Footnote 93 This is especially so given that, as early as 2017, one of the world’s leading drone manufacturers released a white paper outlining a concept in which each drone could transmit its location as well as a registration number or similar identification code, using inexpensive radio equipment that is already on board many drones today and could be adopted by all manufacturers.Footnote 94
Furthermore, the delegated Regulation (EU) 2019/945 sets out the requirements for a geo-awareness system, which should alert remote pilots when a potential breach of airspace limitations is detected so that they can take immediate and effective action to prevent that breach, for example, in areas where drone use is restricted owing to privacy concerns. But this system is not mandatory, let alone a geo-fencing system, which is omitted from the EU drone regulations altogether, although it could automatically prevent drones from entering or launching in restricted (no-fly) zones and help ensure privacy in these areas.Footnote 95 However, some drone manufacturers install geo-fencing systems in their drones voluntarily, which suggests that industry awareness in this respect runs ahead of that of the legislators.Footnote 96 What is more, pressing drone cybersecurity issues are not covered by the EU regulations either.Footnote 97
Of course, the EU legislator acknowledges the need to further develop requirements regarding the registration of drones and their pilots, as well as geo-awareness and remote identification systems, as these are seen as the foundations of the U-space system, which is being developed to integrate drones safely into the airspace.Footnote 98 However, given how quickly drone technology evolves in comparison with legal regulation, this step-by-step approach, based on the current state of technological development, risks lagging far behind the technology. It also does not correspond to the standpoint of the European Parliament, expressed in a 2015 resolution on the safe use of remotely piloted aircraft systems (RPAS) in the field of civil aviation, that the global regulatory framework for drones should be part of a long-term perspective that takes into account possible future developments.Footnote 99
From the global perspective, it is quite clear that a similar approach to drone regulation is being taken by legislators worldwide; for example, in the US,Footnote 100 Canada,Footnote 101 and Australia,Footnote 102 where specific requirements for drone registration, pilot licensing, built-in remote identification systems, operations above gatherings of people, minimum distance from airports, other people and property, geo-fencing and/or geo-awareness systems, among others, are being established. Some of these are already being implemented; others are still in progress (in a transitional period).Footnote 103 Meanwhile, in other European (non-EU) countries, such as the UK,Footnote 104 Norway,Footnote 105 and Iceland,Footnote 106 drone regulation is based on common EU drone rules or the latter are de facto applied. The aforementioned countries also share a risk-based approach – like the EU, other countries tend to differentiate drone regulations based on the type of operation (i.e., different flight purposes: recreational, commercial), drone size and weight, other drone characteristics (e.g., with or without camera), level of pilot competence, and so on. This regulatory approach in a sense materialises the vision of the European Parliament that a ‘harmonised and proportionate European and global regulatory framework needs to be developed on a risk-assessed basis, which avoids disproportionate regulations for businesses that would deter investment and innovation in the [drone] industry, whilst adequately protecting citizens’.Footnote 107
Although such an approach may at first glance seem beneficial for industry, this is not the case in the context of drone technologies. When the legislator avoids taking more decisive steps towards clear and well-defined provisions, a ‘chicken and egg’ problem arises: regulators are reluctant to develop standards until the industry comes forward with technologies for authorisation, while the industry is reluctant to invest in developing the necessary technologies without certainty about how they will be regulated. Such a deadlock is dangerous from the human rights perspective, suggests a failure on the part of the legislator to balance conflicting values, and does not help the industry to thrive. As the ECtHR has ruled on several occasions, it is essential to have clear and detailed rules on the subject, especially as the technology available for use is continually becoming more sophisticated, and there must be adequate and effective safeguards against abuses.Footnote 108 Such rules must form part of a legislative framework affording sufficient legal certainty, so that all parties can foresee the consequences for themselves.Footnote 109
This brings to mind the control dilemma elaborated by Collingridge, according to which influencing technological developments is easy while their implications are not yet manifest, but once those implications are known, they are difficult to change. In other words, when a technology is still at an early stage of development, it is possible to influence the direction of its development, but we do not yet know how it will affect society; once the technology has become societally embedded, we can recognise its implications, but by then it is very difficult to influence its development. Nevertheless, legislators should not put human rights at risk and should refrain from a step-by-step approach based on the current state of technological progress, as such an approach tends to lag behind the development of technologies. Shifting from reactive to proactive legislation and establishing a sufficiently clear and balanced legal framework could help foster further technological development while ensuring adequate protection of human rights. As the deployment of drones may inevitably raise tensions between the right to privacy and other human rights, a holistic approach must be taken when regulating the use of drones, focusing not merely on the protection of the right to privacy but also paying attention to other fundamental freedoms and human rights, and seeking a well-balanced legal framework.Footnote 110
In this regard, Article 52 § 1 of the Charter establishes that any limitation on the exercise of the rights and freedoms recognised by the Charter is subject to the principle of proportionality. This principle is also considered the most important tool for interpreting the ECHR and is widely applied by the ECtHR.Footnote 111 In continental (Romano-Germanic) legal systems, therefore, the proportionality principle prevails as a balancing method. Although the very concept of balancing is perceived slightly differently in non-continental legal systems,Footnote 112 the substantive objective remains the same: to strike a fair balance between conflicting legal values. Legislators worldwide should rely more on the principle of proportionality (or other legal balancing methods) when regulating the use of drone technology in order to reconcile conflicting human rights. As technological development in this field is inevitable and drone usage in modern society keeps growing rapidly, further discussion of these issues is of great importance.
14.4 Conclusions
This study has revealed that the use of drone technology can interfere with the right to privacy in diverse ways. Although many of the discussed dangers that the use of drones poses to the right to privacy are not, relatively speaking, new, the frequency and severity of privacy violations may increase significantly owing to the capabilities of drones. In this regard drones pose a dual threat: (a) as a technology that inexpensively ‘gives wings’ to other technologies (e.g., cameras, sound recorders, GPS, infrared and other sensors), thus allowing their use in a completely new environment (i.e., in the air) and opening up new surveillance possibilities, and (b) as a platform (base) that integrates various technologies into one whole, including the incorporation of AI technology, thus creating qualitatively new surveillance instruments.
However, various restrictions imposed on the use of drones in favour of privacy protection could undermine other human rights that are equally important; for example, the right to engage in work and the freedom to conduct a business, the right to property, freedom of expression, and the right to information, or even the right to life. This requires a holistic approach from legislators in order to strike a fair balance between these conflicting values. It is important, when regulating the usage of drone technology, to rely more on the proportionality principle (or other legal balancing methods) in order to reconcile conflicting human rights.
The legal response to the threats posed by the use of drones tends to lag behind the development of these technologies, and various extensive, poorly grounded exceptions are being established. As the example of the EU illustrates, further joint action must be taken to develop legal requirements regarding the registration of drones and their users, as well as geo-awareness and remote identification systems, while also establishing common rules on drone cybersecurity, geo-fencing, and other matters. More attention should be paid to by-design and by-default measures (e.g., minimisation of the data gathered by drones, and automatic anonymisation or removal of unnecessary data), possible obligations for online service providers (e.g., remote signal blocking and restrictions on data sharing), and AI-related issues in drone systems.
Legislators should foster the development of standards with a long-term perspective. To be effective, legislation has to shift from reactive to proactive and establish a more future-orientated legal framework that takes into account possible future developments, rather than a step-by-step approach based on the current state of technological progress. Industry, regulators, and the public must come together to seek a harmonised global regulatory framework and to guarantee legal certainty while balancing competing values. As drone usage in modern society keeps growing, it is of great importance to tackle these challenges in a timely manner and to ensure that the conditions are met for the safe and sustainable emergence of innovative drone services, enabling the industry to thrive while adequately addressing human rights concerns.
15.1 Introduction
Concerns over the harmful impact of online disinformation on the integrity of electoral processes, on democratic political cultures and values, and on human rights arose globally in 2016, when the UK held its referendum to leave the European Union (EU), and Donald Trump was first elected as president of the US.Footnote 1 Almost ten years later, following the experiences of the COVID-19 pandemic, wars in Ukraine and in Gaza, and the return of Trump to the US presidency, these concerns have not receded.Footnote 2 Worries over disinformation, colloquially referred to as ‘fake news’,Footnote 3 centre on its global reach and ease of access, but also on the thorny challenge of regulating online communication, and with it the power of platforms and search engines that facilitate the dissemination of inaccurate and misleading content. This chapter argues that the disinformation problematic is compounded by the proliferation of fake news via the practice of microtargeting, a term that describes the surgical spread of political and other messages to homogeneous social groups, drawing on the analysis of people’s personal data.Footnote 4 The chapter contends that the spread of microtargeted disinformation is of concern to human rights and democracy, as it distorts and fragments the information ecosystem, with harmful consequences for the right to freedom of expression and for democratic discourse. Data analytics techniques, which underpin microtargeting and serve as a vector for the dissemination of fake news, can lead to voter surveillance and interfere with the right to privacy. Elucidating the phenomenon of microtargeted online disinformation (MOD) and its effects on human rights and democracy propels my discussion, which is guided by three questions: What harms to human rights and democracy are produced by MOD? How can human rights law respond to these harms? What are the limits of human rights law?
How to respond to worries over MOD poses complex conceptual, sociological, and legal challenges. Regulatory efforts that seek to curb disinformation rub against legal protections of the right to freedom of expression, which is enshrined, for example, in Article 10 of the European Convention on Human Rights (ECHR) and in Article 11 of the Charter of Fundamental Rights of the EU.Footnote 5 There is also a real need to map and analyse the impact of microtargeted disinformation on a wider range of rights, including the right to privacy, and to unpack its broader implications for Article 10, such as its effect on the rights of minoritised voices. Paradoxically, despite the harmful effects of microtargeted disinformation on human rights, the chapter asserts that human rights law is ill-suited to address the full range of these harms. Considering the processing of personal data in the dissemination of disinformation, the chapter suggests that the protection of human rights has become displaced onto other legal regimes, such as data protection law, and is increasingly reliant on legal instruments with horizontal effect. Acknowledging the extensive regulatory activities within the EU, the chapter analyses a ‘European approach’Footnote 6 to microtargeted disinformation, examining selected recent EU legislative initiatives, the General Data Protection Regulation (GDPR),Footnote 7 the Digital Services Act (DSA),Footnote 8 the Artificial Intelligence Act (AIA),Footnote 9 and the Regulation on the Transparency and Targeting of Political Advertising,Footnote 10 and considers their capacity to regulate MOD and to mitigate potential harms to human rights and democracy. There are further reasons for examining the EU’s regulatory initiatives. First, EU legal instruments, such as the GDPR, are said to provide a regulatory ‘gold standard’, which is emulated in non-EU jurisdictions.
Second, the extraterritorial dimension of EU law, the much invoked ‘Brussels effect’,Footnote 11 sets regulatory standards beyond the jurisdictional borders of the EU. Third, while EU human rights provisions defer to the norm-setting power of the Council of Europe and the jurisprudence of the European Court of Human Rights (ECtHR), EU secondary law, specifically EU regulations, provide bespoke regulatory tools with horizontal direct effect.
The chapter is structured as follows. Section 15.2 expounds the phenomenon of online disinformation and surveys concerns about the threat of disinformation to human rights and democracy. Drawing on the ECHR, specifically Article 10, and on selected case law of the ECtHR, Section 15.3 discusses the intersection of the right to freedom of expression with online disinformation and analyses the potential consequences for the regulation of disinformation and for a wider suite of Convention rights. Section 15.4 centres on the role and impact of microtargeting in the dissemination of disinformation, while Section 15.5 examines selected regulatory initiatives in the EU, focusing on the GDPR, the DSA, the AIA, and the Regulation on the Transparency and Targeting of Political Advertising. Section 15.6 summarises the main points covered in the chapter and identifies areas that require further work.
15.2 From Fake News to (Online) Disinformation: Challenges for Human Rights and Democracy
The rapid spread of online communication and the attendant ease of access to online content, facilitated by the rise of social media platforms such as Facebook, X, or TikTok, and by search engines such as Google, have enhanced the capacity for disseminating and receiving information, and increased the range and scope of civic engagement. These new opportunities for communicating and connecting have been welcomed as a way of informing and empowering people, and helping them to enjoy their human rights, such as the right to assembly and association.Footnote 12 However, online content can also propel the dissemination of inaccurate and frequently misleading content to levels previously unimaginable. The term ‘fake news’, which was popularised during Donald Trump’s first tenure as US president (2017–21), has become shorthand for this type of content. Despite its widespread use, ‘fake news’ lacks an agreed definition and there is no consensus on the range of expressions that it refers to. These can vary from the entertaining barb of political satire to maleficent attempts that seek to damage public trust and confidence in the integrity of electoral processes and elected representatives,Footnote 13 and in public policy. For example, concerns over fake news accompanied the UK Brexit referendum in 2016 and the US presidential elections of 2016 and 2020.Footnote 14 Fake news stories included inaccurate reports about Turkey joining the EU, playing on the fears of UK voters over immigration, and erroneous claims that the 2020 US presidential election was ‘stolen’.Footnote 15 There have also been concerns about mistaken and misleading information with respect to the COVID-19 pandemic that has sought to undermine public policy efforts aimed at combating the disease.Footnote 16 Moreover, it should be stressed that faking is not limited to the dissemination of written text, such as tweets or Facebook posts.
It extends to the manipulation of voices and images – so-called deepfakes – that can heighten mistrust in political leaders, institutions, and even national security.Footnote 17 Deepfakes gained notoriety when a speech delivered by Nancy Pelosi, the former leader of the Democrats in the United States House of Representatives, was altered to make it sound slurred. This alteration created the inaccurate impression that Pelosi was intoxicated. Such alteration is deceitful, and it can undermine the credibility of democratically elected politicians or of those preparing to stand for public office.Footnote 18 Furthermore, despite the contemporary interest in online disinformation, it is worth noting that fake news is not an invention of the digital age: from Octavian’s fake news info war against Mark Antony,Footnote 19 to the humorous 1835 hoax of batmen hunting bison on the moon,Footnote 20 and Joseph Goebbels’ Nazi propaganda apparatus, fake news has been part of public political life for more than two millennia. However, developments in the field of digital technologies, propelled by advances in artificial intelligence (AI), have put the propensity to spread fake news on steroids: fake news created by a few can reach millions of users at the push of a button.
How to deal with fake news has emerged as a central issue for policymakers and scholars. One of the key challenges relates to the aptness of the expression ‘fake news’. Despite its widespread use, the term has been discarded as loaded, deployed to discredit political opponents and critical media coverage of politicians and policies. For example, the UK House of Commons Digital, Culture, Media and Sport Committee report Disinformation and ‘fake news’: Final Report,Footnote 21 one of the most thorough treatments of this topic, rejects the term and suggests instead the adoption of the terms ‘misinformation’ and ‘disinformation’. The report defines misinformation as the ‘inadvertent sharing of false information’, while disinformation constitutes ‘the deliberate creation and sharing of false and/or manipulated information that is intended to deceive and mislead audiences, either for the purpose of causing harm, or for political, personal or financial gain’.Footnote 22 These definitions align with proposals developed by the EU’s High Level Expert Group on Fake News and Online Disinformation, which conceives of disinformation as ‘verifiably false or misleading information … which cumulatively … is created, presented and disseminated for economic gain or to intentionally deceive the public … and may cause public harms [as] threats to democratic political and policymaking processes as well as public goods such as the protection of EU citizens’ health, the environment or security’.Footnote 23 A third category of fake news, malinformation, refers to ‘genuine information shared with the intention to cause harm’,Footnote 24 such as defamatory content.
Despite a broad preference for the term ‘disinformation’, there is no consensus regarding its impacts: whether these impacts constitute harm, who or what is being harmed, and how to address such harms. Counselling against an ‘overdose of US perspectives’,Footnote 25 Bodo et al. contend that worries over disinformation amount to a moral panic, which is said to originate in US political culture and in the dominance of US perspectives on disinformation, and which is not applicable beyond the US.Footnote 26 Meyer and Marsden also assert that ‘evidence of large-scale harm is still inconclusive in Europe’.Footnote 27 In particular, there is a dearth of verifiable empirical evidence that could demonstrate a significant effect of disinformation campaigns on electoral outcomes. However, this lack of empirical evidence does not diminish widespread concerns about the politically dangerous and potentially harmful impact of disinformation on democracy and on the integrity of electoral processes. For example, the EU regards online disinformation practices as ‘public harms’ and ‘threats to our way of life’.Footnote 28 These harms and threats are said to undermine trust and confidence in democracy, in public discourse, and in human rights.Footnote 29 Within public political discourse, concerns about disinformation harms have conjoined with broader worries over online harms, adding force to calls for the regulation of online content.
To date, the debates have centred on harms to individuals regarded as vulnerable, specifically individuals with protected characteristics such as children, women, LGBTQ+ people, or people from ethnic minority backgrounds – social demographics who are frequently subjected to misogynistic, homophobic, transphobic, or racist hate speech online, or to online sexual abuse or threats of violence.Footnote 30 An emerging consensus that ‘the online and offline worlds cannot neatly be separated’,Footnote 31 that what is prohibited offline should be prohibited online, underpins the deliberations about the regulation of online communications.
However, the format and precise modalities of regulation remain contested and have emerged as a key challenge. Commenting on broader attempts to come up with a suitable regulatory design, Lilian Edwards asks whether we should ‘regulate by law … refuse to regulate till a clear path can be seen, or … turn to soft law, self-regulation, “co-regulation”, codes of conduct, technical standards, trustmarks, ethical charters, user democracy, who knows?’.Footnote 32 While attempts to regulate the dissemination of illegal content have turned to criminal law,Footnote 33 there are no quick and easy fixes that can offer effective, meaningful, and lawful ways to deal with disinformation. Designing regulatory instruments that can address the threats and harms posed by online disinformation generates seemingly intractable problems, which converge on three aspects. First, the diffuse and opaque nature and extent of disinformation harms, and their typically intangible effects on society and on societal values such as human rights and democracy, complicate regulatory efforts.Footnote 34 Scholarship on the societal harms of new technologies is only beginning to emerge,Footnote 35 while, as highlighted earlier, the harms caused by disinformation, notwithstanding significant normative concerns, remain empirically unproven. Second, disinformation operates in a novel information landscape, which lacks traditional (editorial) gatekeepers, creates online filter bubbles, and facilitates the spread of online (dis-)information to previously unimaginable levels and across jurisdictional boundaries.
National regulatory landscapes have been described as opaque and fragmented, with overlaps and gaps,Footnote 36 while the speed and global reach with which the disinformation ‘infodemic’ infects public discourse limit the effectiveness of national regulation and require instead collaboration,Footnote 37 and the difficult work of consensus building, at international, or at the very least regional, level.Footnote 38 There is also considerable unease about the role of private companies in the regulatory architecture, for example, whether they should be tasked with the sensitive role of regulating, and possibly censoring, online content. Third, tools such as algorithmic content moderation can be used to remove or block content, but these are blunt instruments that lack contextual understanding, such as the ability to distinguish satire from harmful disinformation or illegal content.Footnote 39 Moreover, different platforms and search engines operate differently and may require bespoke regulatory tools. For example, while X (formerly Twitter) is an open platform, others, such as WhatsApp, offer end-to-end encryption, making them less transparent but no less effective in the spread of disinformation.Footnote 40
These are important considerations for any analysis of disinformation, but this chapter’s main concern, and the focus of Section 15.3, is the linkage between disinformation and human rights that crystallises around the right to freedom of expression. To preview my argument, the ill-considered regulation of online content, including disinformation, may lead to potentially unlawful, unnecessary, and disproportionate interferences with the right to freedom of expression.Footnote 41 Rather than addressing the threats posed by disinformation, such interferences may generate new harms: they may undermine the functioning and values of the democratic processes that critics of unregulated speech worry about. Therefore, online disinformation is not amenable to broad-brush regulation, a factor that adds substantially to the difficulty of responding effectively to its harmful impact.
15.3 Freedom of Expression, Democracy, and Online Disinformation
The challenges posed by the spread of online disinformation are significant, but they should not deter efforts to develop regulatory instruments. As one commentator quipped, ‘the time for simply admiring the problem is over’.Footnote 42 However, as already stated, regulating online content, including disinformation, faces major hurdles. One such hurdle stems from states’ obligations with respect to the right to freedom of expression: these obligations generate a complex terrain for regulatory interventions into disinformation and pose significant barriers to actions that could be construed as interfering in human rights. Drawing on the framework of the ECHR, this section plots how the right to freedom of expression intersects with the spread of online disinformation. The discussion begins with an exposition of the right to freedom of expression in Europe’s regional human rights regime before problematising the legal and conceptual limitations of such a focus. The discussion demonstrates, first, that the right to freedom of expression limits states’ scope to regulate disinformation. Second, it will be argued that reading disinformation harms exclusively through the lens of freedom of expression does not suffice. The section proposes instead a nuanced human rights analysis, which attends to the impact of disinformation on diverse groups and which considers how disinformation impacts a range of other human rights beyond freedom of expression. Third, despite disinformation’s threat to human rights, I suggest that human rights law does not provide sufficient protection from the human rights harms caused by online disinformation.
The right to freedom of expression is enshrined in the system of international human rights law, which emerged in the aftermath of the Second World War. This right imposes negative and positive obligations on states to respect, protect and promote human rights.Footnote 43 Complementing international legal obligations, European human rights provisions for freedom of expression derive from Article 10 of the ECHR and from the jurisprudence of the ECtHR (or Strasbourg Court). Article 10 stipulates that:
1. Everyone has the right to freedom of expression. This right shall include freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers. This Article shall not prevent States from requiring the licensing of broadcasting, television or cinema enterprises.
2. The exercise of these freedoms, since it carries with it duties and responsibilities, may be subject to such formalities, conditions, restrictions or penalties as are prescribed by law and are necessary in a democratic society, in the interests of national security, territorial integrity or public safety, for the prevention of disorder or crime, for the protection of health or morals, for the protection of the reputation or rights of others, for preventing the disclosure of information received in confidence, or for maintaining the authority and impartiality of the judiciary.Footnote 44
The protections provided by Article 10 are not limited to speech and cover instead a wide range of expressions. These include artistic and commercial expressions, the publication of photographs, forms of conduct, rules governing clothing, and the use of the ‘Like’ button or similar expressions on social media networks. Article 10 also protects media freedom and grants a limited margin of appreciation with respect to interferences with journalistic expressions.Footnote 45 Moreover, freedom of expression extends to different modes of receiving and imparting information and of disseminating one’s expression.Footnote 46 However, not all forms of expression are granted equal protection. Two related aspects are noteworthy in this context: First, the ECtHR presumes a hierarchy of expressions, which accords political speech the highest form of protection, followed by artistic and commercial speech.Footnote 47 Second, freedom of expression has special significance within the context of a democratic society. This principle is enshrined in the ECHR Preamble, which confirms a ‘profound belief in those fundamental freedoms which are … best maintained on the one hand by an effective political democracy and on the other by a common understanding and observance of the human rights upon which they depend’.
The importance of the right to freedom of expression within the system of ECHR rights is reflected in the case law of the ECtHR.Footnote 48 The leading case of Handyside v. the UK (1976) established that freedom of expression is ‘[i]ndissociable from democracy’,Footnote 49 and ‘one of the essential foundations of … a [democratic] society, one of the basic conditions for its progress and for the development of every man’. This principle is stressed in an often-cited passage in Handyside, which asserts that freedom of expression
is applicable not only to ‘information’ or ‘ideas’ that are favourably received or regarded as inoffensive or as a matter of indifference, but also to those that offend, shock or disturb the State or any sector of the population. Such are the demands of that pluralism, tolerance and broadmindedness without which there is no ‘democratic society’. … every ‘formality’, ‘condition’, ‘restriction’ or ‘penalty’ imposed in this sphere must be proportionate to the legitimate aim pursued.Footnote 50
Subsequent judgments have reinforced the view that ‘freedom of political debate is at the very core of the concept of a democratic society which prevails throughout the Convention’.Footnote 51 For example, in Lingens v. Austria (1986), the Strasbourg Court proclaims that:
freedom of expression … constitutes one of the essential foundations of a democratic society and one of the basic conditions for its progress and for each individual’s self-fulfilment. … it is applicable not only to ‘information’ or ‘ideas’ that are favourably received or regarded as inoffensive or as a matter of indifference, but also to those that offend, shock or disturb.Footnote 52
That Article 10 ‘enjoys a very wide scope, whether with regard to the substance of the ideas and information expressed, or to the form in which they are conveyed’,Footnote 53 is further emphasised in Mathieu-Mohin and Clerfayt v. Belgium (1987), which reiterates the ECHR Preamble’s link between fundamental human rights and freedoms as ‘best maintained by “an effective political democracy”’,Footnote 54 and which enshrines in particular the ‘prime importance’ of free expression in free elections.Footnote 55 Bowman v. the United Kingdom (1998), which draws on Lingens v. Austria (1986), further emphasises the protection of freedom of expression and, in particular, of political speech, described as ‘the bedrock of any democratic system’:
Free elections and freedom of expression, particularly freedom of political debate … are inter-related and operate to reinforce each other … freedom of expression is one of the ‘conditions’ necessary to ‘ensure the free expression of the opinion of the people in the choice of the legislature’ … For this reason, it is particularly important in the period preceding an election that opinions and information of all kinds are permitted to circulate freely.Footnote 56
The protection of the right to freedom of expression, its special role in democratic societies, and the extension of this right to different modes of imparting and receiving information has also been confirmed in a series of cases relating to digital communication, specifically the digital dissemination of political speech.Footnote 57 Flipping the claim that what is prohibited offline should be prohibited online, one may read the ECtHR jurisprudence as an assertion of the claim that what should be protected offline should be protected online. In its ‘Guide to Article 10’, the Court confirms the ‘innovative character of the Internet’ and states that ‘user-generated expressive activity on the Internet provides an unprecedented platform for the exercise of freedom of expression’.Footnote 58 The wide scope of Article 10, together with the principles articulated in the ECHR Preamble and in subsequent ECtHR jurisprudence, suggests that online disinformation may not be inherently unlawful and may, in fact, be protected in accordance with Article 10. Therefore, attempts to regulate online disinformation may constitute an interference with the right to freedom of expression. As is well known, interferences with Article 10, for example in the interest of national security, public safety, or other matters, as specified in Article 10(2), must be assessed against the ECtHR’s tripartite test of legality, legitimate aim, and necessity in a democratic society, which incorporates proportionality; they must also consider the additional import bestowed on political speech. These protections impose limits on interferences with the right to freedom of expression, and it is reasonable to surmise that these limits extend to attempts at interference with online disinformation.
Human rights law, specifically Article 10, creates a knotty problem for tackling online disinformation: as argued earlier, there are compelling normative concerns about the harmful impact of disinformation on human rights, yet we may plausibly conclude that disinformation can avail itself of the protections offered by Article 10. Addressing this problem requires a shift in perspective. Two aspects merit particular attention. First, although the regulation of disinformation risks disproportionate and unlawful interference with the right to freedom of expression, unfettered speech, whether offline or online, can also harm the wider communication ecosystem by silencing minoritarian voices.Footnote 59 Judit Bayer argues that ‘paradoxically from the perspective of Article 10 of ECHR, freedom of speech was to be restricted with the objective to preserve a sound informational environment; because pluralism of views, and ultimately the democratic process would otherwise have been distorted by the speech in question’.Footnote 60 Acknowledging these broader effects of disinformation calls for more granular analyses, which study how disinformation impacts the right to freedom of expression for a diverse range of individuals and groups. Online disinformation can generate structural conditions that silence and exclude marginalised voices, and thus restrict their enjoyment of the right to freedom of expression. Second, conjoining critical analyses of disinformation exclusively with the right to freedom of expression risks losing sight of the effects of disinformation on the wider communication and information ecosystem and on a wider range of human rights, including the right to privacy (Article 8 ECHR), freedom of assembly and association (Article 11), the prohibition of discrimination (Article 14), or the prohibition of an abuse of rights (Article 17).
This calls for a broader engagement with human rights law, beyond Article 10, and, as will be discussed in the remainder of the chapter, with legal regimes that offer new or additional protections for human rights.
15.4 Microtargeting: Benefits and Harms to Human Rights and Democracy
What compounds concerns about the impact of online disinformation on human rights and democracy and complicates regulatory efforts is the practice of microtargeting. The term ‘microtargeting’ describes the surgical, selective, and frequently opaque dissemination of tailored political or commercial communication to pre-identified, typically homogeneous audiences.Footnote 61 Through the use of data analytics, microtargeting can generate audience profiles based on social demographics such as gender, age, or ethnicity, but also philosophical beliefs or political opinions. These segmented audiences, for example, of consumers or voters, can be strategically targeted with bespoke messages.Footnote 62 The data typically required for microtargeting is personal data, defined in Article 4(1) of the GDPR as:
any information relating to an identified or identifiable natural person (‘data subject’) … who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.
How such personal data is acquired can vary. It can be provided by the data subject (‘provided data’), such as a person’s political opinions, which he or she may share on social media platforms. It can also be processed through the use of cookies or tracking pixels (‘observed data’). Or it can involve inferred data, which is based on probabilities emerging from the analysis of provided or observed data by finding ‘correlations between datasets and using these to categorise or profile people, e.g., calculating credit scores or predicting future health outcomes’.Footnote 63
Microtargeting has attracted significant public interest since the mid-2010s, when its deployment became associated with the practices of the data analytics company Cambridge Analytica.Footnote 64 Cambridge Analytica created data profiles of typically undecided voters, which aligned with their social media profiles, and which were used to target comparatively small cohorts of voters with selective and often inaccurate messages, especially on highly emotive issues such as immigration. However, it is important to stress that microtargeting is not inherently political in either content or objectives, and that it can be deployed for a range of purposes, including commercial goals.Footnote 65 This is because microtargeting techniques are equally amenable to commercial or political ends, or a combination of both.Footnote 66 Moreover, there is no inherent reason to associate microtargeting with disinformation: microtargeting is not intrinsically wedded to the spread of disinformation, while disinformation can proliferate without resorting to microtargeting practices. However, the conjoining of online disinformation with the practice of microtargeting adds to the significant concerns about harms to elections, to democratic practices and democratic political cultures, and to human rights. This also extends the remit of analysis, beyond a focus on freedom of expression, to include data protection and privacy issues.
Despite extensive scholarly and public political interest in microtargeting and its respective benefits and harms, its effectiveness remains contested. For example, Borgesius et al. remind us that voters do not live in digital bubbles and may not be receptive to microtargeted messages.Footnote 67 This may diminish the capacity of microtargeted messages to influence the outcome of elections. We may also surmise that companies that offer microtargeting services may exaggerate their value.Footnote 68 This lack of empirical evidence on the effectiveness of microtargeting could, potentially, alleviate concerns over its harms. However, suggestions that microtargeting, whether deployed in political or commercial marketing campaigns, may offer benefits to the sender and receiver of targeted messages indicate a continued confidence in its usefulness.Footnote 69 One of its alleged benefits is said to derive from its shift from ‘broadcasting’, a term that depicts the wide-ranging spread of political or commercial messages to general audiences, to ‘narrowcasting’, which provides receivers, such as voters or consumers, with information on issues that selected audiences regard as relevant, and that speak directly to their interests, needs, or concerns.Footnote 70 For example, in the field of commercial advertising, microtargeting can support the marketing of goods and services to consumers in search of specific products, and reduce advertisement overload for those who are not interested in these products. Microtargeting can also support the transmission of public policy messages in fields such as health or welfare.Footnote 71 For example, microtargeting has been deployed to communicate tailored skin cancer prevention messages to young women using sunbeds,Footnote 72 or to disseminate information about welfare programmes to people living in deprived areas.Footnote 73
Targeting segmented audiences with bespoke messages is also said to offer distinct benefits in the democratic process. For example, microtargeting may engage hard-to-reach individuals and communities, typically those who are ‘switched off’ from the political process. (Re-)engaging sections of the electorate with political issues and electoral processes may offer non-partisan benefits that, rather than undermining democratic politics, may in fact strengthen democratic systems. However, its main advantage is said to lie in the partisan benefits to political campaigns. Although there is no consensus on whether microtargeting can mobilise undecided voters, there is a view that it can ‘activate the base … and improve partisan turnout’.Footnote 74 This explains why microtargeting has been deployed so widely in election campaigns and referenda.
Critical perspectives on microtargeting contend that a focus on empirical evidence about its effectiveness with respect to elections is blind to the substantial threats, including its weaponisation of personal data and its effect on democratic infrastructures.Footnote 75 These analyses ground their arguments in normative claims about the harmful effects of microtargeting that are bound up with three interrelated issues: first, the capacity of microtargeting to disseminate disinformation to selected audiences; second, its effects on the information ecosystem and on the right to freedom of expression; and third, the consequences for privacy and data protection. For example, there are concerns that microtargeted disinformation may suppress voter turnout, and that the use of microtargeting to mobilise small groups of voters, typically in swing states in US elections, can focus on so-called wedge issues. These are polarised issues that can frame and dominate public political discourse and undermine the coherence of the wider polity by detracting attention from issues that concern voters across the party-political spectrum.Footnote 76 These concerns intersect with worries over the impact of microtargeting on privacy, specifically with the harvesting of personal data, and attendant concerns over data protection, data security, and voter surveillance, based on the use of predictive analytics and the processing of inferred data.Footnote 77 There are additional normative concerns that the impact of microtargeting extends beyond individual privacy. For example, Bennett and Lyon contend that it generates collective and societal effects, which are ‘not just about privacy, but even more so about data collection and governance, freedom of expression, disinformation, and democracy itself’.Footnote 78 Zittrain asserts that what he calls ‘digital gerrymandering’ is ‘not a wrong to a given individual user, but rather to everyone, even non-users’.Footnote 79
Such worries about the collective and systemic effects of microtargeting also inform the work of Judit Bayer. Addressing the impact of microtargeting on democracy, she calls for a restriction of microtargeting, not because of its alleged manipulation of voters and interferences with the right to privacy, but because it fragments public discourse and threatens the democratic process. According to Bayer, microtargeting presents a double-harm: it comprises the harm of being targeted, but also the harm of not being targeted, which, according to her, constitutes a violation of informational rights, a potential ‘mass violation of human rights’ of all those who are not targeted.Footnote 80 She declares that the right to receive information and the right to freedom of expression are complementary: ‘[w]hen the right to receive information is violated, it is freedom of expression in its broader sense, which is violated.’Footnote 81 Conjoining the analyses offered by Bennett and Lyon, Zittrain, and Bayer, we may conclude that the profiling and microtargeting of online users, based on the ‘analysis of their data profile’ creates an information asymmetry,Footnote 82 which violates informational rights,Footnote 83 and shatters the ‘shared world’ of political deliberation.Footnote 84 A fragmented and opaque (dis-)information basis of public political discourse, which targets selected social media users with personalised (dis-)information based on data analytics techniques poses harm to the democratic polity, culminating in the ‘polarisation and fragmentation of the public sphere’.Footnote 85 It leaves in its wake citizens who are informationally isolated and atomised.
Despite these normative concerns about the harms of microtargeted online disinformation (MOD) to human rights and democracy, a consensus on how it should be regulated remains elusive. As highlighted in Section 15.3, human rights law creates significant barriers to interferences with freedom of expression; these barriers extend, potentially, to disinformation practices. There is as yet no ECtHR jurisprudence on the issue of disinformation,Footnote 86 and we may conclude that online content, including disinformation, and the mode of delivery – microtargeting – are protected under the broad umbrella of Article 10. Attention has turned instead to legal regimes beyond human rights law. Section 15.5 illustrates this shift in focus with respect to selected EU legal instruments.
15.5 Regulating MOD: The EU’s Legislative Actions
The notion of a ‘marketplace of ideas’ popularised by Justice Holmes’s dissenting judgment in Abrams v. United States implies that the self-regulatory capacity of a public realm, constituted by uncoerced dialogue, is sufficiently inoculated against the effects of harmful speech.Footnote 87 More than a century later, there are well-founded concerns that Holmes’s vision of self-regulation may not suffice to counter the harms generated by unfettered online content. Therefore, despite the affordances of online platforms and search engines to expand opportunities for freedom of expression and other human rights, there are persistent worries that online disinformation compounds harms to individuals and communities and exerts chilling effects on democracy and human rights.Footnote 88 It is against this backdrop that the regulation of online expression and of platforms and search engines has attracted growing attention.Footnote 89 However, as discussed earlier in the chapter, balancing the right to freedom of expression, which includes the right to ‘offend, shock or disturb’,Footnote 90 with protection from the individual, collective, and societal harms generated by MOD poses seemingly intractable challenges. Regulating online expressions may not provide a sufficiently granular response to the specific threats posed by MOD. In fact, one could plausibly argue that content regulation constitutes a regulatory scattergun, which risks missing its intended target – MOD – because it is blind to the diverse types of online communications and their attendant challenges.
Recent EU policy and legislative initiatives promise more granular approaches to deal with this problem. They are anchored in EU primary law, including Article 16 of the Treaty on the Functioning of the EU and the Charter of Fundamental Rights (hereinafter the Charter), the EU’s premier human rights instrument. The latter includes the right to privacy (Article 7 of the Charter), the right to protection of personal data (Article 8 of the Charter), and the right to freedom of expression and information (Article 11 of the Charter). The EU’s legislative programme is supported by a series of agenda-setting policy recommendations, Communications, and soft law instruments, which offer additional, albeit ultimately non-enforceable, ways to regulate the dissemination of disinformation via the practices of microtargeting. They include the European Commission’s ‘2030 Digital Compass: the European way for the Digital Decade’,Footnote 91 the European Democracy Action Plan;Footnote 92 the Strengthened Code of Practice on Disinformation,Footnote 93 and the European Declaration on Digital Rights and Principles for the Digital Decade.Footnote 94 Also noteworthy are statements and opinions issued by the Article 29 Data Protection Working Party, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS).Footnote 95 Of key interest is an emerging suite of EU secondary law, which centres on the main elements involved in the dissemination of MOD, such as the regulation of data processing, the regulation of online platforms and search engines, the regulation of AI systems, and the regulation of political advertising. Despite differences in material scope, these laws share three features: first, they have direct horizontal effect, which imposes rights obligations not just on states or an emanation of a state, but also on non-state actors, including private companies. 
Second, they share loopholes, wide-ranging exemptions, and monitoring and enforcement gaps, which are compounded by the significant regulatory roles accorded to private actors. Third, they manifest an unresolved tension between the EU’s twin objectives of protecting the internal market and promoting innovation on the one hand, and its obligation to protect fundamental rights on the other.
15.5.1 Regulating Data Processing: The GDPR
The use of data analytics in microtargeting underscores the imperative to protect personal data, as enshrined in Article 8 of the Charter, and highlights the importance of data protection laws in the regulation of online content. The GDPR, the EU’s most established and perhaps most widely known legislative instrument, centres on the processing of personal data (Article 4 of the GDPR), while its provisions pursue two goals: first, respect for fundamental rights and freedoms, including the right to protection of personal data; and second, the creation of harmonised rules within the EU internal market (Recital 2).Footnote 96 To deliver on these objectives, the GDPR puts forward a series of data protection principles, including transparency, purpose limitation, data minimisation, accuracy, and integrity and confidentiality (Article 5),Footnote 97 which are intended to guide the processing of personal data. As stated in Recital 4, the GDPR ‘respects all fundamental rights and observes the freedoms and principles recognised in the Charter as enshrined in the Treaties’, including the right to freedom of expression (Article 11 of the Charter). Thus, data protection principles, and the right to the protection of personal data, are not absolute but must be balanced against other Charter rights.
The GDPR is ‘technologically neutral’ (Recital 15); it is not designed to regulate the dissemination of disinformation in general or the use of microtargeting techniques in particular. However, its provisions for the processing of personal data provide tools that can be used to regulate the microtargeting of voters via the use of inferred data. Four aspects are particularly noteworthy. First, the GDPR’s extraterritorial scope (Article 3) extends the application of the Regulation beyond the EU and may affect the dissemination of microtargeted online content that originates outside the EU. Second, it connects the right to the protection of personal data with the right to freedom of expression and information, including processing for journalistic purposes (Article 85(1)). Third, it prohibits the processing of ‘special categories of data’, including that of racial or ethnic origin, political opinion, philosophical beliefs, or health data (Article 9(1)). And fourth, it includes the right not to be subject to a decision based solely on automated processing, including profiling (Article 22(1)). Articles 9(1) and 22(1), separately as well as combined, appear to offer robust protection against the harmful effects of data analytics practices, including those that underpin microtargeting. However, each of the two articles is diluted with a range of exemptions. For example, Article 9(2d) provides exemptions for the processing of special category data for associations with political aims or where data subjects have made their personal data public (Article 9(2e)).
Analysing the GDPR’s provisions, Blasi Casagran and Vermeulen have also identified compliance issues with respect to political microtargeting, specifically with the data protection principles as outlined in Article 5 of the GDPR, including lawful processing, purpose limitation, data minimisation, data accuracy, and data accountability.Footnote 98 Their concerns align with those of the EDPB, which highlights the risks of microtargeting not only for ‘individuals’ rights to privacy and data protection, but also to wider trust in the integrity of democratic processes themselves.Footnote 99 The EDPB has counselled that derogations from Article 9 ‘should be interpreted narrowly, as it cannot be used to legitimate inferred data’.Footnote 100 There are additional concerns over the enforcement of GDPR provisions, and over tensions in the way human rights protection is balanced against other interests, including the development of harmonised rules across the EU’s internal market.
15.5.2 Regulating Online Platforms: The DSA
While the GDPR focuses on the processing of personal data, the DSA, which entered into force on 16 November 2022, is the EU’s bespoke tool for dealing with the business models and practices of online service providers.Footnote 101 Like other EU legislative actions, the DSA speaks to the EU’s twin concerns of protecting the internal market via a harmonisation of ‘uniform, effective and proportionate mandatory rules’ (Recital 4) while also protecting fundamental rights enshrined in the EU Charter. Its main objectives are ‘to prevent illegal and harmful activities online and the spread of disinformation’.Footnote 102 Recognising the ‘inherently cross-border nature of the internet’ (Recital 2), the DSA establishes a graduated and tailored approach, which imposes increasingly stringent due diligence obligations on intermediary service providers, depending on their size and reach. The most stringent obligations are imposed on very large online platforms and very large online search engines, which Article 33(1) defines as having a number of average monthly active recipients of the service in the EU equal to or higher than 45 million.
The regulatory guidelines presented in the DSA are technology neutral. Its primary concern lies with a ‘safe, predictable and trusted online environment’ (Recital 9), pursued through a graded, risk-based approach. Three categories of risk are identified: first, the dissemination of illegal content and the conduct of illegal activities, such as child sexual abuse or terrorist content; second, the impact of digital services on the exercise of fundamental human rights; and third, the manipulation of platform services with an impact, among other things, on civic discourse and electoral processes. The DSA requires intermediary service providers to have due regard to relevant international standards for the protection of human rights, including freedom of expression, media freedom, and pluralism (Recital 47). It acknowledges the wider impact of disinformation on fundamental rights (Recitals 9 and 86) and on democratic processes (Recital 82), and it recognises the systemic risks of disinformation and its impact on society and democracy (Recital 104), including the risks posed by online services to ‘civic discourse and electoral processes, and public security’ (Article 34(1)(c)). More specifically, the DSA acknowledges risks stemming from the use of online advertisements and associated targeting techniques. It stipulates that ‘providers of online platforms should not present advertisements based on profiling as defined in Article 4, point (4), of Regulation (EU) 2016/679, using special categories of personal data referred to in Article 9(1) of that Regulation, including by using profiling categories based on those special categories’ (Recital 69). Despite this recognition of the harms of disinformation, however, references to disinformation are relegated to the non-binding recitals. With its focus on illegal activities, the DSA offers few effective interventions into disinformation practices.
15.5.3 Regulating Techniques: The AIA
Broadly mirroring the approach of the DSA, the AIA focuses on the regulation of AI systems.Footnote 103 An AI system is defined as a ‘machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments’ (Article 3(1)). The Act’s purpose, as outlined in Recital 1, is ‘to improve the functioning of the internal market by laying down a uniform legal framework’ that strikes a balance between fostering the development of a single market for lawful, safe, and trustworthy AI applications while ensuring the protection of fundamental rights and societal values, including democracy and the rule of law. To deliver on this goal, the AIA adopts a risk-based approach, which categorises AI systems into four risk levels, ranging from unacceptable and thus prohibited risks through high risk and limited risk to minimal or no risk. Its legally binding provisions delineate obligations and responsibilities for AI developers and users, seeking to foster an AI landscape that promotes human-centric and trustworthy AI systems and ensures a high level of protection of health, safety, and fundamental rights (Article 1(1)).
Although the Act has not been designed as a bespoke human rights instrument – the AIA refers to fundamental rights – it acknowledges concerns about AI’s impact on rights at various stages of the AI lifecycle, from design and development to the placing of AI systems on the market. Its provisions for human rights protection are part of a broader mission, which seeks to harmonise the legal regulation of AI across the EU and foster innovation aimed at establishing the EU as a ‘global leader in the development of secure, trustworthy and ethical AI’ (Recital 8). Concerns over AI’s impact on human rights feature prominently in the (non-binding) recitals and in the articles. The Act promotes alignment with EU values of ‘respect for human dignity, freedom, equality, democracy and the rule of law and fundamental rights’ (Recital 28). To operationalise these concerns, the Act introduces a series of regulatory governance techniques. These include an obligation to conduct fundamental rights impact assessments for high-risk AI systems,Footnote 104 which applies to AI deployers governed by public law or to private entities providing public services (Article 27), transparency obligations (Article 50), and the reporting of serious incidents (Article 73). The Act also stipulates the voluntary application of codes of conduct, including adherence to EU guidelines on ethical AI (Article 95(2)(a)), facilitating inclusive and diverse AI design through inclusive and diverse development teams and stakeholder participation (Article 95(2)(d)), assessing and preventing the negative impact of AI on vulnerable groups and on gender equality (Article 95(2)(e)), and establishing an advisory forum to the AI Board with civil society representation (Article 67(2)).
There is a recognition of the potentially maleficent impact of high-risk AI systems on democracy and democratic processes, primarily in the recitals. Recital 110 addresses the capabilities of general-purpose AI models, specifically their capacity to facilitate the spread of disinformation and pose threats to democratic values and human rights. Developing this theme, Recital 120 considers how AI systems deployed by very large online platforms and very large online search engines can disseminate artificially generated or manipulated content, with ‘actual or foreseeable negative effects on democratic processes, civic discourse and electoral processes, including through disinformation’ (see also Recital 136). The AIA makes specific reference to deepfakes, defined as ‘AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful’ (Article 3(60)). It imposes a specific obligation on the deployers of deepfakes to disclose that the content has been artificially generated or manipulated; a similar disclosure obligation applies to AI-generated or manipulated text published ‘with the purpose of informing the public on matters of public interest’ (Article 50(4)). Overall, though, there is limited engagement with the issue of microtargeted disinformation, and it is fair to conclude that the AIA’s provisions reveal a substantial gap between its normative commitments to protect human rights and the exemptions and regulatory loopholes in the legally binding provisions.
15.5.4 Regulating Political Advertising: The Regulation on the Transparency and Targeting of Political Advertising
Despite the huge public and scholarly interest in the DSA and the AIA, neither of these two instruments offers wide-ranging or bespoke regulatory tools to address the problem of MOD. The new Regulation on the Transparency and Targeting of Political Advertising (TTPA), which entered into force on 13 March 2024 and applies from 10 October 2025, promises to fill this gap by becoming the EU’s most tailored legislative response to date to the challenge of the microtargeting of political advertising. A relatively short regulation – it consists of thirty articles – the TTPA seeks to create synergies with the GDPR and the DSA. For example, the TTPA utilises the GDPR’s provisions for the processing of personal data. Building on the DSA, the TTPA foregrounds the principle of transparency and presents a risk-based approach. Its two objectives are to contribute to ‘the proper functioning of the internal market for political advertising and related services’ (Article 1(4a)) and ‘to protect the fundamental rights and freedoms’ enshrined in the Charter, ‘in particular the right to privacy and the protection of personal data’ (Article 1(4b)). It seeks to achieve these objectives by creating ‘harmonised rules, including transparency and related due diligence obligations, for the provision of political advertising and related services’ (Article 1(1a)) and ‘harmonised rules on the use of targeting techniques and ad-delivery techniques that involve the processing of personal data in the context of the provision of online political advertising’ (Article 1(1b)).
Underpinning the TTPA’s provisions is a recognition that ‘[p]olitical advertising can be a vector of disinformation’ (Recital 4) and that ‘the misuse of personal data through targeting, including microtargeting … may present particular threats to legitimate public interests’ (Recital 6), including threats to fundamental rights, such as freedom of expression, privacy, and equality. Three aspects are significant in this context: first, the TTPA establishes a direct link between microtargeting, disinformation, and political advertising, which is defined as ‘the preparation, placement, promotion, publication or dissemination, by any means, of a message’ (Article 3(1)). Second, the TTPA recognises that the targeting and amplification techniques, which disseminate political advertising, can negatively impact the democratic process and exploit the vulnerabilities of data subjects, with ‘specific and detrimental effects on citizens’ fundamental rights and freedoms with regard to the processing of their personal data and their freedom to receive objective information’ (Recital 74). Targeting or amplification techniques are ‘techniques that are used either to address a tailored political advertisement only to a specific person or group of persons or to increase the circulation, reach or visibility of a political advertisement’ (Article 2(8)). Such techniques should be prohibited unless the data subject has given explicit consent and appropriate safeguards are in place. In this respect, the TTPA seeks to close some of the derogations established in Article 9(2) of the GDPR, but it stops short of an outright prohibition of targeting techniques: targeting remains permissible where explicit consent from the data subject has been given and the targeting does not involve profiling (Article 18(1c)).
Political parties, foundations, associations, or other non-profit bodies are exempt from these requirements provided that their communications are based on subscription data (Article 18(3)). Article 19 specifies additional transparency requirements, including publications of internal policies, record keeping, and internal annual risk assessments. Third, the TTPA rejects interferences with the substantive content of political messages and seeks to protect the content of political advertising from unlawful interference.
The TTPA is designed to go further than other EU instruments with respect to the practice of microtargeting. As distinct from the DSA, it is not focused on the regulation of illegal online content. Moreover, it expounds the wider impact of targeting and amplification techniques on a broader range of fundamental rights and on democracy itself. However, by exempting the editorial freedom of the media (Recital 29) and ‘the sharing of information through electronic communication services, such as electronic message services … provided that no political advertising service is involved’ (Recital 48), the TTPA’s scope excludes significant avenues for the dissemination of targeted disinformation; for example through platforms such as WhatsApp.Footnote 105 Moreover, its requirement for informed consent with respect to targeting and amplification is ill-suited to the practice of microtargeting, especially if it involves interferences by foreign states or bad actors. This will require further thought and new regulatory tools.
15.6 Conclusion
This chapter has surveyed the phenomenon of MOD and critically assessed whether human rights law is equipped to respond to this complex challenge. Through a focus on selected EU legislative instruments, and being cognisant of the broader legal-normative order of the ECHR and the jurisprudence of the ECtHR, the discussion has identified considerable hurdles with respect to the regulation of MOD. Given the wide-ranging protections for the right to freedom of expression, the chapter has contended that the scope for human rights law to address the harms generated by MOD is limited. The chapter began by expounding the concept of disinformation before discussing the right to freedom of expression and the limitations it poses on regulating disinformation – a form of communication not normally regarded as unlawful. Building on this discussion, the chapter then proceeded to examine the practice of microtargeting and attendant data analytics and their effects on the right to privacy. A survey and analysis of recent legislative initiatives in the EU illustrated how human rights protection has extended beyond human rights law; for example, by regulating the practices of very large online platforms, or by harnessing the tools of data protection law. While this ‘European approach’ can offer important lessons for the protection of human rights,Footnote 106 the chapter also highlighted important shortcomings related to issues such as enforcement and the wide-ranging scope for derogations and exemptions.
The regulation of MOD remains an evolving area. As emphasised in scholarly work and in policy debates, legal regulation is no panacea for the harms to human rights and democracy caused by fake news. The chapter therefore echoes the recommendation of Meyer and Marsden, who call for a holistic approach to disinformation that includes the development of digital literacy across all age groups.Footnote 107 This focus on digital literacy should be aligned with a broader political literacy strategy, which supports the capacity of citizens to access and analyse information and to participate in political discourse, both online and offline. As the recent push against the regulation of tech companies suggests,Footnote 108 there are worrying changes in the international political climate, which are propelling an anti-regulatory turn. Whether EU law can withstand these powerful global actors remains to be seen.Footnote 109
16.1 Introduction
The study presented in this chapter is based on the hypothesis that today, more than ever, we need legal regulations adapted to the interaction between technology, human rights, and investments. In this context, international investors, as influential providers of digital services, must respect human rights by adapting their investment activities. At the same time, it has become a priority for states to find a balance between national sovereignty and the protection of investments and fundamental rights, especially in light of the dramatic decline in investment treaties.Footnote 1
Sociability, as the ability of people to live in society, is a cornerstone of the digital era. Since the law reflects the structure of society, the law of sociability is a natural norm applicable to both individuals and states, highlighting the importance of international cooperation to meet the heightened human needs of civilisation. International investments have a profound impact on human rights, and the standards of legal treatment and the attitudes of host or origin states can influence inclusive and sustainable development. The question is: in light of these priorities, is it better to include new reference clauses in investment contracts or to enhance the dynamics of investment treaties that encompass these standards?
Either way, UN General Assembly Resolution 67/171 emphasises that human rights should be the main framework in negotiations regarding international legal instruments in the field of foreign investments. Current legal regulations need to create a fair and clear framework for all participants, balancing innovation, investment protection, and respect for human rights. In the latter case, we must consider both access to digitalisation as a human right and the individual autonomy to choose not to use the internet or technologies.Footnote 2
The close interaction between international investment and significant advances in digital technologies points to the need for legal regulation adapted to this convergence. International law is facing a dynamic that reflects a complex and close interaction between the global business world and rapid technological developments. In general, the legal regime of foreign investment has its origins in the same law of sociability from which trade developed. Sociability, as an aptitude and capacity of people to live in society, arising from the sociable character of human beings, is a wellspring of the digital era, an era in which international investors, as digital service providers, must adapt their investment activities to respect human rights.
In the past, the law of sociability was considered a natural law not only for individuals but also for states.Footnote 3 It is a perfectly valid rule even now. The more humankind advances towards civilisation, the more our needs increase, and as these cannot always be satisfied by the products and industry of our own country, we are obliged to have recourse to neighbouring countries, so that it may be said that from the particular needs of individuals have been born relations between states.Footnote 4
As a result, the diversity of human rights is directly proportional to the level of sociability. As things were in the past, so they are today: our ‘today’ is characterised by the digital boom as a new kind of sociability, with new laws of sociability between individuals and states, in which human rights can only be seen in their fundamental and vertical dimensions.
Therefore, international investment clearly has a profound impact on human rights, given that the standards of its legal treatment, as well as the attitudes of host states or even home states, may not always result in inclusive, sustainable, and equitable development.Footnote 5
General Assembly Resolution 67/171 states that human rights are a primary guiding framework in negotiations for international law instruments in the field of foreign investment. The resolution includes among its provisions the following:
(30) also recognizes that good governance and the rule of law at the national level assist all States in the promotion and protection of human rights, including the right to development, and agrees on the value of the ongoing efforts being made by States to identify and strengthen good governance practices, including transparent, responsible, accountable and participatory government, that are responsive and appropriate to their needs and aspirations, including in the context of agreed partnership approaches to development, capacity-building and technical assistance.Footnote 6
Increasing concerns for good regulation through the harmonisation of the international investment regime with regard to human rights have been driven by their overlap and interference with intellectual property, technology transfer, climate change, and energy regimes. Legal research analyses and assessments need to address the impact of the convergences, divergences, and intersections of these very different regimes on the realisation of human rights.
Digital technologies have become a key tool in this strategy of globalising business. Many international investments focus on areas of research and development, including digital technologies. Cross-border collaboration in these areas stimulates innovation and brings together expertise from different regions of the world. Legal regulations are currently focused on cross-border data transfer, global intellectual property rights protection, and cyber risk management. Legal initiatives must create a fair and clear framework for all participants. The balance depends on facilitating innovation, protecting investment, and ensuring respect for human rights in the context of digital technologies. The focus remains on the protection of individual rights in the digital age, in particular privacy and data security, but also on compliance with international human rights standards on inclusion and diversity.
16.2 Is a Legal Approach of Different Theories and Regimes Useful for Reaching a Single Unified Theory?
Recent doctrine and case law have approached international investment through a human rights lens, taking into account a wide range of issues such as: (a) the impact that states’ obligations (implicitly their liability) under Bilateral Investment Treaties (BITs) or Treaties with Investment Provisions (TIPs) would have on their capacity to respect human rights or to adopt new provisions in this respect – in this context, the phenomenon of digitisation is central; (b) the package of measures that states and other actors would need to adopt or pursue in order to ensure a positive impact and avoid negative impacts; and (c) how to address these new challenges through international cooperation. All this continues in an upward and multifunctional trend aiming at holistic, people-centred development.
In this context, the transdisciplinary approach to human rights would be advisable in the sense of an integrative research strategy.Footnote 7 Through transdisciplinarity as a shared space, we can seek to accumulate new knowledge from the dialogue between two or more disciplines. This is an approach that understands reality as a whole and analyses it from that complete perspective without addressing each of the different parts that make it up separately. Finally, transdisciplinarity is also present in technologies, where integrated knowledge allows the development of technological tools with immediate application in solving specific problems.Footnote 8
In the social sciences, transdisciplinarity, when approached correctly, has the potential to overcome the singular vision of the specialist fields that compose it, seeking to achieve a unity of knowledge. Human beings are a dynamic and ever-changing object of study, especially from the perspective of human rights in the digital age. The transdisciplinary approach is therefore necessary to obtain a complete assessment of human behaviours and the communities in which they develop, since these cannot be examined in isolation when we refer to the internet, new technologies, and so on.
The socialisation discussed in Chapter 15, together with integration and social control (notions that should be used with caution in the field of law), are the main concepts used in social psychology, sociology, and cultural anthropology to define the processes, mechanisms, and institutions through which any society ensures its members’ conformity to its ethical, normative, and cultural model and prevents deviation.Footnote 9 It is, therefore, impossible to analyse the subject of this chapter in isolation. Nevertheless, the value of a legal analysis that brings together reference material from different disciplines in order to arrive at a single legal theory focused on a particular theme of study, as we often find in human rights, is obvious. In other words, even if we pursue very concrete themes (such as digital impact), the legal concept must remain the subject of a rigorous discipline with its own language and its own requirements.Footnote 10
The mode of analysis chosen for the exposition of my theory was inspired, at the level of the structure of the analysis, by the ideas of E. O. Wilson, who proposed a unified theory of knowledge across disciplines, an Esperanto between physics, biology, the social sciences, and the humanities. Legal research can concentrate, as we have shown, on different and comparative legal regimes, especially when it comes to defending and regulating human rights as comprehensively as possible. We have found that applying this theory can open doors to new, broader, and more effective solutions. Indeed, in the process of scientific knowledge it is always worth noting the success of attempts by scientists to bring together different concepts for comparative analysis, and Wilson’s book, Consilience, is a revelation in the same sense.Footnote 11 This ‘jumping together’ of specialists from different fields, involving substantially different concepts, can indeed produce unified theories.
The aim is to obtain a unified human rights theory of international digital investment, which can be derived from the analysis of substantially different notions, such as the promotion of social progress and better living standards; greater freedom; use of specific international mechanisms to promote the economic and social progress of all peoples; and identification of interferences between intellectual property regimes, technology transfer, consumer protection, digital green infrastructure, and investment policies.
We find an example related to the theme of this chapter in addressing human rights, trade, economic value creation and capture, law enforcement, and national security for tailored digital regulation. However, states respond at different speeds, resulting in gaps as technological progress outpaces regulation. As a consequence, these gaps are reflected in the rights of individuals, businesses, international trade, and policymaking. It is becoming increasingly important to establish legal frameworks that are as compatible as possible at national, regional, and multilateral levels. In the digital economy, the focus must remain on foreign digital and technology start-ups, as experts believe that they can become the major players of tomorrow.
Alongside all these aspects, we have discovered in previous work an equation that is applicable in such analyses. It is one of the newest legal customs: a triangle of equal sides formed by human rights, digitalisation, and security, corresponding, side by side, to legality, necessity, and proportionality. We have often said that in law we are more likely to discover than to innovate. The following theory is one of our discoveries, which we have written about and promoted over the last year at international conferences. The definitions and explanations of each component of the New International Triangle (human rights, digitisation, and security), and of how they interact, rest on the observation that the triangle is in fact a modern paradigm of international relations that focuses on the interconnectedness of its three equal sides. Since Cristina Popa Tache personally discovered this geometry, we have named it the ‘DoDS triangle’ or, in English, the ‘HrDS triangle’.
Using this equation has the potential to clarify regulatory lines in the area of international investment. We can see how, as digitisation intensifies, many states are having to develop new legislation to protect sensitive data belonging to individuals or institutions. Protection takes place to prevent their use for malicious or even illegal or commercial purposes, or against their use in the unauthorised surveillance of individuals (or the masses) by the state or other natural or legal persons who might have this interest. In this way, privacy and data protection will become a standard in cross-border trade and investment, as many commercial transactions require cross-border data flows that meet minimum legal requirements (UNCTAD, 2019).Footnote 12 From this point of view, this standard is not only preferred by the population but is also pursued by international investors who expect to find the best possible standards for their investments in host countries. Experts believe that investors and consumers are likely to start prioritising privacy and data protection as a fundamental human right and to censure companies operating in countries that do not provide adequate privacy and data protection.Footnote 13
The relationship to human rights highlights the connection between international investment and digitisation, going beyond data protection, and references the groundbreaking Digital Economy Partnership Agreement (DEPA), the first of its kind, signed in 2020. Its text brings into the spotlight ‘the importance of corporate advocacy, social responsibility, cultural identity and diversity, environmental protection and preservation, gender equality, indigenous rights, labour rights, inclusive trade, sustainable development and traditional knowledge’.Footnote 14 In the same vein, problems arise when human rights and freedoms conflict with the acceptance and use of digitisation. One example may be the religion or faith of certain groups of individuals that prohibits them from using modern digital means (in the same way that they refuse to take medication, for example). It has also been highlighted that there have been, and continue to be, cultural or traditional differences that should be given more consideration by digital service providers. The history of the investment system has seen opposition to certain types of investment, which prompts us to think about strategies for reconciling the discrepancies that can arise when freedom of belief confronts certain ideologies or a certain classification of foreign investment. At one time, Canada and France, for example, invoked cultural criteria to prevent the American entertainment industry from entering their countries and dominating their national entertainment industries.
The danger of a certain kind of dominance over national digital industries appears on the horizon and must be treated with particular attention, because national security problems may exist or arise and overlap with it, bearing in mind that the limits of national security and safety are regulated at the level of domestic law and are, as a consequence, usually related to certain human rights.Footnote 15 Alternatively, precisely these premises can lead to various contemporary problems (some marked by unpredictability) of digital manifestation, for is it not the case that ‘everyone has the right to freedom of thought, conscience and religion; this right includes freedom to change his religion or belief, and freedom to manifest his religion or belief […]’?Footnote 16
The Universal Declaration of Human Rights itself reinforces respect for these values, which are considered to be among the highest aspirations of humanity.Footnote 17 It is equally true that all these rights and freedoms will have to be adapted at the government level so that, through the development of digitisation, social progress and improved living conditions can be fostered within the framework of greater freedom, as provided for in the Declaration.
In order to avoid discrepancies and conflicts between different jurisdictions, international regulations and especially common standards are essential. Collaboration between countries can help create a predictable and coherent environment. The convergence of international investment and the digital boom brings to the fore new paradigms, such as state–private sector relations in digital innovations. This process is not without limits that can generate significant effects if not dealt with seriously, promptly, and competently. For example, the rapid pace of innovation means that advances in digital technologies can outpace the ability of legislation to keep up with new developments. Regulations may become outdated before they are updated, leaving gaps in relevant legal matters. These gaps are compounded by significant differences between jurisdictions in terms of legal regulations and practices, which may create obstacles to adopting global standards or maintaining a coherent framework for international digital investment. It is not easy to devise regulations that take all the variables into account and strike a fair balance between divergent interests. Where individual rights and cybersecurity are concerned, the balance between data access and data privacy can become a point of contention in the regulation of international technology investments.
Other limits may arise from implementation costs or from resistance to change by sectors or entities that benefit from existing regulations. Let us not forget that in this way the process of adapting and adopting the innovative legal practices needed in the digital environment can be slowed down. This phenomenon is amplified by the fact that regulations can be difficult to monitor and enforce in practice, especially given the cross-border nature of many international investments and digital operations. The effects can be disastrous when these limits are combined with inadequate consultation of civil society and industry, which can deprive regulators of important insights and increase the risk that norms will be inappropriate.
Any initiative must consider the delicate balance between promoting investment and protecting human rights. The analysis of the rights and obligations of international investment actors in the information and communications technology (ICT) sector underlines the complexity of the legal issues involved in this rapidly developing area.
16.3 Types of International Investment – Digitisation – Human Rights
International investment branches out into all areas: in the vertical sense, we have investments in the maritime, land, air, and space domains, while horizontally there is an unlimited range of areas and fields in which foreign investment can be found. The recognised fact that investment activities can no longer function outside digitisation is one of the main reasons they are seen as shaping the global economy. The use of artificial intelligence is just one step further towards more advanced automation and robotics. The analytical perspectives include the legal, social, economic, and political spheres.
In contemporary society, there is an endless variety of types of international investment, including projects such as building and launching satellites, building renewable power plants, building and operating blood plasma fractionation facilities, and providing various media services. In this landscape, ICT is co-ordinated by the big investors providing social media platforms without much transparency with respect to human rights. Special treaties regulating this area in a clear way do not exist at the multilateral level, and no clauses have been identified in investment treaties to cover this issue. The existence of these situations of non-regulation therefore affects the protection of human rights, which is so sensitive to legal interpretation. In the process of globalisation, the national system of regulation, adjudication, management, or dissemination of information has a significant impact.Footnote 18 In the face of these developments, states generally play an active role, preferring the status of policymaker rather than policy-taker in the international community.Footnote 19
All of this requires the close cooperation of experts in these fields, and continuous research, debate, and critical analysis in order to respond to various issues that arise in relation to the effective legal regulation of the foreign investment–digitisation–human rights relationship. Like all other issues in law, the issue of human rights in this context can only achieve its social and economic objectives to the extent that all aspects are made known and are subject to critical analysis.Footnote 20 In this process, information becomes the focus of attention and acquires the characteristics of an effective tool for promoting and ensuring human rights. The rapid development of digitalisation has a serious impact on society and on culture, and experts must identify each element of this impact in relation to human rights. According to Resolution A/HRC/RES/20/8: ‘Information is a source that activates the economy, making it possible for people to participate in government activities through public forums and contribute to the decision-making process.’Footnote 21 In this context, international investors interested in the correct application of legal regulations are increasingly engaging in major investments in start-ups operating in this field. These investments are often made in collaboration with human rights groups or in partnership with various social organisations, focusing on adopting appropriate digital technologies and testing their compliance with human rights standards. Information technologies will continue to deliver progress, but the way in which their use will be achieved requires both a detailed knowledge of every element of the technology, as well as human rights protection to cover each element. They inherently come with new and hard-to-anticipate risks related to non-discrimination, privacy, children’s rights, freedom of expression, access to public services, and the right to work. 
We are therefore at the intersection of human rights and technological development.Footnote 22
In this context, access to digitisation itself becomes a human right that needs to be expressed in specific regulations, a fact confirmed by recent developments according to which customary international law is ready to accept rules that identify internet access as a human right.Footnote 23 If we analyse the theoretical framework corresponding to this hypothesis, we will notice that there are very few recent works, even fewer in the field of digitisation, that address human rights in general or in investment law in particular. Discussions should also focus on the extent of the regulatory right available to states.Footnote 24 Attention must also be directed to the tensions that can arise between the right to regulate and the protection offered to foreign investors under international investment law.
At a theoretical level, it is also important to explore whether the host states of certain investments in technology have the right to regulate, and also how they exercise that right. For example, there are a small number of articles in which theorists have raised the issue of the possibility of adopting new and adapted regulations to improve social standards and living conditions for populations. If these regulations do not exist, then states could face a wide range of investment disputes.Footnote 25
A case in point is financial technology, where the latest international law instruments with more tailored clauses are emerging. The model put forward by these instruments could be followed in all other investment sectors bearing the imprint of digitisation, as the global use of financial technologies is just one example with a great impact on current human rights regulations. Perhaps the ultimate goal ideally pursued by the fundamental and vertical analysis of human rights in the digital age is the development of international law instruments, such as digital bridges, whereby states protect their populations in this respect. Of course, these new instruments come with accountability built in for those who violate human rights in the digital sphere, but accountability is the subject of a wider debate, which follows hierarchically from the normative establishment of these new rights.
Inspired by the international FinTech Bridge agreements (bilateral agreements between two national governments and their respective regulatory bodies), we believe it is possible for all types of international investment to be subject to new or amended regulations. To that end, the following pillars of rights and obligations should be considered relevant to the investment–digitalisation–human rights relationship:
(a) Government-to-government pillar. A framework in which implementing authorities – according to the administrative system in each country – agree to hold regular working discussions, which could, for example, be organised quarterly to help realise existing options. Several contributors help elaborate specific norms under this type of regulation – among them, political officials, legal experts, and regulatory authorities. From here, parties can launch discussions on the development of policies suited to the investment–digitalisation–human rights relationship in each jurisdiction.Footnote 26 The aim is to explore innovative ways to mitigate the impact of digitisation on human rights, to examine the challenges faced by international investors, and to gather relevant evidence, in particular on the protection and promotion of this sector. These aspects are particularly important and have been described in theory as part of the Great Digital Game.Footnote 27 There has even been talk of these concerns forming a Digital Silk Road (inspired by China’s modus operandi for various activities – commercial and diplomatic – of interest to governments in the cyber domain).Footnote 28 In this sense, we note the accelerated formation of a new development model in which domestic circulation is the mainstay, consolidated by the dual circulation of the domestic and international economies. This relationship is reciprocal, and China is one example of a large economy that has addressed it.Footnote 29 A note on modern international law can be added here: possible bridging agreements could be established through specific clauses fostering dialogue between governments, regulators, and what we call industry, so as to identify emerging digital trends and specific issues.
In FinTech, for example, blockchain, security and data exchange, RegTech,Footnote 30 SupTech,Footnote 31 and WealthTechFootnote 32 have already been the subject of international cooperation. This logically raises the desirability of establishing legitimate and effective international institutions. Given their hybrid typology, such international institutions, acting as international public authorities, will be covered by a part of public international law that is more appropriately understood as (international) public law.Footnote 33 This is also the aim of this chapter: to highlight, by reference to a concrete example (i.e., international investment), the emergence of a new era of legal regulation that can serve as a model for subsequent developments. As developed in another publication, this coexistence of radically different visions of international institutions has proven useful in specialist theory.Footnote 34 From this perspective, public international law has the potential to transform and adapt its own ecosystem. Regulatory demands in the legal relationship between foreign investment, digitalisation, and human rights call for identifying, reforming, and advancing the aspects of public international law that govern the exercise of international public authority.Footnote 35
(b) Regulator-to-regulator pillar. This second type of regulation is built on a cooperation agreement between implementing authorities and between regulators. Again, public international law reacts to challenges and manifests its regulatory role through these types of public authorities. The rights of the consumer of digital services are an area that must be constantly updated with the latest protections.Footnote 36 The industrial transformation of the digital ecosystem, particularly digital communication, has reflected the adaptations and reinterpretations of the user within an interface in permanent dialogue with the strategies proposed by industry. The web has become the symbolic space where humans interact with one another – not with the machine, as was once believed.Footnote 37
Here, we can mention the characteristic functions of regulatory authorities: improving licensing procedures for innovative companies already licensed or authorised in another jurisdiction, and developing research and testing solutions (and publishing the results for the benefit of industry, regulators, and consumers). What is relevant here are the factors that prioritise or influence the elaboration of rules in international law. This argument rests precisely on the emphasis on common international interests, not on the interest of a single state.Footnote 38 The digital domain is increasingly under the purview of various international authorities. That is why the emphasis must be placed on the legal relationships of public international law (private law has already shown its limits). Private international law has been criticised for its lack of meaningful contribution to global governance issues, such as the equal distribution of wealth, the fight against transnational human rights violations, and the protection of collective planetary goods such as the environment. All this, as well as many other issues, such as those pertaining to digitalisation, has somehow remained within the scope of public international law.Footnote 39
For example, through an instrument of international law – preferably a treaty, as a hard law instrument – states can agree to work to enhance the various digital bridges between them and the benefits of this cooperation for digital investment activities. How? By establishing cooperation at expert level, providing tailor-made strategic advice, assisting and supporting the process of identifying opportunities, establishing contacts between the relevant staff at trade and investment implementing authorities, developing innovation programmes, and, of course, setting up joint working groups.
(c) Business-to-business pillar. A third pillar, built on the ability of governments to support active engagement between digital industry bodies and human rights bodies. How? Through regular high-level business-to-business discussions involving joint human rights and industry representative groups co-chaired by signatory states.
Also under this pillar, governments would endorse the initiatives of digital industry bodies. Topics for these discussions may include exploring collaboration in certain areas – the list is only illustrative – such as supply chain finance, digital assets, and the use of blockchain in government applications, such as social care, estates or pensions, or data sharing.
In the face of these activities, there has been widespread criticism of, and mobilisation against, trade agreements and investment treaties – criticism perpetuated because solutions are not always adopted. One target of this criticism is the tendency of governments to focus on trade interests without taking into account their obligations concerning human rights, the environment, and development. In other words, states’ preoccupation with ensuring a business-friendly environment has undermined the protection and realisation of human rights. Along with the tech giants, as the big digital investors are known in theory, states really do have to adapt to regulatory updates.Footnote 40 The era of datafication has brought issues such as surveillance capitalism, digital welfare, and government use of data about citizens that reconfigures rights and power and minimises the role of individuals and citizens in decisions about their own future.Footnote 41 In a United Nations (UN) report, the UN Special Rapporteur on extreme poverty, Philip Alston, states that ‘social protection and assistance systems are increasingly driven by data and digital technologies that are used to automate, predict, identify, monitor, detect, target and punish’.Footnote 42
A number of papers have analysed international investors’ decisions and actions insofar as they extend to respect for human rights in their operations, products, and services.
In this context, it is worth recalling the UN Guiding Principles on Business and Human Rights (UNGP),Footnote 43 which establish that investors have a responsibility to ensure that their investments in the technology sector avoid negative human rights impacts. These guiding principles are built on three pillars: the duty of the state to protect human rights; the responsibility of business to respect human rights in its own operations and along the value chain; and the right of access to redress for victims of human rights abuses. Of particular importance is the delineation of investors’ responsibilities and the identification of current trends in practice, together with useful recommendations for investors.Footnote 44
An official enumeration of the main internationally recognised human rights is found in the International Bill of Human Rights (the Universal Declaration of Human Rights and the main instruments by which it was codified: the International Covenant on Civil and Political Rights and the International Covenant on Economic, Social and Cultural Rights), together with the fundamental rights principles of the eight fundamental International Labour Organization Conventions as set out in the Declaration on Fundamental Principles and Rights at Work. These instruments play a guiding role for other social actors who evaluate the impact of investments, and business in general, on human rights.
The responsibility of the international investor in general, and of commercial enterprises in particular, is to respect human rights. Such responsibilities, however, may differ from matters of legal liability and enforcement, which remain largely defined by the provisions of national law in the relevant jurisdictions. Uniformity is a difficult goal to achieve.
Depending on the circumstances, international investors may need to follow additional standards. For example, businesses should respect the human rights of people belonging to specific groups or populations requiring special attention wherever business activity may have a negative impact on their rights. To this end, UN instruments have further developed the rights of indigenous peoples, women, national, ethnic, religious and linguistic minorities, children, persons with disabilities, and migrant workers and their families. Special attention is also given to the elderly, whose low level of digital literacy has been felt acutely against the backdrop of digitisation. For older generations, its impact has been huge, taking the form of social exclusion and restricted access to certain natural human rights. Theorists have analysed the power of universal human rights instruments to impose on states a concern for what is called ‘the inclusion of older generations in the information society’. The same concerns transfer horizontally to investors in digital services. A world convention protecting the rights of elderly people is being drafted at the UN, bringing into the spotlight rights such as education, information, and transparency, as well as other aspects related to the standard of living.Footnote 45 Additionally, in the event of an armed conflict, investors should comply with the standards of international humanitarian law.
Establishing a system of accountability is essential because, as current studies show, risks to people and material risks to technology investments are converging rapidly. These risks continue to be highlighted in venture capital investments, as well as through the recognition of digital rights as material rights by investors and by those empowered to set new standards – a recognition that would bring uniformity to today’s disparate data points and methodologies.
The UN Human Rights B-Tech Project notes that, while sustainable investment practices have been around for decades, investments in Environmental, Social, and Governance funds reached approximately $40.5 trillion in global assets under management in 2020.Footnote 46 It also points out that human rights are often narrowly understood as individual human rights issues, such as forced labour or discrimination (correlated with certain narrow parts of business, particularly dealings with customers and end-users), rather than as encompassing the full range of human rights.Footnote 47
Technologically responsible investment is becoming the benchmark for a civil society that seeks protection through investor accountability, as well as appropriate mechanisms to enforce these rights. Responsible investors promote public policies that encourage responsible technology investment in a particular area of interest.Footnote 48
16.4 What Is New in the World of Digitisation? New Standards for the Legal Treatment of Foreign Investment in Digital Matters
In this context, the Digital Economy Partnership Agreement (DEPA) is a first step: Singapore, New Zealand, and Chile signed it as the world’s first digital trade treaty in June 2020. DEPA is pioneering because it creates international rules and practices for cross-border business in the digital economy, anchored by strongly stated sustainable development goals. The Agreement highlights ‘the importance of the digital economy in promoting inclusive economic growth … in particular Goal 8 and Goal 9’.
Meanwhile, China, South Korea, and Canada have signalled their intention to join DEPA alongside the Regional Comprehensive Economic Partnership, driven by the growing importance of digital trade and its evolution, which has led to cooperative efforts in scientific research and international education. This development aligns with the widely accepted view that digital trade represents the future of global trade and investment, playing a dynamic and transformative role in shaping the global economy. DEPA focuses on blockchain, artificial intelligence, and internet technologies to guide the expansion of e-commerce and cross-border payments, issues that highlight the priority of mapping new trade technologies, including cloud services, Distributed Ledger Technology, and 3D printing.
In line with the pillar solutions outlined in Section 16.3, under DEPA the parties will convene a ‘Digital SME (small and medium-sized enterprises) Dialogue’ to promote the benefits of the Agreement for the parties’ SMEs through consultations with experts from the private sector, non-governmental organisations, academia, and other stakeholder groups.
Importantly, this international law instrument is intended to inspire international law actors in negotiations, and ‘will generate new ideas and approaches in the international digital economy or digital trade’.
On the other hand, international organisations concerned with these issues have produced specific reports in the wake of global crises involving high and volatile food prices, climate change, and financial turmoil. Public international law has felt obliged to react to changes in the international economic order. We have seen how civil society has pushed states to turn (in parallel with the attention paid to business) to the social and human rights implications of the new set of international investment law policies and instruments.
In these efforts, human rights impact assessments consider: the right to development and its implementation; the importance of policy coherence, taking into account human rights obligations, standards, and principles; the need for human rights audits and impact assessments; flexibilities and exemptions, such as those in the WTO Agreement on Trade-Related Aspects of Intellectual Property Rights; and remedies.Footnote 49
In view of these moves, it is not out of the question that investment treaties will be amended by introducing new standards of treatment specific to digital activities.
When discussing all of this, we note the major role of treaties in the governance of digital investments, with an emphasis on their strategic and significant international implications. Adapting legal regimes to the digital age is an essential process. The analysis converges on the need to revise domestic laws and existing treaties in order to bring them up to date with the new digital realities. The approach is to identify gaps or ambiguities in existing regulations and to adjust them, formulating them in a flexible and adaptive way.
International collaboration to establish common standards rests on harmonisation, which facilitates cross-border interactions and helps to avoid divergent jurisprudence between jurisdictions. Adapting legal frameworks to the digital age also requires significant efforts to promote digital literacy. Citizens need to be informed about their rights and responsibilities in the digital environment, and continuing education can contribute to a deeper understanding of digital challenges and opportunities.
The specific analysis of the ramifications at the national level requires a (re-)examination of the specific regulations at the domestic level, with an emphasis on issues such as intellectual property, national security, and economic progress. National regulations can include policies and facilities to stimulate investment in digital technologies. The privacy and safety of consumers are at the centre of the debates, especially against the background of the absence of internationally uniform definitions of what it means to be a consumer, user, or recipient of technological products or services. By specifically analysing these issues at the national level, regulations can be adjusted to create a legal environment conducive to the sustainable development of digital technologies, while protecting national interests and the fundamental rights of citizens. At the international level, some concrete examples of the implications for public policy and human rights, as well as of how international actors can gain control over technologies, are well known. China is a notorious example of a government using digital technologies to exert tight control over information available on the internet. Through digital censorship and monitoring systems, the Chinese authorities have imposed significant restrictions on freedom of expression online, directly affecting the human right to freedom of information.
Situations where international actors, such as the big players in the technology industry, enter into technology transfer agreements can have major implications. For example, when a company from a developed country invests in a developing country, it can gain control over the local digital infrastructure, raising questions about data protection and access to cutting-edge technologies.
In the US, the use of facial recognition technologies in government and the private sector has raised concerns about privacy and individual freedom. In this context, questions have been raised about the protection of human rights, especially regarding mass surveillance and the potential abuse of the technology.
In addition to these examples, it is worth noting that international investments in the digital infrastructure of African countries can turn into a certain form of control over digital resources. Chinese investments in communication networks and technologies in Africa are a case in point, raising concerns about the impact on the digital autonomy of these countries and the potential risks to human rights in the context of these partnerships.
16.5 Conclusions
This chapter has addressed a particularly dynamic and important theme in the international law landscape, namely the legal relationships in the triad of foreign investment, digitisation, and human rights. This is demonstrated by the frequent emergence of new international instruments – a dynamism unmatched in any other field.
The digital intersection between trade and human rights, including the right to development, has revealed new elements that can inspire negotiating parties. The implications for inclusion and human rights extend, without limitation, to data protection, social responsibility, cultural identity and diversity, environmental protection and conservation, gender equality, indigenous rights, labour rights, inclusive trade, sustainable development, and traditional knowledge. All of these regulations aim to protect and promote fundamental human rights and freedoms, improve cultural links between people, including indigenous peoples, and improve digital access for women, rural populations, and low socio-economic groups.
All recommendations on measures to meet governments’ human rights commitments should be taken into account when developing national, regional, and continental approaches and frameworks to support digital trade worldwide.
It is advisable to continue analyses that build on: the explicit inclusion of the human rights and environmental dimensions of sustainability in mega-infrastructure plans and projects; globalisation and its impact on the full enjoyment of all human rights and on policymaking; the concerns of all members of society in the negotiation, drafting, and implementation of specific international law instruments, including the search for complementary and adjustment measures; the infusion of ethical and normative objectives and processes into contracts between states and investors; and the ways in which human rights, in particular, can be incorporated.
The digital domain and its interaction with human rights, especially in the context of digital investments, is a domain wholly subject to the argumentative practice of law as an interaction of formal and instrumental reasoning techniques.Footnote 50 The differences stem from the novelty of the field, since we cannot yet speak of the formation of precedent as a paradigmatic mode of formalist reasoning. Attention is therefore directed to instrumentalism: purposive reasoning, balancing, policy analysis, and casuistry.Footnote 51 Interactions will always exist between compliance with existing norms and the moral power to ‘change the law into a more optimal rule’.Footnote 52
Bringing together groups of specialists for these analyses has, optimistically, the potential to lead to positive results and the identification of the best solutions.