This chapter explores the privacy challenges posed by generative AI and argues for a fundamental rethinking of privacy governance frameworks in response. It examines the technical characteristics and capabilities of generative AI systems that amplify existing privacy risks and introduce new challenges, including nonconsensual data extraction, data leakage and re-identification, inferential profiling, synthetic media generation, and algorithmic bias. It surveys the current landscape of U.S. privacy law and its shortcomings in addressing these emergent issues, highlighting the limitations of a patchwork approach to privacy regulation, the overreliance on notice and choice, the barriers to transparency and accountability, and the inadequacy of individual rights and recourse. The chapter outlines critical elements of a new paradigm for generative AI privacy governance that recognizes collective and systemic privacy harms, institutes proactive measures, and imposes precautionary safeguards, emphasizing the need to recognize privacy as a public good and a collective responsibility. The analysis concludes by discussing the political, legal, and cultural obstacles to regulatory reform in the United States, most notably the polarization that prevents the enactment of comprehensive federal privacy legislation, the strong commitment to free speech under the First Amendment, and the “permissionless” innovation approach that has historically characterized U.S. technology policy.
This chapter introduces methods for generating and documenting paradata before and during data creation practices and processes (i.e. prospective and in-situ approaches, respectively). It presents formal metadata-based paradata documentation using standards and controlled vocabularies to improve paradata consistency and interoperability. Narrative descriptions and recordings are advantageous for providing contextual richness and detailed documentation of data generation processes. Logging methods, including log files and blockchain technology, allow paradata to be generated automatically and help maintain the integrity of the record. Data management plans and registered reports are examples of measures to prospectively generate potential paradata on forthcoming activities. Finally, facilitative workflow-based approaches are introduced for step-by-step modelling of practices and processes. Rather than suggesting that a single approach to generating and documenting paradata will suffice, we encourage users to consider a selective combination of approaches, facilitated by adequate institutional resources and technical and subject expertise, to enhance the understanding, transparency, reproducibility and credibility of paradata describing practices and processes.
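As a purely illustrative aside, not drawn from the chapter itself, the kind of automatic paradata generation via log files mentioned above might look like the following minimal Python sketch, in which each step of a data creation process appends a timestamped, structured record to a JSON-lines log. All field names, values and the file path are assumptions made for the sake of the example.

    # Hypothetical sketch of automatic paradata logging: each processing
    # step appends a timestamped, structured record to a JSON-lines file.
    # Field names and the file path are illustrative assumptions only.
    import json
    from datetime import datetime, timezone

    def log_paradata(logfile, actor, action, target, tool=None, note=None):
        """Append one paradata record describing a single processing step."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "actor": actor,    # who performed the step
            "action": action,  # what was done
            "target": target,  # the data object affected
            "tool": tool,      # software or instrument used, if any
            "note": note,      # free-text contextual detail
        }
        with open(logfile, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")

    # Example: documenting a digitisation step as it happens (in situ).
    log_paradata("paradata.jsonl", actor="curator_01", action="scan",
                 target="manuscript_042", tool="flatbed_scanner",
                 note="600 dpi, greyscale")

An append-only log of this kind supports the integrity of the record; blockchain-based approaches, as the chapter notes, take the same idea further by making the log tamper-evident.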
The chapter examines the legal regulation and governance of ‘generative AI,’ ‘foundation AI,’ ‘large language models’ (LLMs), and the ‘general-purpose’ AI models of the AI Act. Attention is drawn to two potential sorcerer’s apprentices, namely, in the spirit of J. W. Goethe’s poem, people unable to control a situation of their own creation. The focus is on developers and producers of such technologies, including LLMs that bring risks of discrimination, information hazards, malicious uses and environmental harms; furthermore, the analysis dwells on the normative attempt of EU legislators to govern misuses and overuses of LLMs through the AI Act. Scholars, private companies, and organisations have stressed the limits of such normative attempts. In addition to issues of competitiveness and legal certainty, bureaucratic burdens and standards development, the threat is the over-frequent revision of the law to keep pace with advances in the technology. The chapter illustrates how this threat has loomed since the inception of the AI Act and recommends ways in which the law can address the challenges of technological innovation without being continuously amended.
Generative AI offers a new lever for re-enchanting public administration, with the potential to contribute to a turning point in the project to ‘reinvent government’ through technology. Its deployment and use in public administration raise the question of its regulation. Adopting an empirical perspective, this chapter analyses how the United States of America and the European Union have regulated the deployment and use of this technology within their administrations. This transatlantic perspective is justified by the fact that these two entities moved very quickly to regulate the deployment and use of this technology within their administrations. They are also considered emblematic actors in the regulation of AI. Finally, they share a common basis in public law, namely their adherence to the rule of law. In this context, the chapter highlights four approaches to regulating the development and use of generative AI in public administration: command and control, the risk-based approach, the experimental approach, and the management-based approach. It also highlights the main legal issues raised by the use of such technology in public administration and the key administrative principles and values that need to be safeguarded.
The rapid development of generative artificial intelligence (AI) systems, particularly those fuelled by increasingly advanced large language models (LLMs), has raised concerns among policymakers globally about their potential risks. In July 2023, Chinese regulators enacted the Interim Measures for the Management of Generative AI Services (“the Measures”). The Measures aim to mitigate various risks associated with public-facing generative AI services, particularly those concerning content safety and security. At the same time, Chinese regulators are seeking the further development and application of such technology across diverse industries. Tensions between these policy objectives are reflected in the provisions of the Measures that impose different types of obligations on generative AI service providers. Such tensions present significant challenges for the implementation of the regulation. As Beijing moves towards establishing a comprehensive legal framework for AI governance, legislators will need to further clarify and balance the responsibilities of diverse stakeholders.
Generative artificial intelligence (GenAI) raises ethical and social challenges that can be examined according to a normative and an epistemological approach. The normative approach, increasingly adopted by European institutions, identifies the pros and cons of technological advancement. The main pros concern technological innovation, economic development and the achievement of social goals and values. The disadvantages mainly concern cases of abuse, use or underuse of GenAI. The epistemological approach investigates the specific way in which GenAI produces information, knowledge, and a representation of reality that differs from that of human beings. To fully grasp the impact of GenAI, our paper contends that both these approaches should be pursued: an identification of the risks and opportunities of GenAI also depends on considering how this form of AI works from an epistemological viewpoint and on our ability to interact with it. Our analysis compares the epistemology of GenAI with that of law, highlighting four problematic issues in terms of: (i) qualification; (ii) reliability; (iii) pluralism and novelty; (iv) technological dependence. The epistemological analysis of these issues leads to a better framing of the social and ethical aspects resulting from the use, abuse or underuse of GenAI.
Drawing on the extensive history of studying the terms and conditions (T&Cs) and privacy policies of social media companies, this paper reports the results of pilot empirical work conducted in January–March 2023, in which T&Cs were mapped across a representative sample of generative AI providers as well as some downstream deployers. Our study looked at providers of multiple modes of output (text, image, etc.), of small and large sizes, and of varying countries of origin. Our early findings indicate the emergence of a “platformisation paradigm”, in which providers of generative AI attempt to position themselves as neutral intermediaries, much as search and social media platforms do, but without the governance increasingly imposed on those actors, and in contradiction to their function as content generators rather than mere hosts of third-party content.
The rise in the use of AI in most key areas of business, from sales to compliance to financial analysis, means that even the highest levels of corporate governance will be affected, and that corporate leaders are duty-bound to manage both the responsible development and the legal and ethical use of AI. This transformation will directly impact the legal and ethical duties and best practices of those tasked with setting the ‘tone at the top’ and who are accountable for the firm’s success. Directors and officers will have to ask themselves to what extent AI tools should, or must, be used both in strategic business decision-making and in monitoring processes. Here we look at a number of issues that we believe will arise from the greater use of generative AI. We consider what top management should be doing to ensure that all such AI tools used by the firm are safe and fit for purpose, especially with a view to avoiding potential negative externalities. In the end, given the challenges of AI use, the human component of top corporate decision-making will be put to the test: to thread the needle of AI use prudently and to ensure the technology serves corporations and their human stakeholders rather than the other way around.
The advent and momentum of Generative AI erupted onto the EU regulatory scene, signalling a significant paradigm shift in the AI landscape. The AI Act struggled to accommodate the eruption and extraordinary popularity of Generative AI, yet managed to provide specific solutions designed for these models. Nonetheless, the legal and regulatory implications of Generative AI may exceed the proposed solutions. Understanding the paradigm shift that Generative AI is likely to bring allows us to assess the sufficiency and adequacy of the measures adopted and to identify possible shortcomings and gaps in the current EU framework. Generative AI raises specific problems for compliance with AI Act obligations and for the application of liability rules, problems that have to be acknowledged and properly addressed. Multimodality, emergent capabilities, scalability and generality of tasks may not match the assumptions underlying the obligations and requirements laid down for AI systems. The chapter explores whether the current ecosystem of existing and still-to-be-adopted rules on AI systems fully and adequately addresses the distinctive features of Generative AI, with special consideration of the interaction between the AI Act and the liability rules provided for in the draft AILD and the revPLD.
Generative artificial intelligence has a long history but surged into global prominence with the introduction in 2017 of the transformer architecture for large language models. Based on deep learning with artificial neural networks, transformers revolutionised the field of generative AI for the production of natural language outputs. Today’s large language models, and other forms of generative artificial intelligence, now have unprecedented capability and versatility. The emergence of these forms of highly capable generative AI poses many legal issues and questions, including consequences for intellectual property, contracts and licences, liability, data protection, use in specific sectors, potential harms, and of course ethics, policy, and regulation of the technology. To support the discussion of these topics in this Handbook, this chapter gives a relatively non-technical introduction to the technology of modern artificial intelligence and generative AI.
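As an illustrative aside, not taken from the chapter itself, the core operation of the transformer architecture mentioned above can be sketched as scaled dot-product attention; the function names, array shapes and toy data below are assumptions made purely for exposition.

    # Minimal sketch of scaled dot-product attention, the core operation
    # of the transformer architecture (Vaswani et al., 2017). Shapes and
    # names are illustrative assumptions, not any specific model's code.
    import numpy as np

    def softmax(x, axis=-1):
        # Subtract the row-wise max for numerical stability.
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def scaled_dot_product_attention(Q, K, V):
        """Q, K, V: (seq_len, d_k) arrays of queries, keys and values."""
        d_k = Q.shape[-1]
        # Similarity of every query with every key, scaled by sqrt(d_k).
        scores = Q @ K.T / np.sqrt(d_k)
        # Each output row is a weighted average of the value vectors.
        return softmax(scores, axis=-1) @ V

    # Toy usage: a sequence of 4 tokens with 8-dimensional embeddings.
    rng = np.random.default_rng(0)
    Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
    out = scaled_dot_product_attention(Q, K, V)
    print(out.shape)  # (4, 8)

Stacking many such attention layers, together with learned projections and feed-forward networks, is what gives modern large language models their capability; the chapter itself stays at this conceptual level rather than the code level.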
Artificial Intelligence (AI) can collect Big Data on users without their noticing. It can identify their cognitive profiles and manipulate users into predetermined choices by exploiting their cognitive biases and decision-making processes. A Large Generative Artificial Intelligence Model (LGAIM) can enhance the possibilities for computational manipulation: it can make a user see and hear whatever is most likely to affect their decision-making processes, creating the perfect text accompanied by perfect images and sounds on the perfect website. Multiple international, regional and national bodies have recognised the existence of computational manipulation and the threat to fundamental rights that may result from its use. The EU has even taken the first steps towards protecting individuals against computational manipulation. This paper argues that while manipulative AI systems that rely on deception are addressed by existing EU legislation, some forms of computational manipulation, specifically where an LGAIM is used in the manipulative process, still do not fall under the shield of EU law. Therefore, existing EU legislation needs to be redrafted to cover every aspect of computational manipulation.