Genuinely broad in scope, each handbook in this series provides a complete state-of-the-field overview of a major sub-discipline within language study, law, education and psychological science research.
While generative AI enables the creation of diverse content, including images, videos, text, and music, it also raises significant ethical and societal concerns around bias, transparency, accountability, and privacy. It is therefore crucial to ensure that AI systems are both trustworthy and fair, optimising their benefits while minimising potential harm. To explore the importance of fostering trustworthiness in the development of generative AI, this chapter delves into the ethical implications of AI-generated content, the challenges posed by bias and discrimination, and the importance of transparency and accountability in AI development. It proposes six guiding principles for creating ethical, safe, and trustworthy AI systems. Furthermore, legal perspectives are examined to highlight how regulation can shape responsible generative AI development. Ultimately, the chapter underscores the need for responsible innovation that balances technological advancement with societal values, preparing us to navigate future challenges in the evolving AI landscape.
The rapid development of generative artificial intelligence (AI) systems, particularly those fuelled by increasingly advanced large language models (LLMs), has raised concerns among policymakers globally about their potential risks. In July 2023, Chinese regulators enacted the Interim Measures for the Management of Generative AI Services (“the Measures”). The Measures aim to mitigate various risks associated with public-facing generative AI services, particularly those concerning content safety and security. At the same time, Chinese regulators are seeking to promote the further development and application of such technology across diverse industries. Tensions between these policy objectives are reflected in the provisions of the Measures that impose different types of obligations on generative AI service providers. Such tensions present significant challenges for the implementation of the regulation. As Beijing moves towards establishing a comprehensive legal framework for AI governance, legislators will need to further clarify and balance the responsibilities of diverse stakeholders.
Generative artificial intelligence (GenAI) raises ethical and social challenges that can be examined through a normative and an epistemological approach. The normative approach, increasingly adopted by European institutions, identifies the pros and cons of technological advancement. The main pros concern technological innovation, economic development and the achievement of social goals and values. The disadvantages mainly concern cases of abuse, misuse or underuse of GenAI. The epistemological approach investigates the specific way in which GenAI produces information, knowledge, and a representation of reality that differs from that of human beings. To fully grasp the impact of GenAI, our paper contends that both these approaches should be pursued: identifying the risks and opportunities of GenAI also depends on considering how this form of AI works from an epistemological viewpoint and on our ability to interact with it. Our analysis compares the epistemology of GenAI with that of law, highlighting four problematic issues in terms of: (i) qualification; (ii) reliability; (iii) pluralism and novelty; (iv) technological dependence. The epistemological analysis of these issues leads to a better framing of the social and ethical aspects resulting from the use, abuse or underuse of GenAI.
The AI Act contains some specific provisions dealing with the possible use of artificial intelligence for discriminatory purposes or in discriminatory ways, in the context of the European Union. The AI Act also regulates generative AI models. However, these two sets of rules have little in common: provisions concerning non-discrimination tend not to cover generative AI, and generative AI rules tend not to cover discrimination. Based on this analysis, the chapter considers the current EU legal framework on discriminatory outputs of generative AI models, and concludes that expressions already prohibited by anti-discrimination law certainly remain prohibited after the approval of the AI Act, while discriminatory content not covered by EU non-discrimination legislation will remain lawful. For the moment, the AI Act has not brought any particularly significant innovation on this specific matter, but the picture might change in the future.
Generative AI promises to have a significant impact on intellectual property law and practice in the United States. Several disputes have already arisen that are likely to break new ground in determining what IP protects and what actions infringe. Generative AI is also likely to have a significant impact on the practice of searching for prior art, creating new materials, and policing rights. This chapter surveys the emerging law of generative AI and IP in the United States, keeping as close as possible to near-term developments and controversies. All of the major IP areas are covered, at least briefly, including copyrights, patents, trademarks, trade secrets, and rights of publicity. For each of these areas, the chapter evaluates the protectability of AI-generated materials under current law, the potential liability of AI providers for their use of existing materials, and likely changes to the practice of creation and enforcement.
This chapter points out the significant challenges in holding foundation model developers and deployers clearly responsible for the uses and outputs of their creations under US law. Scienter requirements and difficulties of proof make it challenging to establish liability under many statutes carrying civil penalties, as well as in tort. Constitutional protections for speech may shield model-generated outputs, or the models themselves, from some forms of regulation, though legal scholars are divided over the extent of these protections. And legal challenges to agencies’ authority over AI systems could hamstring regulators’ ability to proactively address foundation models’ risks. All is not lost, though: each of these doctrines does have potential pathways to liability and recourse. In all cases, however, there will likely be protracted battles over liability involving the issues described in this chapter.
There is growing global interest in how AI can improve access to justice, including how it can increase court capacity. This chapter considers the potential future use of AI to resolve disputes in place of the judiciary. We focus our analysis on the right to a fair trial as outlined in Article 6 of the European Convention on Human Rights, and ask: do we have a right to a human judge? We first identify several challenges in interpreting and applying Article 6 in this new context, before considering the principle of human dignity, which has received little attention to date. Arguing that human dignity is an interpretative principle that incorporates protection from dehumanisation, we propose that it provides a deeper, or “thicker”, reading of Article 6. Applied to this context, we identify risks of dehumanisation posed by judicial AI, including not being heard, or not being subject to human judgement or empathy. We conclude that a thicker reading of Article 6, informed by human dignity, strongly suggests the need to preserve human judges at the core of the judicial process in the age of AI.
Drawing on the extensive history of research into the terms and conditions (T&Cs) and privacy policies of social media companies, this paper reports the results of pilot empirical work conducted in January–March 2023, in which T&Cs were mapped across a representative sample of generative AI providers, as well as some downstream deployers. Our study covered providers of multiple output modalities (text, image, etc.), of small and large size, and from varying countries of origin. Our early findings indicate the emergence of a “platformisation paradigm”, in which providers of generative AI attempt to position themselves as neutral intermediaries, much like search and social media platforms, but without the governance increasingly imposed on those actors, and in contradiction to their function as content generators rather than mere hosts for third-party content.
The recent paradigm shift from predictive to generative AI has ushered in a new era of innovation in artificial intelligence. Generative AI, exemplified by large language models (LLMs) such as GPT (Generative Pre-trained Transformer), has revolutionized this landscape. This transition holds profound implications for the legal domain, where language is central to practice. The integration of LLMs into AI-and-law research and legal practice presents both opportunities and challenges. This chapter explores how LLMs can enhance existing legal AI systems, focusing on the CLAUDETTE system and its applications in consumer empowerment and privacy protection. On this basis, we also investigate what new legal issues may emerge in the context of the AI Act and related regulations. Understanding the capabilities and limitations of LLMs vis-à-vis conventional approaches is crucial to harnessing their full potential for legal applications.
This chapter explores the privacy challenges posed by generative AI and argues for a fundamental rethinking of privacy governance frameworks in response. It examines the technical characteristics and capabilities of generative AIs that amplify existing privacy risks and introduce new challenges, including nonconsensual data extraction, data leakage and re-identification, inferential profiling, synthetic media generation, and algorithmic bias. It surveys the current landscape of U.S. privacy law and its shortcomings in addressing these emergent issues, highlighting the limitations of a patchwork approach to privacy regulation, the overreliance on notice and choice, the barriers to transparency and accountability, and the inadequacy of individual rights and recourse. The chapter outlines critical elements of a new paradigm for generative AI privacy governance that recognizes collective and systemic privacy harms, institutes proactive measures, and imposes precautionary safeguards, emphasizing the need to recognize privacy as a public good and collective responsibility. The analysis concludes by discussing the political, legal, and cultural obstacles to regulatory reform in the United States, most notably the polarization that prevents the enactment of comprehensive federal privacy legislation, the strong commitment to free speech under the First Amendment, and the “permissionless” innovation approach that has historically characterized U.S. technology policy.
This chapter deals with the use of Large Language Models (LLMs) in the legal sector from a comparative law perspective. It explores their advantages and risks; the pertinent question of whether the deployment of LLMs by non-lawyers can be classified as an unauthorized practice of law in the US and Germany; what lawyers, law firms and legal departments need to consider when using LLMs under professional rules of conduct, especially the American Bar Association Model Rules of Professional Conduct and the Charter of Core Principles of the European Legal Profession of the Council of Bars and Law Societies of Europe; and, finally, how the recently published AI Act will affect the legal tech market, specifically the use of LLMs. A concluding section summarizes the main findings and points out open questions.
Generative artificial intelligence has a long history but surged into global prominence with the introduction in 2017 of the transformer architecture for large language models. Based on deep learning with artificial neural networks, transformers revolutionised the field of generative AI for the production of natural-language outputs. Today’s large language models, and other forms of generative artificial intelligence, have unprecedented capability and versatility. The emergence of these highly capable forms of generative AI poses many legal issues and questions, including consequences for intellectual property, contracts and licences, liability, data protection, use in specific sectors, potential harms, and of course ethics, policy, and regulation of the technology. To support the discussion of these topics in this Handbook, this chapter gives a relatively non-technical introduction to the technology of modern artificial intelligence and generative AI.
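To make the transformer-based generation described above concrete, the following minimal sketch, an illustrative aside rather than anything drawn from the chapter itself, loads a small pretrained transformer and autoregressively generates a continuation of a prompt. It assumes the open-source Hugging Face transformers library and the publicly available GPT-2 checkpoint; any modern causal language model could be substituted.

    # Illustrative sketch only: autoregressive text generation with a small
    # pretrained transformer (assumes the Hugging Face `transformers` library
    # and the public GPT-2 checkpoint).
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # Encode a prompt, then let the model extend it token by token, sampling
    # from its predicted next-token distribution at each step.
    inputs = tokenizer("Generative AI raises legal questions about", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.95)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Each generated token is appended to the input and fed back into the model for the next step; this simple autoregressive loop is the core mechanism behind the natural-language outputs of today’s large language models.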
This chapter explores the intricate relationship between consumer protection and GenAI. Prominent tools like Bing Chat, ChatGPT 4.0, Google’s Gemini (formerly known as Bard), OpenAI’s DALL·E, and Snapchat’s AI chatbot are widely recognized, and they dominate the generative AI landscape. However, numerous smaller, unbranded GenAI tools are embedded within major platforms, often going unrecognized by consumers as AI-driven technology. In particular, the focus of this chapter is the phenomenon of algorithmic consumers, whose interactions with digital tools, including GenAI, have become increasingly dynamic, engaging, and personalized. Indeed, the rise of algorithmic consumers marks a pivotal shift in consumer behaviour, which is now characterized by heightened levels of interactivity and customization.