The rapid development of generative artificial intelligence (AI) systems, particularly those fuelled by increasingly advanced large language models (LLMs), has raised concerns among policymakers globally about their potential risks. In July 2023, Chinese regulators enacted the Interim Measures for the Management of Generative AI Services (“the Measures”). The Measures aim to mitigate various risks associated with public-facing generative AI services, particularly those concerning content safety and security. At the same time, Chinese regulators are seeking to foster the further development and application of such technology across diverse industries. Tensions between these policy objectives are reflected in the provisions of the Measures, which impose different types of obligations on generative AI service providers. Such tensions present significant challenges for the implementation of the regulation. As Beijing moves towards establishing a comprehensive legal framework for AI governance, legislators will need to further clarify and balance the responsibilities of diverse stakeholders.
Generative artificial intelligence (GenAI) raises ethical and social challenges that can be examined according to a normative and an epistemological approach. The normative approach, increasingly adopted by European institutions, identifies the pros and cons of technological advancement. The main pros concern technological innovation, economic development and the achievement of social goals and values. The disadvantages mainly concern cases of abuse, use or underuse of GenAI. The epistemological approach investigates the specific way in which GenAI produces information, knowledge, and a representation of reality that differs from that of human beings. To fully grasp the impact of GenAI, our paper contends that both these approaches should be pursued: an identification of the risks and opportunities of GenAI also depends on considering how this form of AI works from an epistemological viewpoint, and on our ability to interact with it. Our analysis compares the epistemology of GenAI with that of law to highlight four problematic issues in terms of: (i) qualification; (ii) reliability; (iii) pluralism and novelty; (iv) technological dependence. The epistemological analysis of these issues leads to a better framing of the social and ethical aspects resulting from the use, abuse or underuse of GenAI.
Drawing on the extensive history of study of the terms and conditions (T&Cs) and privacy policies of social media companies, this paper reports the results of pilot empirical work conducted in January–March 2023, in which T&Cs were mapped across a representative sample of generative AI providers, as well as some downstream deployers. Our study looked at providers of multiple modes of output (text, image, etc.), of small and large sizes, and from varying countries of origin. Our early findings indicate the emergence of a “platformisation paradigm”, in which providers of generative AI attempt to position themselves as neutral intermediaries, much like search and social media platforms, but without the governance increasingly imposed on those actors, and in contradiction to their function as content generators rather than mere hosts for third-party content.
The rise in the use of AI in most key areas of business, from sales to compliance to financial analysis, means that even the highest levels of corporate governance will be affected, and that corporate leaders are duty-bound to manage both the responsible development and the legal and ethical use of AI. This transformation will directly impact the legal and ethical duties and best practices of those tasked with setting the ‘tone at the top’ and who are accountable for the firm’s success. Directors and officers will have to ask themselves to what extent AI tools should, or must, be used both in strategic business decision-making and in monitoring processes. Here we look at a number of issues that we believe are going to arise from the greater use of generative AI. We consider what top management should be doing to ensure that all such AI tools used by the firm are safe and fit for purpose, especially with a view to avoiding potential negative externalities. In the end, given the challenges of AI use, the human component of top corporate decision-making will be put to the test: to prudentially thread the needle of AI use and to ensure the technology serves corporations and their human stakeholders instead of the other way around.
The advent and momentum gained by Generative AI erupted into the EU regulatory scene, signalling a significant paradigm shift in the AI landscape. The AI Act has struggled to keep pace with the eruption and extraordinary popularity of Generative AI, but has managed to provide specific solutions designed for these models. Nonetheless, Generative AI has legal and regulatory implications that may exceed the proposed solutions. Understanding the paradigm shift that Generative AI is likely to bring will allow us to assess the sufficiency and adequacy of the measures adopted and to identify possible shortcomings and gaps in the current EU framework. Generative AI raises specific problems for compliance with AI Act obligations and for the application of liability rules, which have to be acknowledged and properly addressed. Multimodality, emergent capabilities, scalability and generality of tasks may not match the assumptions underlying the obligations and requirements laid down for AI systems. The chapter explores whether the current ecosystem of existing and still-to-be-adopted rules on AI systems fully and adequately addresses the distinctive features of Generative AI, with special consideration of the interaction between the AI Act and the liability rules provided for in the draft AI Liability Directive (AILD) and the revised Product Liability Directive (revPLD).
Generative artificial intelligence has a long history but surged into global prominence with the introduction in 2017 of the transformer architecture for large language models. Based on deep learning with artificial neural networks, transformers revolutionised the field of generative AI for the production of natural-language outputs. Today’s large language models, and other forms of generative artificial intelligence, now have unprecedented capability and versatility. The emergence of these forms of highly capable generative AI poses many legal issues and questions, including consequences for intellectual property, contracts and licences, liability, data protection, use in specific sectors, potential harms, and of course ethics, policy, and regulation of the technology. To support the discussion of these topics in this Handbook, this chapter gives a relatively non-technical introduction to the technology of modern artificial intelligence and generative AI.
Artificial Intelligence (AI) can collect, unperceived by the user, Big Data about that user. It can identify the user’s cognitive profile and manipulate them into predetermined choices by exploiting their cognitive biases and decision-making processes. A Large Generative Artificial Intelligence Model (LGAIM) can enhance the possibility of computational manipulation: it can make a user see and hear what is most likely to affect their decision-making processes, creating the perfect text accompanied by perfect images and sounds on the perfect website. Multiple international, regional and national bodies have recognised the existence of computational manipulation and the possible threat to fundamental rights resulting from its use. The EU has even taken the first steps towards protecting individuals against computational manipulation. This paper argues that while manipulative AIs that rely on deception are addressed by existing EU legislation, some forms of computational manipulation, specifically where an LGAIM is used in the manipulative process, still do not fall under the shield of EU law. Therefore, existing EU legislation needs to be redrafted to cover every aspect of computational manipulation.
The recent paradigm shift from predictive to generative AI has accelerated a new era of innovation in artificial intelligence. Generative AI, exemplified by large language models (LLMs) like GPT (Generative Pre-trained Transformer), has revolutionized this landscape. This transition holds profound implications for the legal domain, where language is central to practice. The integration of LLMs into AI and law research and legal practice presents both opportunities and challenges. This chapter explores the potential enhancements of AI through LLMs, particularly the CLAUDETTE system, focusing on consumer empowerment and privacy protection. On this basis, we also investigate what new legal issues can emerge in the context of the AI Act and related regulations. Understanding the capabilities and limitations of LLMs vis-à-vis conventional approaches is crucial in harnessing their full potential for legal applications.
This chapter examines the G7’s Hiroshima AI Process (HAIP) and its flagship document, the Hiroshima Code of Conduct, as key drivers in global AI governance. Through an analysis of AI regulations and guidance across G7 member states, it highlights the alignment between national frameworks and the Code’s principles. The chapter outlines concrete measures for translating these principles into G7-level policies and adjusting national standards accordingly. It also proposes enhancements to the Code, including a common AI governance vocabulary, improved risk management, lifecycle standard harmonization, stakeholder engagement, redress mechanisms for AI harms, and guidelines for government AI use, in order to uphold democracy and human rights. Ultimately, this chapter presents international alignment as a step forward in building common principles on AI governance, and provides recommendations to strengthen the G7’s leadership in shaping a global AI landscape rooted in the rule of law, democracy, and human rights.
It is hard for regulation to keep up with the rapid development of new technologies. This is partly due to the lack of specialist technical expertise among lawmakers, and partly due to the multi-year timescales for developing, proposing and negotiating complex regulations that lag behind technological advances. Generative AI has been a particularly egregious example of this situation but is by no means the first. On the other hand, technical standardisation in global fora such as ISO and IEC generally does not suffer from a lack of specialist technical expertise. In many cases, it is also able to work on somewhat faster timescales than regulation. Therefore, many jurisdictions have developed synergistic approaches that combine the respective strengths of regulation and standardisation to complement each other.
There is growing global interest in how AI can improve access to justice, including how it can increase court capacity. This chapter considers the potential future use of AI to resolve disputes in place of the judiciary. We focus our analysis on the right to a fair trial as outlined in Article 6 of the European Convention on Human Rights, and ask: do we have a right to a human judge? We first identify several challenges to interpreting and applying Article 6 in this new context, before considering the principle of human dignity, which has received little attention to date. Arguing that human dignity is an interpretative principle which incorporates protection from dehumanisation, we propose that it provides a deeper, or “thicker”, reading of Article 6. Applied to this context, we identify risks of dehumanisation posed by judicial AI, including not being heard, or not being subject to human judgement or empathy. We conclude that a thicker reading of Article 6 informed by human dignity strongly suggests the need to preserve human judges at the core of the judicial process in the age of AI.