Generative artificial intelligence has a long history but surged into global prominence with the introduction in 2017 of the transformer architecture for large language models. Based on deep learning with artificial neural networks, transformers revolutionised the field of generative AI for the production of natural language outputs. Today’s large language models, and other forms of generative artificial intelligence, now have unprecedented capability and versatility. The emergence of these highly capable forms of generative AI poses many legal issues and questions, including consequences for intellectual property, contracts and licences, liability, data protection, use in specific sectors, potential harms, and of course ethics, policy, and regulation of the technology. To support the discussion of these topics in this Handbook, this chapter gives a relatively non-technical introduction to the technology of modern artificial intelligence and generative AI.
Artificial Intelligence (AI) can collect, unperceived, Big Data on the user, identify their cognitive profile, and manipulate them into predetermined choices by exploiting their cognitive biases and decision-making processes. A Large Generative Artificial Intelligence Model (LGAIM) can enhance the possibilities for computational manipulation: it can make a user see and hear what is most likely to affect their decision-making, creating the perfect text accompanied by the perfect images and sounds on the perfect website. Multiple international, regional and national bodies have recognised the existence of computational manipulation and the threat to fundamental rights that its use may pose. The EU has even taken the first steps towards protecting individuals against computational manipulation. This paper argues that while manipulative AI that relies on deception is addressed by existing EU legislation, some forms of computational manipulation, specifically where an LGAIM is used in the manipulative process, still fall outside the EU’s protective shield. Existing EU legislation therefore needs to be redrafted to cover every aspect of computational manipulation.
The recent paradigm shift from predictive to generative AI has accelerated a new era of innovation in artificial intelligence. Generative AI, exemplified by large language models (LLMs) like GPT (Generative Pre-trained Transformer), has revolutionized this landscape. This transition holds profound implications for the legal domain, where language is central to practice. The integration of LLMs into AI and law research and legal practice presents both opportunities and challenges. This chapter explores the potential enhancements of AI through LLMs, particularly the CLAUDETTE system, focusing on consumer empowerment and privacy protection. On this basis, we also investigate what new legal issues can emerge in the context of the AI Act and related regulations. Understanding the capabilities and limitations of LLMs vis-à-vis conventional approaches is crucial in harnessing their full potential for legal applications.
This chapter examines the G7’s Hiroshima AI Process (HAIP) and its flagship document, the Hiroshima Code of Conduct, as key drivers in global AI governance. Through an analysis of AI regulations and guidance across G7 member states, it highlights the alignment between national frameworks and the Code’s principles. The chapter outlines concrete measures for translating these principles into G7-level policies and adjusting national standards accordingly. It also proposes enhancements to the Code, including a common AI governance vocabulary, improved risk management, lifecycle standard harmonization, stakeholder engagement, redress mechanisms for AI harms, and guidelines for government AI use, in order to uphold democracy and human rights. Ultimately, this chapter presents international alignment as a step forward in building common principles on AI governance, and provides recommendations to strengthen the G7’s leadership in shaping a global AI landscape rooted in the rule of law, democracy, and human rights.
It is hard for regulation to keep up with the rapid development of new technologies. This is partly due to the lack of specialist technical expertise among lawmakers, and partly due to the multi-year timescales for developing, proposing and negotiating complex regulations that lag behind technological advances. Generative AI has been a particularly egregious example of this situation but is by no means the first. On the other hand, technical standardisation in global fora such as ISO and IEC generally does not suffer from a lack of specialist technical expertise. In many cases, it is also able to work on somewhat faster timescales than regulation. Therefore, many jurisdictions have developed synergistic approaches that combine the respective strengths of regulation and standardisation to complement each other.
There is growing global interest in how AI can improve access to justice, including how it can increase court capacity. This chapter considers the potential future use of AI to resolve disputes in the place of the judiciary. We focus our analysis on the right to a fair trial as outlined in Article 6 of the European Convention on Human Rights, and ask: do we have a right to a human judge? We firstly identify several challenges to interpreting and applying Article 6 in this new context, before considering the principle of human dignity, which has received little attention to date. Arguing that human dignity is an interpretative principle which incorporates protection from dehumanisation, we propose it provides a deeper, or “thicker” reading of Article 6. Applied to this context, we identify risks of dehumanisation posed by judicial AI, including not being heard, or not being subject to human judgement or empathy. We conclude that a thicker reading of Article 6 informed by human dignity strongly suggests the need to preserve human judges at the core of the judicial process in the age of AI.
The AI Act contains some specific provisions dealing with the possible use of artificial intelligence for discriminatory purposes or in discriminatory ways, in the context of the European Union. The AI Act also regulates generative AI models. However, these two sets of rules have little in common: provisions concerning non-discrimination tend not to cover generative AI, and generative AI rules tend not to cover discrimination. Based on this analysis, the chapter considers the current EU legal framework on discriminatory output of generative AI models, and concludes that expressions already prohibited by anti-discrimination law certainly remain prohibited after the approval of the AI Act, while discriminatory content that is not covered by EU non-discrimination legislation will remain lawful. For the moment, the AI Act has not brought any particularly relevant innovation on this specific matter, but the picture might change in the future.
This chapter points out the significant challenges in holding foundation model developers and deployers clearly responsible for the uses and outputs of their creations under US law. Scienter requirements, and difficulties of proof, make it challenging to establish liability under many statutes with civil penalties and torts. Constitutional protections for speech may shield model-generated outputs, or the models themselves, from some forms of regulation—though legal scholars are divided over the extent of these protections. And legal challenges to agencies’ authority over AI systems could hamstring regulators’ ability to proactively address foundation models’ risks. All is not lost, though. Each of these doctrines does have potential pathways to liability and recourse. However, in all cases there will likely be protracted battles over liability involving the issues described in this chapter.
The paper presents an enhanced method for unknown-parameter estimation and nonlinear controller adaptation that combines the concept of unfalsification with the genetic algorithm (GA). The approach is based on measured data and employs a bank of nonlinear controllers designed to adjust dynamically to the system’s evolving conditions; the controllers in the bank can be switched in to meet the system’s requirements. The method is applied to an autonomous underwater vehicle (AUV) with uncertain parameters. Using the unfalsification method, these uncertain parameters are estimated and a suitable controller is selected from the bank to guide the AUV along a desired trajectory. Additionally, an artificial intelligence technique, the GA, is employed to update the controller bank, yielding versatile and optimised candidates. Simulation results obtained in the MATLAB/Simulink environment show that, in the environment considered in this paper, the Adaptive Unfalsification algorithm in conjunction with the GA estimates the unknown parameter values better than GA optimisation alone, and that the actual trajectory of the AUV converges faster and more closely to the desired trajectory than with the GA-optimised algorithm alone. Furthermore, a survey of experimental results from the established literature is included to evaluate the practical implementation of the proposed design, concluding that the Adaptive Unfalsification algorithm in conjunction with the GA can be implemented on commercially available processors within a reasonable time.
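To make the estimate–select–refresh structure described above more concrete, the following minimal Python sketch replays measured data to falsify candidate parameter estimates, switches in the controller matching the best unfalsified estimate, and lets a small GA refresh the candidate bank. The toy first-order plant, the design_gain rule, and the mutation scheme are hypothetical stand-ins; the paper’s actual AUV model, performance criterion, and GA settings are not given in the abstract.

```python
# Illustrative sketch only: plant, gain rule, and GA operators are hypothetical,
# standing in for the AUV model and settings used in the paper.
import random

DT = 0.1

def plant(x, u, theta):
    """Toy first-order surrogate for the AUV with unknown parameter theta."""
    return x + DT * (-theta * x + u)

def design_gain(theta_hat):
    """Hypothetical design rule mapping a parameter estimate to a controller gain."""
    return 1.0 + theta_hat

def prediction_error(data, theta_hat):
    """Unfalsification test: replay measured (x, u, x_next) triples against the
    candidate estimate; candidates with large prediction error are falsified."""
    return sum(abs(plant(x, u, theta_hat) - xn) for x, u, xn in data) / len(data)

def ga_step(candidates, costs, n_keep=4, sigma=0.2):
    """Minimal GA update of the candidate bank: keep the fittest, mutate them."""
    ranked = [c for _, c in sorted(zip(costs, candidates))][:n_keep]
    children = [max(0.05, c + random.gauss(0.0, sigma))
                for c in ranked for _ in range(2)]
    return ranked + children

random.seed(0)
true_theta, ref, x = 1.7, 1.0, 0.0
candidates = [random.uniform(0.2, 3.0) for _ in range(12)]  # candidate parameter values
theta_hat = candidates[0]
data = []

for t in range(200):
    u = design_gain(theta_hat) * (ref - x)        # controller currently switched in
    x_next = plant(x, u, true_theta)              # "measurement" from the true plant
    data.append((x, u, x_next))
    x = x_next
    if (t + 1) % 20 == 0:                         # periodic unfalsification + GA refresh
        costs = [prediction_error(data, c) for c in candidates]
        theta_hat = candidates[costs.index(min(costs))]   # best unfalsified estimate
        candidates = ga_step(candidates, costs)

print(f"estimated theta = {theta_hat:.2f}  (true theta = {true_theta})")
```

In the paper’s actual scheme, the falsification test would be stated in terms of the closed-loop performance specification for the AUV rather than a bare prediction error, and the GA would operate on full nonlinear controller candidates; the sketch only mirrors the overall loop.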
Stone locales together with continuous maps form a coreflective subcategory of spectral locales and perfect maps. A proof in the internal language of an elementary topos was previously given by the second-named author. This proof can be easily translated to univalent type theory using resizing axioms. In this work, we show how to achieve such a translation without resizing axioms, by working with large, locally small, and small-complete frames with small bases. This requires predicative reformulations of several fundamental concepts of locale theory in predicative HoTT/UF, which we investigate systematically.
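For readers less familiar with the categorical vocabulary, the coreflectivity claim can be unpacked as the following standard universal property (the notation ι, R, and ε is ours, introduced only for illustration, and is not the paper’s):

```latex
% Coreflectivity of Stone locales among spectral locales (with perfect maps):
% the full inclusion $\iota$ has a right adjoint $R$, witnessed by a natural bijection
\[
  \mathbf{Spec}\bigl(\iota S,\; X\bigr) \;\cong\; \mathbf{Stone}\bigl(S,\; R X\bigr).
\]
% Equivalently: every spectral locale $X$ comes with a Stone locale $RX$ and a
% perfect counit map $\varepsilon_X \colon \iota R X \to X$ through which every
% perfect map from a Stone locale into $X$ factors uniquely.
```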
This paper shows how to set up Fine’s “theory-application” type semantics so as to model the use-unrestricted “Official” consequence relation for a range of relevant logics. The frame condition matching the axiom $(((A \to A) \land (B \to B)) \to C) \to C$, the characteristic axiom of the very first axiomatization of the relevant logic E, is exhibited. It is also shown how to model propositional constants within the semantic framework. Whereas the related Routley–Meyer type frame semantics fails to be strongly complete with regard to certain contractionless logics such as B, the current paper shows that Fine’s weak soundness and completeness result can be extended to a strong one also for logics like B.