The recent paradigm shift from predictive to generative AI has accelerated a new era of innovation in artificial intelligence. Generative AI, exemplified by large language models (LLMs) such as GPT (Generative Pre-trained Transformer), has revolutionized this landscape. This transition holds profound implications for the legal domain, where language is central to practice. The integration of LLMs into AI and law research and legal practice presents both opportunities and challenges. This chapter explores how LLMs can enhance legal AI systems, focusing on the CLAUDETTE system and its applications to consumer empowerment and privacy protection. On this basis, we also investigate what new legal issues may emerge in the context of the AI Act and related regulations. Understanding the capabilities and limitations of LLMs vis-à-vis conventional approaches is crucial to harnessing their full potential for legal applications.
This chapter examines the G7’s Hiroshima AI Process (HAIP) and its flagship document, the Hiroshima Code of Conduct, as key drivers in global AI governance. Through an analysis of AI regulations and guidance across G7 member states, it highlights the alignment between national frameworks and the Code’s principles. The chapter outlines concrete measures for translating these principles into G7-level policies and adjusting national standards accordingly. It also proposes enhancements to the Code, including a common AI governance vocabulary, improved risk management, lifecycle standard harmonization, stakeholder engagement, redress mechanisms for AI harms, and guidelines for government AI use, in order to uphold democracy and human rights. Ultimately, this chapter presents international alignment as a step forward in building common principles on AI governance, and provides recommendations to strengthen the G7’s leadership in shaping a global AI landscape rooted in the rule of law, democracy, and human rights.
It is hard for regulation to keep up with the rapid development of new technologies. This is partly due to the lack of specialist technical expertise among lawmakers, and partly due to the multi-year timescales for developing, proposing and negotiating complex regulations, which lag behind technological advances. Generative AI has been a particularly egregious example of this situation but is by no means the first. On the other hand, technical standardisation in global fora such as ISO and IEC generally does not suffer from a lack of specialist technical expertise. In many cases, it is also able to work on somewhat faster timescales than regulation. Therefore, many jurisdictions have developed synergistic approaches in which regulation and standardisation complement each other, drawing on the respective strengths of each.
There is growing global interest in how AI can improve access to justice, including how it can increase court capacity. This chapter considers the potential future use of AI to resolve disputes in the place of the judiciary. We focus our analysis on the right to a fair trial as outlined in Article 6 of the European Convention on Human Rights, and ask: do we have a right to a human judge? We firstly identify several challenges to interpreting and applying Article 6 in this new context, before considering the principle of human dignity, which has received little attention to date. Arguing that human dignity is an interpretative principle which incorporates protection from dehumanisation, we propose it provides a deeper, or “thicker” reading of Article 6. Applied to this context, we identify risks of dehumanisation posed by judicial AI, including not being heard, or not being subject to human judgement or empathy. We conclude that a thicker reading of Article 6 informed by human dignity strongly suggests the need to preserve human judges at the core of the judicial process in the age of AI.
The AI Act contains some specific provisions dealing with the possible use of artificial intelligence for discriminatory purposes or in discriminatory ways, in the context of the European Union. The AI Act also regulates generative AI models. However, these two respective sets of rules have little in common: provisions concerning non-discrimination tend not to cover generative AI, and generative AI rules tend not to cover discrimination. Based on this analysis, the chapter considers the current EU legal framework on discriminatory output of generative AI models, and concludes that those expressions that are already prohibited by anti-discrimination law certainly remain prohibited after the approval of the AI Act, while discriminatory content that is not covered by EU non-discrimination legislation will remain lawful. For the moment, the AI Act has not brought any particularly relevant innovation on this specific matter, but the picture might change in the future.
This chapter points out the significant challenges in holding foundation model developers and deployers clearly responsible for the uses and outputs of their creations under US law. Scienter requirements and difficulties of proof make it challenging to establish liability under many statutes with civil penalties and torts. Constitutional protections for speech may shield model-generated outputs, or the models themselves, from some forms of regulation, though legal scholars are divided over the extent of these protections. And legal challenges to agencies’ authority over AI systems could hamstring regulators’ ability to proactively address foundation models’ risks. All is not lost, though. Each of these doctrines does have potential pathways to liability and recourse. However, in all cases there will likely be protracted battles over liability involving the issues described in this chapter.
To make sense of data and use it effectively, it is essential to know where it comes from and how it has been processed and used. This is the domain of paradata, an emerging interdisciplinary field with wide applications. As digital data rapidly accumulates in repositories worldwide, this comprehensive introductory book, the first of its kind, shows how to make that data accessible and reusable. In addition to covering basic concepts of paradata, the book supports practice with coverage of methods for generating, documenting, identifying and managing paradata, including formal metadata, narrative descriptions and qualitative and quantitative backtracking. The book also develops a unifying reference model to help readers contextualise the role of paradata within a wider system of knowledge, practices and processes, and provides a vision for the future of the field. This guide to general principles and practice is ideal for researchers, students and data managers. This title is also available as open access on Cambridge Core.
Chapter 2 examines how the use of “quantified self” as a shorthand for personal data necessarily indexes only one end, rather than the full spectrum, of technologists’ understanding of digitization and their own roles within it. Looking closely at the way digital executives talk about data in forums such as QS, among others, in fact reveals the contradictions, professional obfuscations, and hyperbole that continue to shape the self-tracking sector. Digital professionals may occasionally enfold concepts such as the “quantified self” into promotional “pitch theater” to stage self-monitoring devices as gadgets that produce faithful and objective data. My interactions with practitioners in these settings, however, point to the more varied social, legal, and fiscal advantages professionals reap from representing digital self-tracking and the data these devices produce as both plastic and precise. This chapter argues that the surface impression that technologists relate to data and modes of self-monitoring in reductive terms has to be weighed against the ways executives pursue both digital ambiguity and objectivity as a meaningful corporate strategy.
To begin evaluating the interaction of “quantified self,” the concept, and Quantified Self (QS), the collective, with digital entrepreneurialism, it is necessary to understand the influence of its originators, Kevin Kelly and Gary Wolf, on this construct’s form and function. Chapter 1 reviews how the two authors coined the term and established the group as an expression of what Wolf has called the “culture of personal data” (Wolf, 2009). While the founders defer to the explanatory power of culture in situating the collective within the technological imaginary, this chapter examines how their own backgrounds as journalists and Wired magazine editors have shaped the semantic meaning of “quantified self” as a catchphrase that refers to the means and outputs of digital self-tracking, and especially to QS as a community of technophiles. Although the role the forum has come to play within the commercial self-tracking sphere analyzed in this book does not fully align with its originators’ intentions, the framing they established has set the tone for many of the ways the collective has become socialized in the technological arena, as well as how it has come to work within it.
Chapter 6 ultimately analyzes the Quantified Self (QS) as a gateway to the notions of difference that continue to shape the tech sector and therefore the devices that derive from it. As it considers the structural inequality that still constrains technological innovation, this chapter also analyzes QS as a site more specifically connected to the forms of privilege that impact how entrepreneurial extracurricular labor becomes converted into business advantage. It emphasizes that the modalities of participation that have rendered QS a community of tech acolytes unevenly regulate who can benefit from the group’s role as an instrument of professional transfiguration, connection, and access.
The camera slowly scans Chris Dancy’s face, first focusing on a profile of his bespectacled eyes, then quickly switching to a frontal shot to examine his contemplative expression at close range. Seconds later, the angle shifts again, the panorama now filmed as though from behind Dancy’s shoulder. The foreground looks blurry to start with. But once the lens adjusts, the viewer clearly sees the nearby cityscape at which Dancy longingly gazes.