This chapter examines the transformative effects of generative AI (GenAI) on competition law, exploring how GenAI challenges traditional business models and antitrust regulations. The evolving digital economy, characterised by advances in deep learning and foundation models, presents unique regulatory challenges due to market power concentration and data control. This chapter analyses the approaches adopted by the European Union, United States, and United Kingdom to regulate the GenAI ecosystem, including recent legislation such as the EU Digital Markets Act, the AI Act, and the US Executive Order on AI. It also considers foundation models’ reliance on key resources, such as data, computing power, and human expertise, which shape competitive dynamics across the AI market. Challenges at different levels of the ecosystem, including infrastructure, data, and applications, are investigated, with a focus on their implications for fair competition and market access. The chapter concludes by offering insights into the balance needed between fostering innovation and mitigating the risks of monopolisation, ensuring that GenAI contributes to a competitive and inclusive market environment.
Several criminal offences can originate from or culminate with the creation of content. Sexual abuse can be perpetrated by producing intimate material without the subject’s consent, while incitement to criminal activity can begin with a simple conversation. When the task of generating content is entrusted to artificial agents, it becomes necessary to delve into the associated risks posed by this technology. Generative AI changes criminal affordances because it simplifies access to harmful or dangerous content, broadens the range of recipients, creates new kinds of harmful content, and can exploit cognitive vulnerabilities to manipulate user behaviour. Given this evolving landscape, the question that arises is whether criminal law should be involved in the policies aimed at fighting and preventing Generative AI-related harms. The bulk of criminal law scholarship to date would not criminalise AI harms, on the theory that AI lacks moral agency. However, when a serious harm occurs, responsibility needs to be distributed according to the guilt of the agents involved and, where guilt is lacking, responsibility must fall away in recognition of their innocence. Legal systems need to start exploring whether and how guilt can be preserved when the actus reus is completely or partially delegated to Generative AI.
This chapter deals with the use of Large Language Models (LLMs) in the legal sector from a comparative law perspective. It explores their advantages and risks; the pertinent question of whether the deployment of LLMs by non-lawyers can be classified as an unauthorized practice of law in the US and Germany; what lawyers, law firms and legal departments need to consider when using LLMs under professional rules of conduct, especially the American Bar Association Model Rules of Professional Conduct and the Charter of Core Principles of the European Legal Profession of the Council of Bars and Law Societies of Europe; and, finally, how the recently published AI Act will affect the legal tech market, specifically the use of LLMs. A concluding section summarizes the main findings and points out open questions.
Making sense of paradata as information on practices and processes is a matter of both theory and practice. This chapter introduces a comprehensive theoretical reference model for paradata and discusses its practical implications. Paradata is approached as a category of things that can be appropriated as being informative about processes and practices. Working knowledge of practices and processes, and the practices and processes themselves, can create paradata through both embodiment and acts of inscription. Paradata turns back into working knowledge through appropriation; enactment turns paradata back into practices and processes. Paradata materialises as a process- and network-like meshwork in space-time. It is perpetually in the making and is stabilised only momentarily, at the times when it is taken into use.
Paradata is a concept that is very much in the making. Its significance is not given, and it can matter in different ways depending on the context and on how the notion itself is operationalised in use. Paradata complements earlier metainformation concepts for knowledge organisation in that it can facilitate systematising the complexity of data, practices and processes and making that complexity visible. As a mindset, paradata underlines the importance of being involved in both the theory and the practice of how data is constantly being made and remade. There are, however, practical and ethical limits to what paradata can do, how far it can go, and what it is desirable to do with it. Ultimately, mastering the use of paradata and making it matter is also a question of literacy, tightly interwoven in the intricate meshwork of the social reality of the domains where it is put to work.
While generative AI enables the creation of diverse content, including images, videos, text, and music, it also raises significant ethical and societal concerns relating to bias, transparency, accountability, and privacy. Therefore, it is crucial to ensure that AI systems are both trustworthy and fair, optimising their benefits while minimising potential harm. To explore the importance of fostering trustworthiness in the development of generative AI, this chapter delves into the ethical implications of AI-generated content, the challenges posed by bias and discrimination, and the importance of transparency and accountability in AI development. It proposes six guiding principles for creating ethical, safe, and trustworthy AI systems. Furthermore, legal perspectives are examined to highlight how regulations can shape responsible generative AI development. Ultimately, the chapter underscores the need for responsible innovation that balances technological advancement with societal values, preparing us to navigate future challenges in the evolving AI landscape.
The purpose of this chapter is to show how and where paradata emerges ‘in the wild’ of the many varieties of research documentation produced during scholarly work, and to demonstrate what this paradata might look like. The examination of paradata in research documentation is approached using the perspectives of data ‘as practice’ and data ‘as thing’, emphasising both that paradata is malleable and will manifest differently across contexts of data production and use, and that paradata is a tangible data phenomenon with identifiable characteristics. The chapter draws empirically on an interview study of archaeologists and archaeological research data professionals (N=31). Theoretical framing is provided by scholarship on data and documentation. The chapter reveals how paradata in research documentation emerges in different forms and with varying scope, comprehensiveness and degrees of formalisation. It also suggests that there are technical and epistemic usefulness thresholds relevant for identifying and using paradata. The technical usefulness threshold represents the baseline possibilities of accessing and interacting with paradata in research documentation. The epistemic usefulness threshold, by contrast, concerns the degree of affinity between the intellectual horizons of paradata creation and paradata use; several resources are identified that can help to strengthen this affinity.
Generative AI promises to have a significant impact on intellectual property law and practice in the United States. Already several disputes have arisen that are likely to break new ground in determining what IP protects and what actions infringe. Generative AI is also likely to have a significant impact on the practice of searching for prior art, creating new materials, and policing rights. This chapter surveys the emerging law of generative AI and IP in the United States, sticking as close as possible to near-term developments and controversies. All of the major IP areas are covered, at least briefly, including copyrights, patents, trademarks, trade secrets, and rights of publicity. For each of these areas, the chapter evaluates the protectability of AI-generated materials under current law, the potential liability of AI providers for their use of existing materials, and likely changes to the practice of creation and enforcement.
It is well known that, to be properly valued, high-quality products must be distinguishable from poor-quality ones. When they are not, indistinguishability creates an information asymmetry that, in turn, leads to a lemons problem, defined as the market erosion of high-quality products. Although the valuation of generative artificial intelligence (GenAI) systems’ outputs is still largely unknown, preliminary studies show that, all other things being equal, human-made works are valued significantly more highly than machine-enabled ones. Given that these works are often indistinguishable, all the conditions for a lemons problem are present. Against that background, this chapter proposes a Darwinian reading to highlight how GenAI could potentially lead to “unnatural selection” in the art market: a competition between human-made and machine-enabled artworks that is not decided on the merits but distorted by asymmetrical information. The chapter proposes solutions ranging from top-down rules of origin to bottom-up signalling. It is argued that both approaches can be employed in copyright law to identify where the human author has exercised the free and creative choices required to meet the criterion of originality, and thus copyrightability.
This chapter will focus on how Chinese and Japanese copyright law balance content owners’ desire for copyright protection with the national policy goal of enabling and promoting technological advancement, in particular in the area of AI-related progress. In discussing this emerging area of law, we will focus mainly on the two most fundamental questions that the widespread adoption of generative AI poses to copyright regulators: (1) does the use and refinement of training data violate copyright law, and (2) who owns the copyright in content produced by or with the help of AI?
This chapter explores the intricate relationship between consumer protection and GenAI. Prominent tools like Bing Chat, ChatGPT 4.0, Google’s Gemini (formerly known as Bard), OpenAI’s DALL·E, and Snapchat’s AI chatbot are widely recognized and dominate the generative AI landscape. However, numerous smaller, unbranded GenAI tools are embedded within major platforms, often going unrecognized by consumers as AI-driven technology. In particular, the focus of this chapter is the phenomenon of algorithmic consumers, whose interactions with digital tools, including GenAI, have become increasingly dynamic, engaging, and personalized. Indeed, the rise of algorithmic consumers marks a pivotal shift in consumer behaviour, which is now characterized by heightened levels of interactivity and customization.
This chapter introduces a selection of methods for identifying and extracting paradata from existing datasets and data documentation, which can then be used to complement existing formal documentation of practices and processes. Data reuse, in its multiple forms, enables researchers to build upon the foundations laid by previous studies. Retrospective methods for eliciting paradata, including qualitative and quantitative backtracking and data forensics, provide means of gaining insight into past research practices and processes for data-driven analysis. The methods discussed in this chapter enhance understanding of data-related practices and processes, and support the reproducibility of findings by facilitating the replication and verification of results through data reuse. Key references and further reading are provided after each method description.
Generative AI has catapulted into the legal debate through popular applications such as ChatGPT, Bard, and DALL·E. While the predominant focus has hitherto centred on issues of copyright infringement and regulatory strategies, particularly within the ambit of the AI Act, it is imperative to acknowledge that generative AI also engenders substantial tension with data protection laws. The example of generative AI puts a finger on the sore spot of the contentious relationship between data protection law and machine learning, built on the unresolved conflict between the protection of individuals, rooted in fundamental data protection rights, and the massive amounts of data required for machine learning, which renders data processing nearly universal. In the case of LLMs, which scrape nearly the whole internet, this training inevitably relies on, and possibly even creates, personal data under the GDPR. This tension manifests across multiple dimensions, encompassing data subjects’ rights, the foundational principles of data protection, and the fundamental categories of data protection. Drawing on ongoing investigations by data protection authorities in Europe, this chapter undertakes a comprehensive analysis of the intricate interplay between generative AI and data protection within the European legal framework.
Research on paradata practices provides diverse insights for the management of paradata. This chapter draws on the existing body of research to inform paradata practices in repository settings, including research data archives, repositories and research information management contexts. Four categories of paradata needs (methods, scope, provenance and knowledge representation) are described, as well as two major categories of paradata relevant from a repository perspective: core paradata, i.e. information commonly perceived as being paradata, and potential paradata, i.e. information with the potential to function as paradata. Further, the chapter discusses three broad management approaches and a set of intermediary strategies: standardisation, embracing the messiness of paradata, and cultivating paradata literacy to manage the different varieties of core paradata and potential paradata.
Making sense of data, and making it useful and manageable, requires understanding not only of what the data is about but also of where it comes from and how it has been processed and used. An emerging interdisciplinary corpus of literature refers to information about the practices and processes of data making, management and use as paradata. This introductory chapter to the first comprehensive overview of the concept and phenomenon of paradata from data management and knowledge organisation perspectives contextualises the notion and provides an overview of the volume, its aims and its starting points.
This chapter provides an outline analysis of the evolving governance framework for Artificial Intelligence (AI) in the island city-state of Singapore. In broad terms, Singapore’s signature approach to AI governance reflects its broader governance culture, which harnesses the productive energy of free-market capitalism contained within clear guardrails, as well as the dual nature (as both regulator and development authority) of Singapore’s lead public agency in AI policy formulation. Singapore’s approach is of interest to other jurisdictions in the region and around the world, and it can already be observed to have influenced the recent Association of South East Asian Nations (ASEAN) Guide on AI Governance and Ethics, promulgated in early 2024.