This article develops an approach to analyzing longitudinal panel data that combines topological data analysis (TDA) and generative AI applied to graph neural networks (GNNs). TDA is deployed to identify and analyze unobserved topological heterogeneities in a dataset. The TDA-extracted information is quantified into a set of measures, called functional principal components, which are used to analyze the data in four ways. First, the measures are construed as moderators of the data, and their statistical effects are estimated within a Bayesian framework. Second, the measures are used as factors to classify the data into topological classes using generative AI applied to GNNs constructed by transforming the data into graphs; this classification uncovers patterns in the data that are otherwise inaccessible through statistical approaches. Third, the measures are used as factors that condition the extraction of latent variables from the data through a generative AI model. Fourth, the measures are used as labels for classifying the graphs into classes that support an effective GNN-based dimensionality reduction of the original data. The article uses a portion of the Militarized Interstate Disputes (MIDs) dataset (1946–2010) as a running example to briefly illustrate its ideas and steps.
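To make the first step concrete, the following is a minimal, hypothetical sketch of the kind of pipeline the abstract describes: each panel unit’s point cloud is summarized by a Betti curve computed with ripser.py, and the curves are reduced to functional principal components with scikit-learn. The summary choice, the filtration grid, and the libraries are illustrative assumptions, not the article’s implementation.

```python
# Hypothetical sketch: topological summaries -> functional principal components.
# Assumes ripser.py and scikit-learn; all names and parameters are illustrative.
import numpy as np
from ripser import ripser                      # Vietoris-Rips persistent homology
from sklearn.decomposition import PCA

def betti_curve(points, grid=np.linspace(0.0, 2.0, 50)):
    """For each filtration value t, count H1 features alive at t."""
    dgm = ripser(points, maxdim=1)["dgms"][1]  # H1 persistence diagram (birth, death)
    return np.array([np.sum((dgm[:, 0] <= t) & (dgm[:, 1] > t)) for t in grid])

# One point cloud per panel unit (e.g., a dyad's trajectory embedded in R^3).
rng = np.random.default_rng(0)
clouds = [rng.normal(size=(120, 3)) for _ in range(40)]

curves = np.stack([betti_curve(c) for c in clouds])   # units x grid values
fpcs = PCA(n_components=3).fit_transform(curves)      # functional principal components

# fpcs[i] summarizes unit i's topology: usable as a moderator in a Bayesian model,
# a conditioning factor for latent-variable extraction, or a label for GNN training.
print(fpcs.shape)  # (40, 3)
```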
Generative AI (GenAI) offers potential for English language teaching (ELT), but it has pedagogical limitations in multilingual contexts, often generating standard English forms rather than reflecting the pluralistic usage that represents diverse sociolinguistic realities. In response to mixed results in existing research, this study examines how ChatGPT, a text-based generative AI tool powered by a large language model (LLM), is used in ELT from a Global Englishes (GE) perspective. Using the Design and Development Research approach, we tested three ChatGPT models: Basic (single-step prompts); Refined 1 (multi-step prompting); and Refined 2 (GE-oriented corpora with advanced prompt engineering). Thematic analysis showed that Refined Model 1 provided limited improvements over the Basic Model, while Refined Model 2 demonstrated significant gains, offering additional affordances in GE-informed evaluation and English as a lingua franca (ELF) communication, despite some limitations (e.g., defaulting to native English speaker norms and lacking tailored GE feedback). The findings highlight the importance of using authentic data to enhance the contextual relevance of GenAI outputs for GE language teaching (GELT). Pedagogical implications include GenAI–teacher collaboration, teacher professional development, and educators’ agentive role in orchestrating diverse resources alongside GenAI.
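As a hedged illustration of how the Basic (single-step) and Refined 1 (multi-step) configurations might differ in practice, the sketch below issues one prompt versus a chained pair of prompts through the openai Python client; the prompts, the model name, and the Global Englishes framing are invented for demonstration and are not the study’s actual materials.

```python
# Illustrative single-step vs multi-step prompting (openai>=1.0 client).
# Everything below is an assumption for demonstration, not the study's prompts.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(content):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": content}],
    )
    return resp.choices[0].message.content

learner_text = "She suggested me to join the discussion group."

# Basic model: a single-step prompt.
single = ask(f"Give feedback on this sentence: {learner_text}")

# Refined model 1: multi-step prompting, feeding the first answer back
# with an explicit Global Englishes framing.
step1 = ask(f"Identify non-standard forms in: {learner_text}")
step2 = ask("Re-evaluate this analysis from a Global Englishes perspective: "
            "which forms remain intelligible in ELF communication rather than "
            "being errors?\n\n" + step1)

print(single, step2, sep="\n---\n")
```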
The emergence of large language models (LLMs) provides an opportunity for AI to operate as a co-ideation partner during creative processes. However, designers currently lack a comprehensive methodology for engaging in co-ideation with LLMs, and there is no established framework describing the process of co-ideation between a designer and ChatGPT. This research therefore aimed to explore how LLMs can act as co-designers and influence the creative ideation processes of industrial designers, and whether a designer’s ideation performance could be improved by employing the proposed framework for co-ideation with a custom GPT. A survey was first conducted to determine how LLMs influence the creative ideation processes of industrial designers and to understand the problems designers face when using ChatGPT to ideate. A framework based on content mapping, designed to guide co-ideation between humans and a custom GPT (named Co-Ideator), was then proposed. Finally, a design case study, followed by a survey and an interview, was conducted to evaluate the ideation performance achieved with the custom GPT and framework compared with traditional ideation methods. The effect of the custom GPT on co-ideation was also compared with a condition in which no artificial intelligence (AI) was used. The findings indicated that when users engaged in co-ideation with the custom GPT, the novelty and quality of their ideation outcomes exceeded those achieved with traditional ideation methods.
The nexus of artificial intelligence (AI) and memory is typically theorized as a ‘hybrid’ or ‘symbiosis’ between humans and machines. The dangers related to this nexus are subsequently imagined as tilting the power balance between its two components, such that humanity loses control over its perception of the past to the machines. In this article, I propose a new interpretation: AI, I posit, is not merely a non-human agency that changes mnemonic processes, but rather a window through which the past itself gains agency and extends into the present. This interpretation holds two advantages. First, it reveals the full scope of the AI–memory nexus. If AI is an interactive extension of the past, rather than a technology acting upon it, every application of it constitutes an act of memory. Second, rather than locating AI’s power along familiar axes – between humans and machines, or among competing social groups – it reveals a temporal axis of power: between the present and the past. In the article’s final section, I illustrate the utility of this approach by applying it to the legal system’s increasing dependence on machines, which, I claim, represents not just a technical but a mnemonic shift, where the present is increasingly falling under the dominion of the past – embodied by AI.
GenAI has significant potential to transform the design process, driving efficiency and innovation from ideation to testing. However, its integration into professional design workflows faces a gap: designers often lack control over outcomes due to inconsistent results, limited transparency, and unpredictability. This paper introduces a framework to foster human ownership in GenAI-assisted design. Developed through a mixed-methods approach, including a survey of 21 designers and a workshop with 12 experts from product design and architecture, the framework identifies strategies to enhance ownership. It organizes these strategies into three categories (source, interaction, and outcome) and maps them across four design phases: define, ideate, deliver, and test. This framework offers actionable insights for responsibly integrating GenAI tools into design practice.
Text-to-image generative AI (GenAI) platforms offer designers new opportunities for inspiration-seeking and concept generation, marking a significant shift from traditional visualisation approaches such as sketching. This study investigates how designers work with text-to-image GenAI during inspiration-seeking and ideation, aiming to characterise designers’ behaviours through designer–GenAI interaction data. Analysis of 503 prompts by four designers engaged in a GenAI-supported design task identifies two distinct behaviours: exploratory, characterised by short, diverse prompts with low similarity; and narrowing, characterised by longer, high-similarity prompts used with detail-focused variation functions. The findings highlight the value of GenAI interaction data for revealing patterns in designers’ behaviours, offering insights into how these tools support designers and informing best practices.
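One plausible way to operationalise these two behaviours from raw prompt logs is to track prompt length together with the similarity between successive prompts. The sketch below does this with TF-IDF vectors and cosine similarity; the threshold values and the labelling rule are illustrative assumptions, not the paper’s coding scheme.

```python
# Hedged sketch: classifying successive prompts as exploratory vs narrowing.
# Representation (TF-IDF) and thresholds are assumptions for illustration.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

prompts = [
    "futuristic chair",
    "organic lamp in a forest",
    "sleek ergonomic office chair, matte black, studio lighting, side view",
    "sleek ergonomic office chair, matte black, studio lighting, front view",
]

vecs = TfidfVectorizer().fit_transform(prompts)
sim = cosine_similarity(vecs)       # pairwise prompt similarity matrix
successive = np.diag(sim, k=1)      # similarity of prompt i to prompt i+1
lengths = [len(p.split()) for p in prompts]

for i, s in enumerate(successive):
    # Long, highly similar follow-up prompts suggest narrowing; otherwise exploratory.
    label = "narrowing" if (s > 0.6 and lengths[i + 1] >= 8) else "exploratory"
    print(f"prompt {i}->{i + 1}: similarity={s:.2f}, {label}")
```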
With the increasing prevalence of service robots, understanding how people perceive their human-likeness and capabilities in contexts of use is crucial. Advances in generative AI offer the potential to create realistic, dynamic video representations of robots in motion. This study introduces an AI-assisted workflow for creating video representations of robots for evaluation studies. As a comparative study, it explores the effect of AI-generated videos on people’s perceptions of robot designs in three service contexts. Nine video clips depicting robots in motion were created and presented in an online survey. Videos increased human-likeness perceptions for supermarket robots but had the same effect as images for restaurant and delivery robots. Perceptions of capabilities showed negligible differences between media types, and no significant differences in the effectiveness of communication were found.
In The Secret Life of Copyright, copyright law meets Black Lives Matter and #MeToo in a provocative examination of how our legal regime governing creative production unexpectedly perpetuates inequalities along racial, gender, and socioeconomic lines while undermining progress in the arts. Drawing on numerous case studies – Harvard’s slave daguerreotypes, celebrity sex tapes, famous Wall Street statues, beloved musicals, and dictator copyrights – the book argues that, despite their purported neutrality, key rules governing copyrights – from the authorship, derivative rights, and fair use doctrines to copyright’s First Amendment immunity – systematically disadvantage individuals from traditionally marginalized communities. Since laws regulating the use of creative content increasingly mediate participation and privilege in the digital world, The Secret Life of Copyright provides a template for a more robust copyright system that better addresses egalitarian concerns and serves the interests of creativity.
This chapter examines the transformative effects of generative AI (GenAI) on competition law, exploring how GenAI challenges traditional business models and antitrust regulations. The evolving digital economy, characterised by advances in deep learning and foundation models, presents unique regulatory challenges due to market power concentration and data control. This chapter analyses the approaches adopted by the European Union, United States, and United Kingdom to regulate the GenAI ecosystem, including recent legislation such as the EU Digital Markets Act, the AI Act, and the US Executive Order on AI. It also considers foundation models’ reliance on key resources, such as data, computing power, and human expertise, which shape competitive dynamics across the AI market. Challenges at different levels, including infrastructure, data, and applications, are investigated, with a focus on their implications for fair competition and market access. The chapter concludes by offering insights into the balance needed between fostering innovation and mitigating the risks of monopolisation, ensuring that GenAI contributes to a competitive and inclusive market environment.
Several criminal offences can originate from or culminate in the creation of content. Sexual abuse can be perpetrated by producing intimate material without the subject’s consent, while incitement to criminal activity can begin with a simple conversation. When the task of generating content is entrusted to artificial agents, it becomes necessary to delve into the associated risks posed by this technology. Generative AI changes criminal affordances because it simplifies access to harmful or dangerous content, amplifies the range of recipients, creates new kinds of harmful content, and can exploit cognitive vulnerabilities to manipulate user behaviour. Given this evolving landscape, the question that arises is whether criminal law should be involved in the policies aimed at fighting and preventing generative AI-related harms. The bulk of criminal law scholarship to date would not criminalise AI harms, on the theory that AI lacks moral agency. However, when a serious harm occurs, responsibility needs to be distributed according to the guilt of the agents involved and, where guilt is lacking, must be withheld in recognition of their innocence. Legal systems need to start exploring whether and how guilt can be preserved when the actus reus is completely or partially delegated to generative AI.
This chapter deals with the use of large language models (LLMs) in the legal sector from a comparative law perspective. It explores their advantages and risks; the pertinent question of whether the deployment of LLMs by non-lawyers can be classified as an unauthorized practice of law in the US and Germany; what lawyers, law firms, and legal departments need to consider when using LLMs under professional rules of conduct, especially the American Bar Association Model Rules of Professional Conduct and the Charter of Core Principles of the European Legal Profession of the Council of Bars and Law Societies of Europe; and, finally, how the recently published AI Act will affect the legal tech market, specifically the use of LLMs. A concluding section summarizes the main findings and points out open questions.
While generative AI enables the creation of diverse content, including images, videos, text, and music, it also raises significant ethical and societal concerns relating to bias, transparency, accountability, and privacy. Therefore, it is crucial to ensure that AI systems are both trustworthy and fair, optimising their benefits while minimising potential harm. To explore the importance of fostering trustworthiness in the development of generative AI, this chapter delves into the ethical implications of AI-generated content, the challenges posed by bias and discrimination, and the importance of transparency and accountability in AI development. It proposes six guiding principles for creating ethical, safe, and trustworthy AI systems. Furthermore, legal perspectives are examined to highlight how regulations can shape responsible generative AI development. Ultimately, the chapter underscores the need for responsible innovation that balances technological advancement with societal values, preparing us to navigate future challenges in the evolving AI landscape.
Generative AI promises to have a significant impact on intellectual property law and practice in the United States. Already several disputes have arisen that are likely to break new ground in determining what IP protects and what actions infringe. Generative AI is also likely to have a significant impact on the practice of searching for prior art, creating new materials, and policing rights. This chapter surveys the emerging law of generative AI and IP in the United States, sticking as close as possible to near-term developments and controversies. All of the major IP areas are covered, at least briefly, including copyrights, patents, trademarks, trade secrets, and rights of publicity. For each of these areas, the chapter evaluates the protectability of AI-generated materials under current law, the potential liability of AI providers for their use of existing materials, and likely changes to the practice of creation and enforcement.
It is well known that, to be properly valued, high-quality products must be distinguishable from poor-quality ones. When they are not, this indistinguishability creates an information asymmetry that, in turn, leads to a lemons problem, defined as the market erosion of high-quality products. Although the valuation of generative artificial intelligence (GenAI) systems’ outputs is still largely unknown, preliminary studies show that, all other things being equal, human-made works are valued significantly more highly than machine-enabled ones. Given that these works are often indistinguishable, all the conditions for a lemons problem are present. Against that background, this chapter proposes a Darwinian reading to highlight how GenAI could lead to “unnatural selection” in the art market: a competition between human-made and machine-enabled artworks that is not decided on the merits but distorted by asymmetrical information. The chapter proposes solutions ranging from top-down rules of origin to bottom-up signalling, arguing that both approaches can be employed in copyright law to identify where the human author has exercised the free and creative choices required to meet the criterion of originality, and thus copyrightability.
This chapter focuses on how Chinese and Japanese copyright law balances content owners’ desire for copyright protection with the national policy goal of enabling and promoting technological advancement, particularly in the area of AI-related progress. In discussing this emerging area of law, we focus mainly on the two most fundamental questions that the widespread adoption of generative AI poses to copyright regulators: (1) does the use and refinement of training data violate copyright law, and (2) who owns the copyright in content produced by or with the help of AI?
This chapter explores the intricate relationship between consumer protection and GenAI. Prominent tools like Bing Chat, ChatGPT 4.0, Google’s Gemini (formerly known as Bard), OpenAI’s DALL·E, and Snapchat’s AI chatbot are widely recognized and dominate the generative AI landscape. However, numerous smaller, unbranded GenAI tools are embedded within major platforms, often going unrecognized by consumers as AI-driven technology. The particular focus of this chapter is the phenomenon of algorithmic consumers, whose interactions with digital tools, including GenAI, have become increasingly dynamic, engaging, and personalized. Indeed, the rise of algorithmic consumers marks a pivotal shift in consumer behaviour, which is now characterized by heightened levels of interactivity and customization.
Generative AI has catapulted into the legal debate through popular applications such as ChatGPT, Bard, DALL·E, and others. While the predominant focus has hitherto centred on issues of copyright infringement and regulatory strategies, particularly within the ambit of the AI Act, it is imperative to acknowledge that generative AI also engenders substantial tension with data protection laws. The example of generative AI puts a finger on the sore spot of the contentious relationship between data protection law and machine learning: the unresolved conflict between the protection of individuals, rooted in fundamental data protection rights, and the massive amounts of data required for machine learning, which renders data processing nearly universal. In the case of LLMs, which scrape nearly the whole internet, training inevitably relies on, and possibly even creates, personal data under the GDPR. This tension manifests across multiple dimensions, encompassing data subjects’ rights, the foundational principles of data protection, and the fundamental categories of data protection. Drawing on ongoing investigations by data protection authorities in Europe, this chapter undertakes a comprehensive analysis of the intricate interplay between generative AI and data protection within the European legal framework.
This chapter explores the privacy challenges posed by generative AI and argues for a fundamental rethinking of privacy governance frameworks in response. It examines the technical characteristics and capabilities of generative AI systems that amplify existing privacy risks and introduce new challenges, including nonconsensual data extraction, data leakage and re-identification, inferential profiling, synthetic media generation, and algorithmic bias. It surveys the current landscape of U.S. privacy law and its shortcomings in addressing these emergent issues, highlighting the limitations of a patchwork approach to privacy regulation, the overreliance on notice and choice, the barriers to transparency and accountability, and the inadequacy of individual rights and recourse. The chapter outlines critical elements of a new paradigm for generative AI privacy governance that recognizes collective and systemic privacy harms, institutes proactive measures, and imposes precautionary safeguards, emphasizing the need to recognize privacy as a public good and a collective responsibility. The analysis concludes by discussing the political, legal, and cultural obstacles to regulatory reform in the United States, most notably the polarization that prevents the enactment of comprehensive federal privacy legislation, the strong commitment to free speech under the First Amendment, and the “permissionless” innovation approach that has historically characterized U.S. technology policy.
Generative AI offers a new lever for re-enchanting public administration, with the potential to contribute to a turning point in the project to ‘reinvent government’ through technology. Its deployment and use in public administration raise the question of its regulation. Adopting an empirical perspective, this chapter analyses how the United States of America and the European Union have regulated the deployment and use of this technology within their administrations. This transatlantic perspective is justified by the fact that both entities were quick to regulate the deployment and use of this technology within their administrations, are considered emblematic actors in the regulation of AI, and share a common public-law foundation in their adherence to the rule of law. In this context, the chapter highlights four approaches to regulating the development and use of generative AI in public administration: command and control, the risk-based approach, the experimental approach, and the management-based approach. It also highlights the main legal issues raised by the use of such technology in public administration and the key administrative principles and values that need to be safeguarded.
The rapid development of generative artificial intelligence (AI) systems, particularly those fuelled by increasingly advanced large language models (LLMs), has raised concerns among policymakers globally about their potential risks. In July 2023, Chinese regulators enacted the Interim Measures for the Management of Generative AI Services (“the Measures”). The Measures aim to mitigate various risks associated with public-facing generative AI services, particularly those concerning content safety and security. At the same time, Chinese regulators are seeking to further the development and application of such technology across diverse industries. Tensions between these policy objectives are reflected in the provisions of the Measures, which impose different types of obligations on generative AI service providers. Such tensions present significant challenges for the implementation of the regulation. As Beijing moves towards establishing a comprehensive legal framework for AI governance, legislators will need to further clarify and balance the responsibilities of diverse stakeholders.