The transformative impact of artificial intelligence (AI) across various sectors, exemplified by recent advancements such as the release of the generative AI model GPT-4, raises critical legal and policy concerns. These concerns include important societal and potentially existential impacts: threats to democracy, workforce displacement, copyright challenges, environmental effects, new and more lethal cybersecurity threat vectors, and the potential for advanced AI to become uncontrollable or to be used for malicious purposes if it falls into the wrong hands. Human rights concerns are also implicated, including the potential for biased and discriminatory decision-making, unreasonable privacy impacts, inaccurate and unfair outcomes, and a lack of transparency and due process. The unveiling of GPT-4 underscores the need for legislation to address these issues. The European Union (EU) has taken a global lead by enacting the Artificial Intelligence Act (AIA) to regulate AI development, placement, and use, and by proposing the AI Liability Directive (AILD), which aims to facilitate civil claims for damages arising from AI products and services. The AIA takes a comprehensive, risk-based approach to regulating AI across sectors. Significant differences had to be negotiated among the EU co-legislators to reach consensus on the final text of the AIA, including how to define AI systems, whether and how to regulate foundation models, which specific AI systems to ban, and what redress rights to establish for consumers and for fundamental rights violations. The chapter explores the global context, the EU legislative approach, the key issues that had to be resolved, and the interaction of the AIA with other EU laws, particularly the General Data Protection Regulation (GDPR).
From a distance, smart contracts seem exciting: unlike humans, who might opportunistically decide to deviate from the agreed terms, their code will execute “no matter what,” ensuring that the terms are adhered to and the contract is performed. Smart contracts would thus seem like a valuable addition to conventional contracts. A perfect transaction technology, indeed! A closer analysis of the smart contract narrative and the relevant technical scholarship, however, reveals a peculiar dissonance between how smart contracts are described and what smart contracts really are. Taking the unfortunate terminology at face value and analyzing smart contracts as if they were contracts in the legal sense might constitute a waste of academic time. Even if they constituted an improvement over existing transacting practices, would (or could) smart contracts still be contracts? Would they even belong to the same category of legal phenomena? Perhaps the fundamental question is: what are smart contracts? To many, these questions may seem like unnecessary hairsplitting, typical of haughty academics. In practice, however, how something is defined and categorized has immediate practical implications. Sidestepping the overly optimistic narrative of “unstoppable legal innovation,” this chapter deconstructs the concept of smart contracts and aims to provide a more commonsensical and factual grounding for future legal analyses of the phenomenon.
Credit card processing relies heavily on technology, so it is no surprise that technological forces are responsible for some of the problems with opaque pricing in this market. Technology made modern credit card processing possible by speeding up transactions and making them less expensive. But the same technology also made pricing harder for merchants to understand and to compare across different credit card processors. Academic scholarship has failed to address nontransparent pricing for merchant card processing, and laws in various countries focus on interchange fees, not merchant fees. This chapter argues that legal academics should study credit card processing fees and that regulators should look to Canadian law as a model of how to foster transparency.
Advances in technology increasingly inform how consumers make sense of the world and how organizations do business, resulting in complex dynamics between the designers who define and craft products, the companies that sell them, and the consumers who use them. These advances also give rise to new legal and regulatory issues and potential market interventions related to risk mitigation and to consumers’ susceptibility to errors of judgment caused by cognitive biases. This chapter takes a behavioral, human-centered perspective to explore these emergent legal issues through three key lenses: (1) how contemporary digital products deliver consumer value, contribute to new forms of economic and noneconomic currency, and harness infrastructure that balances paternalistic oversight with consumer agency; (2) how digital products’ features shape consumer engagement with other individuals, with the products themselves, and with the companies that produce them; and (3) how increasingly sophisticated data-driven technologies, such as generative AI and machine learning, create asymmetrical relationships between producers and consumers, who often lack the conceptual models necessary to fully comprehend tradeoffs and terms of exchange.
Some commentators have claimed that artificial intelligence (AI) is advancing rapidly and substantially toward human-like intelligence. The case may be overstated. Advances in generative AI are remarkable, but large language models (LLMs) are talkers, not doers. Moves toward some kind of robust agency for AI are, however, coming. Humans and their law must prepare for it. This chapter addresses this preparation from the standpoint of contract law and contract practices. For an AI agent to participate, in a philosophical or psychological sense, as a contracting agent with humans in the formation of a contract, the following requirements will have to be met: (1) the AI in question will need to possess the cognitive functions to act with intention, and that intention must cause the AI to take a particular action; (2) humans must be in a position to recognize and respect that intention; (3) the AI must have the capacity to engage with humans (and other AI) in shared intentions, meaning the cognitive capacity to share a goal the parties can plan for and execute; (4) the AI must have the capacity to recognize and respect the practical authority of law and legal obligation; (5) the AI must have the capacity to recognize and respect practical authority in a claim-accountability sense, accepting that a contract forms a binding commitment to others; in other words, the AI will not only have to engage in shared intentionality but also understand and accept that intentionality as a binding commitment recognized by the law; and (6) the AI must possess the ability to participate in these actions with humans or in some hybrid form with humans.
The development of AI promises to increase innovation and facilitate advancements in multiple fields. Yet, as companies rush products to market in a race for dominance in this highly competitive field, the potential for widespread social harm is foreseeable. In the absence of legislation, commercial law and tort law provide the standards and remedies governing new products; however, companies may alter these default rules by contract. This chapter argues that, until there are industry-specific regulations governing AI products and services, adhesive contracts that alter the default rules of tort and commercial law should not be enforceable.
The recent crypto winter, and the malfeasance of crypto bad actors, has revealed a difficulty in the developing law of digital property. Although the standard recourse for improperly taking someone else’s rivalrous digital property should be conversion (pay for it) or replevin (give it back), courts have only begun the common law process of articulating standards for these causes of action. In short, the current law invites and incentivizes digital theft because it can be very hard to get digital property back. We argue here that the common law is strongest in technology cases when it proceeds by analogies well rooted in traditional case law, and that digital conversion and replevin are directly applicable to situations in which someone has converted or improperly taken the digital property of another.
More than most innovations, smartphones have transformed the human experience. Most people now live with powerful computational devices within arm’s reach, day and night. By enabling the platform economy and bringing computers closer to the human experience, smartphones also opened new doors for tracking and surveillance. The sum of these changes radically altered the consumer contracting environment, exerting new pressures on the foundations of contract law. This chapter examines key factors in this transformation: unprecedented scale, privacy risks, linguistic complexity, and fundamental asymmetries. In sum, the smartphone era has exacerbated old conundrums in consumer contracting – while also introducing new ones. The net result: a further decoupling of consumer reality and contract law.
This chapter first discusses how Bitcoin works in functional terms (as opposed to technical aspects), focusing on the structure of a decentralized, pseudonymous payment system. The chapter next discusses possible applications of the underlying blockchain technology, such as stock trading, property records, peer-to-peer sharing services, and smart contracts. Turning to the law, the chapter discusses several matters that the 2022 amendments to the Uniform Commercial Code addressed: a legal definition applicable to blockchain technologies, the negotiability of digital assets, the use of digital assets as collateral, and whether cryptocurrencies are money. The chapter then discusses some remaining issues, such as whether bitcoin transactions can be traced, whether smart contracts are subject to contract law, and whether parties could opt out of contract law. Finally, the chapter looks specifically at the application of secured lending law to analogous transactions using smart contracts.
This chapter explains why education is a special application domain of AI, one focused on optimizing human learning and teaching. We outline multiple perspectives on the role of AI in education, highlighting the importance of the augmentation perspective, in which human learners and teachers closely collaborate with AI that supports human strengths. To illustrate the variety of AI applications used in the educational sector, we provide an overview of student-facing, teacher-facing, and administrative AI solutions. Next, we discuss the ethical and social impacts of AI in education and outline how ethics in AI and education has developed, from the Beijing Consensus adopted after UNESCO’s 2019 conference on AI in education to the recent European ethical guidelines for educators on the use of AI and data in teaching and learning. Finally, we introduce, as examples, the Dutch value compass for the digital transformation of education and the embedded ethics approach of the National Education Lab AI for developing and cocreating new intelligent innovations in collaboration with educational professionals, scientists, and companies.
AI brings risks but also opportunities for consumers. When it comes to consumer law, which traditionally focuses on protecting consumers’ autonomy and self-determination, the increased use of AI also poses major challenges. This chapter discusses both the challenges and the opportunities of AI in the consumer context (Sections 10.2 and 10.3) and provides a brief overview of some of the relevant consumer protection instruments in the EU legal order (Section 10.4). A case study on dark patterns illustrates the shortcomings of the current consumer protection framework more concretely (Section 10.5).