A natural question is: why AI in design? Although the design applications written about in the journal vary widely, the common thread is that researchers use AI techniques to implement their ideas. The use of AI techniques for design applications, at least when AI EDAM was started, was partly a reaction against the predominant design methods based on some form of optimization. Knowledge-based techniques, particularly rule-based systems of various sorts, were very popular. One of the draws of these methods, I believe, was their ability to represent knowledge that is hard or awkward to represent in traditional optimization frameworks. This mirrors my experience: at the time, I was working on configuration problems involving components with a large number of compatibility and resource constraints. Although many of these constraints could be represented in mixed integer linear programming systems, it was not easy to conceptualize, write, and, most importantly, maintain the constraints in those systems.
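To illustrate the kind of knowledge that is awkward in an optimization framework but natural in a rule-based one, consider a minimal sketch of rule-based compatibility checking for a configuration task. All component names, rules, and numbers below are invented for illustration; a real configurator would carry hundreds of such rules.

```python
# A hypothetical rule-based configuration checker.
# Each rule pairs a human-readable description with a predicate
# over a configuration dictionary.

rules = [
    ("dual PSU required for more than 4 drive bays",
     lambda c: c["drive_bays"] <= 4 or c["psus"] >= 2),
    ("fiber NIC incompatible with compact chassis",
     lambda c: not (c["nic"] == "fiber" and c["chassis"] == "compact")),
    ("total power draw within PSU capacity (450 W per PSU)",
     lambda c: c["power_draw_w"] <= 450 * c["psus"]),
]

def violations(config):
    """Return the descriptions of all rules the configuration breaks."""
    return [desc for desc, ok in rules if not ok(config)]

config = {"drive_bays": 6, "psus": 1, "nic": "fiber",
          "chassis": "compact", "power_draw_w": 500}
print(violations(config))  # this configuration breaks all three rules
```

The appeal is maintainability: each rule is local, self-documenting, and can be added or retired independently, whereas encoding the same knowledge as mixed integer linear constraints requires re-deriving linearizations whenever the product line changes.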
Many ethical questions about our future with intelligent machines rest upon assumptions concerning the origins, development, and ideal future of humanity and of the universe, and hence overlap considerably with many religious questions. First, could computers themselves become moral in any sense, and could different components of morality – whatever they are – be instantiated in a computer? Second, could computers enhance the moral functioning of humans? Do computers potentially have a role in narrowing the gap between moral aspiration and how morality is actually lived out? Third, if we develop machines comparable in intelligence to humans, how should we treat them? This question is especially acute for embodied robots and human-like androids. Fourth, numerous moral issues arise as society changes such that artificial intelligence plays an increasingly significant role in making decisions, with implications for how human beings function socially and as individuals, treat each other, and access resources.
Data is one of the most valuable resources of the twenty-first century. Property rights are a tried-and-tested legal response to regulating valuable assets. For non-personal, machine-generated data within an EU context, mainstream IP options are not available, although certain types of machine-generated data may be protected as trade secrets or under sui generis database protection. However, a new IP right is not needed. The formerly proposed EU data producer’s right is a cautionary tale for jurisdictions considering a similar model. A new property right would both strengthen the position of de facto data holders and drive up costs. With data, however, there are valuable lessons to be learned from constructed commons models.
The EU Artificial Intelligence Act proposal is based on a risk-oriented approach. While AI systems that pose an unacceptable risk will be banned, high-risk AI systems will be subject to strict obligations before they can be put on the market. Most of the provisions deal with high-risk systems, setting out obligations on providers, users and other participants across the AI value chain. At the heart of the proposal is the notion of co-regulation through standardization based on the New Legislative Framework. Accordingly, this chapter provides a critical analysis of the proposal, with particular focus on how the co-regulation, standardization and certification system envisaged contributes to European governance of AI and addresses the manifold ethical and legal concerns of (high-risk) AI systems.
The hope is that legal rules relating to AI technologies can frame their progress and limit the risks of abuse. This hope is tentative, as technology seriously challenges the theory and practice of the law across legal traditions. The use of interdisciplinary and comparative methodologies makes clear that AI is currently reshaping our understanding of the law. AI can be understood as a regulatory technology in its own right, one that can produce normative effects, some of which may be contrary to public laws and regulations.
This paper examines the evidence for the marginal feminine endings *-ay- and *-āy- in Proto-Semitic, and the feminine endings *-e and *-a in Proto-Berber. Their similar formation (*CV̆CC-ay/āy), semantics (verbal abstracts, underived concrete feminine nouns) and plural morphology (replacement of the feminine suffix by a plural suffix with -w-) suggest that this feminine formation should be reconstructed to a shared ancestor which may be called Proto-Berbero-Semitic.
Metacognition is the concept of reasoning about an agent’s own internal processes and was originally introduced in the field of developmental psychology. In this position chapter, we examine the concept of applying metacognition to artificial intelligence (AI). We introduce a framework for understanding metacognitive AI that we call TRAP: transparency, reasoning, adaptation, and perception.
This paper is written at a tipping point in the development of generative AI and related technologies and services, which heralds a new battleground between humans and computers in the shaping of reality. Large language models (LLMs) scrape vast amounts of data from the so-called ‘publicly available’ internet, enabling new ways for the past to be represented and reimagined at scale, for individuals and societies. Moreover, generative AI changes what memory is and what memory does, pushing it beyond the realm of individual, human influence and control, yet at the same time offering new modes of expression, conversation, creativity, and ways of overcoming forgetting. I argue here for a ‘third way of memory’, to recognise how the entanglements between humans and machines both enable and endanger human agency in the making and the remixing of individual and collective memory. This includes the growth of AI agents, with increasing autonomy and infinite potential to make, remake, and repurpose individual and collective pasts, beyond human consent and control. This paper outlines two key developments of generative AI-driven services: firstly, they untether the human past from the present, producing a past that was never actually remembered in the first place, and, secondly, they usher in a new ‘conversational’ past through the dialogical construction of memory in the present. Ultimately, developments in generative AI are making it more difficult for us to recognise the human influence on, and pathways from, the past, and human agency over remembering and forgetting is increasingly challenged.
The chapter analyses the impact of AI on IP law. In August 2019, news reports carried stories about the first patent applications naming an AI algorithm, called DABUS, as an inventor. Almost immediately, the United States Patent Office published a request for comments, asking questions about how it should approach AI and patent law. Less than a year later, the questions were seemingly definitively resolved.
The AI agent does not have legal personality: it cannot be sued in court and it has no patrimony. Without legal personality, no declaration of intent would exist; without a declaration of intent, the law of agency is not applicable. Applying agency law principles to the problems that arise from transactions generated by AI agents thus creates more problems than solutions. Applying the law of agency therefore seems to depend on granting intelligent agents a legal personality.
The EU definitions of AI have moved from a narrow one to a broad one, reflecting the EU’s policy of governing the phenomenon of AI in the broadest way, covering a wide range of situations. The chapter examines the key contents of the main EU AI documents, including the European Parliament Resolution with recommendations to the Commission on Civil Law Rules on Robotics, the Ethics Guidelines for Trustworthy AI, the proposed AI Act, and the recent Proposal for an AI Liability Directive.
This chapter introduces basic concepts of AI to lawyers, deals with key concepts, the capabilities and limitations of AI, and identifies technological challenges which might require legal responses.
“Performing AI” raises new questions about creative labor. Might the mathematical entities called neural networks that constitute much contemporary AI research be expressive and “perform,” thus leveling the playing field between human beings and nonhuman machines? What human societal models do neural networks enact? What bodily, mental, and affective work is required to integrate neural networks into the profoundly anthropocentric domain of the performing arts?
Technological tools currently being developed are capable of substantially assisting judges in their daily work. In particular, data analysis of widely available court decisions will make it possible to evaluate, in an unprecedented way, the activity of the courts and the quality of justice. In doing so, it will allow for more efficient and faster dispute resolution, as well as cost reductions for litigants and society. Technological evolution will probably not cause the disappearance of humans from judicial adjudication but rather a new, progressive and subtle redistribution of tasks between humans and machines.
1. Business entities currently employ AI and other algorithmic techniques in essentially all sectors of the economy in order to influence potential customers. The concept of AI is discussed elsewhere in this book. This contribution is more concerned with what is happening on the market under the label ‘AI’ and how this may affect those who are generally labelled as consumers. After all, the focus of legal research is not so much on ‘new’ technology itself, but rather on the aspects of social life that this technology makes newly salient.
To that end, I will first identify and categorise some of the ways in which business entities employ what is commonly referred to as AI, as well as the risks and benefits of such uses (part 2). For this, I will rely on the findings of the ARTificial intelligence SYstems and consumer law & policy project (ARTSY Project) conducted by the European University Institute in Florence under the supervision of Professor Hans Micklitz. Subsequently, I will examine how the legislator intends to adapt consumer policy to the changing circumstances created by these developments (part 3). I will limit this study to European Union consumer policy, as the Belgian legislator is likely to adopt this approach. I will then examine some of the hurdles that (AI-driven) autonomous agents present to consumer autonomy, as well as the question of to what extent and how these can be dealt with within the current consumer law framework (part 4). In particular, I will discuss a number of market practices closely related to the advent of autonomous agents. In this regard, I will rely on the key issues in the consumer domain as defined in a briefing document to the European Parliament prepared by one of the researchers of the ARTSY Project. I will not elaborate on consumer privacy, as privacy considerations are discussed elsewhere in this book. Finally, I will recapitulate my findings and reflect on the nature of consumer rights in the era of AI (part 5).
BENEFITS AND RISKS OF AI AS A MARKET TOOL
2. A sectoral analysis prepared within the framework of the ARTSY Project shows that the use of AI is booming in several domains.