A natural question is: why AI in design? Although the design applications written about in the journal vary widely, the common thread is that researchers use AI techniques to implement their ideas. The use of AI techniques for design applications, at least when AI EDAM was started, was partially a reaction against the predominant design methods based on some form of optimization. Knowledge-based techniques, particularly rule-based systems of various sorts, were very popular. One of the draws of these methods, I believe, was their ability to represent knowledge that is hard or awkward to represent in traditional optimization frameworks. This mirrors my experience: at the time, I was working in configuration with components that had a large number of compatibility and resource constraints. Although many constraints could be represented in mixed-integer linear programming systems, it was not easy to conceptualize, write, and, most importantly, maintain the constraints in those systems.
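To make the representational point concrete, here is a minimal, hypothetical sketch in Python; the component names, wattage figures, and rules are invented for illustration, not drawn from the original work. It contrasts a compatibility constraint written as a declarative rule, which stays readable and easy to maintain, with the same knowledge re-encoded as mixed-integer linear inequalities.

```python
# Hypothetical configuration example: all names and numbers are invented.

# Rule-based style: each constraint is a self-contained, readable statement
# about the configuration, which makes it easy to review and maintain.
def compatible(config):
    # "A gold-series power supply is required whenever two or more GPUs are installed."
    if config["gpus"] >= 2 and config["psu"] != "gold":
        return False
    # "Total component power draw must not exceed the supply rating."
    if config["gpus"] * 250 + config["cpu_watts"] > config["psu_watts"]:
        return False
    return True

# The same knowledge in a MILP must be rewritten as linear inequalities over
# 0/1 decision variables, e.g. (in standard big-M style):
#     2*x_two_gpus - x_gold_psu <= 1       # GPU pair implies gold PSU
#     250*n_gpus + cpu_watts <= psu_watts  # resource constraint
# The logical intent is no longer visible in the encoding, which is exactly
# the maintenance problem described above.

if __name__ == "__main__":
    config = {"gpus": 2, "psu": "gold", "cpu_watts": 95, "psu_watts": 750}
    print(compatible(config))  # True: both rules are satisfied
```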
Many ethical questions about our future with intelligent machines rest upon assumptions concerning the origins, development and ideal future of humanity and of the universe, and hence overlap considerably with many religious questions. First, could computers themselves become moral in any sense, and could different components of morality – whatever they are – be instantiated in a computer? Second, could computers enhance the moral functioning of humans? Do computers potentially have a role in narrowing the gap between moral aspiration and how morality is actually lived out? Third, if we develop machines comparable in intelligence to humans, how should we treat them? This question is especially acute for embodied robots and human-like androids. Fourth, numerous moral issues arise as society changes such that artificial intelligence plays an increasingly significant role in making decisions, with implications for how human beings function socially and as individuals, treat each other and access resources.
This paper presents the main topics, arguments, and positions in the philosophy of AI at present (excluding ethics). Apart from the basic concepts of intelligence and computation, the main topics of artificial cognition are perception, action, meaning, rational choice, free will, consciousness, and normativity. Through a better understanding of these topics, the philosophy of AI contributes to our understanding of the nature, prospects, and value of AI. Furthermore, these topics can be understood more deeply through the discussion of AI; so we suggest that “AI philosophy” provides a new method for philosophy.
Data is one of the most valuable resources in the twenty-first century. Property rights are a tried-and-tested legal response to regulating valuable assets. For non-personal, machine-generated data in an EU context, mainstream IP options are not available, although certain types of machine-generated data may be protected as trade secrets or under sui generis database protection. However, a new IP right is not needed: the formerly proposed EU data producer’s right is a cautionary tale for jurisdictions considering a similar model. A new property right would both strengthen the position of de facto data holders and drive up costs. Nonetheless, with data, there are valuable lessons to be learned from constructed commons models.
The EU Artificial Intelligence Act proposal is based on a risk-oriented approach. While AI systems that pose an unacceptable risk will be banned, high-risk AI systems will be subject to strict obligations before they can be put on the market. Most of the provisions deal with high-risk systems, setting out obligations on providers, users and other participants across the AI value chain. At the heart of the proposal is the notion of co-regulation through standardization based on the New Legislative Framework. Accordingly, this chapter provides a critical analysis of the proposal, with particular focus on how the co-regulation, standardization and certification system envisaged contributes to European governance of AI and addresses the manifold ethical and legal concerns of (high-risk) AI systems.
The hope is that legal rules relating to AI technologies can frame their progress and limit the risks of abuse. This hope is tentative, as technology seriously challenges the theory and practice of the law across legal traditions. The use of interdisciplinary and comparative methodologies makes clear that AI is currently impacting our understanding of the law. AI can be understood as a regulatory technology, and the analysis confirms that it can produce normative effects, some of which may be contrary to public laws and regulations.
The EU definition of AI has moved from a narrow one to a broad one, reflecting the EU’s policy of governing the phenomenon of AI in the broadest way possible, covering a wide range of situations. The key contents of the main EU AI documents are examined, including the European Parliament Resolution with recommendations to the Commission on Civil Law Rules on Robotics, the Ethics Guidelines for Trustworthy AI, the proposed AI Act, and the recent Proposal for an AI Liability Directive.
This chapter introduces the basic concepts of AI to lawyers, explains the capabilities and limitations of AI, and identifies technological challenges that might require legal responses.
“Performing AI” raises new questions about creative labor. Might the mathematical entities called neural networks that constitute much contemporary AI research be expressive and “perform,” thus leveling the playing field between human beings and nonhuman machines? What human societal models do neural networks enact? What bodily, mental, and affective work is required to integrate neural networks into the profoundly anthropocentric domain of the performing arts?
Technological tools currently being developed are capable of substantially assisting judges in their daily work. In particular, data analysis of widely available court decisions will evaluate, in an unprecedented way, the activity of the courts and the quality of justice. In doing so, it will allow for more efficient and faster dispute resolution, as well as cost reductions for litigants and society. Technological evolution will probably not cause the disappearance of humans from judicial adjudication but rather a new, progressive and subtle redistribution of tasks between humans and machines.
1. Business entities currently employ AI and other algorithmic techniques in essentially all sectors of the economy in order to influence potential customers. The concept of AI is discussed elsewhere in this book. This contribution is more concerned with what is happening on the market under the label ‘AI’ and how this may affect those who are generally labelled as consumers. After all, the focus of legal research is not so much on ‘new’ technology itself, but rather on the aspects of social life that this technology makes newly salient.
To that end, I will first identify and categorise some of the ways in which business entities employ what is commonly referred to as AI, as well as the risks and benefits of such uses (part 2). For this, I will rely on the findings of the ARTificial intelligence SYstems and consumer law & policy project (ARTSY Project) conducted by the European University Institute in Florence under the supervision of Professor Hans Micklitz. Subsequently, I will examine how the legislator intends to adapt consumer policy to the changing circumstances created by these developments (part 3). I will limit this study to European Union consumer policy, as the Belgian legislator is likely to adopt this approach. I will then examine some of the hurdles that (AI-driven) autonomous agents present to consumer autonomy, as well as the question of to what extent, and how, these can be dealt with within the current consumer law framework (part 4). In particular, I will discuss a number of market practices which are closely related to the advent of autonomous agents. In this regard, I will rely on the key issues in the consumer domain as defined in a briefing document to the European Parliament prepared by one of the researchers of the ARTSY Project. I will not elaborate on consumer privacy, as privacy considerations are discussed elsewhere in this book. Finally, I will recapitulate my findings and reflect on the nature of consumer rights in the era of AI (part 5).
BENEFITS AND RISKS OF AI AS A MARKET TOOL
2. A sectoral analysis prepared within the framework of the ARTSY Project shows that the use of AI is booming in several domains.
Governing AI is about getting AI right. Building upon AI scholarship in science and technology studies, technology law, business ethics, and computer science, it documents potential risks and actual harms associated with AI, lists proposed solutions to AI-related problems around the world, and assesses their impact. The book presents a vast range of theoretical debates and empirical evidence to document how and how well technical solutions, business self-regulation, and legal regulation work. It is a call to think inside and outside the box. Technical solutions, business self-regulation, and especially legal regulation can mitigate and even eliminate some of the potential risks and actual harms arising from the development and use of AI. However, the long-term health of the relationship between technology and society depends on whether ordinary people are empowered to participate in making informed decisions to govern the future of technology – AI included.
AI and Image illustrates the importance of critical perspectives in the study of AI and its application to image collections in the art and heritage sector. The authors' approach is that such entanglements of image and AI are neither dystopian nor utopian but may amplify, reduce or condense existing societal inequalities, depending on how they are implemented in relation to human expertise and sensibility in terms of diversity and inclusion. The Element further discusses regulations around the use of AI for such cultural datasets as they touch upon legalities, regulations and ethics. In the conclusion, the authors emphasise the importance of the professional expert factor in the entanglements of AI and images and advocate for a continuously renegotiated professional symbiosis between humans and machines. This title is also available as Open Access on Cambridge Core.
After its launch on 30 November 2022, ChatGPT (or Chat Generative Pre-Trained Transformer) quickly became the fastest-growing app in history, gaining one hundred million users in just two months. Developed by the US-based artificial-intelligence firm OpenAI, ChatGPT is a free, text-based AI system designed to interact with the user in a conversational way. Capable of answering complex questions with sophistication and of conversing in a breezy and impressively human style, ChatGPT can also generate outputs in a seemingly endless variety of formats, from professional memos to Bob Dylan lyrics, HTML code to screenplays and five-alarm chilli recipes to five-paragraph essays. Its remarkable capability relative to earlier chatbots gave rise to both astonishment and concern in the tech sector. On 22 March 2023, a group of more than one thousand scientists and entrepreneurs published an open letter calling for a six-month moratorium on further human-competitive AI development – a moratorium that was not observed.
In this chapter, Fruzsina Molnár-Gábor and Johanne Giesecke consider specific aspects of how the application of AI-based systems in medical contexts may be guided under international standards. They sketch the relevant international frameworks for the governance of medical AI. Among the frameworks that exist, the World Medical Association’s activity appears particularly promising as a guide for standardisation processes. The organisation has already unified the application of medical expertise to a certain extent worldwide, and its guidance is anchored in the rules of various legal systems. It might provide the basis for a certain level of conformity in the acceptance and implementation of new guidelines within national rules and regulations, such as those on new technology applications within the AI field. In order to develop a draft declaration, the authors then sketch out the potential applications of AI and its effects on the doctor–patient relationship in terms of information, consent, diagnosis, treatment, aftercare, and education. Finally, they spell out an assessment of how further activities of the WMA in this field might affect national rules, using the example of Germany.