A natural question is: why AI in design? Although the design applications written about in the journal vary widely, the common thread is that researchers use AI techniques to implement their ideas. The use of AI techniques for design applications, at least when AI EDAM was started, was partly a reaction against the predominant design methods based on some form of optimization. Knowledge-based techniques, particularly rule-based systems of various sorts, were very popular. One of the draws of these methods, I believe, was their ability to represent knowledge that is hard or awkward to capture in traditional optimization frameworks. This mirrors my experience: at the time, I was working on configuration problems involving components with a large number of compatibility and resource constraints. Although many of the constraints could be represented in mixed-integer linear programming systems, it was not easy to conceptualize, write, and, most importantly, maintain them in those systems.
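To make that contrast concrete, here is a minimal sketch, assuming a hypothetical PC-configuration domain (the component names, slot counts, and rule helpers are invented for illustration and do not come from the editorial). It shows the appeal of the rule-based style: each compatibility or resource constraint is a small, named, independently maintainable rule, rather than a row folded into a monolithic MILP constraint matrix.

```python
# Illustrative only: hypothetical components and rules, not from the article.
# Each constraint is a named predicate over a partial configuration, so rules
# can be read, added, or retired one at a time.

from dataclasses import dataclass, field

@dataclass
class Config:
    components: set[str] = field(default_factory=set)
    slots_used: int = 0
    slots_available: int = 4

def requires(having: str, needed: str):
    """Compatibility rule: if `having` is selected, `needed` must be too."""
    def rule(cfg: Config) -> bool:
        return having not in cfg.components or needed in cfg.components
    rule.__name__ = f"{having}_requires_{needed}"
    return rule

def excludes(a: str, b: str):
    """Compatibility rule: `a` and `b` may not both be selected."""
    def rule(cfg: Config) -> bool:
        return not ({a, b} <= cfg.components)
    rule.__name__ = f"{a}_excludes_{b}"
    return rule

def within_capacity(cfg: Config) -> bool:
    """Resource rule: selected components must fit the available slots."""
    return cfg.slots_used <= cfg.slots_available

RULES = [
    requires("gpu_card", "high_wattage_psu"),  # hypothetical components
    excludes("fanless_case", "gpu_card"),
    within_capacity,
]

def violations(cfg: Config) -> list[str]:
    """Return the names of all rules the configuration violates."""
    return [r.__name__ for r in RULES if not r(cfg)]

cfg = Config(components={"gpu_card", "fanless_case"}, slots_used=3)
print(violations(cfg))
# ['gpu_card_requires_high_wattage_psu', 'fanless_case_excludes_gpu_card']
```

An equivalent MILP encoding would express each rule as a linear inequality over 0/1 selection variables (for example, x_gpu_card <= x_high_wattage_psu), which is compact but, as the editorial notes, harder to conceptualize and maintain as the rule set grows.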
Many ethical questions about our future with intelligent machines rest upon assumptions concerning the origins, development and ideal future of humanity and of the universe, and hence overlap considerably with many religious questions. First, could computers themselves become moral in any sense, and could different components of morality – whatever they are – be instantiated in a computer? Second, could computers enhance the moral functioning of humans? Do computers potentially have a role in narrowing the gap between moral aspiration and how morality is actually lived out? Third, if we develop machines comparable in intelligence to humans, how should we treat them? This question is especially acute for embodied robots and human-like androids. Fourth, numerous moral issues arise as society changes such that artificial intelligence plays an increasingly significant role in making decisions, with implications for how human beings function socially and as individuals, treat each other and access resources.
Data is one of the most valuable resources in the twenty-first century. Property rights are a tried-and-tested legal response to regulating valuable assets. For non-personal, machine-generated data within an EU context, mainstream IP options are not available, although certain types of machine-generated data may be protected as trade secrets or under sui generis database protection. However, a new IP right is not needed. The previously proposed EU data producer’s right is a cautionary tale for jurisdictions considering a similar model. A new property right would both strengthen the position of de facto data holders and drive up costs. With data, however, there are valuable lessons to be learned from constructed commons models.
The EU Artificial Intelligence Act proposal is based on a risk-oriented approach. While AI systems that pose an unacceptable risk will be banned, high-risk AI systems will be subject to strict obligations before they can be put on the market. Most of the provisions deal with high-risk systems, setting out obligations on providers, users and other participants across the AI value chain. At the heart of the proposal is the notion of co-regulation through standardization based on the New Legislative Framework. Accordingly, this chapter provides a critical analysis of the proposal, with particular focus on how the co-regulation, standardization and certification system envisaged contributes to European governance of AI and addresses the manifold ethical and legal concerns of (high-risk) AI systems.
The hope is that legal rules relating to AI technologies can frame their progress and limit the risks of abuse. This hope is tentative, as technology seriously challenges the theory and practice of the law across legal traditions. The use of interdisciplinary and comparative methodologies makes clear that AI is already reshaping our understanding of the law. AI can itself be understood as a regulatory technology: it can produce normative effects, some of which may run contrary to public laws and regulations.
Governing AI is about getting AI right. Building upon AI scholarship in science and technology studies, technology law, business ethics, and computer science, it documents potential risks and actual harms associated with AI, lists proposed solutions to AI-related problems around the world, and assesses their impact. The book presents a vast range of theoretical debates and empirical evidence to document how and how well technical solutions, business self-regulation, and legal regulation work. It is a call to think inside and outside the box. Technical solutions, business self-regulation, and especially legal regulation can mitigate and even eliminate some of the potential risks and actual harms arising from the development and use of AI. However, the long-term health of the relationship between technology and society depends on whether ordinary people are empowered to participate in making informed decisions to govern the future of technology – AI included.
AI and Image illustrates the importance of critical perspectives in the study of AI and its application to image collections in the art and heritage sector. The authors' approach is that such entanglements of image and AI are neither dystopian nor utopian but may amplify, reduce or condense existing societal inequalities, depending on how they are implemented in relation to human expertise and sensibility in terms of diversity and inclusion. The Element further discusses regulations around the use of AI for such cultural datasets as they touch upon legalities, regulations and ethics. In the conclusion, the authors emphasise the importance of the professional expert factor in the entanglements of AI and images and advocate for a continuous, renegotiated professional symbiosis between humans and machines. This title is also available as Open Access on Cambridge Core.
After its launch on 30 November 2022, ChatGPT (or Chat Generative Pre-Trained Transformer) quickly became the fastest-growing app in history, gaining one hundred million users in just two months. Developed by the US-based artificial-intelligence firm OpenAI, ChatGPT is a free, text-based AI system designed to interact with the user in a conversational way. Capable of answering complex questions with sophistication and of conversing in a breezy and impressively human style, ChatGPT can also generate outputs in a seemingly endless variety of formats, from professional memos to Bob Dylan lyrics, HTML code to screenplays and five-alarm chilli recipes to five-paragraph essays. Its remarkable capability relative to earlier chatbots gave rise to both astonishment and concern in the tech sector. On 22 March 2023, a group of more than one thousand scientists and entrepreneurs published an open letter calling for a six-month moratorium on further human-competitive AI development – a moratorium that was not observed.
This paper presents the main topics, arguments, and positions in the philosophy of AI at present (excluding ethics). Apart from the basic concepts of intelligence and computation, the main topics of artificial cognition are perception, action, meaning, rational choice, free will, consciousness, and normativity. Through a better understanding of these topics, the philosophy of AI contributes to our understanding of the nature, prospects, and value of AI. Furthermore, these topics can be understood more deeply through the discussion of AI; so we suggest that “AI philosophy” provides a new method for philosophy.
In this chapter, Fruzsina Molnár-Gábor and Johanne Giesecke consider specific aspects of how the application of AI-based systems in medical contexts may be guided under international standards. They sketch the relevant international frameworks for the governance of medical AI. Among the frameworks that exist, the World Medical Association’s activity appears particularly promising as a guide for standardisation processes. The organisation has already unified the application of medical expertise to a certain extent worldwide, and its guidance is anchored in the rules of various legal systems. It might provide the basis for a certain level of conformity in the acceptance and implementation of new guidelines within national rules and regulations, such as those on new technology applications within the AI field. In order to develop a draft declaration, the authors then sketch out the potential applications of AI and its effects on the doctor–patient relationship in terms of information, consent, diagnosis, treatment, aftercare, and education. Finally, they spell out an assessment of how further activities of the WMA in this field might affect national rules, using the example of Germany.
It is only recently that the EPO’s Boards of Appeal have had to deal with appeals relating to the surge of AI-based inventions. In doing so, the Boards of Appeal have adopted a gradualist approach, adapting the extensive EPO case law relating to the patentability of computer programs ‘as such’ and applying it to AI inventions. The most recent change to the Guidelines indicates the EPO’s willingness to adapt to technological developments and to refine its approach to the patentability of inventions involving AI, while at the same time taking a firm line against patenting non-technical inventions.
AI is a complex, multifaceted concept and is therefore hard to define, because AI can refer to technological artifacts, certain methods, or a scientific field that is split into many subfields and that is continuously changing and evolving. AI systems can therefore be seen as digital artifacts that require hardware and software components and that contain at least one learning or learned component, i.e., a component that is able to change the system’s behavior based on presented data and the processing of this data.
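The defining feature of that "learning component" can be shown in a minimal, purely illustrative sketch (the class name, threshold values, and update scheme below are invented for illustration and are not part of the chapter's definition): a component whose decision behavior changes as data is presented to it.

```python
# Purely illustrative: a system containing one "learning component" in the
# sense of the definition above, i.e., a component whose behavior changes
# based on presented data. Names and thresholds are hypothetical.

class LearningComponent:
    """Adjusts a decision threshold from observed (value, label) pairs."""

    def __init__(self) -> None:
        self.threshold = 0.5  # initial, hand-set behavior

    def update(self, value: float, label: bool) -> None:
        # Nudge the threshold toward correctly classifying the presented data.
        if label and value < self.threshold:
            self.threshold -= 0.1
        elif not label and value >= self.threshold:
            self.threshold += 0.1

    def decide(self, value: float) -> bool:
        return value >= self.threshold

component = LearningComponent()
print(component.decide(0.45))        # False under the initial behavior
component.update(0.45, label=True)   # present data to the component...
print(component.decide(0.45))        # True: the system's behavior changed
```

A fixed, hand-coded threshold would make this an ordinary software component; it is the data-driven change in behavior that makes it a learning component under the definition above.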
This chapter turns to the possibility that the AI systems challenging the legal order may also offer at least part of the solution. Here China, which has among the least developed rules to regulate conduct by AI systems, is at the forefront of using that same technology in the courtroom. This is a double-edged sword, however, as its use implies a view of law that is instrumental, with parties to proceedings treated as means rather than ends. That, in turn, raises fundamental questions about the nature of law and authority: at base, whether law is reducible to code that can optimize the human condition, or if it must remain a site of contestation, of politics, and inextricably linked to institutions that are themselves accountable to a public. For many of the questions raised, the rational answer will be sufficient; but for others, what the answer is may be less important than how and why it was reached, and whom an affected population can hold to account for its consequences.