In this chapter, the legal scholar Christoph Krönke focuses on the legal challenges faced by healthcare AI Alter Egos, especially in the European Union. First, the author outlines the functionalities of AI Alter Egos in the healthcare sector. Building on this, he explores the applicable legal framework, as AI Alter Egos have two main functions: collecting a substantive database and proposing diagnoses. The author spells out that, concerning the database, European data protection laws, especially the GDPR, are applicable. For healthcare AI in general, the author analyses the European Medical Devices Regulation (MDR). He argues that the MDR regulates the market and ensures high standards with regard to the quality of medical devices. Altogether, the author concludes that AI Alter Egos are regulated by an appropriate legal framework in the EU, but that this framework must remain open to new developments in order to stay appropriate.
In this chapter, Mathias Paul explores the topic of AI systems in the financial industry. After outlining different areas of application of AI in the financial sector and different regulatory regimes relevant to robo-finance, the author analyses the risks emerging from AI applications in the financial industry. He argues that AI systems applied in this sector usually do not create new risks; instead, existing risks can actually be mitigated through AI applications. The author then analyses personal responsibility frameworks that have been suggested by scholars in the field of robo-finance, and shows why they are not a sufficient approach to regulation. He concludes by discussing the Draft AI Act proposed by the European Commission as a suitable regulatory approach based on the risks linked to specific AI systems and AI-based practices.
In this chapter, the ethics and international law scholar Silja Voeneky and the mathematician Thorsten Schmidt propose a new adaptive regulation scheme for AI-driven products and services. To this end, the authors examine different regulatory regimes, including the European Medical Devices Regulation (MDR) and the AI Act proposed by the European Commission, and analyse their advantages and drawbacks. They conclude that regulatory approaches, both in general and with regard to AI-driven high-risk products and services, have structural and specific deficits. Hence, the authors suggest a new regulatory approach that avoids these shortcomings. At its core, the proposed adaptive regulation requires that private actors, such as companies developing and selling high-risk AI-driven products and services, pay a proportionate amount of money as a financial guarantee into a fund before the product or service enters the market. The authors lay down what amount of regulatory capital can be seen as proportionate, as well as the accompanying rules and norms needed to implement adaptive regulation.
This chapter by the philosopher Johanna Thoma focuses on the ‘moral proxy problem’, which arises when an autonomous artificial agent makes a decision as a proxy for a human agent, without it being clear for whom specifically it does so. Thoma recognises that, in general, there are broadly two categories of agents an artificial agent can be a proxy for: low-level agents (individual users, or the kinds of human agents artificial agents are usually replacing) and high-level agents (designers, distributors, or regulators). She argues that we do not get the same recommendations under the different agential frames: whilst the former suggests the agents be programmed without risk neutrality, mirroring the departures from risk neutrality common in human choices, the latter suggests the contrary, since the choices are then considered part of an aggregate of many similar choices. The author argues that the largely unquestioned implementation of risk neutrality in the design of artificial agents deserves critical scrutiny. Such scrutiny should reveal that the treatment of risk is intimately connected with our answers to the questions about agential perspective and responsibility.
We consider dynamics represented by successive stochastic moves. Assuming we know the transition probabilities for going from one configuration to the next, we will discuss ways to determine the probabilities of the individual configurations.
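For dynamics of this kind, the long-run probabilities of the individual configurations can be obtained as the stationary distribution of the transition matrix. A minimal sketch in NumPy, using an illustrative three-configuration transition matrix (the values are assumptions for the example, not taken from the text):

```python
import numpy as np

# Toy transition matrix for a 3-configuration system.
# Row i holds the probabilities of moving from configuration i to each configuration.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.2, 0.2, 0.6],
])

def stationary_distribution(P, tol=1e-12, max_iter=10_000):
    """Find pi with pi = pi @ P by repeatedly applying P (power iteration)."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])  # start from the uniform distribution
    for _ in range(max_iter):
        new_pi = pi @ P
        if np.abs(new_pi - pi).max() < tol:
            break
        pi = new_pi
    return pi

pi = stationary_distribution(P)
print(pi)           # long-run probabilities of the three configurations
print(pi @ P - pi)  # approximately zero: pi is invariant under the dynamics
```

The same distribution can equivalently be read off from the left eigenvector of `P` with eigenvalue 1; power iteration is used here only because it mirrors the idea of letting the stochastic dynamics run.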
In this chapter, Thomas Burri, an international lawyer, examines how general ethical norms on AI diffuse into domestic law directly, without engaging international law. The chapter discusses various ethical AI frameworks and shows how they influenced the European Union Commission’s proposal for an AI Act. It reveals the origins of the EU proposal and explains the substance of the future EU AI regulation. The chapter concludes that, overall, international law has played a marginal role in this process; it was largely sidelined.
We point out that complexity science is developing fast and that present and future scientific and societal challenges will require fundamental improvements in our ability to analyse and deal with complex emergent behaviour. It is therefore desirable to spread awareness of the approaches and insights of complexity science widely.
In this chapter, the philosopher Mathias Risse reflects on the medium- and long-term prospects and challenges democracy faces from AI. Comparing the political nature of AI systems with traffic infrastructure, the author points out AI’s potential to greatly strengthen democracy, but only with the right efforts. The chapter starts with a critical examination of the relation between democracy and technology from a historical perspective, before outlining the techno-skepticism prevalent in several grand narratives of AI. Finally, the author explores the possibilities and challenges that AI poses in the present digital age. He argues that technology critically bears on what forms of human life get realised or imagined, as it changes the materiality of democracy (by altering how collective decision making unfolds) and what its human participants are like. In conclusion, Mathias Risse argues that both technologists and citizens need to engage with ethics and political thought generally to have the spirit and dedication to build and maintain a democracy-enhancing AI infrastructure.
Instrument delivery is a critical part of vascular intervention surgery. Due to the soft-body structure of instruments, the relationship between manipulation commands and instrument motion is non-linear, making instrument delivery challenging and time-consuming. Reinforcement learning has the potential to learn manipulation skills and automate instrument delivery, with enhanced success rates and a reduced workload for physicians. However, due to their sample inefficiency when using high-dimensional images, existing reinforcement learning algorithms are of limited use on realistic vascular robotic systems. To alleviate this problem, this paper proposes discrete soft actor-critic with auto-encoder (DSAC-AE), which augments SAC-discrete with an auxiliary reconstruction task. The algorithm is applied with distributed sample collection and parameter update in a robot-assisted preclinical environment. Experimental results indicate that guidewire delivery can be automatically implemented after 50k sampling steps in less than 15 h, demonstrating that the proposed algorithm has great potential to learn manipulation skills for vascular robotic systems.
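The core idea of augmenting a discrete soft actor-critic objective with an auto-encoder reconstruction term can be sketched as follows. This is a heavily simplified, single-step NumPy illustration under stated assumptions: the linear encoder/decoder, the toy dimensions, and the weighting hyperparameter `lambda_recon` are all illustrative stand-ins, not the paper's networks or training loop.

```python
import numpy as np

rng = np.random.default_rng(0)

obs_dim, latent_dim, n_actions = 32, 8, 4  # toy sizes, not the paper's

# Linear encoder/decoder and a latent-space Q-head (illustrative stand-ins
# for the convolutional networks a real DSAC-AE would use).
W_enc = rng.normal(0.0, 0.1, (obs_dim, latent_dim))
W_dec = rng.normal(0.0, 0.1, (latent_dim, obs_dim))
W_q = rng.normal(0.0, 0.1, (latent_dim, n_actions))

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

obs = rng.normal(size=(16, obs_dim))      # a batch of flattened image observations

z = obs @ W_enc                           # shared latent representation
recon = z @ W_dec                         # auto-encoder reconstruction of the input
recon_loss = np.mean((recon - obs) ** 2)  # auxiliary reconstruction objective

q = z @ W_q                               # per-action values over the discrete actions
pi = softmax(q / 0.2)                     # soft policy induced by Q at temperature alpha
alpha = 0.2                               # entropy temperature
# SAC-discrete's soft state value: expected Q plus an entropy bonus,
# computed exactly by summing over the discrete action set.
v_soft = np.sum(pi * (q - alpha * np.log(pi + 1e-8)), axis=-1)

# Training the shared encoder on both signals at once is what makes the
# auxiliary task improve sample efficiency; lambda_recon weights it.
lambda_recon = 1.0
total_loss = -v_soft.mean() + lambda_recon * recon_loss
print(total_loss)
```

In a full implementation both losses would be backpropagated through the shared encoder by an optimiser; the sketch only shows how the two objectives are combined into one scalar.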
Conceptual and mathematical models can serve many purposes. We will discuss why simple stylised models are particularly useful in complexity science, since they can help to identify the most essential mechanisms amongst the profusion of interdependencies at play.
In this chapter, Philipp Kellmeyer discusses how to protect mental privacy and mental integrity in interactions with AI-based neurotechnology from the perspectives of philosophy, ethics, neuroscience, and psychology. The author argues that mental privacy and integrity are important anthropological goods that need to be protected from unjustified interferences. He then outlines the current scholarly discussion and policy initiatives concerning neurorights and takes the position that, while existing human rights provide sufficient legal instruments, an approach is required that makes these rights actionable and justiciable in order to protect mental privacy and mental integrity, for example by connecting fundamental rights to specific applied laws.
The author spells out the different key features of AI systems, introducing inter alia the notions of machine learning and deep learning as well as the use of AI systems as part of robotics.