The availability of data is a condition for the development of AI, and healthcare-related AI applications are no exception. Healthcare data are required in the research, development, and follow-up phases of AI. In fact, data collection is also necessary to establish evidence of compliance with legislation. Several legislative instruments, such as the Medical Devices Regulation and the AI Act, have enacted data collection obligations to establish (evidence of) the safety of medical therapies, devices, and procedures. Increasingly, such health-related data are collected in the real world from individual data subjects. The relevant legal instruments therefore explicitly state that they shall be without prejudice to other legal acts, including the GDPR. Following an introduction to real-world data, real-world evidence, and electronic health records, this chapter considers the use of AI for healthcare from the perspective of healthcare data. It discusses the role of data custodians, especially when confronted with a request to share healthcare data, as well as the impact of concepts such as data ownership, patient autonomy, informed consent, and privacy- and data protection-enhancing techniques.
The chapter examines a classic subject of HRI, social robotics, and the law: the design, manufacture, and use of humanoid AI robots for healthcare. The aim is to illustrate a new set of legal challenges that are unique to these systems when deployed in outer space. Such challenges may require the adoption of new legal standards, in particular either sui generis standards of space law, or stricter or more flexible standards for HRI in space missions, down to a new “principle of equality” between human standards and robotic standards in outer space. The chapter complements current discussions and initiatives on the development of standards for the use of humanoid AI robots in health law, consumer law, and cybersecurity by considering the realignment of terrestrial standards that we may thus expect with the increasing use of humanoid AI systems in space journeys. The assumption is that breathtaking advances in AI and robotics, current trends towards the privatization of space, and the evolution of current regulatory frameworks, in space law but not only there, will put the development of these new legal standards in the spotlight.
This paper motivates institutional epistemic trust as an important ethical consideration informing the responsible development and implementation of artificial intelligence (AI) technologies (or AI-inclusivity) in healthcare. Drawing on recent literature on epistemic trust and public trust in science, we start by examining the conditions under which we can have institutional epistemic trust in AI-inclusive healthcare systems and their members as providers of medical information and advice. In particular, we argue that institutional epistemic trust in AI-inclusive healthcare depends, in part, on the reliability of AI-inclusive medical practices and programs; their knowledge and understanding among the different stakeholders involved; their effect on the epistemic and communicative duties and burdens of medical professionals; and, finally, their interaction and alignment with the public’s ethical values and interests, as well as the background sociopolitical conditions within which AI-inclusive healthcare systems are embedded. To assess the applicability of these conditions, we explore a recent proposal for AI-inclusivity within the Dutch Newborn Screening Program. In doing so, we illustrate the importance, scope, and potential challenges of fostering and maintaining institutional epistemic trust in a context where generating, assessing, and providing reliable and timely screening results for genetic risk is of high priority. Finally, to motivate the general relevance of our discussion and case study, we end with suggestions for strategies, interventions, and measures for AI-inclusivity in healthcare more widely.
Despite technological and medical advances, amputations continue to increase. Amputees face significant challenges when acquiring and using prosthetic devices, challenges that are made worse as their emotional needs, aspirations, mobility, prosthesis requirements, and problems change over time. These challenges require custom solutions for each individual amputee, a fact that current amputee-centred prosthesis services tend to ignore. The work reported in this paper contributes an AI-based Prosthesis Development Service Framework to cater for the current and evolving needs of amputees.
This chapter offers lessons from engineering and other industries that promise developments in healthcare, and practical guidance for clinician-engineer partnerships. Section 1 provides guidance on how to establish a shared vocabulary and common understanding between engineers and clinicians of what terms such as AI and ML do and don’t mean. Section 2 identifies challenges clinician-engineer partnerships must overcome to deliver sustained value, and ways to avoid common causes of failure. Section 3 provides specific advice on how to design projects to produce value at a series of stages rather than rely on the success of one ambitious final model. Section 4 concludes by drawing on cautionary lessons from healthcare and other industries.
This article examines the National Health Data Network (RNDS), the platform launched by the Ministry of Health in Brazil as the primary tool for its Digital Health Strategy 2020–2028, including its innovation aspects. The analysis is conducted through two distinct frameworks: the right to health and personal data protection in Brazil. The first approach is rooted in the legal framework shaped by Brazil’s trajectory on health since 1988, marked by the formal acknowledgment of the right to health and the establishment of the Unified Health System, Brazil’s universal access health system, encompassing public healthcare and public health actions. The second approach stems from the repercussions of the General Data Protection Law, enacted in 2018, and the inclusion of the right to personal data protection in Brazil’s Constitution. This legislation, akin to the EU’s General Data Protection Regulation, addressed the gap in personal data protection in Brazil and established principles and rules for data processing. The article begins by explaining the two approaches, and then provides a brief history of health informatics policies in Brazil, leading to the current Digital Health Strategy and the RNDS. Subsequently, it delves into an analysis of the RNDS through the lenses of the two aforementioned approaches. In the final discussion sections, the article attempts to extract lessons from the analyses, particularly in light of ongoing discussions such as the secondary use of data for innovation in the context of different interpretations of innovation policies.
1. In this chapter, I will first discuss the rise of robotics and AI in the healthcare sector and the concern of some scholars that this may lead to a dehumanisation of the physician-patient relationship (part 2). I will then elaborate on four potential existing legal safeguards against such dehumanisation: the fact that only qualified persons are allowed to provide healthcare (part 3) and the resulting liability of the physician if things go wrong (part 4); the right of the patient to receive information about his/her health condition and to give his/her prior informed consent under the Belgian Law on Patient Rights (part 5), and finally transparency and informed consent under the General Data Protection Regulation (GDPR) (part 6). I will conclude with an overview (part 7).
THE RISE OF ROBOTICS AND AI TO DEAL WITH INCREASING DEMANDS IN THE HEALTHCARE SECTOR
2. A recent publication commissioned by the European Parliament states that the health sector is facing increasing demands on services brought on by issues such as an ageing population, an increase in chronic diseases, budgetary constraints, and a shortage of qualified workers. Developments in the field of robotics and AI can provide countless opportunities for addressing these challenges, resulting in necessary and significant cost and time savings. These efficiency benefits stem from the fact that the work can be done more efficiently, more quickly, and at a lower cost than a human actor could do it. According to the same study, the application of robotics and AI could lead to improvements in fields such as medical diagnosis, surgical intervention, prevention and treatment of diseases, and support for rehabilitation and long-term care. They could also contribute to more effective and automated work management processes, while offering continuous training for healthcare workers. It is estimated that the market for AI in healthcare will reach around $6.6 billion by 2021 and $8 billion by 2022, with significant cost savings for healthcare systems. According to a recent French study, the health sector is internationally the second most affected by robotics and AI, after the telecommunications and technologies sector but ahead of the financial services and automotive sectors.
A natural question is: why AI in design? Although the design applications written about in the journal vary widely, the common thread is that researchers use AI techniques to implement their ideas. The use of AI techniques for design applications, at least when AI EDAM was started, was partially a reaction against the predominant design methods based on some form of optimization. Knowledge-based techniques, particularly rule-based systems of various sorts, were very popular. One of the draws of these methods, I believe, was their ability to represent knowledge that is hard or awkward to represent in traditional optimization frameworks. This mirrors my experience: at the time, I was working on configuration with components that had a large number of compatibility and resource constraints. Although many constraints could be represented in mixed integer linear programming systems, it was not easy to conceptualize, write, and, most importantly, maintain the constraints in those systems.
Many ethical questions about our future with intelligent machines rest upon assumptions concerning the origins, development, and ideal future of humanity and of the universe, and hence overlap considerably with many religious questions. First, could computers themselves become moral in any sense, and could different components of morality – whatever they are – be instantiated in a computer? Second, could computers enhance the moral functioning of humans? Do computers potentially have a role in narrowing the gap between moral aspiration and how morality is actually lived out? Third, if we develop machines comparable in intelligence to humans, how should we treat them? This question is especially acute for embodied robots and human-like androids. Fourth, numerous moral issues arise as society changes such that artificial intelligence plays an increasingly significant role in making decisions, with implications for how human beings function socially and as individuals, treat each other, and access resources.
The emergence of digital platforms and the new application economy are transforming healthcare and creating new opportunities and risks for all stakeholders in the medical ecosystem. Many of these developments rely heavily on data and AI algorithms to prevent, diagnose, treat, and monitor diseases and other health conditions. A broad range of medical, ethical and legal knowledge is now required to navigate this highly complex and fast-changing space. This collection brings together scholars from medicine and law, but also ethics, management, philosophy, and computer science, to examine current and future technological, policy and regulatory issues. In particular, the book addresses the challenge of integrating data protection and privacy concerns into the design of emerging healthcare products and services. With a number of comparative case studies, the book offers a high-level, global, and interdisciplinary perspective on the normative and policy dilemmas raised by the proliferation of information technologies in a healthcare context.
This paper examines the evidence for the marginal feminine endings *-ay- and *-āy- in Proto-Semitic, and the feminine endings *-e and *-a in Proto-Berber. Their similar formation (*CV̆CC-ay/āy), semantics (verbal abstracts, underived concrete feminine nouns) and plural morphology (replacement of the feminine suffix by a plural suffix with -w-) suggest that this feminine formation should be reconstructed to a shared ancestor which may be called Proto-Berbero-Semitic.
AI and Image illustrates the importance of critical perspectives in the study of AI and its application to image collections in the art and heritage sector. The authors' approach is that such entanglements of image and AI are neither dystopian nor utopian but may amplify, reduce, or condense existing societal inequalities depending on how they are implemented in relation to human expertise and sensibility in terms of diversity and inclusion. The Element further discusses regulations around the use of AI for such cultural datasets as they touch upon legalities, regulations, and ethics. In the conclusion, the authors emphasise the importance of the professional expert in the entanglements of AI and images and advocate for a continuous and renegotiated professional symbiosis between humans and machines. This title is also available as Open Access on Cambridge Core.
After its launch on 30 November 2022, ChatGPT (or Chat Generative Pre-Trained Transformer) quickly became the fastest-growing app in history, gaining one hundred million users in just two months. Developed by the US-based artificial-intelligence firm OpenAI, ChatGPT is a free, text-based AI system designed to interact with the user in a conversational way. Capable of answering complex questions with sophistication and of conversing in a breezy and impressively human style, ChatGPT can also generate outputs in a seemingly endless variety of formats, from professional memos to Bob Dylan lyrics, HTML code to screenplays, and five-alarm chilli recipes to five-paragraph essays. Its remarkable capability relative to earlier chatbots gave rise to both astonishment and concern in the tech sector. On 22 March 2023, a group of more than one thousand scientists and entrepreneurs published an open letter calling for a six-month moratorium on further human-competitive AI development – a moratorium that was not observed.
This paper presents the main topics, arguments, and positions in the philosophy of AI at present (excluding ethics). Apart from the basic concepts of intelligence and computation, the main topics of artificial cognition are perception, action, meaning, rational choice, free will, consciousness, and normativity. Through a better understanding of these topics, the philosophy of AI contributes to our understanding of the nature, prospects, and value of AI. Furthermore, these topics can be understood more deeply through the discussion of AI; so we suggest that “AI philosophy” provides a new method for philosophy.
In this chapter, Fruzsina Molnár-Gábor and Johanne Giesecke consider specific aspects of how the application of AI-based systems in medical contexts may be guided under international standards. They sketch the relevant international frameworks for the governance of medical AI. Among the frameworks that exist, the World Medical Association’s activity appears particularly promising as a guide for standardisation processes. The organisation has already unified the application of medical expertise to a certain extent worldwide, and its guidance is anchored in the rules of various legal systems. It might provide the basis for a certain level of conformity of acceptance and implementation of new guidelines within national rules and regulations, such as those on new technology applications within the AI field. In order to develop a draft declaration, the authors then sketch out the potential applications of AI and its effects on the doctor–patient relationship in terms of information, consent, diagnosis, treatment, aftercare, and education. Finally, they spell out an assessment of how further activities of the WMA in this field might affect national rules, using the example of Germany.
It is only recently that the EPO’s Boards of Appeal have had to deal with appeals relating to the surge of AI-based inventions. In doing so, the Boards of Appeal have adopted a gradualist approach, adapting the extensive EPO case law relating to the patentability of computer programs ‘as such’ and applying it to AI inventions. The most recent change to the Guidelines indicates the EPO’s willingness to adapt to technological developments and to refine its approach to the patentability of inventions involving AI, while at the same time taking a firm line against patenting non-technical inventions.