The availability of data is a condition for the development of AI, and healthcare-related AI applications are no exception. Healthcare data are required in the research, development, and follow-up phases of AI. Data collection is also necessary to establish evidence of compliance with legislation: several legislative instruments, such as the Medical Devices Regulation and the AI Act, introduced data-collection obligations to establish (evidence of) the safety of medical therapies, devices, and procedures. Increasingly, such health-related data are collected in the real world from individual data subjects. The relevant legal instruments therefore explicitly state that they are without prejudice to other legal acts, including the GDPR. Following an introduction to real-world data, real-world evidence, and electronic health records, this chapter considers the use of AI for healthcare from the perspective of healthcare data. It discusses the role of data custodians, especially when confronted with a request to share healthcare data, as well as the impact of concepts such as data ownership, patient autonomy, informed consent, and privacy- and data protection-enhancing techniques.
The chapter examines a classic subject of HRI, social robotics, and the law: the design, manufacture, and use of humanoid AI robots for healthcare. The aim is to illustrate a new set of legal challenges that are unique to these systems when deployed in outer space. Such challenges may require the adoption of new legal standards: either sui generis standards of space law, or stricter or more flexible standards for HRI in space missions, down to a new “principle of equality” between human standards and robotic standards in outer space. The chapter complements current discussions and initiatives on the development of standards for the use of humanoid AI robots in health law, consumer law, and cybersecurity by considering the realignment of terrestrial standards that we may expect with the increasing use of humanoid AI systems in space journeys. The assumption is that breathtaking advances in AI and robotics, current trends towards the privatization of space, and the evolution of current regulatory frameworks, in space law but not only there, will put the development of these new legal standards in the spotlight.
Artificial intelligence (AI) can potentially enhance healthcare professionals’ understanding of certain disorders, facilitating improved diagnosis, treatment, and prevention. Exploring the psychological factors that may influence healthcare professionals’ attitudes towards AI in their work is crucial to support the successful adoption and utilization of these technologies.
Objectives
This investigation explored the possible role of burnout, perceived distress, and factors related to work circumstances in willingness to use AI.
Methods
Attitudes towards artificial intelligence, perceived distress, and factors related to work were assessed using an online questionnaire. Participants (86% women, Mage = 46.9 years, SD = 11.3) were healthcare professionals recruited from Hungarian hospitals and healthcare institutions.
Results
Most participants (58%) were open to using AI in their work. Linear regression analysis indicated that significant predictors of willingness to use AI were job satisfaction, work performance, and administrative workload. Higher burnout levels and perceived distress were not associated with attitudes towards AI.
Conclusions
The present findings suggest that work-related environmental factors may have greater predictive power in explaining the propensity to use AI in healthcare than individual psychological factors. However, the explanatory power of these factors was modest (7.5% of variance explained), suggesting that future research should investigate further possible predictors of attitudes towards AI, such as social factors.
This paper motivates institutional epistemic trust as an important ethical consideration informing the responsible development and implementation of artificial intelligence (AI) technologies (or AI-inclusivity) in healthcare. Drawing on recent literature on epistemic trust and public trust in science, we start by examining the conditions under which we can have institutional epistemic trust in AI-inclusive healthcare systems and their members as providers of medical information and advice. In particular, we discuss that institutional epistemic trust in AI-inclusive healthcare depends, in part, on the reliability of AI-inclusive medical practices and programs, their knowledge and understanding among the different stakeholders involved, their effect on the epistemic and communicative duties and burdens of medical professionals and, finally, their interaction and alignment with the public’s ethical values and interests as well as the background sociopolitical conditions in which AI-inclusive healthcare systems are embedded. To assess the applicability of these conditions, we explore a recent proposal for AI-inclusivity within the Dutch Newborn Screening Program. In doing so, we illustrate the importance, scope, and potential challenges of fostering and maintaining institutional epistemic trust in a context where generating, assessing, and providing reliable and timely screening results for genetic risk is of high priority. Finally, to motivate the general relevance of our discussion and case study, we end with suggestions for strategies, interventions, and measures for AI-inclusivity in healthcare more widely.
The implementation of artificial intelligence (AI) tools into clinical spheres emphasizes the critical need for an AI-competent healthcare workforce that can interpret AI output and identify its limitations. Without comprehensive training, there is a risk of misapplication, mistrust, and underutilization. Workforce skill development events such as workshops and hackathons can increase AI competence and foster interdisciplinary collaboration to promote optimal patient care.
Methods:
The University of Florida hosted the AI for Clinical Care (AICC) workshop in April 2024 to address the need for AI-competent healthcare professionals. The hybrid workshop featured a beginner and advanced track with interactive sessions, hands-on skill development, and networking opportunities led by experts. An anonymous, voluntary post-workshop survey asked participants to score their knowledge and skills before and after the AICC workshop. A second, follow-up survey was administered approximately nine months later.
Results:
Ninety participants attended the AICC workshop; forty-one attendees completed the post-workshop survey, and six completed the follow-up survey. Paired t-tests of the post-workshop survey revealed statistically significant (P < .001) increases in self-reported knowledge gain across all six beginner-track learning objectives and significant (P < .05) increases across all five advanced-track objectives. Feedback indicated participants appreciated the interactive format, although communication and networking needed improvement.
Conclusion:
The AICC workshop successfully advanced AI literacy among biomedical professionals and promoted collaborative peer networks. Continued efforts are recommended to enhance participant engagement and ensure equitable access to AI education in clinical settings.
Despite technological and medical advances, amputations continue to increase. Amputees face significant challenges when acquiring and using prosthetic devices, challenges which are made worse as their emotional needs, aspirations, mobility, prosthesis requirements, and problems change over time. These challenges require custom solutions for each individual amputee, a fact that current amputee-centered prosthesis services tend to ignore. The work reported in this paper contributes an AI-based Prosthesis Development Service Framework to cater for the current and evolving needs of amputees.
This paper presents a novel compact substrate-integrated waveguide (SIW) antenna that can be utilized for several applications, such as fifth-generation (5G) communications, the Internet of Things (IoT), artificial intelligence, and medical applications. Six antennas are designed: a single SIW element, 2 × 1 arrays, and 4 × 1 arrays, each with and without slots. The proposed feeding method is implemented to create a slot antenna array incorporating a KS connector. The proposed designs target the 25–28 GHz band, covering up to 3 GHz of bandwidth depending on the configuration. The structures are designed and simulated using the CST simulation package. The simulation results demonstrate that the SIW antenna array functions efficiently in terms of gain, return loss, and radiation patterns. Furthermore, the proposed structures have a total volume of 62 mm × 55 mm × 0.508 mm. The experimental results demonstrate that the performance of the proposed antenna is in good agreement with the simulations. Moreover, the gain increased from 8.5 dBi for the single SIW antenna to 14.7 dBi for the four-element array at 26 GHz, a 72.9% improvement.
This chapter offers lessons from engineering and other industries that promise developments in healthcare, and practical guidance for clinician-engineer partnerships. Section 1 provides guidance on how to establish a shared vocabulary and common understanding between engineers and clinicians of what terms such as AI and ML do and don’t mean. Section 2 identifies challenges clinician-engineering partnerships must overcome to deliver sustained value and ways to avoid common causes of failure. Section 3 provides specific advice on how to design projects to produce value at a series of stages rather than rely on the success of one, ambitious final model. Section 4 concludes by drawing on cautionary lessons from healthcare and other industries.
This article examines the National Health Data Network (RNDS), the platform launched by the Ministry of Health in Brazil as the primary tool for its Digital Health Strategy 2020–2028, including its innovation aspects. The analysis is made through two distinct frameworks: the right to health and personal data protection in Brazil. The first approach is rooted in the legal framework shaped by Brazil’s trajectory on health since 1988, marked by the formal acknowledgment of the right to health and the establishment of the Unified Health System, Brazil’s universal-access health system, encompassing public healthcare and public health actions. The second approach stems from the repercussions of the General Data Protection Law, enacted in 2018, and the inclusion of the right to personal data protection in the Brazilian Constitution. This legislation, akin to the EU’s General Data Protection Regulation, addressed the gap in personal data protection in Brazil and established principles and rules for data processing. The article begins by explaining the two approaches; it then provides a brief history of health informatics policies in Brazil, leading to the current Digital Health Strategy and the RNDS. Subsequently, it delves into an analysis of the RNDS through the lenses of the two aforementioned approaches. In the final discussion sections, the article attempts to extract lessons from the analyses, particularly in light of ongoing discussions such as the secondary use of data for innovation in the context of different interpretations of innovation policies.
1. In this chapter, I will first discuss the rise of robotics and AI in the healthcare sector and the concern of some scholars that this may lead to a dehumanisation of the physician-patient relationship (part 2). I will then elaborate on four existing legal safeguards against such dehumanisation: the fact that only qualified persons are allowed to provide healthcare (part 3) and the resulting liability of the physician if things go wrong (part 4); the right of the patient to receive information about his/her health condition and to give his/her prior informed consent under the Belgian Law on Patient Rights (part 5); and finally transparency and informed consent under the General Data Protection Regulation (GDPR) (part 6). I will conclude with an overview (part 7).
THE RISE OF ROBOTICS AND AI TO DEAL WITH INCREASING DEMANDS IN THE HEALTHCARE SECTOR
2. A recent publication commissioned by the European Parliament states that the health sector is facing increasing demands on services brought on by issues such as an ageing population, an increase in chronic diseases, budgetary constraints, and a shortage of qualified workers. Developments in the field of robotics and AI can provide countless opportunities for addressing these challenges, resulting in necessary and significant cost and time savings. These efficiency benefits result from the fact that the work is done more efficiently, more quickly, and at a lower cost than a human actor could do it. According to the same study, the application of robotics and AI could lead to improvements in fields such as medical diagnosis, surgical intervention, prevention and treatment of diseases, and support for rehabilitation and long-term care. They could also contribute to more effective and automated work management processes, while offering continuous training for healthcare workers. It is estimated that the market for AI in healthcare will reach around $6.6 billion by 2021 and $8 billion by 2022, with significant cost savings for healthcare systems. According to a recent French study, the health sector is internationally the second most impacted sector by robotics and AI after the telecommunications and technologies sector, but ahead of the financial services and automotive sectors.
A natural question is: why AI in design? Although the design applications written about in the journal vary widely, the common thread is that researchers use AI techniques to implement their ideas. The use of AI techniques for design applications, at least when AI EDAM was started, was partially a reaction against the predominant design methods based on some form of optimization. Knowledge-based techniques, particularly rule-based systems of various sorts, were very popular. One of the draws of these methods, I believe, was their ability to represent knowledge that is hard or awkward to represent in traditional optimization frameworks. This mirrors my experience: at the time, I was working in configuration with components that had a large number of compatibility and resource constraints. Although many constraints could be represented in mixed integer linear programming systems, it was not easy to conceptualize, write, and most importantly, maintain the constraints in those systems.
Many ethical questions about our future with intelligent machines rest upon assumptions concerning the origins, development, and ideal future of humanity and of the universe, and hence overlap considerably with many religious questions. First, could computers themselves become moral in any sense, and could different components of morality – whatever they are – be instantiated in a computer? Second, could computers enhance the moral functioning of humans? Do computers potentially have a role in narrowing the gap between moral aspiration and how morality is actually lived out? Third, if we develop machines comparable in intelligence to humans, how should we treat them? This question is especially acute for embodied robots and human-like androids. Fourth, numerous moral issues arise as society changes such that artificial intelligence plays an increasingly significant role in making decisions, with implications for how human beings function socially and as individuals, treat each other, and access resources.
The emergence of digital platforms and the new application economy are transforming healthcare and creating new opportunities and risks for all stakeholders in the medical ecosystem. Many of these developments rely heavily on data and AI algorithms to prevent, diagnose, treat, and monitor diseases and other health conditions. A broad range of medical, ethical and legal knowledge is now required to navigate this highly complex and fast-changing space. This collection brings together scholars from medicine and law, but also ethics, management, philosophy, and computer science, to examine current and future technological, policy and regulatory issues. In particular, the book addresses the challenge of integrating data protection and privacy concerns into the design of emerging healthcare products and services. With a number of comparative case studies, the book offers a high-level, global, and interdisciplinary perspective on the normative and policy dilemmas raised by the proliferation of information technologies in a healthcare context.
AI films are present from the 1950s onwards, but really begin to gather pace from the 1980s and into the 1990s. From the 2000s onwards, more and more AI films proliferate, as filmmakers respond to the development of AI technology in the real world. In this decade, Cynthia Breazeal’s MIT robot Kismet recognises and responds to facial expressions. Robots are sent to hostile landscapes: the Nomad robot explores remote Antarctica and the NASA rovers Spirit and Opportunity roam the surface of Mars. The DARPA Grand Challenge is launched in 2004 to promote autonomous vehicles, and, closer to home, iRobot’s Roomba successfully vacuums and avoids obstacles. AI technology is being used to explore space, to interact with humans, and to assist in the domestic sphere.
In AI film, several thematic strands are notable in this decade. The binary between human and artificial intelligence, which has been questioned and deconstructed in previous decades, is further collapsed, and the integration of AI technology in human life is acknowledged and explored. In The Matrix Reloaded and The Matrix Revolutions, machines are allies and not just enemies; in Terminator 3, John Connor’s Terminator saviour is also, in the future, his killer.
This introductory chapter is divided into two main parts. The first outlines aspects of artificial intelligence that are explored in the films. It is not a history of artificial intelligence, but rather an outline of key elements of that history that fascinate filmmakers, such as how intelligence is defined, the Turing Test, the establishment of the field in the 1950s, the Chinese Room experiment, the Physical Grounding Hypothesis, affective intelligence, ambient intelligence, distributed and multi-agent intelligence, consciousness upload, and the singularity. The second half of the chapter gives an overview of the main attitudes towards AI presented in film, including the typical technophobic fears about AI, and humanist, posthumanist, and transhumanist stances. The various types of AI presented in film are then outlined: robot, digital, and combinations of the two. The themes and tropes that recur in AI film are set out, including the AI as visual double of the human and the AI ‘reveal’; parallels and doubling between human and AI characters, and ways in which they intersect and hybridise. The representation of gender is explored, in terms of female characters in AI film, including female scientists, and the AI characters themselves, usually mirroring the patterns of representation of human characters. The importance of parent–child relationships is outlined, including humans acting as parents and families to artificial intelligence, and occasionally the other way around; the theme of AI reproduction, via human, digital, or other means, is also examined.
This paper examines the evidence for the marginal feminine endings *-ay- and *-āy- in Proto-Semitic, and the feminine endings *-e and *-a in Proto-Berber. Their similar formation (*CV̆CC-ay/āy), semantics (verbal abstracts, underived concrete feminine nouns) and plural morphology (replacement of the feminine suffix by a plural suffix with -w-) suggest that this feminine formation should be reconstructed to a shared ancestor which may be called Proto-Berbero-Semitic.
Governing AI is about getting AI right. Building upon AI scholarship in science and technology studies, technology law, business ethics, and computer science, it documents potential risks and actual harms associated with AI, lists proposed solutions to AI-related problems around the world, and assesses their impact. The book presents a vast range of theoretical debates and empirical evidence to document how and how well technical solutions, business self-regulation, and legal regulation work. It is a call to think inside and outside the box. Technical solutions, business self-regulation, and especially legal regulation can mitigate and even eliminate some of the potential risks and actual harms arising from the development and use of AI. However, the long-term health of the relationship between technology and society depends on whether ordinary people are empowered to participate in making informed decisions to govern the future of technology – AI included.