Intended for researchers and practitioners in interaction design, this book shows how Bayesian models can be brought to bear on problems of interface design and user modelling. It introduces and motivates Bayesian modelling and illustrates how powerful these ideas can be in thinking about human-computer interaction, especially in representing and manipulating uncertainty. Bayesian methods are increasingly practical as computational tools to implement them become more widely available, and offer a principled foundation to reason about interaction design. The book opens with a self-contained tutorial on Bayesian concepts and their practical implementation, tailored for the background and needs of interaction designers. The contributed chapters cover the use of Bayesian probabilistic modelling in a diverse set of applications, including improving pointing-based interfaces; efficient text entry using modern language models; advanced interface design using cutting-edge techniques in Bayesian optimisation; and Bayesian approaches to modelling the cognitive processes of users.
The chapter focuses first on the origins of the right to die and its intersections with the development of life-sustaining medical technologies. The analysis then distinguishes between a right to refuse medical support (whether contemporaneously or by advance directive) and the recognition of some form of active aid in dying, taking into account the principal elements of the American, Canadian, European and Chinese legal frameworks.
In dealing with the issue at the heart of this paper, a fundamental question has to be tackled in greater depth: is the right of access to the Internet a human right (or a fundamental right; below is my attempt to introduce a terminological clarification in this regard) that enjoys semantic, conceptual and constitutional autonomy? In other words, is access to the Internet an autonomous right or only a precondition for enjoying, among others, freedom of expression? Why does the classification as a free-standing or derived right matter? Does it carry normative implications, or is it primarily a rhetorical tool? In trying to answer those questions, it may be beneficial to resist the temptation to rely on a "rhetoric" of fundamental rights and human rights, which is widespread throughout the various debates concerning the relationship between law and technology after the rise of the Internet. The language of rights (especially new rights) in Internet law is more than (rhetorically) appealing.
Neurorights are novel human rights that specify areas of protection from potential abuses of neurotechnologies. They protect mental privacy, mental freedom and fair access to neuroenhancement. We discuss neurorights research and advocacy, including the Chilean constitutional amendment and neuroprotection bill of law, which explicitly protect neurorights and adopt a medical model for the regulation of all neurotechnologies, defining them as medical devices. These Chilean bills could serve as a model for legislation elsewhere.
This chapter focuses on m-Health, i.e. technologies offered through mobile devices, with particular regard to those having a specific health purpose. The contribution highlights that the mass use of these technologies is raising many challenges for national and European legislators, who now face a twofold task: assuring the safety and reliability of the data generated by these products, and protecting patients'/consumers' privacy and confidentiality. From the first perspective, such software may sometimes be classified as a medical device, although this classification is not always straightforward, since there can be "borderline products". If software is classified as a medical device, its safety and efficacy are guaranteed by the applicability of the relevant regulations, which dictate specific prerequisites, obligations and responsibilities for manufacturers as well as distributors. From a data protection perspective, the mass use of these technologies allows the collection of huge amounts of personal data, both sensitive data (as relating to health conditions) and data that can nonetheless contribute to the creation of detailed user profiles.
Telemedicine is the delivery of healthcare services by means of information and communication technologies. Although it was initially conceived as a means of overcoming geographical barriers and dealing with emergency situations, the spread of telemedicine in daily practice is reshaping the innermost features of medical practice and shifting organisational patterns in healthcare. Advocates of telemedicine argue that it will redesign healthcare accessibility, improving service quality and optimising costs. However, the use of telemedicine raises a number of ethical, legal and social issues, an overview of which is given in this chapter. The second section deals with the EU policy for the promotion of telemedicine, with reference to the provisions of the European Telehealth Code. In the third section, some of the major ethical concerns raised by telemedicine are discussed. The fourth section considers the role of telemedicine in the management of the COVID-19 health emergency. In the conclusions, it is argued that adequate policies and rules are required to ensure a consistent spread and a safe use of telemedicine as an alternative to in-person healthcare.
Algorithmic transparency is the basis of machine accountability and the cornerstone of policy frameworks that regulate the use of artificial intelligence techniques. The goal of algorithmic transparency is to ensure accuracy and fairness in decisions concerning individuals. AI techniques replicate bias, and as these techniques become more complex, bias becomes more difficult to detect. But the principle of algorithmic transparency remains important across a wide range of sectors. Credit determinations, employment assessments, educational tracking, as well as decisions about government benefits, border crossings, communications surveillance and even inspections in sports stadiums increasingly rely on black box techniques that produce results that are unaccountable, opaque, and often unfair. Even the organizations that rely on these methods often do not fully understand their impact or their weaknesses.
Although lay participation has long been a feature of scientific research, the past decades have seen an explosion in the number of citizen science projects. Simultaneously, the number of low-cost network-connected devices, collectively known as Internet of Things devices, has proliferated. The increased use of Internet of Things devices in citizen science has coincided with a reconsideration of the right to science under international law. Specifically, the Universal Declaration of Human Rights and the International Covenant on Economic, Social and Cultural Rights both recognise a right to benefit from and participate in the scientific process. Whilst it is unclear whether this right protects participation by citizen scientists, it provides a useful framework to help chart the ethical issues raised by citizen science. In this chapter, we first describe the origins and boundaries of the right to science, as well as its relevance to citizen science. We then use the findings of a scoping review to examine three main ethical and legal issues for using Internet of Things devices in citizen science.
Human behaviour is increasingly governed by automated decisional systems based on machine learning (ML) and ‘Big Data’. While these systems promise a range of benefits, they also throw up a congeries of challenges, not least for our ability as humans to understand their logic and ramifications. This chapter maps the basic mechanics of such systems, the concerns they raise, and the degree to which these concerns may be remedied by data protection law, particularly those provisions of the EU General Data Protection Regulation that specifically target automated decision-making. Drawing upon the work of Ulrich Beck, the chapter employs the notion of ‘cognitive sovereignty’ to provide an overarching conceptual framing of the subject matter. Cognitive sovereignty essentially denotes our moral and legal interest in being able to comprehend our environs and ourselves. Focus on this interest, the chapter argues, fills a blind spot in scholarship and policy discourse on ML-enhanced decisional systems, and is vital for grounding claims for greater explicability of machine processes.
In this chapter, we sketch out a preliminary account of the normative questions raised by an emerging form of human-machine interaction that we call the "hybrid mind." By hybrid mind, we mean the direct coupling of the human cognitive system with an artificial cognitive system, such that the cognitive processes of the two systems are functionally integrated through bi-directional interactions and mutually adapt to each other. This inquiry is provoked by the development of novel technologies such as closed-loop or adaptive neuroprostheses, which can consist of implanted or external components and establish a direct communication pathway between the human brain and an external computing device. This communication pathway is typically mediated and facilitated by artificially intelligent components such as machine learning algorithms. This development represents only the latest step in the evolution of human beings and their technologies, a process that has necessitated a parallel evolution in our moral concepts and practices over time. Our objective as ethicists and legal scholars is to propose a concept of this hybrid mind as an interesting unit of analysis.
This chapter argues that the notion of human dignity provides an overarching normative framework for assessing the ethical and legal acceptability of emerging life sciences technologies. After depicting the increasing duality that characterizes modern technologies, this chapter examines two different meanings of human dignity: the classical meaning that refers to the inherent worth of every individual, and the more recent understanding of this notion that refers to the integrity and identity of humankind, including future generations. The close connection between human dignity and human rights is outlined, as well as the key role of dignity in international human rights law, especially in the human rights instruments relating to bioethics. The chapter concludes by briefly presenting the challenges to human dignity and human rights posed by neurotechnologies and germline gene editing technologies.
Technology has been at the heart of our species since the dawn of mankind. The genus homo, to which we all belong, split from a common hominin ancestor about 7 million years ago. This means ‘yesterday’ in geological time: if life on Earth were a 24-hour day, the genus homo would have inhabited this planet for only a couple of minutes. This recency is attested by our genetic heritage as our DNA is 98.8% identical to that of chimpanzees. In this relatively short time, our species and our species’ proximal ancestors established a dynamic and creative relationship with their environment. In particular, they developed the ability to modify their environment with the goal of developing technological tools by means of which they could, in turn, further modify that environment in a more radical and transformative way and thereby even modify themselves. We call ‘technology’ any product of human (and also, in principle, other species’) labor resulting in physical systems that would not be present in the natural environment in the absence of such labor. It has been estimated that as early as 3.4 million years ago, our remote ancestor Australopithecus afarensis, a small-brained early hominin species, used stone tools to separate meat from the bones of large mammals.
Conventional medical ethics, medical law and human rights protect us against the technological manipulation of our bodies, in part through recognising and enforcing a right to bodily integrity. In this chapter, we explore the possibility that we might also protect ourselves against the technological manipulation of our minds through recognising an analogous right to mental integrity. In the first part of the chapter, we describe some of the recent developments in the areas of persuasive and monitoring technologies, and how they are currently being used, e.g., in criminal justice and on the internet. In the second part, we survey existing and proposed novel human rights law relevant to mental integrity. In the third part, we argue that, though the right to mental integrity has thus far been debated particularly in relation to neurointerventions, it would also apply to at least some persuasive and monitoring technologies. Finally, in the fourth part, we consider how existing (i) law and (ii) philosophical scholarship might help to resolve the thorny question of which persuasive and monitoring technologies would infringe the right to mental integrity.
Human beings are technical beings. Their lives cannot be accomplished without technique. At the end of modernity, the symbiosis between science and technique has become so tight that we can now speak of technoscience that gives us more power and responsibility than simple traditional techniques, as in the case of the new biotechnologies. In this chapter we offer a critical reflection on the two major anthropotechnical proposals, that is, the bio-project and the info-project and present arguments and criteria crucial to human rights development and their relevance for an adequate technological humanism. More concretely, we carry out a philosophical analysis of the importance of responsibility for safeguarding the duties of future generations and a non-dualistic anthropology. We also highlight the relevance of societal responsibility, care and solidarity in making the impossible detachment of human beings from technology an opportunity to develop a fruitful debate on human rights based on a deeper understanding of human beings’ relational nature.
This chapter discusses the right to have a child in the context of the latest developments in reproductive technologies. According to the author, while no one can legitimately be deprived of the right to have a child, this statement does not equate to claiming a positive right to have a child. This question has become more complicated since the first in vitro baby was born in 1978, and as more and more new reproductive technologies have been developed since then. In particular, ethical dilemmas emerge when in vitro fertilization involves donated gametes, or when the intending mother needs a surrogate mother because she does not have a womb. Legal regulations of surrogacy agreements vary from total ban to acceptance, or simply remain silent on their legitimacy. In this diverse legal landscape, Sandor discusses the ethical and legal framework of claims to have access to the latest reproductive services, including those technologies that replace or transplant the human womb.