The brain–computer interface (BCI) is a promising technology for restoring communication in individuals with locked-in syndrome (LIS): it offers individuals with impaired or absent means of effective communication a potential tool for using brain activity to control an output device such as a computer keyboard. Exploratory studies of BCI devices for communication in people with LIS are underway. Research with individuals with LIS presents not only technological challenges but ethical challenges as well. Whereas recent attention has focused on ethical issues that arise at the initiation of studies, such as how to obtain valid consent, relatively little attention has been given to issues at the conclusion of studies. BCI research in LIS highlights one such challenge: how to decide when an exploratory BCI research study should end. In this article, we present the case of an individual with presumed LIS enrolled in an exploratory BCI study. We consider whether two common ethical frameworks for stopping randomized clinical trials—equipoise and nonexploitation—can usefully be applied to elucidate researchers’ obligations to end exploratory BCI research. We argue that neither framework is a good fit for exploratory BCI research. Instead, we apply recent work on clinician-researcher fiduciary obligations and in turn offer some preliminary recommendations for BCI researchers on how to end exploratory BCI studies.
Brain–computer interfaces (BCIs) are driven essentially by algorithms; however, the ethical role of such algorithms has so far been neglected in the ethical assessment of BCIs. The goal of this article is therefore twofold: First, it aims to offer insights into whether (and how) the problems related to the ethics of BCIs (e.g., responsibility) can be better grasped with the help of already existing work on the ethics of algorithms. As a second goal, the article explores what kinds of solutions are available in that body of scholarship, and how these solutions relate to some of the ethical questions around BCIs. In short, the article asks what lessons can be learned about the ethics of BCIs from looking at the ethics of algorithms. To achieve these goals, the article proceeds as follows. First, a brief introduction into the algorithmic background of BCIs is given. Second, the debate about epistemic concerns and the ethics of algorithms is sketched. Finally, this debate is transferred to the ethics of BCIs.
Moral enhancement has been accused of curtailing human freedoms. In this article, I suggest the opposite: moral enhancement and individual freedom can go hand in hand. The first section defines freedom, enhancement, and morality and argues that only a naturalistic account of morality allows for the concept of enhancement. The second section looks at ways that freedom may be threatened by moral enhancement, especially by the method of implementation, the creation of new externalities, or the limitation of volitional options. I argue that virtue ethics offers the safest model for moral enhancement. The third section describes ways in which moral enhancement can be achieved while maintaining, or even increasing, individual freedom. Such methods include shifting the moral axis, replacing vicious options with virtuous ones, and increasing the number of volitional options available. The article concludes in the fourth section by arguing that the technologies and techniques that enable moral enhancement are likely to be the same ones that allow greater freedom than we already enjoy.
Sometimes one’s greatest academic disappointments can have unexpected outcomes. This is especially true when one is trying to change career trajectories or do something that others did not take seriously. My path into neuroethics was an unexpected journey catalyzed in part by constructive disappointment and the disbelief of colleagues who thought that the work I was pursuing nearly two decades prior was a fool’s errand. After all, could anyone—in his or her right mind—ever conceive of waking up a person unconscious from brain injury and getting him to speak?
Common understandings of neuroethics, that is, of its distinctive nature, are premised on two distinct sets of claims: (1) neuroscience can change views about the nature of ethics itself, and neuroethics is dedicated to reaping such an understanding of ethics; and (2) neuroscience poses challenges distinct from other areas of medicine and science, and neuroethics tackles those issues. Critiques have rightfully challenged both claims, stressing how the first may lead to problematic forms of reductionism whereas the second relies on debatable assumptions about the nature of bioethics specialization and development. Informed by philosophical pragmatism and our experience in neuroethics, we argue that these claims are ill-founded and should give way to pragmatist reconstructions; namely, that neuroscience, much like other areas of empirical research on morality, can provide useful information about the nature of morally problematic situations, but does not need to promise radical and sweeping changes to ethics based on neuroscientism. Furthermore, the rationale for the development of neuroethics as a specialized field need not be premised on the distinctive nature of the issues it tackles or of neurotechnologies. Rather, it can espouse an understanding of neuroethics as both a scholarly and a practical endeavor dedicated to resolving a series of problematic situations raised by neurological and psychiatric conditions.
The body-to-head transplant (BHT) planned to be undertaken later this year at China’s Harbin Medical University by neurosurgeons Sergio Canavero and Xiaoping Ren has attracted considerable attention and criticism. The intended operation gives rise to philosophical queries about the body–brain–mind relationship and the nature of the subjective self; technical and ethical issues regarding the scientific soundness, safety, and futility of the procedure; the adequacy of prior research; and the relative merit, folly, and/or danger of forging new boundaries of what is biomedically possible. Moreover, that this procedure, which has been prohibited in other countries, has been sanctioned in China brings into stark relief the ways that differing social and political values, philosophies, ethics, and laws can affect the scope and conduct of research. Irrespective of whether the BHT actually occurs, the debate it has generated reveals and reflects both the ever more international enterprise of brain science and the need for neuroethical discourse to include and appreciate multicultural views, values, and voices.
Research participants are entitled to many rights that may easily come into conflict. The most important ones are that researchers respect their autonomy as persons and act on the principles of beneficence, nonmaleficence, and justice. Since 2014, research subjects in numerous states of the United States of America also have a legal “right to try” that allows them, under certain circumstances, to receive experimental (i.e., preliminarily tested) interventions, including medical devices, before official approval from the United States Food and Drug Administration. In the context of experimental interventions, such as deep brain stimulation (DBS) for Alzheimer’s disease, this article argues that research participants ought never to have a legal “right to try” without a corresponding “right to be sure.” The latter refers to external epistemic justification construed in terms of reliance on reliable evidence. This article demonstrates that the mere complexity of intervention ensembles, with DBS for Alzheimer’s disease serving as a paradigm example, illustrates how unanswered and/or unasked open questions give rise to a “combinatorial explosion” of uncertainties demanding epistemic responses that no single research team is likely able to provide alone. From this assessment, several asymmetrical epistemic relations between researchers and participants are developed. By elucidating these epistemic asymmetries, this article unravels the reasons why open science, transparent and exhaustive data reporting, preregistration, and continued critical appraisal via pre- and postpublication peer review are not scientific virtues of moral excellence but rather ordinary obligations of the scientific work routine, required to increase the reliability and strength of evidence.
Some authors have questioned the moral authority of advance directives (ADs) in cases in which it is not clear whether the author of the AD is identical to the person to whom it later applies. This article focuses on the question of whether the latest results of neuroimaging studies have moral significance with regard to the moral authority of ADs in patients with disorders of consciousness (DOCs). Some neuroimaging findings could provide novel insights into the question of whether patients with DOCs exhibit sufficient psychological continuity to be ascribed diachronic personal identity. If those studies were to indicate that psychological continuity is present, they could justify the moral authority of ADs in patients with DOCs. This holds at least if respect for self-determination is considered the foundation of the moral authority of ADs. The non-identity thesis would then no longer apply to patients with DOCs, in line with clinical and social practice.
In this article, we present a pragmatic approach to neuroethics, referring back to John Dewey and his articulation of the “common good” and its discovery through systematic methods. Pragmatic neuroethics bridges philosophy and social sciences and, at a very basic level, considers that ethics is not dissociable from lived experiences and everyday moral choices. We reflect on the integration between empirical methods and normative questions, using as our platform recent bioethical and neuropsychological research into moral cognition, action, and experience. Finally, we present the protocol of a study concerning teenagers’ morality in everyday life, discussing our epistemological choices as an example of a pragmatic approach in empirical ethics. We hope that this article conveys that even though the scope of neuroethics is broad, it is important not to move too far from the real life encounters that give rise to moral questions in the first place.
In this article, we begin by identifying three main neuroethical approaches: neurobioethics, empirical neuroethics, and conceptual neuroethics. Our focus is on conceptual approaches that generally emphasize the need to develop and use a methodological modus operandi for effectively linking scientific (i.e., neuroscience) and philosophical (i.e., ethics) interpretations. We explain and assess the value of conceptual neuroethics approaches and explain and defend one such approach that we propose as being particularly fruitful for addressing the various issues raised by neuroscience: fundamental neuroethics.
Highly immersive virtual reality (VR) systems have been introduced into the consumer market in recent years. The improved technological capabilities of these systems, combined with biometric sensors such as electroencephalography (EEG) in closed-loop hybrid VR-EEG setups, open up a range of new potential medical applications. This article first provides an overview of the past and current clinical applications of VR systems in neurology and psychiatry and introduces core concepts in neurophilosophy and VR research (such as agency, trust, and presence). Then, important adverse effects of highly immersive VR simulations and the ethical implications of standalone and hybrid VR systems for therapy in neurology and psychiatry are highlighted. These new forms of VR-based therapy may strengthen patients in exercising their autonomy. At the same time, however, these emerging systems present ethical challenges, for example in terms of moral and legal accountability in interactions involving “intelligent” hybrid VR systems. A user-centered approach informed by the target patients’ needs and capabilities could help to build beneficial systems for VR therapy.
Neuroscientific interventions are increasingly proposed as solutions for social problems, beyond their application in biomedicine. For example, there is increasing interest, particularly from outside commentators, in harnessing neuroscientific advances as an alternative method of punishing criminal offenders. Such neuropunishments are seen as a potentially more effective, less costly, and more humane alternative to incarceration, with overall better results for offenders, communities, and societies. This article considers whether neuroscience as a field should engage more actively with such proposals, and whether more research should be done to explore the use of neurointerventions for punishment. It concludes that neuroscientists and those working at the intersection of neuroscience and the clinic should actively shape these debates.
This short article proposes a conceptual structure for “neurolaw,” modeled loosely on Adina Roskies’s bipartite division of the sister field of neuroethics into the “ethics of neuroscience” and the “neuroscience of ethics.” As normative fields addressing the implications of scientific discoveries and expanding technological capacities affecting the brain, “neurolaw” and neuroethics have followed parallel paths. Similar foundational questions arise for both about the validity and utility of recognizing them as distinct subfields of law and ethics, respectively. In both, a useful distinction can be drawn between a self-reflexive inquiry (the neuroscience of ethics and law) and an inquiry into the development and use of brain science and technologies (the ethics and law of neuroscience). In both fields, these two forms of inquiry interact in interesting ways. In addition to proposing a conceptual structure for neurolaw, the article also addresses the neurolegal versions of the critiques made against neuroethics, including charges of reductionism, fact/value confusion, and biological essentialism.
According to a familiar distinction, neuroethics incorporates the neuroscience of ethics and the ethics of neuroscience. Within neuroethics, these two parts have provoked distinct and separate lines of inquiry, and there has been little discussion of how the two parts overlap. In the present article, I try to draw a connection between these two parts by considering the implications that are raised for ethics by scientific findings about the way we make moral decisions. The main argument of the article is that although neuroscience is “stretching” ethics by revealing the empirical basis of our moral decisions and, thereby, challenging our present understanding of the dominant ethical theories, substantial further questions remain regarding the impact that neuroscience will have on ethics more broadly.