This study examines sociophonetic variation in different functions of like among adolescents in London and Edinburgh. It attempts to determine the factors that may explain this variation. Our results suggest that the function of like correlates primarily with contextual factors, rather than the phonetic factors of vowel quality, /l/ to vowel duration and /k/ realisation. In particular, the preceding and following segments and their bigram predictability emerge as highly significant, in addition to the boundary strength following like. In both London and Edinburgh, the vowel appears to be the only non-contextual feature that is sensitive to the function of like: quotative be like is more likely to be monophthongised than other functions of like. We argue that the more monophthongal nature of quotative like is due to the syntactic and prosodic context in which it occurs.
The ethical principle of autonomy requires physicians to respect patient autonomy when present, and to protect the patient who lacks autonomy. Fulfilling this ethical obligation when a patient has a communication impairment presents considerable challenges. Standard methods for evaluating decision-making capacity require a semistructured interview. Some patients with communication impairments are unable to engage in a semistructured interview and are at risk of the wrongful loss of autonomy. In this article, we present a general strategy for assessing decision-making capacity in patients with communication impairments. We derive this strategy by reflecting on a particular case. The strategy involves three steps: (1) determining the reliability of communication, (2) widening the bandwidth of communication, and (3) using compensatory measures of decision-making capacity. We argue that this strategy may be useful for assessing decision-making capacity and preserving autonomy in some patients with communication impairments.
Implantation of deep brain stimulator (DBS) leads for Gilles de la Tourette syndrome was first described by Visser-Vandewalle et al., with a reported 70%–90% decrease in tic frequency.1 Since that time, several targets, including the basal ganglia and striatum, have been described. The target remains experimental, and in this case, leads were implanted under an investigator-initiated research protocol. Ms. L. reported an excellent intraoperative reduction in the “urge to tic” that persisted for 15 weeks postoperatively, indicating that the leads were well placed. Furthermore, although her tic frequency has increased, it remains improved from baseline and returns to baseline when stimulation is discontinued. Although her response does not represent what her treatment team would consider the “desired medical/therapeutic outcomes,” there is no question that the patient recognizes benefit from her stimulation. In fact, she clearly states that “it’s like I’ve felt a new way and don’t want to go back to the old way.”
In 2015, we published an article entitled “The Medicalization of Love,” in which we argued that both good and bad consequences could be expected to follow from love’s medicalization, depending on how the process unfolded. A flurry of commentaries followed; here we offer some preliminary thoughts in reply to the more substantial of the criticisms that were raised. We focus in particular on the nature of love itself as well as the role it plays (or should play) in our lives; we also touch on a number of practical issues concerning the likely effects of any plausible “real-life” love drugs and conclude with a call for careful regulation.
In 2012, the FDA approved for the differential diagnosis of Alzheimer’s disease a brain-imaging technology, Amyvid-PET (aka florbetapir-PET), capable of non-invasively estimating the burden of amyloid plaques; this approval for one indication renders the technology a candidate for off-label use for another indication according to a physician’s judgment. What should a physician do if an educated, pro-active, and concerned patient requests off-label use of Amyvid-PET to help her estimate the likelihood that her mild memory complaints are “just normal aging” or are likely to profoundly worsen? I consider reasons that a physician might justify denial of such a request, including concerns of safety, uncertain benefit, and fair resource allocation, but cautiously conclude that there may be certain cases where off-label bioprediction would be permissible.
Ongoing developments in neuroscientific techniques and technologies—such as neuroimaging—offer potential for greater insight into human behavior and have fostered temptation to use these approaches in legal contexts. Neuroscientists are increasingly called on to provide expert testimony, interpret brain images, and thereby inform judges and juries who are tasked with determining the guilt or innocence of an individual. In this essay, we draw attention to the actual capabilities and limitations of currently available assessment neurotechnologies and examine whether neuroscientific evidence presents unique challenges to existing frameworks of evidence law. In particular, we focus on (1) fundamental questions of relevance and admissibility that can and should be posed before the tests afforded in Daubert v. Merrell Dow Pharmaceuticals or Frye v. U.S. are applied and (2) how these considerations fit into the broader contexts of criminal law. We contend that neuroscientific evidence must first be scrutinized more heavily for its relevance, within Daubert and Federal Rule of Evidence 702, to ensure that the right questions are asked of neuroscientists: questions that enable experts to interpret neuroscientific evidence within the limits of their knowledge and discipline, and that thereby allow the judge or jury to determine the facts at issue in the case. We use the analogy provided by the Daubert court, of an expert on the phases of the moon testifying to an individual’s behavior on a particular night, to ensure that we are, in fact, asking the neuroscientific expert the appropriate question.
The case of Ms. L. provides a wonderful opportunity to highlight the underlying value commitments that often deeply influence decisionmaking in medicine and more specifically in innovative neurosurgical procedures. In order to give a fair opinion on how Dr. Impf, as clinician and researcher, should act, a much richer and thicker understanding of the actual perspectives of the stakeholders would be necessary. Because this is not available, I highlight three important elements: the terms under which the deep brain stimulation (DBS) is implanted, the proper goals of a healthcare team, and the fallacy of a “natural” or immutable self. These elements are brought together in this case by a set of unexpected effects on the patient that were not intended and that are judged and categorized differently by various stakeholders within the case. In the end, I hope that there was full transparency and agreement about obligations, responsibilities, and outcomes prior to the implantation of the DBS between the physician and patient. Further, it is important to remember that just because a result is serendipitous does not mean that it should be discounted as a proper benefit. Finally, each person authors variations on their own self that are molded by environment and social networks. If Ms. L. continues to demonstrate an ability to author a desired self, the DBS is no more inappropriate a tool than many other artifacts that are used regularly by others to mold themselves.
In the contemporary debate on the use of the neurosciences in ethics and law, numerous arguments have been bandied about among scientists and philosophers looking to uphold or reject the reliability and validity of scientific findings obtained by brain imaging technologies. Among the most vexing questions is, Can we trust that technology? One point of disagreement is whether brain scans offer a window through which to observe the functioning of the mind, in such a way as to enable lawyers, judges, physicians, and lawmakers to detect anomalies in brain function that may account for criminal unconscious behavior. Those who stand behind brain imaging believe that this can indeed be achieved, whereas those in opposition stress that brain scans are highly open to interpretation and that the data they provide is insufficient to establish causal connections. The question essentially comes down to whether technology can reliably be used to determine the intentions of the individual, thus establishing mens rea, for example, and hence responsibility. This article focuses on the latter notion and explores whether we can rely on the neurosciences to shed light on a complex form of moral and legal reasoning, as well as the role of the neurosciences in reawakening a philosophical and legal interest in trying to set responsibility on an empirical basis.
Human beings are sensorimotor-coupled to the actual world and also attuned to the symbolic world of culture and the techniques of adaptation that culture provides; culture thus serves as a mirror in which the self is seen and shaped. The self-image and self-shaping mediated by that mirror directly affect the neurocognitive structures that integrate human neural activity and reshape its processing capacities through top-down or autopoietic effects. Thus a crack’d mirror, which disrupts the processes of enactive self-configuration, can be disabling for an individual. That is exactly what happens in postcolonial or immigration contexts, in which individuals’ cultural adaptations are marginalized and disconnected in diverse and often painful and disorienting ways. The crack’d mirror is therefore a powerful trope for neuroethics and helps us understand the social and moral pathologies of many indigenous and immigrant communities.
On seeing promising results in a small number of patients, some researchers are conducting trials to determine whether deep brain stimulation (DBS) is an effective treatment for anorexia nervosa (AN). This article asks whether we should open enrollment in trials of DBS for AN to adolescents. Despite concerns about informed consent, parental consent, and unforeseeable psychological sequelae, the article concludes that the risks to anorexic adolescents associated with participation in trials of DBS are reasonable considering the substantial risks of not enrolling teens with AN in research on DBS. The seriousness of AN, its high incidence in teens, and serious shortfalls in the AN treatment literature point to the need for improved, evidence-based treatments for teens with AN. This unmet need generates an obligation on the part of researchers and physicians to promote and conduct research on AN in adolescents.
Ms. L. is a 31-year-old female who presents to Dr. Impf, a neurosurgeon. Ms. L. has a more than 25-year history of iteratively worsening Tourette syndrome, characterized by severe motoric and postural tics and respiratory expression (grunting). Ms. L. is a rather shy, somewhat introverted woman who spends her time with her husband and a small group of friends, mostly watching cooking shows. Although she has generally been a good student, she describes her academic performance as “not stellar.” Following years of unsuccessful attempts at pharmacological therapy, Ms. L. was evaluated and accepted into an investigator-initiated research protocol (with provision for humanitarian care exemption) that Dr. Impf and her team were running for deep brain stimulation (DBS). During the neurosurgical procedure, electrodes were satisfactorily placed at bilateral subcortical targets (within the basal ganglia/striatum), and Ms. L. reported a reduced “urge to tic” and decreased frequency and severity of tics intraoperatively, immediately following surgery, during her recovery, and for 15 weeks following surgery. Then, however, the tics began to return and progressively increased in frequency, although not to the full preoperative extent. Attempts at altering DBS current parameters were not successful in reducing the severity and frequency of tics. Decreasing or terminating DBS current resulted in a full rebound of tics and respiratory expression, which Ms. L. finds even more problematic than before (stating: “It’s like I’ve felt a new way and don’t want to go back to the other way”). By definition, DBS did not achieve the desired medical/therapeutic outcomes.
Closed-loop medical devices such as brain-computer interfaces are an emerging and rapidly advancing neurotechnology. The target patients for brain-computer interfaces (BCIs) are often severely paralyzed, and thus particularly vulnerable in terms of personal autonomy, decisionmaking capacity, and agency. Here we analyze the effects of closed-loop medical devices on the autonomy and accountability of both persons (as patients or research participants) and neurotechnological closed-loop medical systems. We show that although BCIs can strengthen patient autonomy by preserving or restoring communicative abilities and/or motor control, closed-loop devices may also create challenges for moral and legal accountability. We advocate the development of a comprehensive ethical and legal framework to address the challenges of emerging closed-loop neurotechnologies like BCIs and stress the centrality of informed consent and refusal as a means to foster accountability. We propose the creation of an international neuroethics task force with members from medical neuroscience, neuroengineering, computer science, medical law, and medical ethics, as well as representatives of patient advocacy groups and the public.
Consciousness defines us as persons. It allows us to have both pleasurable and painful experiences. I present four neurological conditions in the clinical setting to explore how consciousness can be beneficial or harmful to patients: intraoperative awareness, prolonged disorders of consciousness, locked-in syndrome, and the effects of narcotics and sedation on terminally ill patients. The ethical significance of consciousness for patients in these conditions depends on two factors: the content of one’s experience and whether one can report this content to others. I argue that the value or disvalue of phenomenal consciousness, what it is like to be aware, may depend on its relation to access consciousness, the ability to report or communicate the content of awareness. Phenomenal consciousness can have disvalue when one wants or expects to be unconscious. It can also have disvalue in the absence of access consciousness because it can allow the patient to experience pain and suffer. Technology that enabled neurologically compromised patients to reliably communicate their experience and wishes could benefit them and prevent harm to them. More generally, the neurological conditions I discuss raise the question of when and in what respects consciousness is preferable to unconsciousness.
Here we have a case in which (1) the outcome(s) for the patient do not comport with the projected—or initially defined—outcomes of the research study, and (2) these outcomes represent cognitive and behavioral effects that are positively interpreted by the patient, but not by the patient’s immediate family. The 6Cs approach, which frames the technique or technology—and its effects—within defined considerations of domains and dimensions, can be used as part of a multistep approach to addressing issues arising from the use of neurotechnology.1 The approach recommends that the medical team consider the following domains and dimensions when engaging in neuroethical analyses:
• The capacities and limitations of current neuroscience and technology (neuroS/T), and the capacity of the patient
• The consequences incurred by neuroS/T on recipients, families, and society in the short, intermediate, and long term
• The character of the recipient (e.g., patterns of cognition, emotion, and behavior) affected by neuroS/T
• The continuity of research and clinical care
• The contexts of need and value that influence the use or nonuse of neuroS/T
• Consent through provision of the most information possible2