Based on an investigation of the Old Bailey Corpus, this article explores the development and usage patterns of maximizers in Late Modern English (LModE). The maximizers considered for inclusion in the study are drawn from the lists provided in Quirk et al. (1985) and Huddleston & Pullum (2002). The aims of the study were to (i) document the frequency development of maximizers, (ii) investigate the sociolinguistic embedding of maximizer usage (gender, class) and (iii) analyze the sociopragmatics of maximizers based on the speakers’ roles, such as judge or witness, in the courtroom.
Of the eleven maximizer types examined in the investigation, perfectly and entirely were found to dominate in frequency. The group as a whole was found to rise in frequency over the period 1720 to 1913. In terms of gender, social class and speaker roles, there was variation in the use of maximizers across the different speaker groups. Most prominently, defendants, but also judges and lawyers, maximized more than witnesses and victims; further, male speakers and higher-ranking speakers used more maximizers. The results are interpreted in light of the courtroom context and its dialogue dynamics.
To what extent, if any, should minors have a say about whether they participate in research that offers them no prospect of direct benefit? This article addresses this question as it pertains to minors who cannot understand enough about what their participation would involve to make an autonomous choice, but can comprehend enough to have and express opinions about participating. The first aim is to defend David Wendler and Seema Shah’s claim that minors who meet this description should not be offered a choice about whether they participate. The second aim is to show, contra Wendler and Shah, that the principle of nonmaleficence requires more with respect to giving these minors a say than merely respecting their dissent: it also requires that investigators obtain affirmation of their non-dissent. This addresses intuitive concerns about denying children a choice, while steering clear of the problems that arise from allowing them one.
In a recent paper in Cambridge Quarterly of Healthcare Ethics on the necessary conditions for morally responsible animal research, David DeGrazia and Jeff Sebo claim that the key requirements for morally responsible animal research are (1) an assertion of sufficient net benefit, (2) a worthwhile-life condition, and (3) a no-unnecessary-harm condition. With regard to the assertion (or expectation) of sufficient net benefit (ASNB), the authors claim that morally responsible research offers unique benefits to humans that outweigh the costs and harms to humans and animals. In this commentary we raise epistemic, practical, and ethical challenges to DeGrazia and Sebo’s emphasis on benefits in the prospective assessment of research studies involving animals. We do not disagree with DeGrazia and Sebo that, at the theoretical level, the benefits of research justify our using animals. Our contribution intends to clarify, at the practical level, how we should understand benefits in the prospective assessment and moral justification of animal research. We argue that ASNB should be understood as an assessment of Expectation of Knowledge Production (EKP) in the prospective assessment and justification of animal research. EKP breaks down into two further claims: (1) that morally responsible research generates knowledge worth having and (2) that morally responsible research is designed and executed to produce generalizable knowledge. We understand the condition of knowledge worth having as scientists’ testing a hypothesis that, whether verified or falsified, advances an important interest, and the production of generalizable knowledge in terms of scientific integrity. Generalizable knowledge refers to experimental results that generalize to a larger population beyond the animals studied. Generalizable scientific knowledge is reliable, replicable, and accurately descriptive.
In sum, morally responsible research will be designed and carefully executed to successfully test a hypothesis that, whether verified or falsified, advances important interests. Our formulation of EKP, crucially, does not require further showing that an experiment involving animals will produce societal benefits.
Advance directives entail a refusal expressed by a still-healthy patient. Three consequences stem from that fact: (a) advance refusal is unspecific, since it is impossible to predict what the patient’s condition and the risk-benefit ratio may be in the foreseeable future; (b) such decisions cannot be as well informed as those formulated while the disease is in progress; (c) whereas current consent and refusal can both be revoked as the disease unfolds, up until the treatment starts, advance directives become effective only once the patient has become incapable or unconscious, and such decisions can therefore no longer be revoked at any stage of the disease. Accordingly, advance directives are binding for doctors only at the stage of advance treatment planning, i.e., only if they refer to an illness already in progress.
This article considers recent ethical topics relating to medical AI. After a general discussion of recent medical AI innovations, and a more analytic look at related ethical issues such as data privacy, physician dependency on poorly understood AI helpware, bias in data used to create algorithms post-GDPR, and changes to the patient–physician relationship, the article examines the issue of so-called robot doctors. Whereas the so-called democratization of healthcare due to health wearables and increased access to medical information might suggest a positive shift in the patient–physician relationship, the physician’s ‘need to care’ might be irreplaceable, and robot healthcare workers (‘robot carers’) might be seen as contributing to dehumanized healthcare practices.
Any space program involving long-term human missions will have to cope with serious risks to human health and life. Because currently available countermeasures are insufficient in the long term, there is a need for new, more radical solutions. One possibility is a program of human enhancement for future deep space mission astronauts. This paper discusses the challenges that the space environment poses for long-term human missions, opening the possibility of serious consideration of both human enhancement and fully automated space exploration based on highly advanced AI. The author argues that for such projects, there are strong reasons to consider human enhancement, including gene editing of germline and somatic cells, as a moral duty.
The mission and value statements of healthcare organizations serve as the foundational philosophy that informs all aspects of the organization. The ultimate goal is seamless alignment of values to mission in a way that colors the overall life and culture of the organization. However, full alignment between healthcare organizational values and mission in a fashion that influences the daily life and culture of healthcare organizations does not always occur. Grounded in the belief that a lack of organizational alignment to explicit organizational mission and value statements often stems from the failure to develop processes that enable realization of the leadership’s good intentions, the authors propose an organizational ethics dashboard to empower leaders of healthcare organizations to assess the adequacy of systems in place to support alignment with the stated ethical mission.