This article opens up a new perspective on market liberalism's triumph in the late twentieth century through an examination of the political battles that were fought in Britain over the regulation of homework. Ubiquitous in the late Victorian era, this form of waged labor was curtailed by Edwardian wage regulations but resurged in the 1970s as a result of competition from low-wage economies abroad and fast-changing consumer tastes. Alongside growing use of homeworkers in consumer industries, new information technologies made it increasingly possible for some forms of professional work to move into the home. This article explores the debates that swirled around these different forms of homework, pitting antipoverty campaigners, feminists, and activists against ministers, employers, and civil servants. It shows how Conservative and New Labour governments failed to recognize the structural similarities between Victorian-style “sweated” labor and the emerging world of telework, freelancing, and self-employment, and how the intellectual excitement generated by Britain's transition toward a postindustrial future dovetailed with the New Right commitment to deregulation and the creation of “flexible” labor markets. A brief comparison with homework in the United States underlines the value of local, particular histories to our larger understanding of ideological change in modern societies.
Indigenous peoples throughout the circumpolar north have made significant progress in terms of securing self-determination through greater political autonomy. Although such change is important, it must be accompanied by greater economic control aimed at reducing state dependencies. Using an analytical framework developed by the Norwegian political scientist Stein Rokkan, this article explores the interplay between economy, territory, and identity among the Inuit in Canada and the Sámi in Norway. It reveals that the economic destinies of both groups have been profoundly influenced by both domestic and international factors that determine the focus and type of economic development they undertake. While the Inuit have pursued a balance of modern and traditional forms of economic development that is grounded in a regionally based model of self-rule, the Sámi have opted for an economic development model that emphasises traditional economic activities and is supported by Norway’s international commitments to the rights of Indigenous peoples.
This paper argues that uterine transplants are a potentially dangerous distraction from the development of alternative methods of providing reproductive options for women with absolute uterine factor infertility (AUFI). We consider two alternatives in particular: the bioengineering of wombs using stem cells (which would carry fewer risks than uterine transplants) and ectogenesis (which would not require surgical intervention for either the prospective mother with AUFI or a womb donor). Whether biologically or mechanically engineered, these womb replacements could provide a way for women to have children, including genetically related offspring for those who would value this possibility. Most importantly, this alternative would avoid the challenge of sourcing wombs for transplant, a practice that we argue would likely be exploitative and unethical. Continued research into bioengineering and ectogenesis will therefore remain morally important despite the recent development of uterine transplantation, even if the procedure reaches routine clinical application.
The application of machine-learning technologies to medical practice promises to enhance the capabilities of healthcare professionals in the assessment, diagnosis, and treatment of medical conditions. However, there is growing concern that algorithmic bias may perpetuate or exacerbate existing health inequalities. Hence, it matters that we make precise the different respects in which algorithmic bias can arise in medicine, and also make clear the normative relevance of these different kinds of algorithmic bias for broader questions about justice and fairness in healthcare. In this paper, we provide the building blocks for an account of algorithmic bias and its normative relevance in medicine.
The amount of data available to healthcare practitioners is growing, and the rapid increase in available patient data is becoming a problem for healthcare practitioners, as they are often unable to fully survey and process the data relevant for the treatment or care of a patient. Consequently, there are currently several efforts to develop systems that can aid healthcare practitioners with reading and processing patient data and, in this way, provide them with a better foundation for decision-making about the treatment and care of patients. There are also efforts to develop algorithms that provide suggestions for such decisions. However, the development of these systems and algorithms raises several concerns related to the privacy of patients, the patient–practitioner relationship, and the autonomy of healthcare practitioners. The aim of this article is to provide a foundation for understanding the ethical challenges related to the development of a specific form of data-processing systems, namely clinical algorithms.
Two of the most poignant decisions in pediatrics concern disagreements between physicians and families over imperiled newborns. When can the family demand more life-sustaining treatment (LST) than physicians want to provide? When can it properly ask for less? The author looks at these questions from the point of view of decision theory, and first argues that insofar as the family acts in the child’s best interest, its choices cannot be constrained, and that the maximax and minimax strategies are equally in the child’s best interest. He then proposes a guideline according to which the family can demand LST if it is physiologically possible to preserve a life the child can be expected to welcome, and refuse such treatment if it causes suffering that is “more than can be borne” even if an uncompromised life is expected to emerge.
Important advances in biomedical and behavioral research ethics have occurred over the past few decades, many of them centered on identifying and eliminating significant harms to human subjects of research. Comprehensive attention has not been paid to the totality of harms experienced by animal subjects, although scientific and moral progress require explicit appraisal of these harms. Science is a public good, and the prioritization, conduct, generation, and application of research must soundly address questions about which research is morally defensible and valuable enough to support through funding, publication, tenure, and promotion. Likewise, educational pathways of re-imagined science are critical.
The removal of controversial names and monuments from the public sphere in the United States has gained traction in the context of efforts to achieve social justice for historically mistreated and marginalized communities. Such debates are increasingly raising issues in the healthcare setting as hospitals and medical schools grapple with the legacies of figures whose scientific contributions are clouded with ethical transgressions. Present efforts to address these challenges have largely occurred at the institutional level. The results have been guidelines that are complex, highly inconsistent across institutions, and largely downplay the symbolic importance of such historical redress. This paper proposes a simpler three-part test for name and monument removal in the medical and hospital settings that places greater weight on the symbolic importance of the renaming process itself instead of only considering the outcomes.
The purpose of this article is to show that animal rights are not necessarily at odds with the use of animals for research. If animals hold basic moral rights similar to those of humans, then we should consequently extend the ethical requirements guiding research with humans to research with animals. The article spells out how this can be done in practice by applying the seven requirements for ethical research with humans proposed by Ezekiel Emanuel, David Wendler, and Christine Grady to animal research. These requirements are (1) social value, (2) scientific validity, (3) independent review, (4) fair subject selection, (5) favorable risk–benefit ratio, (6) informed consent, and (7) respect for research subjects. In practice, this means that we must reform the practice of animal research to make it more similar to research with humans, rather than completely abolish the former. Indeed, if we ban animal research altogether, then we would also deprive animals of its potential benefits—which would be ethically problematic.
Teaching healthcare ethics at the doctoral level presents a particular challenge. Ethics is often taught to medical students, but rarely is medicine taught to graduate students in healthcare ethics. In this paper, Medicine for Ethicists (MfE) — a course taught both didactically and experientially — is described. Eight former MfE students were independently interviewed in a semi-structured, open-ended format regarding their experience in the experiential component of the course. Themes included concrete elements about the course, elements related to the broader PhD student learning experience, and themes related to the students’ past and future career experiences. Findings are related to the educational philosophy of John Dewey and David Kolb’s experiential learning theory. Broader implications of this work are explored.
How should we respond to patients who do not wish to take on the responsibility and burdens of making decisions about their own care? In this paper, we argue that existing models of decision-making in modern healthcare are ill-equipped to cope with such patients and should be supplemented by an “appointed fiduciary” model where decision-making authority is formally transferred to a medical professional. Healthcare decisions are often complex and for patients can come at time of vulnerability. While this does not undermine their capacity, it can be excessively burdensome. Most existing models of decision-making mandate that patients with capacity must retain ultimate responsibility for decisions. An appointed fiduciary model provides a formalized mechanism through which those few patients who wish to defer responsibility can hand over decision-making authority. By providing a formal structure for deferring to an appointed fiduciary, the confusions and risks of the informal transfers that can occur in practice are avoided. Finally, we note how appropriate governance and law can provide safeguards against risks to the welfare of patients and medical professionals.
Euthanasia and physician-assisted suicide are common practice in the Netherlands. In response to increasing requests from patients to end their lives, physicians are finding themselves placed in particularly precarious situations because of advance directives written by patients suffering from severe dementia. In April 2020, the Supreme Court of the Netherlands issued two judgments in the so-called Dormicum case: a case involving the deliberate termination of the life of a 74-year-old woman suffering from advanced dementia by a geriatrician in a nursing home in The Hague. The judgment of the lower criminal court was upheld, but the sanction imposed by the appellate disciplinary court was quashed. In this paper, the author reviews the two Supreme Court rulings, argues that both are fundamentally flawed, and raises questions as to what they mean for Dutch criminal law, physicians, and patients going forward.
This history of a statement attributed to the developmental biologist Lewis Wolpert exemplifies the making and uses of quotations in recent science. Wolpert's dictum, ‘It is not birth, marriage or death, but gastrulation which is truly the most important time in your life’, was produced in a series of international shifts of medium and scale. It originated in his vivid declaration in conversation with a non-specialist at a workshop dinner, gained its canonical form in a colleague's monograph, and was amplified as a quotation on a poster derived from an undergraduate project. Although it drew on Wolpert's authority and he accepted his authorship, it thus represents a collective sifting of earlier claims for the significance of prenatal existence through the values of 1980s developmental biology. Juxtaposing a technical term with major life events has let teachers engage students, and researchers entice journalists, while sharing an in-joke that came to mark community identity. Serious applications include arguing for an extension of the fourteen-day limit on human-embryo research. On this evidence, quotations have been kept busy addressing every audience of specialized knowledge.
In this article, I explore how the twin forces of imperial and entomological power allowed Britain to shape locust research and control across Africa, the Middle East and South Asia from the 1920s to the early 1950s. Imperial power came from the size of the formal and informal empire, and alliances with other colonial powers to tackle a common threat to agriculture and trade. Entomological authority came primarily from the work of Boris Uvarov and his small team of museum and fieldworkers based at the Imperial Bureau of Entomology (IBE), later the Imperial Institute of Entomology (IIE). I begin by discussing how Uvarov's phase theory of the origin of swarming changed the prospects for the control of locust plagues. The imperial gaze and networks of the IBE and IIE were suited to a problem that was transnational and transcontinental. In the 1930s, Britain was drawn into plans for international cooperation on locust organizations that met the needs of science, to give better sharing of knowledge, and the needs for science, to secure the resources for research and control. However, such organizations were only created during the Second World War, when new plagues threatened military operations, as I show in relation to the measures taken to control the red locust and desert locust. In the final section, I follow the fate of the wartime cooperation in initiatives to establish permanent control organizations. It is a story of the decline of British political power in locust affairs as the United Nations Food and Agriculture Organization and regional agencies took over. My account of British locust research and control reveals a neglected aspect of histories of entomology and imperial/colonial science, especially their international relations and the continuing importance of metropolitan research centres.