Much has been written about the so-called Franklin expedition (1845–), but little about the master mariners who joined as “Greenland pilots,” the term usually given in the 19th century to experienced whaling masters serving on Royal Navy expeditions. Having served on previous Royal Navy expeditions to the Arctic, Thomas Blanky, the ice master of HMS Terror, was mentioned here and there in contemporary sources. But who he was and how and why he joined the expedition remain largely unanswered questions, addressed here for the first time.
Regulating war has long been a concern of the international community. From the Hague Conventions to the Geneva Conventions and the multiple treaties and related institutions that have emerged in the twentieth and twenty-first centuries, efforts to mitigate the horrors of war have focused on regulating weapons, defining combatants, and ensuring access to the battlefield for humanitarians. But regulation and legal codes alone cannot be the end point of an engaged ethical response to new weapons developments. This short essay reviews some of the existing ethical works on lethal autonomous weapon systems (LAWS), highlighting how rule- and consequence-based accounts fail to provide adequate guidance for how to deal with them. I propose a virtue-based account, which I link up with an Aristotelian framework, for how the international community might better address these weapons systems.
The development of new technologies that enable autonomous weapon systems poses a challenge to policymakers and technologists trying to balance military requirements with international obligations and ethical norms. Some have called for new international agreements to restrict or ban lethal autonomous weapon systems. Given the tactical and strategic value of the technologies and the proliferation of threats, the military continues to explore the development of new autonomous technologies to execute national security missions. The rapid global diffusion and dual-use nature of autonomous systems necessitate a proactive approach and a shared understanding among these communities of the technical realities, threats, military relevance, and strategic implications of these technologies. Ultimately, it is possible to develop AI-enabled defense systems that adhere to global norms and relevant treaty obligations, leverage emerging technologies, and provide operational advantages. A workable and realistic regulatory framework governing the use of lethal autonomous weapons, and the artificial intelligence that underpins autonomy, will be best supported by a coordinated effort of the regulatory community, technologists, and the military to create requirements that reflect the global proliferation and rapidly evolving threat of autonomous weapon systems. This essay seeks to demonstrate that (1) the lack of coherent dialogue between the technical and policy communities can create security, ethical, and legal dilemmas; and (2) bridging the military, technical, and policy communities can lead to technology whose constraints balance the needs of all three. It uses case studies to show why mechanisms are needed to enable early and continuous engagement across the technical, policymaking, and operational communities. The essay then draws on twelve interviews with AI and autonomy experts, which provide insight into what the technical and policymaking communities consider fundamental to the progression of responsible autonomous development. It also recommends practical steps for connecting the relevant stakeholders. The goal is to provide the Department of Defense with concrete steps for building organizational structures or processes that create incentives for engagement across communities.
This article investigates the Ottoman Greek Orthodox internal exiles, focusing on the deportees’ experiences and the intricacies of their agency during the Great War (1914–18). It does so by examining deportees’ understudied ego-documents, drawn either from the collections of the Centre for Asia Minor Studies in Athens or from family archives. Organized into labor battalions or housed in open internment camps in town quarters, the inland exiles were deported to secure the rear front and homogenize the country, but their deportation was shaped by local influences and inconsistencies. Several of the Greek Orthodox exiles managed to survive and maintain their cultural ties by exploiting such inconsistencies, whether by selling their skills or by opposing exile through solidarity, desertion, and resistance.
The urgency of climate change has never been greater, nor the moral case for responding to it more compelling. This review essay critically compares Darrel Moellendorf's Mobilizing Hope and Catriona McKinnon's Climate Change and Political Theory. Moellendorf's book defends the moral importance of poverty alleviation through sustainable economic growth and argues for a mass climate movement based on the promise of a more prosperous future. By contrast, McKinnon provides a political vocabulary to articulate the many faces of climate injustice, and to critically examine proposed policy solutions—notably including the indefinite pursuit of economic growth. While both find reasons to be hopeful, their wide-ranging accounts reflect different visions of what a just and sustainable future might look like. They reflect different understandings of sustainable development and the significance of environmental values; the scope of permissible climate activism; and the ethics of geoengineering. Building upon them, I argue in favor of a more pluralistic vision of a just climate future, one that is capable of speaking to the range of moral interests bearing upon the climate and biodiversity crises, and that supports sustainable development that is inclusive of diverse human-nature relationships.
Accountability for developing, deploying, and using any emerging weapons system is affirmed as a guiding principle by the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems. Yet advances in emerging technologies present accountability challenges throughout the life cycle of a weapons system. Mindful of the lack of progress at the Convention on Certain Conventional Weapons since 2019, this essay argues for a mechanism capable of imputing accountability when individual agent accountability is exceeded, forensic accountability is unreliable, and aspects of political accountability fail.
ChatGPT launched in November 2022, triggering a global debate on the use of artificial intelligence (AI). A debate on AI-enabled lethal autonomous weapon systems (LAWS) has been underway far longer. Two sides have emerged: one in favor of and one opposed to an international law ban on LAWS. This essay explains the position of advocates of a ban without attempting to persuade opponents. Supporters of a ban believe LAWS are already unlawful and immoral to use, without the need for a new treaty or protocol. They nevertheless seek an express prohibition to educate and publicize the threats these weapons pose. Foremost among their concerns is the “black box” problem. Programmers cannot know what a computer operating a weapons system empowered with AI will “learn” from the algorithm they use. They cannot know at the time of deployment whether the system will comply with the prohibition on the use of force or with the human right to life that applies in both war and peace. Even if they could, mechanized killing affronts human dignity. Ban supporters have long known that “AI models are not safe and no one knows how to reliably make them safe” or morally acceptable in taking human life.