from PART II - Meanings of autonomy and human cognition under automation
Published online by Cambridge University Press: 05 August 2016
We are responsible for the world of which we are a part, not because it is an arbitrary construction of our choosing but because reality is sedimented out of particular practices that we have a role in shaping and through which we are shaped.
Karen Barad, Meeting the Universe Halfway

[R]esearch and development in automation are advancing from a state of automatic systems requiring human control toward a state of autonomous systems able to make decisions and react without human interaction. DoD will continue to carefully consider the implications of these advancements.

US Department of Defense, Unmanned Systems Integrated Roadmap

This chapter takes up the question of how we might think about the increasing automation of military systems not as an inevitable ‘advancement’ of which we are the interested observers, but rather as an effect of particular world-making practices in which we need urgently to intervene. We begin from the premise that the foundation of the legality of killing in situations of war is the possibility of discrimination between combatants and non-combatants. At a time when this defining form of situational awareness seems increasingly problematic, military investments in the automation of weapon systems are growing. The trajectory of these investments, moreover, is towards the development and deployment of lethal autonomous weapons – that is, weapon systems in which the identification of targets and the initiation of fire is automated in ways that preclude deliberative human intervention. Challenges to these developments underscore the immorality and illegality of delegating responsibility for the use of force against human targets to machines, and the requirements of international humanitarian law that there be (human) accountability for acts of killing. In these debates, the articulation of differences between humans and machines is key.
The aim of this chapter is to strengthen arguments against the increasing automation of weapon systems, by expanding the frame or unit of analysis that informs these debates. We begin by tracing the genealogy of concepts of autonomy within the philosophical traditions that animate artificial intelligence, with a focus on the history of early cybernetics and contemporary approaches to machine learning in behaviour-based robotics.