On banning autonomous weapon systems: human rights, automation, and the dehumanization of lethal decision-making


This article considers the recent literature concerned with establishing an international prohibition on autonomous weapon systems. It seeks to address concerns expressed by some scholars that such a ban might be problematic for various reasons. It argues in favour of a theoretical foundation for such a ban based on human rights and humanitarian principles that are not only moral but also legal. In particular, an implicit requirement for human judgement can be found in international humanitarian law governing armed conflict. Indeed, this requirement is implicit in the principles of distinction, proportionality, and military necessity that are found in international treaties, such as the 1949 Geneva Conventions, and firmly established in international customary law. Similar principles are also implicit in international human rights law, which ensures certain human rights for all people, regardless of national origins or local laws, at all times. I argue that the human rights to life and due process, and the limited conditions under which they can be overridden, imply a specific duty with respect to a broad range of automated and autonomous technologies. In particular, there is a duty upon individuals and states in peacetime, as well as combatants, military organizations, and states in armed conflict situations, not to delegate to a machine or automated process the authority or capability to initiate the use of lethal force independently of human determinations of its moral and legal legitimacy in each and every case. I argue that it would be beneficial to establish this duty as an international norm, and to codify it in a treaty, before a broad range of automated and autonomous weapon systems emerges that is likely to pose grave threats to the basic rights of individuals.


International Review of the Red Cross
  • ISSN: 1816-3831
  • EISSN: 1607-5889
  • URL: /core/journals/international-review-of-the-red-cross