  • Print publication year: 2011
  • Online publication date: June 2011

9 - When Is a Robot a Moral Agent?




Robots have been a part of our work environment for the past few decades, but they are no longer limited to factory automation, and the range of activities they perform is growing. Robots now automate a wide range of professional work: aspects of the health-care industry, white-collar office tasks, search and rescue operations, warfare, and the service industries.

A subtle but far more personal revolution has begun in home automation, as robot vacuums and toys become more common in homes around the world. As these machines increase in capability and ubiquity, they will inevitably affect our lives ethically as well as physically and emotionally. These impacts will be both positive and negative, and in this paper I will address the moral status of robots and how that status, both real and potential, should affect the way we design and use these technologies.

Morality and Human-Robot Interactions

As robotics technology becomes more ubiquitous, the scope of human-robot interactions will grow. At present, these interactions are no different from those one might have with any other piece of technology. But as these machines become more interactive, they will become involved in situations with a moral character uncomfortably similar to that of our interactions with other sentient animals.
