  • Print publication year: 2014
  • Online publication date: July 2014

15 - The ethics of artificial intelligence


This chapter surveys some of the ethical challenges that may arise as it becomes possible to create artificial intelligences (AIs) of various kinds and degrees. Some challenges of machine ethics resemble other challenges involved in designing machines. There is nearly universal agreement among modern AI professionals that artificial intelligence still falls short of human capabilities in some critical sense, even though AI algorithms have surpassed humans in specific domains such as chess. In creating a superhuman chess player, the human programmers necessarily sacrificed their ability to predict Deep Blue's local, specific game behavior. A different set of ethical issues arises when we contemplate the possibility that some future AI systems might be candidates for having moral status: if they do have moral status, we have moral reasons to treat them in certain ways and to refrain from treating them in certain other ways. The chapter closes by considering superintelligence, which may be achievable through, among other routes, increases in processing speed.
Further reading
Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press. A comprehensive discussion of the challenges posed by the prospect of a machine intelligence revolution.
Wallach, W. and Allen, C. (2009). Moral Machines: Teaching Robots Right from Wrong. Oxford University Press. A survey of some issues in the machine ethics literature.
Yudkowsky, E. (2008). Artificial Intelligence as a positive and negative factor in global risk, in Bostrom, N. and Ćirković, M. (eds.), Global Catastrophic Risks (pp. 308–45). Oxford University Press. An introduction to the risks and challenges presented by the possibility of recursively self-improving superintelligent machines.