
15 - The ethics of artificial intelligence

Published online by Cambridge University Press:  05 July 2014

Nick Bostrom
Affiliation: Oxford University

Eliezer Yudkowsky
Affiliation: Machine Intelligence Research Institute

Edited by Keith Frankish (The Open University, Milton Keynes) and William M. Ramsey (University of Nevada, Las Vegas)

Summary

The possibility of creating thinking machines raises a host of ethical issues, related both to ensuring that such machines do not harm humans and other morally relevant beings, and to the moral status of the machines themselves. This chapter surveys some of the ethical challenges that may arise as we create artificial intelligences of various kinds and degrees.

Ethics in machine learning and other domain-specific AI algorithms

Imagine, in the near future, a bank using a machine learning algorithm to recommend mortgage applications for approval. A rejected applicant brings a lawsuit against the bank, alleging that the algorithm is discriminating racially against mortgage applicants. The bank replies that this is impossible, since the algorithm is deliberately blinded to the race of the applicants. Indeed, that was part of the bank’s rationale for implementing the system. Even so, statistics show that the bank’s approval rate for black applicants has been steadily dropping. Submitting ten apparently equally qualified genuine applicants (as determined by a separate panel of human judges) shows that the algorithm accepts white applicants and rejects black applicants. What could possibly be happening?
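One way the scenario above can arise is through a redundant encoding: a feature the algorithm is allowed to see (a zip code, for instance) correlates strongly with the blinded attribute, so blinding the input does not blind the outcome. The following toy sketch, with entirely hypothetical data, group labels, and feature names (none of which come from the chapter), shows a decision rule that never receives the protected attribute yet still produces sharply different approval rates across groups:

```python
import random

random.seed(0)

# Hypothetical synthetic applicants: the protected attribute `group` is
# never shown to the approval rule, but `zip_code` correlates with it --
# a "redundant encoding" of the blinded attribute.
def make_applicant():
    group = random.choice(["A", "B"])
    p_zip_1000 = 0.9 if group == "A" else 0.1   # e.g. residential segregation
    zip_code = "1000" if random.random() < p_zip_1000 else "2000"
    credit = random.gauss(650, 50)              # same distribution for both groups
    return {"group": group, "zip": zip_code, "credit": credit}

applicants = [make_applicant() for _ in range(10_000)]

# A deliberately "blinded" rule: it sees only credit score and zip code,
# never the group label.
def approve(a):
    return a["credit"] > 600 and a["zip"] == "1000"

def approval_rate(group):
    members = [a for a in applicants if a["group"] == group]
    return sum(approve(a) for a in members) / len(members)

# Approval rates diverge sharply even though `group` was never an input.
print(f'A: {approval_rate("A"):.2f}  B: {approval_rate("B"):.2f}')
```

Because credit scores are identically distributed across the two groups, any equally qualified pair of applicants differs in outcome only through the proxy feature, which is exactly the pattern the hypothetical panel of human judges would observe.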

Publisher: Cambridge University Press
Print publication year: 2014


