
7 - Machine Ethics and the Idea of a More-Than-Human Moral World

from PART III - ISSUES CONCERNING MACHINE ETHICS

Published online by Cambridge University Press:  01 June 2011

Edited by Michael Anderson, University of Hartford, Connecticut, and Susan Leigh Anderson, University of Connecticut

Summary

“We are the species equivalent of that schizoid pair, Mr Hyde and Dr Jekyll; we have the capacity for disastrous destruction but also the potential to found a magnificent civilization. Hyde led us to use technology badly; we misused energy and overpopulated the earth, but we will not sustain civilization by abandoning technology. We have instead to use it wisely, as Dr Jekyll would do, with the health of the Earth, not the health of people, in mind.”

–Lovelock 2006: 6–7

Introduction

In this paper I will discuss some of the broad philosophical issues that apply to the field of machine ethics (ME). ME is often seen primarily as a practical research area involving the modeling and implementation of artificial moral agents. However, this work shades into a broader, more theoretical inquiry into the nature of ethical agency and moral value as seen from an AI or information-theoretical point of view, as well as into the extent to which autonomous AI agents can have moral status of different kinds. We can refer to these as practical and philosophical ME, respectively.

Practical ME has various kinds of objectives. Some are technically well defined and relatively close to market, such as the development of ethically responsive robot care assistants or of automated advisers for clinicians on medical-ethics issues. Other practical ME aims are more long-term, such as the design of a general-purpose ethical reasoner/advisor – or perhaps even of a "genuine" moral agent with a status equal (or as equal as possible) to that of human moral agents.

Type: Chapter
Information: Machine Ethics, pp. 115–137
Publisher: Cambridge University Press
Print publication year: 2011


References

Abram, D. (1996) The Spell of the Sensuous: Perception and Language in a More-Than-Human World. New York: Random House.
Aleksander, I. (2005) The World in My Mind, My Mind in the World: Key Mechanisms of Consciousness in Humans, Animals and Machines. Thorverton, Exeter: Imprint Academic.
Bostrom, N. (2000) "When Machines Outsmart Humans," Futures, 35 (7), 759–764.
Bostrom, N. (2004) "The Future of Human Evolution," in Tandy, C., ed., Death and Anti-Death: Two Hundred Years after Kant; Fifty Years after Turing. Palo Alto, CA: Ria U.P., 339–371.
Bostrom, N. (2005) "The Ethics of Superintelligent Machines," in Smit, I., Wallach, W., and Lasker, G., eds., Symposium on Cognitive, Emotive and Ethical Aspects of Decision-Making in Humans and Artificial Intelligence. InterSymp 05, Windsor, Ont.: IIAS Press.
Calverley, D. (2005) "Android Science and the Animal Rights Movement: Are There Analogies?" Proceedings of CogSci-2005 Workshop. Cognitive Science Society, Stresa, Italy, 127–136.
Curry, P. (2006) Ecological Ethics: An Introduction. Cambridge: Polity Press.
De Jaegher, H. (2008) "Social Understanding through Direct Perception? Yes, by Interacting." Consciousness and Cognition, 18, 535–542.
De Jaegher, H. & Di Paolo, E. (2007) "Participatory Sense-Making: An Enactive Approach to Social Cognition." Phenomenology and the Cognitive Sciences, 6 (4), 485–507.
de Waal, F. (2006) Primates and Philosophers: How Morality Evolved. Oxford: Princeton U.P.
Dennett, D. (1978) "Why You Can't Make a Computer that Feels Pain," in Brainstorms: Philosophical Essays on Mind and Psychology. Cambridge, MA: MIT Press, 190–232.
Dennett, D. (1998) "The Practical Requirements for Making a Conscious Robot," in Dennett, D., Brainchildren: Essays on Designing Minds. London: Penguin Books, 153–170.
Di Paolo, E. (2005) "Autopoiesis, Adaptivity, Teleology, Agency." Phenomenology and the Cognitive Sciences, 4, 97–125.
Dietrich, E. (2007) "After the Humans are Gone." Journal of Experimental and Theoretical Artificial Intelligence, 19 (1), 55–67.
Ernste, H. (2004) "The Pragmatism of Life in Poststructuralist Times." Environment and Planning A, 36, 437–450.
Floridi, L. (2008a) "Artificial Intelligence's New Frontier: Artificial Companions and the Fourth Revolution." Metaphilosophy, 39 (4–5), 651–655.
Floridi, L. (2008b) "Information Ethics, its Nature and Scope," in van den Hoven, J. and Weckert, J., eds., Moral Philosophy and Information Technology. Cambridge: Cambridge U.P., 40–65.
Franklin, S. (1995) Artificial Minds. Cambridge, MA: MIT Press.
Frey, R.G. (1980) Interests and Rights: The Case against Animals. Oxford: Clarendon Press.
Gallagher, S. (2001) "The Practice of Mind: Theory, Simulation or Primary Interaction?" Journal of Consciousness Studies, 8 (5–7), 83–108.
Gallagher, S. (2008) "Direct Perception in the Intersubjective Context." Consciousness and Cognition, 17, 535–543.
Goertzel, B. (2006) "Ten Years to a Positive Singularity (If We Really, Really Try)." Talk to Transvision 2006, Helsinki, Finland. http://www.goertzel.org/papers/tenyears.htm.
Haikonen, P. (2003) The Cognitive Approach to Conscious Machines. Thorverton, Devon: Imprint Academic.
Holland, O., ed. (2003) Machine Consciousness. Special issue of Journal of Consciousness Studies, 10 (4–5).
Jonas, H. (1966/2001) The Phenomenon of Life: Toward a Philosophical Biology. Evanston, IL: Northwestern U.P. (originally published by Harper & Row, New York, in 1966).
Joy, B. (2000) "Why the Future Doesn't Need Us." Wired, 8 (4). www.wired.com/wired/archive/8.04/joy_pr.html.
Kurzweil, R. (2001) "One Half of an Argument" (response to Lanier 2000). The Edge (online publication), 8.4.01. http://www.edge.org/3rd_culture/kurzweil/kurzweil_index.html.
Kurzweil, R. (2005) The Singularity is Near: When Humans Transcend Biology. New York: Viking Press.
Kant, I. (1997) Lectures on Ethics. Heath, P. and Schneewind, J.B., eds. Cambridge: Cambridge U.P.
LaChat, M. (2004) "'Playing God' and the Construction of Artificial Persons," in Smit, I., Wallach, W., and Lasker, G., eds., Symposium on Cognitive, Emotive and Ethical Aspects of Decision-Making in Humans and Artificial Intelligence. InterSymp 04, Windsor, Ont.: IIAS Press.
Lanier, J. (2000) "One Half a Manifesto." The Edge (online publication), 11.11.00. http://www.edge.org/3rd_culture/lanier/lanier_index.html.
Leopold, A. (1948) "A Land Ethic," in A Sand County Almanac with Essays on Conservation from Round River. New York: Oxford U.P.
Lovelock, J. (1979) Gaia: A New Look at Life on Earth. Oxford: Oxford U.P.
Lovelock, J. (2006) The Revenge of Gaia: Why the Earth is Fighting Back, and How We Can Still Save Humanity. London: Allen Lane.
Maturana, H. & Varela, F. (1980) Autopoiesis and Cognition: The Realization of the Living. Dordrecht: D. Reidel.
Midgley, M. (1978) Beast and Man: The Roots of Human Nature. Ithaca, NY: Cornell U.P.
Moor, J. (2006) "The Nature, Importance and Difficulty of Machine Ethics." IEEE Intelligent Systems, 21 (4), 18–21.
Moravec, H. (1988) Mind Children: The Future of Robot and Human Intelligence. Cambridge, MA: Harvard U.P.
Naess, A. (1973) "The Shallow and the Deep, Long-Range Ecology Movements." Inquiry, 16, 95–100.
Naess, A. & Sessions, G. (1984) "Basic Principles of Deep Ecology." Ecophilosophy, 6, 3–7.
Regan, T. (1983) The Case for Animal Rights. Berkeley: University of California Press.
Singer, P. (1977) Animal Liberation. London: Granada.
Sparrow, R. (2007) "Killer Robots." Journal of Applied Philosophy, 24 (1), 62–77.
Sparrow, R. & Sparrow, L. (2006) "In the Hands of Machines? The Future of Aged Care." Minds and Machines, 16 (2), 141–161.
Sylvan, R. & Bennett, D. (1994) The Greening of Ethics: From Human Chauvinism to Deep-Green Theory. Cambridge: White Horse Press.
Thompson, E. (2007) Mind in Life: Biology, Phenomenology and the Sciences of Mind. Cambridge, MA: Harvard U.P.
Torrance, S. (2000) "Towards an Ethics for EPersons." Proc. AISB'00 Symposium on AI, Ethics and (Quasi-)Human Rights, University of Birmingham.
Torrance, S. (2007) "Two Conceptions of Machine Phenomenality." Journal of Consciousness Studies, 14 (7).
Torrance, S. (2008) "Ethics, Consciousness and Artificial Agents." AI & Society, 22 (4).
Torrance, S. (2009) "Will Robots Have Their Own Ethics?" Philosophy Now, April issue.
Torrance, S., Clowes, R., & Chrisley, R., eds. (2007) Machine Consciousness: Embodiment and Imagination. Special issue of Journal of Consciousness Studies, 14 (4).
Trevarthen, C. & Reddy, V. (2007) "Consciousness in Infants," in Velmans, M. and Schneider, S., eds., The Blackwell Companion to Consciousness. Oxford: Blackwell, 41–57.
Vinge, V. (1993) "The Coming Technological Singularity: How to Survive in the Post-Human Era." Whole Earth Review, 77.
Wallach, W. & Allen, C. (2009) Moral Machines: Teaching Robots Right from Wrong. Oxford: Oxford U.P.
Wilson, E.O. (1984) Biophilia. Cambridge, MA: Harvard U.P.
Wilson, E.O. (1994) The Diversity of Life. Harmondsworth: Penguin.
Wright, R. (1994) The Moral Animal: Evolutionary Psychology and Everyday Life. New York: Pantheon Books.
Yudkowsky, E. (2001) "Creating Friendly AI." www.singinst.org/upload/CFAI.html.
Yudkowsky, E. (2008) "Cognitive Biases Potentially Affecting Judgement of Global Risks," in Bostrom, N. and Cirkovic, M., eds., Global Catastrophic Risks. Oxford: Oxford U.P., 91–119.
