
Artificial Intelligence: Power to the People

Published online by Cambridge University Press: 07 June 2019

Abstract

To adequately estimate the beneficial and harmful effects of artificial intelligence (AI), we must first have a clear understanding of what AI is and what it is not. We need to draw important conceptual and definitional boundaries to ensure we accurately estimate and measure the impacts of AI from both empirical and normative standpoints. This essay argues that we should not conflate AI with automation or autonomy but keep them conceptually separate. Moreover, it suggests that once we have a broad understanding of what constitutes AI, we will see that it can be applied across all sectors of the economy as well as in warfare. However, it cautions that we must be careful where we apply AI, for in some cases there are serious epistemological concerns about whether we have an appropriate level of knowledge to create such systems. Opening the aperture to include such questions allows us to further see that while AI systems will be deployed in myriad forms, with greater or lesser cognitive abilities, these systems ought never to be considered moral agents. They cannot possess rights, and they do not have any duties.

Type
Roundtable: Artificial Intelligence and the Future of Global Affairs
Copyright
Copyright © Carnegie Council for Ethics in International Affairs 2019 


NOTES

1 Donald F. Harrington, “Automation's Impact on Industrial Safety,” Cleveland-Marshall Law Review 7, no. 2 (1958), p. 266.

2 Defense Science Board, Summer Study on Autonomy (Washington, D.C.: Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, June 2016), p. 4, www.hsdl.org/?view&did=794641.

3 “Autonomy in Weapons Systems” (DoD Directive 3000.09, United States Department of Defense, November 21, 2012), www.esd.whs.mil/portals/54/documents/dd/issuances/dodd/300009p.pdf. See also United States Office of the Director of National Intelligence, The AIM Initiative: A Strategy for Augmenting Intelligence Using Machines (Washington, D.C.: Office of the Director of National Intelligence, 2019), www.dni.gov/index.php/newsroom/reports-publications/item/1940-the-aim-initiative-a-strategy-for-augmenting-intelligence-using-machines.

4 For instance, the National Highway Traffic Safety Administration (NHTSA) in the United States demarcates a scale of 0–5, according to which “0” indicates a vehicle where all tasks are performed by a human driver and “5” indicates a vehicle that is capable of all tasks related to driving but for which the human driver may intervene and take control. Indeed, the NHTSA actually uses the two concepts interchangeably in its description of level 0, noting “zero autonomy” in the explanation but “no automation” in the header. See “Automated Vehicles for Safety,” NHTSA, www.nhtsa.gov/technology-innovation/automated-vehicles-safety.

5 Defense Science Board, Summer Study on Autonomy, p. 4.

6 Stephen Cave, “Intelligence: A History,” Aeon, February 21, 2017, aeon.co/essays/on-the-dark-history-of-intelligence-as-domination.

7 Maja J. Mataric, “Designing Emergent Behaviors: From Local Interactions to Collective Intelligence,” in Jean-Arcady Meyer, Herbert L. Roitblat, and Stewart W. Wilson, eds., From Animals to Animats 2: Proceedings of the Second International Conference on Simulation of Adaptive Behavior (Cambridge, Mass.: MIT Press, 1993), pp. 432–40; Stuart J. Russell and Peter Norvig, Artificial Intelligence: A Modern Approach (Upper Saddle River, N.J.: Prentice Hall, 1995); Robert J. Sternberg, Beyond IQ: A Triarchic Theory of Human Intelligence (Cambridge, U.K.: Cambridge University Press, 1985); C. F. Schmidt, N. S. Sridharan, and J. L. Goodson, “The Plan Recognition Problem: An Intersection of Psychology and Artificial Intelligence,” Artificial Intelligence 11, nos. 1–2 (August 1978), pp. 45–83; Edward Chace Tolman, Purposive Behavior in Animals and Men (Berkeley: University of California Press, 1949); Michael van Lent, John Laird, Josh Buckman, Joe Hartford, Steve Houchard, Kurt Steinkraus, and Russ Tedrake, “Intelligent Agents in Computer Games,” in Proceedings of the National Conference on Artificial Intelligence 16 (1999), www.aaai.org/Papers/AAAI/1999/AAAI99-143.pdf.

8 Max Tegmark claims that “intelligence is the ability of an entity to achieve complex goals.” In some senses this is correct. However, this definition is overly general and relies heavily on tacit knowledge about the kinds of faculties an agent needs to possess to have this ability, while also being overly specific about the complexity of the goals sought. Some agents may have very simple goals, but the routes by which they attain them may themselves be highly complex. See Max Tegmark, Life 3.0: Being Human in the Age of Artificial Intelligence (New York: Alfred A. Knopf, 2017), p. 280.

9 Selmer Bringsjord and Naveen Sundar Govindarajulu, “Artificial Intelligence,” in Stanford Encyclopedia of Philosophy, 2018, plato.stanford.edu/entries/artificial-intelligence/ (italics in original).

10 Margaret A. Boden, AI: Its Nature and Future (New York: Oxford University Press, 2016), p. 1.

11 Keith Frankish and William M. Ramsey, introduction to The Cambridge Handbook of Artificial Intelligence, ed. Keith Frankish and William M. Ramsey (Cambridge, U.K.: Cambridge University Press, 2014), p. 1.

12 Stuart J. Russell, “Rationality and Intelligence,” Artificial Intelligence 94, nos. 1–2 (July 1997), pp. 57–77.

13 John Haugeland, Artificial Intelligence: The Very Idea (Cambridge, Mass.: MIT Press, 1985).

14 These techniques would include convolutional neural networks, deep neural networks, reinforcement learning, generative adversarial networks, and the like.

15 See, for example, much of Jeff Clune's work on adversarial examples and machine vision. There is also concern about model-inversion and data-poisoning attacks. Anh Nguyen, Jason Yosinski, and Jeff Clune, “Deep Neural Networks Are Easily Fooled: High Confidence Predictions for Unrecognizable Images,” in Computer Vision and Pattern Recognition (Los Alamitos, Calif.: IEEE Computer Society, June 2015), www.evolvingai.org/fooling.
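
As a rough illustration of how little it can take to fool a trained model, the sketch below fits a tiny logistic-regression classifier to synthetic data and then applies a fast-gradient-sign-style perturbation to a single input. This is not the evolutionary method used by Nguyen, Yosinski, and Clune; the data, the model, and the perturbation budget are all invented assumptions for illustration.

```python
# Hypothetical sketch of an adversarial perturbation on a toy model.
# Nothing here reproduces the cited paper; it only shows the general idea
# that a small, targeted change to an input can sharply change a model's
# confident prediction.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: the label is the sign of the feature sum.
X = rng.normal(size=(500, 20))
y = (X.sum(axis=1) > 0).astype(float)

# Fit logistic regression with plain gradient descent.
w, b = np.zeros(20), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def predict_prob(x):
    """Probability that x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# Perturb one example against the loss gradient (d loss / d x = (p - y) * w
# for this model), using a small per-feature budget epsilon.
x, label = X[0], y[0]
p_clean = predict_prob(x)
epsilon = 0.4
x_adv = x + epsilon * np.sign((p_clean - label) * w)

print(f"clean prediction:     {p_clean:.3f} (true label {label:.0f})")
print(f"perturbed prediction: {predict_prob(x_adv):.3f}")  # pushed toward the wrong class
```

On this toy problem the perturbed prediction is typically pushed well toward, and often across, the decision boundary; that kind of brittleness is what motivates the concern noted above.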

16 See the Defense Advanced Research Projects Agency's web page on explainable AI at David Gunning, “Explainable Artificial Intelligence (XAI),” www.darpa.mil/program/explainable-artificial-intelligence.

17 For example, see Association for Computing Machinery Public Policy Council, “Statement on Algorithmic Transparency and Accountability,” Association for Computing Machinery, January 12, 2017, www.acm.org/binaries/content/assets/public-policy/2017_usacm_statement_algorithms.pdf; “Asilomar AI Principles,” Future of Life Institute, futureoflife.org/ai-principles/; Perspectives on Issues in AI Governance, Google AI, ai.google/static/documents/perspectives-on-issues-in-ai-governance.pdf; “About the Japanese Society for Artificial Intelligence Ethical Guidelines,” AI Committee Ethics Committee, May 3, 2017, ai-elsi.org/archives/514; Great Britain, Parliament, House of Lords, Select Committee on Artificial Intelligence, AI in the UK: Ready, Willing, and Able? (London: House of Lords, 2018); European Commission's High-Level Expert Group on Artificial Intelligence, Draft Ethics Guidelines for Trustworthy AI, European Commission, December 18, 2018, ec.europa.eu/digital-single-market/en/news/draft-ethics-guidelines-trustworthy-ai; European Group on Ethics in Science and New Technologies, Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems, European Commission, March 9, 2018, ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf; IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, Ethically Aligned Design: A Vision for Prioritizing Human Well-Being with Autonomous and Intelligent Systems, version 2, IEEE, 2017, standards.ieee.org/develop/indconn/ec/autonomous_systems.html; Montreal Declaration for a Responsible Development of Artificial Intelligence 2018, Montréal Declaration Responsible AI, www.montrealdeclaration-responsibleai.com/the-declaration.

18 Of course, expertise is a form of authority. However, I am referring here to more traditional forms of authority-related role responsibilities. Indeed, recent efforts by Google and Microsoft employees to rebuke their companies' leadership for working on projects for the U.S. military are a case in point. Google was forced to forgo future work on Project Maven, a project that uses AI to process video imagery gathered by remotely piloted aircraft. Microsoft employees likewise urged their company not to pursue a lucrative cloud-computing contract with the Pentagon, Project JEDI, because they deemed it unethical to aid the U.S. military in its activities. Employee pressure also led Google to withdraw from bidding on the same contract. See Scott Shane and Daisuke Wakabayashi, “‘The Business of War’: Google Employees Protest Work for the Pentagon,” New York Times, April 4, 2018, www.nytimes.com/2018/04/04/technology/google-letter-ceo-pentagon-project.html; Employees of Microsoft, “An Open Letter to Microsoft: Don't Bid on the US Military's Project JEDI,” Medium, October 12, 2018, medium.com/s/story/an-open-letter-to-microsoft-dont-bid-on-the-us-military-s-project-jedi-7279338b713.

19 Nevertheless, even seemingly narrow tasks can be tricky, and another example may be useful here. In 2016, the nonprofit research organization OpenAI used reinforcement learning to train an agent to play the computer game Coast Runners. The humans presumed that the goal of the game was to score the most points and thereby win. In training the agent, they used “points” as the reward function (the signal that tells the agent it is learning the right thing). After learning how to play the game in general, the agent began to act in an unpredictable way. It found a bug in the game and discovered that by doing donuts in one part of the course it could score points indefinitely, even though in so doing it would never actually finish the race and would lose. This is known as a “reward hack.” The narrow task of playing the game was still difficult because the reward function may have been misspecified or underspecified, or because there was simply a flaw in the environment itself that the agent learned to exploit, much to the chagrin of its developers. See Dario Amodei and Jack Clark, “Faulty Reward Functions in the Wild,” OpenAI, December 21, 2016, blog.openai.com/faulty-reward-functions/.
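
A minimal sketch of the same failure mode, assuming a toy track environment invented here (it is not OpenAI's Coast Runners setup): finishing the race pays a one-time bonus, but a respawning target pays a small reward every time it is touched, and tabular Q-learning duly learns to circle the target instead of finishing.

```python
# Hypothetical "reward hack" sketch: the environment, rewards, and
# hyperparameters are invented for illustration only.
import random
from collections import defaultdict

ACTIONS = [-1, +1]      # move left or right along a 5-cell track
FINISH, TARGET = 4, 1   # finish line and respawning bonus target

def step(state, action):
    """Advance one step; return (next_state, reward, done)."""
    nxt = max(0, min(FINISH, state + action))
    if nxt == FINISH:
        return nxt, 10.0, True   # finishing pays once and ends the episode
    if nxt == TARGET:
        return nxt, 1.0, False   # the target respawns, so it pays every visit
    return nxt, 0.0, False

def train(episodes=2000, horizon=50, gamma=0.99, alpha=0.1, eps=0.1):
    """Tabular Q-learning with epsilon-greedy exploration."""
    q = defaultdict(float)
    for _ in range(episodes):
        s = 0
        for _ in range(horizon):
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2, r, done = step(s, a)
            best_next = 0.0 if done else max(q[(s2, act)] for act in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
            if done:
                break
    return q

q = train()
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(FINISH)}
print(policy)  # typically points back toward the target from cells 2 and 3
```

Because the repeatable target reward outweighs the discounted finishing bonus, the "optimal" policy under this reward function is exactly the looping behavior the developers did not intend; the misspecification lies in the reward, not in the learning algorithm.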

20 This is known as a problem of “catastrophic forgetting.”

21 Joe McDonald, “China Bars Millions from Travel for ‘Social Credit’ Offenses,” AP, February 22, 2019, www.apnews.com/9d43f4b74260411797043ddd391c13d8.

22 “Agent” here refers to both a human agent and an AI agent. The former is obviously much richer philosophically, but in AI an agent is simply something that can observe its environment and act on or in that environment to achieve a goal. See Russell and Norvig, Artificial Intelligence, ch. 2.
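
A minimal sketch of an agent in this thin sense, mapping percepts of the environment to actions in pursuit of a goal; the thermostat example and every name in it are illustrative assumptions, not drawn from Russell and Norvig.

```python
# Hypothetical illustration of the minimal "agent" notion used above:
# something that observes its environment and acts on it to achieve a goal.
from typing import Protocol

class Agent(Protocol):
    def act(self, percept: float) -> str:
        """Choose an action given the latest observation of the environment."""
        ...

class ThermostatAgent:
    """A trivially simple goal-directed agent: keep a room near a set point."""

    def __init__(self, target_temp: float) -> None:
        self.target_temp = target_temp

    def act(self, percept: float) -> str:
        if percept < self.target_temp - 1.0:
            return "heat_on"
        if percept > self.target_temp + 1.0:
            return "heat_off"
        return "do_nothing"

agent = ThermostatAgent(target_temp=20.0)
print(agent.act(17.5))  # "heat_on": observe the environment, act toward the goal
```

Nothing in this sense of agency requires anything like moral agency: the thermostat observes and acts, but it has no rights and no duties.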

23 For example, Yann LeCun, the vice president and chief AI scientist at Facebook and Silver Professor of Data Science, Computer Science, Neural Science, and Electrical Engineering at New York University, has recently stated that AI needs more “theory building” from mathematics. This is somewhat true but misses the point that every discipline has multiple theories on which to draw, particularly in relation to human behavior. See Yann LeCun (@ylecun), “Deep learning needs more theory. Many workshops I have helped organize in the last decade . . . ,” Twitter post, February 2, 2019, twitter.com/ylecun/status/1091732463284563968.

24 Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner, “Machine Bias: There's Software Used across the Country to Predict Future Criminals. And It's Biased against Blacks,” ProPublica, May 23, 2016, www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

25 A recent paper attempts to argue against this conclusion by positing an “anchor” to “connect behavior to intrinsic or nonderivative values” for an AI agent. This anchor presumably allows the agent to align its behavior with some normative standard. However, the authors’ argument falls squarely back into Hume's guillotine: an agent trained on empirical cases cannot thereby arrive at a normative conclusion. See Tae Wan Kim, Thomas Donaldson, and John Hooker, Mimetic vs Anchored Value Alignment in Artificial Intelligence, October 25, 2018, arxiv.org/pdf/1810.11116.pdf.