
The Future Impact of Artificial Intelligence on Humans and Human Rights

  • Steven Livingston and Mathias Risse

Abstract

What are the implications of artificial intelligence (AI) for human rights over the next three decades? Precise answers to this question are made difficult by the rapid rate of innovation in AI research and by the ways human practices shape the adoption of new technologies. Precise answers are also challenged by imprecise usage of the term “AI,” which covers several distinct types of research. We begin by clarifying what we mean by AI. Most of our attention is then focused on the implications of artificial general intelligence (AGI): the prospect that an algorithm or group of algorithms will achieve something like superintelligence. While acknowledging that the feasibility of superintelligence is contested, we consider the moral and ethical implications of such a potential development. What do machines owe humans, and what do humans owe superintelligent machines?

References

NOTES

1 For the nexus between human rights and AI, see Risse, Mathias, “Human Rights and Artificial Intelligence: An Urgently Needed Agenda,” Human Rights Quarterly 41, no. 1 (February 2019), pp. 1–16.

2 Zuboff, Shoshana, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (New York: PublicAffairs, 2019).

3 For a more complete description of deep learning, see Ian Goodfellow, Yoshua Bengio, and Aaron Courville, Deep Learning (Cambridge, Mass.: MIT Press, 2017), www.deeplearningbook.org/.

4 Graham Webster, Rogier Creemers, Paul Triolo, and Elsa Kania, “China's ‘New Generation Artificial Intelligence Development Plan’ (2017),” New America blog, July 20, 2017, www.newamerica.org/cybersecurity-initiative/digichina/blog/full-translation-chinas-new-generation-artificial-intelligence-development-plan-2017/.

5 “Can We Rule Out Near-Term AGI?,” YouTube video, 17:19, posted by Web Summit, November 7, 2018, www.youtube.com/watch?time_continue=1&v=YHCSNsLKHfM. For a contrary view, see Trent Eady, “Does Recent Progress with Neural Networks Foretell Artificial General Intelligence?,” Medium, December 4, 2018, medium.com/protopiablog/does-recent-progress-with-neural-networks-foretell-artificial-general-intelligence-9545c17a5d8b.

6 Regarding state surveillance, see Deibert, Ronald J., “The Road to Digital Unfreedom: Three Painful Truths about Social Media,” Journal of Democracy 30, no. 1 (January 2019), pp. 25–39; regarding corporate surveillance, see Zuboff, Age of Surveillance Capitalism; regarding open-source investigations by nonstate actors, see Steven Livingston and Sushma Raman, “Human Rights Documentation in Limited Access Areas: The Use of Technology in War Crimes and Human Rights Abuse Investigations” (Cambridge, Mass.: Carr Center for Human Rights Policy, May 2018), carrcenter.hks.harvard.edu/files/cchr/files/documentationandtech_designed_may_8_2018.pdf.

7 Peter Cluskey, “Social Media Evidence a Game-Changer in War Crimes Trial,” Irish Times, October 3, 2017, www.irishtimes.com/news/world/europe/social-media-evidence-a-game-changer-in-war-crimes-trial-1.3243098.

8 Bellingcat Investigation Team, “How a Werfalli Execution Site Was Geolocated,” Bellingcat, October 3, 2017, www.bellingcat.com/news/mena/2017/10/03/how-an-execution-site-was-geolocated/.

9 Shawna Wolverton, “Making the Move from Imagery to Insights with Planet Analytics,” Planet, July 18, 2018, www.planet.com/pulse/planet-analytics-launch/.

10 A. J. Rohn, “Google's PlaNet: Geolocating Photos Using Artificial Intelligence,” GIS Lounge, March 11, 2016, www.gislounge.com/google-planet-geolocating/.

11 Tobias Weyand, Ilya Kostrikov, and James Philbin, “PlaNet — Photo Geolocation with Convolutional Neural Networks,” arXiv.org, submitted February 17, 2016, arxiv.org/pdf/1602.05314.pdf.

12 Clay Dillow, “What Happens When You Combine Artificial Intelligence and Satellite Imagery,” Fortune, March 30, 2016, fortune.com/2016/03/30/facebook-ai-satellite-imagery/.

13 D'Agaro, Edo, “Artificial Intelligence Used in Genome Analysis Studies,” EuroBiotech Journal 2, no. 2 (April 2018), www.degruyter.com/downloadpdf/j/ebtj.2018.2.issue-2/ebtj-2018-0012/ebtj-2018-0012.pdf.

14 Norman, Donald, The Design of Everyday Things (New York: Basic Books, 1988).

15 Bennett, W. Lance and Segerberg, Alexandra, The Logic of Connective Action: Digital Media and the Personalization of Contentious Politics (New York: Cambridge University Press, 2013).

16 Gibson, James J., The Ecological Approach to Visual Perception (Boston: Houghton Mifflin, 1979).

17 “China: Police DNA Database Threatens Privacy,” Human Rights Watch, May 15, 2017, www.hrw.org/news/2017/05/15/china-police-dna-database-threatens-privacy.

18 Nicole Kobie, “The Complicated Truth about China's Social Credit System,” Wired, January 21, 2019, www.wired.co.uk/article/china-social-credit-system-explained.

19 Samantha Cole, “AI-Assisted Fake Porn Is Here and We're All Fucked,” Motherboard, December 11, 2017, motherboard.vice.com/en_us/article/gydydm/gal-gadot-fake-ai-porn.

20 For a right-by-right discussion of the impact of AI on human rights, see Filippo A. Raso, Hannah Hilligoss, Vivek Krishnamurthy, Christopher Bavitz, and Levin Kim, “Artificial Intelligence & Human Rights: Opportunities & Risks,” Berkman Klein Center for Internet & Society at Harvard University, September 25, 2018, cyber.harvard.edu/sites/default/files/2018-09/2018-09_AIHumanRightsSmall.pdf. One topic we have not touched on here but should acknowledge because it is widely discussed is that of algorithmic fairness, which involves the responsible use of big data and machine learning in many domains of life; see, for instance, Barocas, Solon and Selbst, Andrew D., “Big Data's Disparate Impact,” California Law Review 104 (2016).

21 Vernor Vinge, “The Coming Technological Singularity: How to Survive in the Post-Human Era,” The New York Times Archive, archive.nytimes.com/www.nytimes.com/library/cyber/surf/1120surf-vinge.html.

22 Shanahan, Murray, The Technological Singularity (Cambridge, Mass.: MIT Press, 2015), p. 233.

23 Sarah Knapton, “DeepMind's AlphaZero Now Showing Human-Like Intuition in Historical ‘Turning Point’ for AI,” Telegraph, December 6, 2018, www.telegraph.co.uk/science/2018/12/06/deepminds-alphazero-now-showing-human-like-intuition-creativity/.

24 Silver, David, Huang, Aja, Maddison, Chris J., Guez, Arthur, Sifre, Laurent, van den Driessche, George, Schrittwieser, Julian, et al., “Mastering the Game of Go with Deep Neural Networks and Tree Search,” Nature 529 (2016), pp. 484–89.

25 At the core of reinforcement learning are backpropagation and the Monte Carlo Tree Search. “Backpropagation” is shorthand for “the backward propagation of errors.” An error is computed at the output and distributed backward throughout the neural network's layers. For more on backpropagation, see the “Backpropagation” page on the DeepAI website: deepai.org/machine-learning-glossary-and-terms/backpropagation. For the Monte Carlo Tree Search, see Martin Müller, “Challenges in Monte Carlo Tree Search” (2010), mcts.ai/.
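The mechanism described in note 25 can be sketched in a few lines of Python. The following is a minimal, illustrative example only: a hypothetical two-layer network (using NumPy, with invented names and sizes) trained on the XOR function, where the output error is computed and then propagated backward through each layer to update the weights. It is not the implementation used by AlphaGo or any system cited here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: XOR of two binary inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights for a small 2-3-1 network (sizes are arbitrary).
W1 = rng.normal(size=(2, 3))
W2 = rng.normal(size=(3, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for step in range(5000):
    # Forward pass: compute the network's output.
    h = sigmoid(X @ W1)      # hidden-layer activations
    out = sigmoid(h @ W2)    # network output

    # An error is computed at the output...
    err = out - y
    losses.append(float(np.mean(err ** 2)))

    # ...and distributed backward through the layers via the chain rule.
    d_out = err * out * (1 - out)        # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)   # gradient at the hidden layer

    # Gradient-descent weight updates.
    W2 -= 0.5 * (h.T @ d_out)
    W1 -= 0.5 * (X.T @ d_h)

print(f"mean squared error: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The backward pass is the whole trick: because each layer's gradient is expressed in terms of the gradient of the layer after it, the error signal flows from the output back to the input weights in a single sweep.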

26 Silver et al., “Mastering the Game of Go.”

27 “AlphaGo Zero — World's Best Go Player,” YouTube video, 2:08, posted by SciNews, October 18, 2017, www.youtube.com/watch?v=4Sm922Xp5N4.

28 Demis Hassabis and David Silver, “AlphaGo Zero: Learning from Scratch,” DeepMind, October 18, 2017, deepmind.com/blog/alphago-zero-learning-scratch/; Chris Duckett, “DeepMind AlphaGo Zero Learns on its Own without Meatbag Intervention,” ZDNet, October 19, 2017, www.zdnet.com/article/deepmind-alphago-zero-learns-on-its-own-without-meatbag-intervention/.

29 Richard Evans, John Jumper, James Kirkpatrick, Laurent Sifre, Tim Green, Chongli Qin, Augustin Žídek, et al., “De Novo Structure Prediction with Deep-Learning Based Scoring,” in Thirteenth Critical Assessment of Techniques for Protein Structure Prediction (Abstracts), December 1–4, 2018, deepmind.com/blog/alphafold/.

30 Ian Sample, “Google's DeepMind Predicts 3D Shapes of Proteins,” Guardian, December 2, 2018, www.theguardian.com/science/2018/dec/02/google-deepminds-ai-program-alphafold-predicts-3d-shapes-of-proteins.

31 Oren Etzioni, quoted in Larry Greenemeier, “AI versus AI: Self-Taught AlphaGo Zero Vanquishes Its Predecessor,” Scientific American, October 18, 2017, www.scientificamerican.com/article/ai-versus-ai-self-taught-alphago-zero-vanquishes-its-predecessor/; emphasis added.

32 Razvan Pascanu, Theophane Weber, Peter Battaglia, Yujia Li, Sébastien Racanière, and David Reichert, “Agents That Imagine and Plan,” DeepMind, July 20, 2017, deepmind.com/blog/agents-imagine-and-plan/.

33 AlphaStar team, “AlphaStar: Mastering the Real-Time Strategy Game StarCraft II,” DeepMind, January 24, 2019, deepmind.com/blog/alphastar-mastering-real-time-strategy-game-starcraft-ii/.

34 Kelsey Piper, “StarCraft Is a Deep, Complicated War Strategy Game. Google's AlphaStar AI Crushed It,” Vox, January 24, 2019, www.vox.com/future-perfect/2019/1/24/18196177/ai-artificial-intelligence-google-deepmind-starcraft-game.

35 Vinge, “The Coming Technological Singularity.”

36 April Glaser, “Elon Musk Wants to Connect Computers to Your Brain So We Can Keep Up with Robots,” Recode, March 27, 2017, www.recode.net/2017/3/27/15079226/elon-musk-computers-technology-brain-ai-artificial-intelligence-neural-lace; see also “We Are Already Cyborgs / Elon Musk / Code Conference 2016,” YouTube video, 5:11, posted by Recode, June 2, 2016, www.youtube.com/watch?list=PLKof9YSAshgyPqlKUUYrHfIQaOzFPSL4&v=ZrGPuUQsDjo.

37 Isobel Asher Hamilton, “Elon Musk Believes AI Could Turn Humans into an Endangered Species like the Mountain Gorilla,” Business Insider, November 26, 2018, www.businessinsider.com/elon-musk-ai-could-turn-humans-into-endangered-species-2018-11.

38 Kurt Schlosser, “MIT Student Wows ‘60 Minutes’ by Surfing the Internet and Ordering Pizza — with His Mind,” GeekWire, April 23, 2018, www.geekwire.com/2018/mit-student-wows-60-minutes-surfing-internet-ordering-pizza-mind/.

39 For explorations of such a blended world, see Max Tegmark, Life 3.0: Being Human in the Age of Artificial Intelligence (New York: Knopf, 2017).

40 See Moor, James H., “Four Kinds of Ethical Robots,” Philosophy Now 72 (2009), pp. 12–14. See also Moor, James H. and Bynum, Terrell Ward, eds., CyberPhilosophy: The Intersection of Philosophy and Computing (Oxford: Basil Blackwell, 2002).

41 Floridi, Luciano and Sanders, J. W., “On the Morality of Artificial Agents,” Minds and Machines 14, no. 3 (2004), pp. 349–79. For an exploration of artificial morality and the agency of robots, see Misselhorn, Catrin, “Artificial Morality. Concepts, Issues and Challenges,” Society 55, no. 2 (April 2018), pp. 161–9.

42 For the philosophy of mind behind what is to come, see Braddon-Mitchell, David and Jackson, Frank, Philosophy of Mind and Cognition: An Introduction, 2nd ed. (Malden, Mass.: Blackwell, 2006); Carter, Matt, Minds and Computers: An Introduction to the Philosophy of Artificial Intelligence (Edinburgh: Edinburgh University Press, 2007); and Heil, John, Philosophy of Mind: A Contemporary Introduction, 3rd ed. (New York: Routledge, 2012).

43 Nagel, Thomas, Mind & Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False (New York: Oxford University Press, 2012), p. 128. The emphasis on the differences between the subjective and the objective standpoint permeates Nagel's work, both in his political philosophy and in his philosophy of mind.

44 Chalmers, David J., The Conscious Mind: In Search of a Fundamental Theory (New York: Oxford University Press, 1996).

45 Dennett, Daniel C., Consciousness Explained (Boston: Back Bay Books, 1992); Dennett, Daniel C., From Bacteria to Bach and Back: The Evolution of Minds (New York: W. W. Norton, 2018), especially ch. 14.

46 Dennett, From Bacteria to Bach and Back, p. 400.

47 For the functionalist take on the mind, see Heil, Philosophy of Mind, ch. 6; for an early formulation of functionalism, see Putnam, Hilary, “Minds and Machines,” ch. 18 in Mind, Language, and Reality: Philosophical Papers, vol. 2 (Cambridge, U.K.: Cambridge University Press, 1975), pp. 362–385; for influential critical discussion, see Block, Ned, “Troubles with Functionalism,” in Block, Ned, ed., Readings in the Philosophy of Psychology, vol. 1 (Cambridge, Mass.: Harvard University Press, 1980), pp. 268–305.

48 Harari, Yuval Noah, Homo Deus: A Brief History of Tomorrow (New York: Harper, 2015), Kindle ed., p. 293.

49 Zuboff, Age of Surveillance Capitalism.

50 Harari, Homo Deus, p. 314.

51 Elizabeth Glasure, “Artificial Intelligence Is the Next Big Player in Genomics,” Biospace, December 11, 2018, www.biospace.com/article/artificial-intelligence-is-the-next-big-player-in-genomics/; Himanshu Goenka, “Bioterrorism and Gene Editing: Can Crispr Tool Be Used as Biological Weapon in War?,” International Business Times, December 14, 2016, www.ibtimes.com/bioterrorism-gene-editing-can-crispr-tool-be-used-biological-weapon-war-2460102; see also Antonio Regalado, “Top U.S. Intelligence Official Calls Gene Editing a WMD Threat,” MIT Technology Review, February 9, 2016, www.technologyreview.com/s/600774/top-us-intelligence-official-calls-gene-editing-a-wmd-threat/. The H5N1 flu strain, for example, kills 60 percent of those it infects. Yet, among humans, it is not highly contagious. In 2011, researchers in the United States and Holland altered the H5N1 genome in a way that made its level of contagion high. A strain like this could “change world history if it were ever set free” by triggering a pandemic, “quite possibly with many millions of deaths.” Martin Enserink, “Scientists Brace for Media Storm around Controversial Flu Studies,” Science, November 23, 2011, www.sciencemag.org/news/2011/11/scientists-brace-media-storm-around-controversial-flu-studies.

52 Stephen Hsu, “Super-Intelligent Humans Are Coming,” Nautilus, October 16, 2014, nautil.us/issue/18/genius/super_intelligent-humans-are-coming.

53 Harari, Homo Deus, p. 4; emphasis added.

54 Piketty, Thomas, Capital in the Twenty-First Century (Cambridge, Mass.: Harvard Belknap Press, 2013).

55 United Nations, “The World Population Prospects: The 2015 Revision, Key Findings and Advance Tables” (working paper ESA/P/WP.241, Department of Economic and Social Affairs, Population Division, United Nations, July 29, 2015), www.un.org/en/development/desa/publications/world-population-prospects-2015-revision.html.

56 Max Tegmark, speaking in “Do You Trust This Computer?,” YouTube video, 1:18:03, posted by Dr. Caleb Cheung, September 5, 2018, www.youtube.com/watch?v=DVprGRt39yg.

57 Bostrom, Nick, “A History of Transhumanist Thought,” Journal of Evolution and Technology 14, no. 1 (2005), nickbostrom.com/papers/history.pdf.

58 Harari, Homo Deus.

59 Pinker, Steven, Enlightenment Now: The Case for Reason, Science, Humanism, and Progress (New York: Penguin, 2017), Kindle ed., p. 453.

60 Ellora Thadaney Israni, “When an Algorithm Helps Send You to Prison,” New York Times, October 26, 2017, www.nytimes.com/2017/10/26/opinion/algorithm-compas-sentencing-bias.html.

61 Page, Benjamin I., Seawright, Jason, and Lacombe, Matthew J., Billionaires and Stealth Politics (Chicago: University of Chicago Press, 2018); Mayer, Jane, Dark Money: The Hidden History of the Billionaires behind the Rise of the Radical Right (New York: Doubleday, 2016).

62 The following paragraphs draw on Risse, “Human Rights and Artificial Intelligence.”

63 Hume, David, An Enquiry concerning the Principles of Morals, ed. by Schneewind, J. B. (Indianapolis: Hackett Publishing, 1983).

64 Kant, Immanuel, Groundwork for the Metaphysics of Morals, ed. by Gregor, Mary and Timmermann, Jens (Cambridge: Cambridge University Press, 2012).

65 Another way of thinking about the categorical imperative is that it requires us to always act in ways that would pass a generalization test. Certain actions would be rendered impermissible because they would not hold up if everybody were to take them, as, for instance, stealing and lying would not: there would be no property to begin with if everybody stole, and no communication if everybody reserved the right to lie.

66 Petersen, Steve, “Superintelligence as Superethical,” in Lin, Patrick, Abney, Keith, and Jenkins, Ryan, eds., Robot Ethics 2.0 (New York: Oxford University Press, 2017), pp. 332–7; Chalmers, David, “The Singularity: A Philosophical Analysis,” Journal of Consciousness Studies 17, nos. 9–10 (2010); see also “What Makes People Happy? / Daniel Kahneman,” YouTube video, 9:47, from a discussion with Professor Kahneman at the 2017 Asilomar conference, posted by the Future of Life Institute, January 30, 2017, www.youtube.com/watch?v=z1N96In7GUc.

67 Scanlon, T. M., “What Is Morality?,” in The Harvard Sampler: Liberal Education in the Twenty-First Century, Shephard, Jennifer M., Kosslyn, Stephen M., and Hammonds, Evelynn M., eds. (Cambridge, Mass.: Harvard University Press, 2011), pp. 243–266.

68 For speculation on what such mixed societies could be like, see Tegmark, Life 3.0, ch. 5.

69 For the point about Hobbes, see “Prof. Peter Railton — Machine Morality: Building or Learning?,” YouTube video, 33:56, posted by the Artificial Intelligence Channel, September 11, 2017, www.youtube.com/watch?v=SsPFgXeaeLI.
