1 For the nexus between human rights and AI, see Risse, Mathias, “Human Rights and Artificial Intelligence: An Urgently Needed Agenda,” Human Rights Quarterly 41, no. 1 (February 2019), pp. 1–16.
2 Zuboff, Shoshana, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (New York: PublicAffairs, 2019).
3 For a more complete description of deep learning, see Ian Goodfellow, Yoshua Bengio, and Aaron Courville, Deep Learning (Cambridge, Mass.: MIT Press, 2016), www.deeplearningbook.org/.
5 “Can We Rule Out Near-Term AGI?,” YouTube video, 17:19, posted by Web Summit, November 7, 2018, www.youtube.com/watch?time_continue=1&v=YHCSNsLKHfM. For a contrary view, see Trent Eady, “Does Recent Progress with Neural Networks Foretell Artificial General Intelligence?,” Medium, December 4, 2018, medium.com/protopiablog/does-recent-progress-with-neural-networks-foretell-artificial-general-intelligence-9545c17a5d8b.
6 Regarding state surveillance, see Deibert, Ronald J., “The Road to Digital Unfreedom: Three Painful Truths about Social Media,” Journal of Democracy 30, no. 1 (January 2019), pp. 25–39; regarding corporate surveillance, see Zuboff, Age of Surveillance Capitalism; regarding open-source investigations by nonstate actors, see Steven Livingston and Sushma Raman, “Human Rights Documentation in Limited Access Areas: The Use of Technology in War Crimes and Human Rights Abuse Investigations” (Cambridge, Mass.: Carr Center for Human Rights Policy, May 2018), carrcenter.hks.harvard.edu/files/cchr/files/documentationandtech_designed_may_8_2018.pdf.
11 Tobias Weyand, Ilya Kostrikov, and James Philbin, “PlaNet — Photo Geolocation with Convolutional Neural Networks,” arXiv.org, submitted February 17, 2016, arxiv.org/pdf/1602.05314.pdf.
14 Norman, Donald, The Design of Everyday Things (New York: Basic Books, 1988).
15 Bennett, W. Lance and Segerberg, Alexandra, The Logic of Connective Action: Digital Media and the Personalization of Contentious Politics (New York: Cambridge University Press, 2013).
16 Gibson, James J., The Ecological Approach to Visual Perception (Boston: Houghton Mifflin, 1979).
20 For a right-by-right discussion of the impact of AI on human rights, see Filippo A. Raso, Hannah Hilligoss, Vivek Krishnamurthy, Christopher Bavitz, and Levin Kim, “Artificial Intelligence & Human Rights: Opportunities & Risks,” Berkman Klein Center for Internet & Society at Harvard University, September 25, 2018, cyber.harvard.edu/sites/default/files/2018-09/2018-09_AIHumanRightsSmall.pdf. One topic we have not touched on here but should acknowledge because it is widely discussed is that of algorithmic fairness, which involves the responsible use of big data and machine learning in many domains of life; see, for instance, Barocas, Solon and Selbst, Andrew D., “Big Data's Disparate Impact,” California Law Review 104 (2016), pp. 671–732.
22 Shanahan, Murray, The Technological Singularity (Cambridge, Mass.: MIT Press, 2015), p. 233.
24 Silver, David, Huang, Aja, Maddison, Chris J., Guez, Arthur, Sifre, Laurent, van den Driessche, George, Schrittwieser, Julian, et al., “Mastering the Game of Go with Deep Neural Networks and Tree Search,” Nature 529 (2016), pp. 484–89.
25 At the core of this approach to reinforcement learning are backpropagation and Monte Carlo Tree Search. “Backpropagation” is shorthand for “the backward propagation of errors.” An error is computed at the output and distributed backward throughout the neural network's layers. For more on backpropagation, see the “Backpropagation” page on the DeepAI website: deepai.org/machine-learning-glossary-and-terms/backpropagation. For Monte Carlo Tree Search, see Martin Müller, “Challenges in Monte Carlo Tree Search” (2010), mcts.ai/.
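The mechanics described in this note, an error computed at the output and pushed backward through the layers via the chain rule, can be illustrated with a minimal numerical sketch (our illustration, not drawn from the sources cited; the two-weight network and learning rate are arbitrary choices):

```python
from math import exp

def sigmoid(x):
    return 1.0 / (1.0 + exp(-x))

def train_step(x, target, w1, w2, lr=0.5):
    # Forward pass: two layers, one weight each (a deliberately tiny network).
    h = sigmoid(w1 * x)   # hidden activation
    y = sigmoid(w2 * h)   # output activation
    # The error is computed at the output...
    err = y - target
    # ...and propagated backward through the layers (chain rule).
    d_out = err * y * (1 - y)          # gradient at the output layer
    d_hid = d_out * w2 * h * (1 - h)   # gradient pushed back to the hidden layer
    # Gradient-descent weight updates.
    w2 -= lr * d_out * h
    w1 -= lr * d_hid * x
    return w1, w2, err

# Repeated steps shrink the output error toward the target.
w1, w2 = 0.5, 0.5
errors = []
for _ in range(200):
    w1, w2, err = train_step(1.0, 0.0, w1, w2)
    errors.append(abs(err))
```

After repeated updates the recorded error falls, which is all “the backward propagation of errors” accomplishes: it assigns each weight its share of the blame for the output error so that gradient descent can reduce it.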
26 Silver et al., “Mastering the Game of Go.”
28 Demis Hassabis and David Silver, “AlphaGo Zero: Learning from Scratch,” DeepMind, October 18, 2017, deepmind.com/blog/alphago-zero-learning-scratch/; Chris Duckett, “DeepMind AlphaGo Zero Learns on its Own without Meatbag Intervention,” ZDNet, October 19, 2017, www.zdnet.com/article/deepmind-alphago-zero-learns-on-its-own-without-meatbag-intervention/.
29 Richard Evans, John Jumper, James Kirkpatrick, Laurent Sifre, Tim Green, Chongli Qin, Augustin Žídek, et al., “De Novo Structure Prediction with Deep-Learning Based Scoring,” in Thirteenth Critical Assessment of Techniques for Protein Structure Prediction (Abstracts), December 1–4, 2018, deepmind.com/blog/alphafold/.
32 Razvan Pascanu, Theophane Weber, Peter Battaglia, Yujia Li, Sébastien Racanière, and David Reichert, “Agents That Imagine and Plan,” DeepMind, July 20, 2017, deepmind.com/blog/agents-imagine-and-plan/.
35 Vinge, “The Coming Technological Singularity.”
36 April Glaser, “Elon Musk Wants to Connect Computers to Your Brain So We Can Keep Up with Robots,” Recode, March 27, 2017, www.recode.net/2017/3/27/15079226/elon-musk-computers-technology-brain-ai-artificial-intelligence-neural-lace; see also “We Are Already Cyborgs / Elon Musk / Code Conference 2016,” YouTube video, 5:11, posted by Recode, June 2, 2016, www.youtube.com/watch?list=PLKof9YSAshgyPqlKUUYrHfIQaOzFPSL4&v=ZrGPuUQsDjo.
39 For explorations of such a blended world, see Max Tegmark, Life 3.0: Being Human in the Age of Artificial Intelligence (New York: Knopf, 2017).
40 See Moor, James H., “Four Kinds of Ethical Robots,” Philosophy Now 72 (2009), pp. 12–14. See also Moor, James H. and Bynum, Terrell Ward, eds., CyberPhilosophy: The Intersection of Philosophy and Computing (Oxford: Basil Blackwell, 2002).
41 Floridi, Luciano and Sanders, J. W., “On the Morality of Artificial Agents,” Minds and Machines 14, no. 3 (2004), pp. 349–79. For an exploration of artificial morality and the agency of robots, see Misselhorn, Catrin, “Artificial Morality. Concepts, Issues and Challenges,” Society 55, no. 2 (April 2018), pp. 161–69.
42 For the philosophy of mind behind what is to come, see Braddon-Mitchell, David and Jackson, Frank, Philosophy of Mind and Cognition: An Introduction, 2nd ed. (Malden, Mass.: Blackwell, 2006); Carter, Matt, Minds and Computers: An Introduction to the Philosophy of Artificial Intelligence (Edinburgh: Edinburgh University Press, 2007); and Heil, John, Philosophy of Mind: A Contemporary Introduction, 3rd ed. (New York: Routledge, 2012).
43 Nagel, Thomas, Mind & Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False (New York: Oxford University Press, 2012), p. 128. The emphasis on the differences between the subjective and the objective standpoint permeates Nagel's work, both in his political philosophy and in his philosophy of mind.
44 Chalmers, David J., The Conscious Mind: In Search of a Fundamental Theory (New York: Oxford University Press, 1996).
45 Dennett, Daniel C., Consciousness Explained (Boston: Back Bay Books, 1992); Dennett, Daniel C., From Bacteria to Bach and Back: The Evolution of Minds (New York: W. W. Norton, 2018), especially ch. 14.
46 Dennett, From Bacteria to Bach and Back, p. 400.
47 For the functionalist take on the mind, see Heil, Philosophy of Mind, ch. 6; for an early formulation of functionalism, see Putnam, Hilary, “Minds and Machines,” ch. 18 in Mind, Language, and Reality: Philosophical Papers, vol. 2 (Cambridge, U.K.: Cambridge University Press, 1975), pp. 362–85; for influential critical discussion, see Block, Ned, “Troubles with Functionalism,” in Block, Ned, ed., Readings in the Philosophy of Psychology, vol. 1 (Cambridge, Mass.: Harvard University Press, 1980), pp. 268–305.
48 Harari, Yuval Noah, Homo Deus: A Brief History of Tomorrow (New York: Harper, 2017), Kindle, Amazon ed., p. 293.
49 Zuboff, Age of Surveillance Capitalism.
50 Harari, Homo Deus, p. 314.
51 Elizabeth Glasure, “Artificial Intelligence Is the Next Big Player in Genomics,” Biospace, December 11, 2018, www.biospace.com/article/artificial-intelligence-is-the-next-big-player-in-genomics/; Himanshu Goenka, “Bioterrorism and Gene Editing: Can Crispr Tool Be Used as Biological Weapon in War?,” International Business Times, December 14, 2016, www.ibtimes.com/bioterrorism-gene-editing-can-crispr-tool-be-used-biological-weapon-war-2460102; see also Antonio Regalado, “Top U.S. Intelligence Official Calls Gene Editing a WMD Threat,” MIT Technology Review, February 9, 2016, www.technologyreview.com/s/600774/top-us-intelligence-official-calls-gene-editing-a-wmd-threat/. The H5N1 flu strain, for example, kills 60 percent of those it infects. Yet, among humans, it is not highly contagious. In 2011, researchers in the United States and the Netherlands altered the H5N1 genome in a way that made its level of contagion high. A strain like this could “change world history if it were ever set free” by triggering a pandemic, “quite possibly with many millions of deaths.” Martin Enserink, “Scientists Brace for Media Storm around Controversial Flu Studies,” Science, November 23, 2011, www.sciencemag.org/news/2011/11/scientists-brace-media-storm-around-controversial-flu-studies.
53 Harari, Homo Deus, p. 4; emphasis added.
54 Piketty, Thomas, Capital in the Twenty-First Century (Cambridge, Mass.: Belknap Press of Harvard University Press, 2014).
59 Pinker, Steven, Enlightenment Now: The Case for Reason, Science, Humanism, and Progress (New York: Viking, 2018), Kindle, Amazon ed., p. 453.
61 Page, Benjamin I., Seawright, Jason, and Lacombe, Matthew J., Billionaires and Stealth Politics (Chicago: University of Chicago Press, 2018); Mayer, Jane, Dark Money: The Hidden History of the Billionaires behind the Rise of the Radical Right (New York: Doubleday, 2016).
62 The following paragraphs draw on Risse, “Human Rights and Artificial Intelligence.”
63 Hume, David, An Enquiry concerning the Principles of Morals, ed. by Schneewind, J. B. (Indianapolis: Hackett Publishing, 1983).
64 Kant, Immanuel, Groundwork for the Metaphysics of Morals, ed. by Gregor, Mary and Timmermann, Jens (Cambridge: Cambridge University Press, 2012).
65 Another way of thinking about the categorical imperative is that it requires us to always act in ways that would pass a generalization test. Certain actions would be rendered impermissible because they would not hold up if everybody were to take them, as, for instance, stealing and lying would not: there would be no property to begin with if everybody stole, and no communication if everybody reserved the right to lie.
66 Petersen, Steve, “Superintelligence as Superethical,” in Lin, Patrick, Abney, Keith, and Jenkins, Ryan, eds., Robot Ethics 2.0 (New York: Oxford University Press, 2017), pp. 332–7; Chalmers, David, “The Singularity: A Philosophical Analysis,” Journal of Consciousness Studies 17, nos. 9–10 (2010), pp. 7–65; see also “What Makes People Happy? / Daniel Kahneman,” YouTube video, 9:47, from a discussion with Professor Kahneman at the 2017 Asilomar conference, posted by the Future of Life Institute, January 30, 2017, www.youtube.com/watch?v=z1N96In7GUc.
67 Scanlon, T. M., “What Is Morality?,” in Shephard, Jennifer M., Kosslyn, Stephen M., and Hammonds, Evelynn M., eds., The Harvard Sampler: Liberal Education in the Twenty-First Century (Cambridge, Mass.: Harvard University Press, 2011), pp. 243–66.
68 For speculation on what such mixed societies could be like, see Tegmark, Life 3.0, ch. 5.
69 For the point about Hobbes, see “Prof. Peter Railton — Machine Morality: Building or Learning?,” YouTube video, 33:56, posted by the Artificial Intelligence Channel, September 11, 2017, www.youtube.com/watch?v=SsPFgXeaeLI.