Artificial Intelligence and International Security: The Long View

  • Amandeep Singh Gill

Abstract

How will emerging autonomous and intelligent systems affect the international landscape of power and coercion two decades from now? Will the world see a new set of artificial intelligence (AI) hegemons just as it saw a handful of nuclear powers for most of the twentieth century? Will autonomous weapon systems make conflict more likely or will states find ways to control proliferation and build deterrence, as they have done (fitfully) with nuclear weapons? And importantly, will multilateral forums find ways to engage the technology holders, states as well as industry, in norm setting and other forms of controlling the competition? The answers to these questions lie not only in the scope and spread of military applications of AI technologies but also in how pervasive their civilian applications will be. Just as civil nuclear energy and peaceful uses of outer space have cut into and often shaped discussions on nuclear weapons and missiles, the burgeoning uses of AI in consumer products and services, health, education, and public infrastructure will shape views on norm setting and arms control. New mechanisms for trust and confidence-building measures might be needed not only between China and the United States—the top competitors in comprehensive national strength today—but also among a larger group of AI players, including Canada, France, Germany, India, Israel, Japan, Russia, South Korea, and the United Kingdom.


Footnotes

* The views expressed in this article are the author's own.

NOTES

1 The term “autonomous and intelligent systems” follows the practice of the Institute of Electrical and Electronics Engineers. The sense conveyed is that of augmentation of human capabilities, and not of their emulation.

2 For a full listing, see Neil Davison and Gilles Giacca, “Part III: Background Paper Prepared by the International Committee of the Red Cross,” in Autonomous Weapon Systems: Implications of Increasing Autonomy in the Critical Functions of Weapons (meeting report, International Committee of the Red Cross, Versoix, Switzerland, March 15–16, 2016), pp. 69–84, shop.icrc.org/autonomous-weapons-systems.html.

3 Paul Scharre, Army of None: Autonomous Weapons and the Future of War (New York: W. W. Norton, 2018), pp. 11–13.

4 Report of the Artificial Intelligence Task Force, Government of India, Ministry of Commerce and Industry, dipp.gov.in/whats-new/report-task-force-artificial-intelligence.

5 Amandeep S. Gill, introduction to Perspectives on Lethal Autonomous Weapon Systems, UNODA Occasional Papers 30, November 2017, pp. 1–4, s3.amazonaws.com/unoda-web/wp-content/uploads/2017/11/op30.pdf.

6 Ananya Bhattacharya, “The Number of Smart Phone Users Will More Than Double in Four Years,” Quartz India, December 4, 2018, qz.com/india/1483368/indias-smartphone-internet-usage-will-surge-by-2022-cisco-says/; and Kiran Rathee, “How Digitally Enabled Government Saved Rs 90,000 Crore,” Financial Express, February 2, 2019, www.financialexpress.com/industry/technology/how-digitally-enabled-government-saved-rs-90000-crore/1472379/.

7 Chinese scholars use the term zonghe guoli (comprehensive national power) to refer to an aggregate of factors such as territory, availability of natural resources, military strength, economic clout, social conditions, domestic government, foreign policy, and other forms of wielding international influence. See Michael Pillsbury, China Debates the Future Security Environment (Washington, D.C.: National Defense University Press, 2000), pp. 203–4.

8 Mark Purdy and Paul Daugherty, How AI Boosts Industry Profits and Innovation, Accenture, www.accenture.com/_acnmedia/Accenture/next-gen-5/insight-ai-industry-growth/pdf/Accenture-AI-Industry-Growth-Full-Report.pdf?la=en.

9 PwC, Sizing the Prize: What's the Real Value of AI for Your Business and How Can You Capitalise? available at www.pwc.com/gx/en/issues/analytics/assets/pwc-ai-analysis-sizing-the-prize-report.pdf.

10 See Longmei Zhang and Sally Chen, “China's Digital Economy: Opportunities and Risks” (working paper 19/16, Asia Pacific Department, International Monetary Fund, January 2019), www.imf.org/~/media/Files/Publications/WP/2019/wp1916.ashx.

11 For an idea of the number of devices that are part of the Internet of Things, see Knud Lasse Lueth, “State of the IoT 2018: Number of IoT Devices Now at 7B—Market Accelerating,” IoT Analytics, August 8, 2018, iot-analytics.com/state-of-the-iot-update-q1-q2-2018-number-of-iot-devices-now-7b/. Japan’s National Institute of Information and Communications Technology (NICT) estimates, on the basis of scans of the Darknet, that 54 percent of the attacks it detected in 2017 targeted Internet of Things devices.

12 For example, IBM researchers managed to conceal the known malware WannaCry in video-conferencing software and use an AI neural network to trigger it in response to use by a specific individual. See Mark Ph. Stoecklin, “DeepLocker: How AI Can Power a Stealthy New Breed of Malware,” SecurityIntelligence, August 8, 2018, securityintelligence.com/deeplocker-how-ai-can-power-a-stealthy-new-breed-of-malware/.

13 Translated text of the Plan available on the website of the Chinese Embassy in Finland: http://www.chinaembassy-fi.org/eng/kxjs/P020171025789108009001.pdf. For a reaction, see Gregory C. Allen, “Understanding China's AI Strategy: Clues to Chinese Strategic Thinking on Artificial Intelligence and National Security,” Center for a New American Security, February 6, 2019, www.cnas.org/publications/reports/understanding-chinas-ai-strategy.

14 John Gapper, “Huawei Is Too Great a Security Gamble for 5G Networks,” Financial Times, January 30, 2019, www.ft.com/content/40e68898-23b8-11e9-8ce6-5db4543da632.

15 Vladimir Putin, quoted in “‘Whoever Leads in AI Will Rule the World’: Putin to Russian Children on Knowledge Day,” RT, September 1, 2017, www.rt.com/news/401731-ai-rule-world-putin/.

16 Emmanuel Macron, quoted in Nicholas Thompson, “Emmanuel Macron Talks to WIRED about France's AI Strategy,” WIRED, March 31, 2018, www.wired.com/story/emmanuel-macron-talks-to-wired-about-frances-ai-strategy/.

17 “Executive Order on Maintaining American Leadership in Artificial Intelligence” (transcription of executive order, signed by Donald J. Trump, February 11, 2019), WhiteHouse.gov, www.whitehouse.gov/presidential-actions/executive-order-maintaining-american-leadership-artificial-intelligence/.

18 Summary of the 2018 Department of Defense Artificial Intelligence Strategy: Harnessing AI to Advance our Security and Prosperity, Defense.gov, media.defense.gov/2019/Feb/12/2002088963/-1/-1/1/SUMMARY-OF-DOD-AI-STRATEGY.PDF.

19 Lora Saalman, “Fear of False Negatives: AI and China's Nuclear Posture,” Bulletin of the Atomic Scientists, April 24, 2018, thebulletin.org/2018/04/fear-of-false-negatives-ai-and-chinas-nuclear-posture/.

20 Jai Galliott, Military Robots: Mapping the Moral Landscape (Burlington, Vt.: Ashgate, 2015), pp. 165–86.

21 Remarks taken from Amandeep Singh Gill (SIPRI workshop, “Mapping the Impact of Machine Learning and Autonomy on Strategic Stability and Nuclear Risk,” Stockholm, May 22–23, 2018).

22 The arms control epistemic community is relatively closed; by contrast, other processes, such as the sherpas preparing for the Nuclear Security Summit, have experimented with opening the community up to other domain specialists. See Emanuel Adler, “The Emergence of Cooperation: National Epistemic Communities and the International Evolution of the Idea of Nuclear Arms Control,” International Organization 46, no. 1 (winter 1992), pp. 101–45.

23 Conversely, AI could bolster and radically reduce the costs of traditional verification methods.

24 Except that the tech industry and its employees may be given a head start on these forums with an early vote on how military applications are pursued. The previously cited Project Maven triggered a controversy among Google employees. See Kelsey D. Atherton, “Targeting the Future of the DoD's Controversial Project Maven Initiative,” C4ISRNET, July 27, 2018, www.c4isrnet.com/it-networks/2018/07/27/targeting-the-future-of-the-dods-controversial-project-maven-initiative/.

25 See the official statements made by individual countries at a series of meetings held in Geneva, at “2018 Group of Governmental Experts on Lethal Autonomous Weapon Systems,” under “Statements,” United Nations Office at Geneva, www.unog.ch/80256EE600585943/(httpPages)/7C335E71DFCB29D1C1258243003E8724?OpenDocument.

26 Two such false analogies from the author’s experience: the question of whether existing international law, in particular international humanitarian law, applies to cyber conflict (a bane of discussions on cyber norms), and the analogizing of nuclear stability to the cyber domain.

27 Joseph S. Nye Jr. cites the story of how the idea of “permissive action links” for securing nuclear weapons was leaked in an informal dialogue to the Soviets to help spread a good practice on nuclear security. Joseph S. Nye Jr., “Nuclear Learning and U.S.-Soviet Security Regimes,” International Organization 41, no. 3 (summer 1987), pp. 371–402.

28 Draft: Ethics Guidelines for Trustworthy AI, European Commission High-Level Expert Group on Artificial Intelligence, December 18, 2018, ec.europa.eu/digital-single-market/en/news/draft-ethics-guidelines-trustworthy-ai.

29 Group of Governmental Experts of the High Contracting Parties to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects, Report of the 2018 Session of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems, October 23, 2018, p. 4, https://www.unog.ch/80256EDD006B8954/(httpAssets)/20092911F6495FA7C125830E003F9A5B/$file/CCW_GGE.1_2018_3_final.pdf.

30 A Glossary for Discussion of Ethics of Autonomous and Intelligent Systems, Version 1, IEEE Standards Association, October 2017, standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/eadv2_glossary.pdf.

31 António Guterres, “Remarks at ‘Web Summit’” (speech, Web Summit, Lisbon, November 5, 2018), www.un.org/sg/en/content/sg/speeches/2018-11-05/remarks-web-summit.

32 NotPetya, the ransomware that caused havoc in summer 2017 in the global supply chains of companies in fields as diverse as shipping, construction, pharmaceuticals, and food, used a penetration capability called EternalBlue that was allegedly developed by the National Security Agency in the United States but leaked in early 2017.


Ethics & International Affairs
  • ISSN: 0892-6794
  • EISSN: 1747-7093