
WHY WE NEED FRIENDLY AI

Abstract

Humans will not always be the most intelligent agents on Earth, the ones steering the future. What will happen to us when we no longer play that role, and how can we prepare for this transition?


Think
  • ISSN: 1477-1756
  • EISSN: 1755-1196
  • URL: /core/journals/think