Lake et al. present an insightful survey of the state of the art in artificial intelligence (AI) and offer persuasive proposals for feasible future steps. Their ideas of “start-up software” and tools for rapid model learning (sublinguistic “compositionality” and “learning-to-learn”) help pinpoint the sources of general, flexible intelligence. Their concrete examples using the Character Challenge and Frostbite Challenge forcefully illustrate just how behaviorally effective human learning can be compared with current achievements in machine learning. Their proposal that such learning is the result of “metacognitive processes” integrating model-based and model-free learning is tantalizingly suggestive, pointing toward novel ways of explaining intelligence. So, in a sympathetic spirit, we offer some suggestions. The first set concerns casting a wider view of explananda and, hence, potential explanantia regarding intelligence. The second set concerns the need to confront ethical concerns as AI research advances.
Lake et al.'s title speaks of “thinking like humans,” but most of the features discussed—use of intuitive physics, intuitive psychology, and reliance on “models”—are features of animal thinking as well. Not just apes or mammals, but also birds, octopuses, and many other animals obviously have competent expectations about causal links and about the reactions of predators, prey, and conspecifics, and they must have something like implicit models of the key features in their worlds—their affordances, to use Gibson's (1979) term.
Birds build species-typical nests they have never seen built, improving over time, and apes know which branches are too weak to hold them. We think the authors' term “intuitive physics engine” is valuable because, unlike “folk physics,” which suggests a theory, it highlights the fact that neither we nor animals in general need to understand from the outset the basic predictive machinery we are endowed with by natural selection. We humans eventually bootstrap this behavioral competence into reflective comprehension, something more like a theory and something that is probably beyond language-less animals.
So, once sophisticated animal-level intelligence is reached, there will remain the all-important step of bridging the gap to human-level intelligence. Experiments suggest that human children differ from chimpanzees primarily with respect to social knowledge (Herrmann et al. 2007; 2010). Their unique forms of imitation and readiness to learn from teachers suggest means by which humans can accumulate and exploit an “informational commonwealth” (Kiraly et al. 2013; Sterelny 2012; 2013). This is most likely part of the story of how humans become as intelligent as they do. But the missing part of that story remains the internal mechanisms, which Lake et al. can help us bring into focus. Are the unique social skills that developing humans deploy due to enriched models (“intuitive psychology,” say), to novel models (ones with principles of social emulation and articulation), or to more powerful abilities to acquire and enrich models (learning-to-learn)? The answer probably involves some combination. But we suggest that connecting peculiarly human ways of learning from others to Lake et al.'s “learning-to-learn” mechanisms may be particularly fruitful for fleshing out the latter and, ultimately, illuminating the former.
The step up to human-style comprehension carries moral implications that are not mentioned in Lake et al.'s telling. Even the most powerful of existing AIs are intelligent tools, not colleagues, and although they can be epistemically authoritative (within limits we need to characterize carefully), and hence will come to be relied on more and more, they should not be granted moral authority or responsibility, because they do not have skin in the game: they do not yet have interests, and simulated interests are not enough. We are not saying that an AI could not be created to have genuine interests, but that is down a very long road (Dennett 2017; Hurley et al. 2011). Although some promising current work suggests that genuine human consciousness depends on a fundamental architecture that would require having interests (Deacon 2012; Dennett 2013), long before that day arrives, if it ever does, we will have AIs that can communicate in natural language with their users (not collaborators).
How should we deal, ethically, with these pseudo-moral agents? One idea, inspired in part by recent work on self-driving cars (Pratt 2016), is that instead of letting them be autonomous, they should be kept decidedly subordinate: co-pilots that help but do not assume responsibility for the results. We must never pass the buck to the machines, and we should take steps now to ensure that those who rely on them recognize that they are strictly liable for any harm that results from decisions they make with the help of their co-pilots. The studies by Dietvorst et al. (2015; 2016; see Hutson 2017) suggest that people not only tend to distrust AIs, but also want to exert control over, and hence take responsibility for, the results such AIs deliver. One way to encourage this is to establish firm policies of disclosing all known gaps and limitations in AIs (much like the long lists of side effects of medications). Furthermore, we should require that such language-using AIs undergo an initiation period in which their task is to tutor their users, treating them as apprentices and giving no assistance until the user has established a clear level of expertise. Such expertise would lie not in the fine details of the AIs' information, which will surely outstrip any human being's knowledge, but in the limitations of the assistance on offer and the responsibility that remains in the hands of the user. Going forward, it is time for evaluations of the state of AI to include consideration of such moral matters.