
Building on prior knowledge without building it in

Published online by Cambridge University Press:  10 November 2017

Steven S. Hansen
Affiliation: Psychology Department, Stanford University, Stanford, CA 94305. sshansen@stanford.edu https://web.stanford.edu/group/pdplab/

Andrew K. Lampinen
Affiliation: Psychology Department, Stanford University, Stanford, CA 94305. lampinen@stanford.edu https://web.stanford.edu/group/pdplab/

Gaurav Suri
Affiliation: Psychology Department, San Francisco State University, San Francisco, CA 94132. rav.psych@gmail.com http://www.suriradlab.com/

James L. McClelland
Affiliation: Psychology Department, Stanford University, Stanford, CA 94305. mcclelland@stanford.edu https://web.stanford.edu/group/pdplab/

Abstract

Lake et al. propose that people rely on “start-up software,” “causal models,” and “intuitive theories” built from compositional representations to learn new tasks more efficiently than some deep neural network models do. We highlight the drawbacks of a commitment to compositional representations and describe our continuing effort to explore how the ability to build on prior knowledge and to learn new tasks efficiently could arise through learning in deep neural networks.

Information

Type
Open Peer Commentary
Copyright
Copyright © Cambridge University Press 2017 
