
Building on prior knowledge without building it in

Published online by Cambridge University Press:  10 November 2017

Steven S. Hansen
Affiliation:
Psychology Department, Stanford University, Stanford, CA 94305. sshansen@stanford.edu; https://web.stanford.edu/group/pdplab/
Andrew K. Lampinen
Affiliation:
Psychology Department, Stanford University, Stanford, CA 94305. lampinen@stanford.edu; https://web.stanford.edu/group/pdplab/
Gaurav Suri
Affiliation:
Psychology Department, San Francisco State University, San Francisco, CA 94132. rav.psych@gmail.com; http://www.suriradlab.com/
James L. McClelland
Affiliation:
Psychology Department, Stanford University, Stanford, CA 94305. mcclelland@stanford.edu; https://web.stanford.edu/group/pdplab/

Abstract

Lake et al. propose that people rely on “start-up software,” “causal models,” and “intuitive theories” built using compositional representations to learn new tasks more efficiently than some deep neural network models. We highlight the many drawbacks of a commitment to compositional representations and describe our continuing effort to explore how the ability to build on prior knowledge and to learn new tasks efficiently could arise through learning in deep neural networks.
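
The commentary's full argument is behind the access wall, but the abstract's central idea, reusing knowledge learned on earlier tasks rather than building compositional structure in by hand, can be illustrated with a generic transfer-learning sketch. The code below is a minimal illustration under our own assumptions (PyTorch, toy random data, arbitrary layer sizes), not the authors' model: a network first learns one task, and a new, related task is then learned on top of the already-trained shared representation.

```python
# Generic sketch: "building on prior knowledge" as reusing learned weights.
# Shapes and data are hypothetical stand-ins, not from the commentary.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Shared representation plus a head for the original task
# (10-dim inputs, 5 classes in the old task, 3 in the new one -- all assumed).
shared = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 32), nn.ReLU())
old_head = nn.Linear(32, 5)

# Phase 1: learn the "prior knowledge" task (training loop abbreviated to one step).
opt = torch.optim.Adam(list(shared.parameters()) + list(old_head.parameters()), lr=1e-3)
x, y = torch.randn(64, 10), torch.randint(0, 5, (64,))      # stand-in data
loss = nn.functional.cross_entropy(old_head(shared(x)), y)
opt.zero_grad(); loss.backward(); opt.step()

# Phase 2: a new task reuses the learned representation; only a new head is trained.
new_head = nn.Linear(32, 3)
for p in shared.parameters():
    p.requires_grad_(False)                                  # keep prior knowledge fixed
opt2 = torch.optim.Adam(new_head.parameters(), lr=1e-3)
x2, y2 = torch.randn(16, 10), torch.randint(0, 3, (16,))    # few examples for the new task
loss2 = nn.functional.cross_entropy(new_head(shared(x2)), y2)
opt2.zero_grad(); loss2.backward(); opt2.step()
```

In practice the shared layers might be fine-tuned rather than frozen; freezing them here simply makes the reuse of prior knowledge explicit in the sketch.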

Type: Open Peer Commentary
Copyright: © Cambridge University Press 2017

