
Even deeper problems with neural network models of language

Published online by Cambridge University Press:  06 December 2023

Thomas G. Bever
Department of Linguistics, University of Arizona, Tucson, AZ, USA. tgb@arizona.edu https://bever.info

Noam Chomsky
Department of Linguistics, University of Arizona, Tucson, AZ, USA. nchomsky3@gmail.com https://chomsky.info

Sandiway Fong
Department of Linguistics, University of Arizona, Tucson, AZ, USA. Sandiway@arizona.edu https://sandiway.arizona.edu

Massimo Piattelli-Palmarini
Department of Linguistics, University of Arizona, Tucson, AZ, USA. Massimo@arizona.edu https://massimo.sbs.arizona.edu

Abstract

We recognize today's deep neural network (DNN) models of language behavior as engineering achievements. However, what we know intuitively and scientifically about language shows that the nature of DNNs, and their training on bare text, makes them poor models of the mind and brain for language organization as it interacts with infant biology, maturation, experience, unique principles, and natural law.

Type: Open Peer Commentary
Copyright: © The Author(s), 2023. Published by Cambridge University Press

