Buzzsaws and blueprints: Commentary

Published online by Cambridge University Press:  16 November 2000

HELEN GOODLUCK
Affiliation:
Department of Linguistics, University of Ottawa

Abstract

The review article by Sabbagh & Gelman (S & G) on The emergence of language (EL) mentions several criticisms of strong emergentism, the view that language emerges through an interaction between domain-general learning mechanisms and the environment, without crediting the organism with innate knowledge of domain-specific rules; successful connectionist modelling is taken to support this view. One frequently made criticism of this view, and of the support that connectionist modelling putatively provides for it, is noted by S & G: it is arguable that connectionist simulations work only because the input to the network in effect contains a representation of the knowledge that the net seeks to acquire.

I think it is worth adding another criticism, one that to my mind is fundamental but has not featured so prominently in critiques of connectionism. A primary goal of modern linguistics has been to account not merely for the patterns we do see in human languages, but also for those we do not. The concept of Universal Grammar is precisely a set of limitations on what constitutes a possible human language. The kind of example used in teaching Linguistics 101 is the fact that patterns of grammaticality are structurally, not linearly, determined: in English we form a yes/no question by inverting the subject NP and the auxiliary verb, not by inverting the first and second words of the equivalent declarative sentence, or the first and fifth words, or by any number of other conceivable non-structural operations. Could a connectionist mechanism learn such non-structural operations? Perhaps I have asked the wrong people, but when I have queried researchers doing connectionist modelling, the answer appears to be ‘yes’. If that's the case, then connectionist mechanisms as currently developed do not constitute an explanatory model of human language abilities: they are too powerful.

Type
REVIEW ARTICLE AND DISCUSSION
Copyright
© 2000 Cambridge University Press


Footnotes

Thanks to John Logan for his comments on a draft of these remarks; he bears no blame for the final product.