Accepted manuscript

How Linguistics Learned to Stop Worrying and Love the Language Models

Published online by Cambridge University Press:  24 July 2025

Richard Futrell
Affiliation:
University of California Irvine, USA; rfutrell@uci.edu
Kyle Mahowald
Affiliation:
The University of Texas at Austin, USA; kyle@utexas.edu

Abstract


Language models (LMs) can produce fluent, grammatical text. Nonetheless, some maintain that LMs don’t really learn language, and that, even if they did, this would not be informative for the study of human learning and processing. On the other side, there have been claims that the success of LMs obviates the need for studying linguistic theory and structure. We argue that both extremes are wrong. LMs can contribute to fundamental questions about linguistic structure, language processing, and learning. They force us to rethink arguments and ways of thinking that have been foundational in linguistics. While they do not replace linguistic structure and theory, they serve as model systems and working proofs of concept for gradient, usage-based approaches to language. We offer an optimistic take on the relationship between language models and linguistics.

Information

Type
Target Article
Copyright
© The Author(s), 2025. Published by Cambridge University Press