
Reservoir computing and the Sooner-is-Better bottleneck

Published online by Cambridge University Press:  02 June 2016

Stefan L. Frank
Affiliation:
Centre for Language Studies, Radboud University Nijmegen, 6500 HD Nijmegen, The Netherlands. s.frank@let.ru.nl www.stefanfrank.info
Hartmut Fitz
Affiliation:
Max Planck Institute for Psycholinguistics, 6500 AH Nijmegen, The Netherlands. hartmut.fitz@mpi.nl www.mpi.nl/people/fitz-hartmut

Abstract

Prior language input is not lost but integrated with the current input. This principle is demonstrated by “reservoir computing”: Untrained recurrent neural networks project input sequences onto a random point in high-dimensional state space. Earlier inputs can be retrieved from this projection, albeit less reliably so as more input is received. The bottleneck is therefore not “Now-or-Never” but “Sooner-is-Better.”
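The mechanism described in the abstract can be illustrated with a minimal echo state network sketch: an untrained random recurrent network is driven by a symbol sequence, and a linear readout is then fit to recover the input from several steps back. All parameter values (reservoir size, alphabet size, spectral radius, delays) are illustrative assumptions, not taken from the commentary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not from the commentary).
n_reservoir = 300   # dimensionality of the random state space
n_symbols = 4       # size of a toy input alphabet
seq_len = 2000      # length of the random input sequence

# Untrained ("reservoir") recurrent weights, rescaled to spectral radius 0.9
# so that the influence of past inputs decays gradually rather than exploding.
W = rng.normal(0.0, 1.0, (n_reservoir, n_reservoir))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.normal(0.0, 1.0, (n_reservoir, n_symbols))

# Drive the reservoir with a random symbol sequence (one-hot inputs),
# recording the state trajectory: each state is a high-dimensional
# projection of the entire input history.
symbols = rng.integers(0, n_symbols, seq_len)
inputs = np.eye(n_symbols)[symbols]
states = np.zeros((seq_len, n_reservoir))
x = np.zeros(n_reservoir)
for t in range(seq_len):
    x = np.tanh(W @ x + W_in @ inputs[t])
    states[t] = x

def retrieval_accuracy(delay, washout=50):
    """Fit a linear readout to recover the input from `delay` steps ago."""
    X = states[washout + delay:]
    y = symbols[washout:seq_len - delay]
    readout, *_ = np.linalg.lstsq(X, np.eye(n_symbols)[y], rcond=None)
    return float(np.mean(np.argmax(X @ readout, axis=1) == y))

# Retrieval of earlier inputs degrades as the delay grows: Sooner-is-Better.
accuracies = {d: retrieval_accuracy(d) for d in (0, 2, 5, 10, 20)}
for d, a in accuracies.items():
    print(f"delay {d:2d}: accuracy {a:.3f}")
```

In this sketch the current input is recovered almost perfectly (delay 0), while inputs further in the past become progressively harder to decode from the same state, matching the abstract's claim that earlier inputs remain retrievable but less reliably so.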

Information

Type
Open Peer Commentary
Copyright
Copyright © Cambridge University Press 2016 
