
Feature encoding by neural nets*

Published online by Cambridge University Press:  20 October 2008

Amanda Lathroum
Affiliation:
Harvard University

Extract

While the use of categorical features seems to be the appropriate way to express sound patterns within languages, these features do not seem adequate to describe the sounds actually produced by speakers. Examination of the speech signal fails to reveal objective, discrete phonological segments. Similarly, segments are not directly observable in the flow of articulatory movements, and vary slightly according to an individual speaker's articulatory strategies. Because of the lack of a reliable relationship between segments and speech sounds, a plausible transition from feature representation to the actual acoustic signal has proven elusive. This paper utilises a theory of information processing, known as PARALLEL DISTRIBUTED PROCESSING (PDP) NETWORKS (also called neural networks), to propose a model which begins to express this transition: translating the feature bundles indicated in a broad phonetic transcription into continuous, potentially variable articulator behaviour.
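As a rough illustration of the kind of PDP architecture the extract describes, the sketch below shows a minimal feedforward network mapping a bundle of binary phonological features to continuous articulator parameters. All specifics here are assumptions for illustration only: the feature inventory, layer sizes, random weights, and articulator dimensions are hypothetical and do not reproduce Lathroum's actual model.

```python
import numpy as np

def sigmoid(x):
    """Squash activations into (0, 1), giving graded rather than categorical output."""
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper):
N_FEATURES = 5       # e.g. [voice, nasal, continuant, coronal, high]
N_HIDDEN = 8         # internal distributed representation
N_ARTICULATORS = 3   # e.g. jaw height, tongue-body position, velum opening

# Untrained random weights; a real model would learn these from data.
W1 = rng.normal(size=(N_HIDDEN, N_FEATURES))
W2 = rng.normal(size=(N_ARTICULATORS, N_HIDDEN))

def articulate(feature_bundle):
    """Forward pass: discrete feature values in, continuous articulator settings out."""
    hidden = sigmoid(W1 @ feature_bundle)
    return sigmoid(W2 @ hidden)   # each value in (0, 1): a graded position

# A [+voice, +nasal, -continuant, +coronal, -high] bundle, e.g. /n/:
n_features = np.array([1.0, 1.0, 0.0, 1.0, 0.0])
positions = articulate(n_features)
```

The point of the sketch is the shape of the mapping, not the numbers: categorical inputs pass through a distributed hidden layer and emerge as continuous, potentially variable outputs, which is the transition the paper aims to model.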

Information

Type: Thematic Papers
Copyright © Cambridge University Press 1989
