
Prosody perception and production by children with cochlear implants

Published online by Cambridge University Press:  18 October 2018

Daan J. VAN DE VELDE*
Affiliation:
Leiden University Centre for Linguistics, Leiden University, Van Wijkplaats 3, 2311 BX, Leiden; Leiden Institute for Brain and Cognition, Postbus 9600, 2300 RC, Leiden
Niels O. SCHILLER
Affiliation:
Leiden University Centre for Linguistics, Leiden University, Van Wijkplaats 3, 2311 BX, Leiden; Leiden Institute for Brain and Cognition, Postbus 9600, 2300 RC, Leiden
Claartje C. LEVELT
Affiliation:
Leiden University Centre for Linguistics, Leiden University, Van Wijkplaats 3, 2311 BX, Leiden; Leiden Institute for Brain and Cognition, Postbus 9600, 2300 RC, Leiden
Vincent J. VAN HEUVEN
Affiliation:
Department of Hungarian and Applied Linguistics, Pannon Egyetem, 10 Egyetem Ut., 8200 Veszprém, Hungary
Mieke BEERS
Affiliation:
Leiden University Medical Center, ENT Department, Postbus 9600, 2300 RC, Leiden
Jeroen J. BRIAIRE
Affiliation:
Leiden University Medical Center, ENT Department, Postbus 9600, 2300 RC, Leiden
Johan H. M. FRIJNS
Affiliation:
Leiden Institute for Brain and Cognition, Postbus 9600, 2300 RC, Leiden; Leiden University Medical Center, ENT Department, Postbus 9600, 2300 RC, Leiden
*Corresponding author: Daan van de Velde, Leiden University Centre for Linguistics, Leiden University, Cleveringaplaats 1, 2311 BD, Leiden, the Netherlands. E-mail: d.j.van.de.velde@hum.leidenuniv.nl

Abstract

The perception and production of emotional and linguistic (focus) prosody were compared in children with cochlear implants (CI) and normally hearing (NH) peers. Thirteen CI children and thirteen hearing-age-matched, school-aged NH children completed baseline tests of non-verbal emotion understanding, non-word repetition, and stimulus identification and naming. The main tests were verbal emotion discrimination, verbal focus position discrimination, acted emotion production, and focus production. Productions were evaluated by normally hearing adult Dutch listeners. Scores were comparable between groups, except for non-word repetition, on which the CI group scored lower. Emotional prosody perception and production scores correlated weakly for CI children but were uncorrelated for NH children. In general, hearing age weakly predicted emotion production but not perception. Non-verbal emotional (but not linguistic) understanding predicted emotion perception and production for CI children but not for controls. In conclusion, increasing time in sound might facilitate vocal emotional expression, possibly requiring independently maturing emotion perception skills.

Information

Type
Articles
Creative Commons
Creative Commons License - CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
Copyright © The Author(s), 2018. Published by Cambridge University Press

Table 1. Demographic and Implant Characteristics of Recipients.


Figure 1. Example of the waveform and intonation contour (scaled between 75 and 500 Hz) of stimuli in the Both condition of the Emotion perception test, produced using Praat. Shown are neutral, happy, and sad variants of Een blauwe auto ‘A blue car’. Total stimulus durations (1.93 s for neutral, 2.01 s for happy, and 2.24 s for sad) as well as allophone durations differed between emotion conditions.


Figure 2. Mean Percentages of Phonemes Correct per syllable length (in number of syllables) and per participant group (CI or NH) in the Non-word repetition test. Percentages correct represent percentages of correctly repeated phonemes per non-word. Additions, omissions, and substitutions of phonemes counted as errors.


Figure 3. Mean d’ scores split by Phonetic parameter and by participant group (CI or NH) in the Emotion perception test. Participants judged whether prerecorded utterances were pronounced with a happy or a sad emotion. Phonetic parameters indicate which type of phonetic information was available in the stimulus.


Figure 4. Mean percentages correct per emotion and per participant group (CI or NH) of emotions conveyed in dummy phrases in the Emotion production test. Percentages correct were computed by averaging judgements of emotions perceived by a panel of ten naive adult Dutch listeners with normal hearing.


Table 2. Mean Percentages Correct and Standard Deviations (in Parentheses) per Emotion and per Participant Group (CI or NH) of Emotions Conveyed in Dummy Phrases in the Emotion Production Test.


Figure 5. Mean percentages correct per focus position and per participant group (CI or NH) of focus positions conveyed in dummy phrases in the Focus production test. Percentages correct were computed by averaging judgements of focus positions perceived by a panel of ten naive adult Dutch listeners with normal hearing.


Table 3. Mean Percentages Correct and Standard Deviations (in Parentheses) per Focus Position and per Participant Group (CI or NH) of Focus Position Conveyed in Dummy Phrases in the Focus Production Test.