
Effects of low-pass filtering on English speech-in-noise recognition in auditory-only and audiovisual modalities for late bilinguals and monolinguals

Published online by Cambridge University Press: 13 June 2025

Tiana Cowan*
Affiliation:
Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE, USA
Emily Buss
Affiliation:
Department of Otolaryngology/Head and Neck Surgery, University of North Carolina, Chapel Hill, NC, USA
Lori Leibold
Affiliation:
Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE, USA
Kaylah Lalonde
Affiliation:
Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE, USA
*Corresponding author: Tiana Cowan; Email: tiana.cowan@boystown.org

Abstract

The purpose of this study was to examine the impact of acoustic filtering and modality on speech-in-noise recognition for Spanish-English late bilinguals (who were exposed to English after their 5th birthday) and English monolinguals. All speech perception testing was conducted in English. Speech reception thresholds (SRTs) were estimated at 50% recognition accuracy in an open-set sentence recognition task in the presence of speech-shaped noise (SSN) in both low-pass and no-filter conditions. Consonant recognition was assessed in a closed-set identification task in SSN in four conditions: low-pass and no-filter stimuli presented in auditory-only (AO) and audiovisual (AV) modalities. Results indicated that monolinguals outperformed late bilinguals in all conditions. Late bilinguals and monolinguals were similarly impacted by acoustic filtering. Some data indicated that monolinguals may be more adept at integrating auditory and visual cues than late bilinguals. Theoretical and practical implications are discussed.

Information

Type
Research Article
Creative Commons
CC BY-SA
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-ShareAlike licence (http://creativecommons.org/licenses/by-sa/4.0), which permits re-use, distribution, and reproduction in any medium, provided the same Creative Commons licence is used to distribute the re-used or adapted article and the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press

Figure 1. Histogram of language dominance scores from the Bilingual Language Profile (BLP). The full range of possible scores is −218 to 218; scores in this study ranged from −123.3 to 62.8. The solid line at 0 corresponds to a balanced language dominance score, and the dashed line shows the mean language dominance score for the sample, −31.8. In general, most late bilingual participants self-reported some degree of Spanish language dominance; 5 of 19 participants had scores corresponding to English language dominance.


Figure 2. Results for the sentence-in-noise recognition task. Panels A and B show psychometric function fits for the monolinguals and bilinguals, respectively, plotted as percent correct as a function of SNR. Color indicates fits to low-pass (green and purple) and no-filter (blue and red) data. Lines indicate fits to group data, and shaded regions indicate the 95% confidence interval by participant. Mean performance at each SNR is shown with circles. Panel C shows all four functions, following the plotting conventions of panels A and B. Panel D shows the distribution of SRT50 values for each group and condition, as indicated on the x-axis; horizontal lines indicate the medians, boxes span the 25th to 75th percentiles, and whiskers span the full range of values. Estimates for individual participants are connected with light gray lines.


Table 1. Linear mixed model results assessing the effects of acoustic filter (no filter or low-pass filter), group (monolingual or bilingual), and SNR on sentence-in-noise recognition


Figure 3. Percent correct consonant recognition as a function of SNR. Panels A1–A4 show results plotted separately by condition. Lines indicate psychometric functions fitted to monolingual data (open circles), and shaded regions show the 95% confidence interval around each fit, computed with bootstrap resampling (n = 1000). Blue lines show performance in the low-pass filter conditions, and green lines show performance in the no-filter conditions. Boxes show the distribution of bilingual data obtained at −8 dB SNR in each condition; horizontal lines indicate the medians, boxes span the 25th to 75th percentiles, and whiskers span the full range of values. Panel B shows results for bilingual participants in all four conditions, indicated on the x-axis; light gray lines connect points for individual participants. Panel C shows functions fitted to monolingual data in all four conditions. Plotting conventions in panels B and C follow those in panels A1–A4.


Table 2. Linear mixed model results assessing the effects of acoustic filter (no filter or low-pass filter), group (monolingual or bilingual), and modality (AO or AV) on consonant-in-noise recognition


Figure 4. The association between sentence recognition, consonant recognition, and Versant scores for late bilinguals. Panels A1 and A2 show sentence scores at −5.5 dB SNR (derived from function fits) plotted as a function of consonant recognition scores at −8 dB SNR, both expressed in percent correct. Panel A1 shows performance in the no-filter condition, and panel A2 shows performance in the low-pass condition. A color gradient from white to black indicates each participant’s score on the Versant English test of language proficiency, as defined in panel B. Panel B is a histogram of Versant scores for the late bilingual participants.


Table 3. The effects of English language proficiency, acoustic filter (no filter or low-pass filter), and SNR on sentence-in-noise recognition for late bilinguals


Table 4. The effects of English language proficiency, acoustic filtering (no filter or low-pass filter), modality (AO or AV), and SNR on consonant-in-noise recognition for late bilinguals