Subcortical plasticity and enhanced neural synchrony in multilingual adults

Although growing evidence supports the advantages of bilingualism for brain structure and function, no study has demonstrated multilingualism-related neuroplasticity in response to speech stimuli at the subcortical level. To investigate the impact of multilingualism on subcortical auditory processing, the speech-evoked auditory brainstem response (speech-ABR) was recorded in 35 young adults. The multilingual group completed the Language Experience and Proficiency Questionnaire (LEAP-Q). Multilingual participants demonstrated evidence of enhanced neural timing, including a shorter wave D latency and V-A duration and a sharper V-A slope, compared to the monolinguals in silence. In the noise condition, most speech-ABR components degraded, and no significant difference was observed between the two groups. The association between the total proficiency score and several subcortical responses was significant. These results provide subcortical evidence of stronger neural synchronization in multilinguals relative to monolinguals, correlated with self-reported multilingual experience.


Introduction
Neuroplasticity is an inherent feature of the brain, which refers to neural reorganization in response to learning (Kolb & Whishaw, 1998). Experience during the lifespan can profoundly shape the structure and function of the brain. Experience-dependent plasticity is the brain's potential for change following environmental input and use, and shows the brain's lifelong capacity for learning new behaviors. Learning a second language is a prime example of experience-dependent plasticity and provides a framework for understanding the mechanisms by which experience modulates the neural system (Jafari, Perani, Kolb & Mohajerani, 2021). Bilingualism is defined as the ability to communicate and use two languages regularly (Grundy, Anderson & Bialystok, 2017). Accumulating evidence supports the advantages of bilingualism for brain structure and function. For instance, neuroanatomical research indicates subcortical morphological differences (Burgaleta, Sanjuan, Ventura-Campos, Sebastian-Galles & Avila, 2016; Pliatsikas, DeLuca, Moschopoulou & Saddy, 2017), larger gray matter volume in language-related brain regions (Abutalebi & Green, 2016; Burgaleta et al., 2016; Del Maschio, Fedeli, Sulpizio & Abutalebi, 2019; Grundy et al., 2017; Mårtensson et al., 2012), and enhanced white matter integrity in the superior longitudinal fasciculus (which connects frontal and parietal components of the executive control network) (Gold, 2015) and the corpus callosum (Bubbico et al., 2019; Kim et al., 2019; Luk, Anderson, Craik, Grady & Bialystok, 2010) in bilinguals relative to monolinguals. Behavioral evidence also underscores the advantages of lifelong bilingualism for executive functions (Adesope, Lavin, Thompson & Ungerleider, 2010; Costa, Hernández & Sebastián-Gallés, 2008; Grundy, 2020; Stocco & Prat, 2014).
Multilingualism is defined as the use of more than two languages by a language user (De Bot, 2019). A few behavioral studies refer to the advantages of multilingualism relative to bilingualism (Chertkow et al., 2010; Kavé, Eyal, Shorek & Cohen-Mansfield, 2008). For instance, in the Kavé et al. (2008) study in older adults, the number of languages spoken was positively associated with cognitive test scores regardless of the impact of demographic factors (e.g., age, gender, place of birth, age at immigration, and education). The findings of the study supported the idea that the use of more than two languages may improve cognitive flexibility and provide further opportunities to enhance specific aspects of inhibitory control. Little evidence, however, has shown how learning more languages drives subcortical and cortical plasticity, and whether the number of languages acquired is associated with the extent of brain structural and functional plasticity.
According to existing evidence, the potential for neuroplasticity is greatest at the cortical level, especially during the early years of life when development is more reliant upon environmental inputs (Kral, 2007). Past human and animal studies, however, indicate the high capacity of the auditory brainstem for plasticity in response to change, such as following unilateral hearing deprivation or frequency discrimination training (Hayes, Warrier, Nicol, Zecker & Kraus, 2003). In connection with bilingualism, whereas structural and functional changes in cortical regions have been well investigated, fewer studies have examined how this experience reshapes subcortical auditory neural pathways. Among auditory electrophysiological assessments, the auditory brainstem response to speech stimuli, speech-ABR, is a measure of auditory encoding strength and fidelity that reflects experience-dependent plasticity at the subcortical level (Krizman, Marian, Shook, Skoe & Kraus, 2012). Findings of speech-ABR studies in the past decade support the contribution of bilingualism to enhanced experience-dependent plasticity in subcortical auditory processing. For instance, in the Krizman et al. (2012) study using a 170ms synthesized stimulus /da/ in both quiet and noise conditions, bilingualism was associated with both enhanced encoding of the fundamental frequency (F0), a feature underlying pitch perception and the grouping of auditory objects, and enhanced executive function.
In another similar study by the same research group (Krizman, Skoe, Marian & Kraus, 2014), Spanish-English bilinguals showed more consistent brainstem and cortical responses (i.e., consistency referred to the correlation between the first trials and the last trials of the speech-ABR recording) and enhanced attentional control relative to English monolinguals. In this study, strengthened neural consistency was interpreted as the outcome of enhanced attentional control. In a study on children, simultaneous bilinguals also had a larger F0 response to /ba/ and /ga/ stimuli and a more consistent response to the /ba/ stimulus relative to sequential bilinguals (Krizman, Slater, Skoe, Marian & Kraus, 2015). In addition, the Skoe, Burakiewicz, Figueiredo, and Hardin (2017) study on early bilingual adults with diversity in both first and second languages showed that enhanced neural responses to the F0 in bilinguals are a common feature of the central auditory nervous system (CANS), not dependent on the exposed languages.
Overall, current evidence supports the occurrence of subcortical plasticity throughout acquiring and mastering two languages and indicates that bilinguals have more efficient and flexible auditory processing (Krizman et al., 2012, 2014; Skoe & Chandrasekaran, 2014). Nonetheless, no study has shown how the auditory brainstem changes in a multilingual environment (i.e., three or more languages), or whether the subcortical alterations are associated with scores of language experience. In this regard, the present study aimed to investigate language-driven subcortical neural plasticity in participants with multilingual experience compared to monolinguals using speech-ABR, and to seek the potential association between altered neural processing and multilingual experience scores. We also compared speech-ABR measures in noise and silence conditions to study the contribution of multilingualism as a form of cognitive enrichment that may modulate subcortical processing of auditory stimuli in difficult listening situations, such as in noise (Krizman et al., 2015). We hypothesized that multilinguals show evidence of stronger subcortical temporal processing compared to monolinguals, and that multilingual experience scores are correlated with electrophysiological markers of subcortical neuroplasticity.

Participants
A total of 35 young adults, aged 18 to 25 years, took part in this study, including 19 participants (14 females) in the multilingual group and 16 (9 females) in the monolingual group. A case history was taken from all participants, and only those with no previous history of ear diseases, tinnitus, headache, or ear/brain surgeries were included in the study. The hearing thresholds of all participants were within normal limits across the audiometric frequency range (0.25-8 kHz) (Katz, Medwetsky & Burkhard, 2009). Most participants were students from the University of Ottawa, Carleton University, Algonquin College, and Université du Québec en Outaouais. They were recruited through research flyers and word of mouth. The ethical principles of the Declaration of Helsinki were followed throughout the study, and the ethics committee of the University of Ottawa approved the study protocol (#H03-14-19). Participants were fully informed of the study's content and provided consent before participating. Experimental sessions were scheduled according to the participants' availability and took place in the hearing research laboratory at the University of Ottawa.

Study procedure
Testing language proficiency and experience in multilingualism

The Language Experience and Proficiency Questionnaire (LEAP-Q) is a valid questionnaire for collecting self-reported proficiency and experience data from bilingual/multilingual speakers aged 14 to 80 (Kaushanskaya, Blumenfeld & Marian, 2020). Using the LEAP-Q, participants are considered multilingual if they indicate a subjective rating of at least five out of ten for their expressive competency (proficiency and fluency) in at least three languages. In this study, participants in the monolingual group spoke either French or English, while those in the multilingual group spoke French, English, and one or more additional languages. Among the participants in the multilingual group, ten spoke Arabic, three spoke Spanish, two spoke Italian, and one participant each spoke Mandarin Chinese, Slovenian, Japanese, and Swedish. Table 1 shows the L1, L2, and L3 languages, combinations of languages, and the AoA and proficiency level for L2 and L3 in the multilingual participants.
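The inclusion criterion above (expressive competency rated at least five out of ten in at least three languages) can be expressed as a short function. This is a hypothetical helper for illustration only, not part of the LEAP-Q instrument:

```python
# Hypothetical helper (not part of the LEAP-Q itself) expressing the
# multilingual inclusion criterion: expressive competency rated at least
# 5/10 in at least three languages.
def is_multilingual(ratings, min_score=5, min_languages=3):
    """ratings: dict mapping language name -> self-rated expressive score (0-10)."""
    qualified = [lang for lang, score in ratings.items() if score >= min_score]
    return len(qualified) >= min_languages

print(is_multilingual({"French": 9, "English": 8, "Arabic": 7}))  # True
print(is_multilingual({"French": 9, "English": 8, "Arabic": 4}))  # False
```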

Electrophysiological recording
The Bio-logic Navigator Pro System (Natus Medical Inc., Mundelein, IL) with insert earphones for stimulus presentation (ER-3A, Etymotic Research, Elk Grove Village, IL) was used for recording the click-ABR and speech-ABR. The electrode array was Cz for the non-inverting electrode, the ipsilateral earlobe for the inverting electrode, and the contralateral earlobe for the ground electrode. The impedance of all electrodes was kept below 5 kΩ and within 1.5 kΩ of each other. The recording started with the click-ABR using a 100 μs click stimulus with alternating polarity at 80 dB SPL (peak equivalent) at a rate of 13.3 Hz. Two blocks of 1500 artifact-free sweeps were collected for each participant. The click-ABR was performed to confirm normal hearing in participants, and no significant difference was shown between the multilingual and monolingual groups in click-ABR measures.
The speech-ABR was elicited using a 40ms synthesized syllable /da/ in the right ear. The stimulus was presented at a rate of 10.9/s and an intensity level of 75 dB SPL, in silence and noise conditions. In the noise condition, a white noise stimulus at 65 dB SPL (i.e., a +10 dB signal-to-noise ratio (SNR)) was presented ipsilaterally using an insert earphone (Koravand, Thompson, Chénier & Kordjazi, 2019). Responses were averaged online via a 100 to 2000 Hz bandpass filter using 1024 digital sampling points over an 85.33ms epoch (including a 15ms pre-stimulus time window). Artifacts were rejected online at ± 23 μV and did not exceed 10% of the total number of sweeps. Two blocks of 2000 artifact-free sweeps were recorded in the right ear (Koravand, Al Osman, Rivest & Poulin, 2017). The participants were asked to relax in a comfortable chair and close their eyes during the click- and speech-ABR recordings. The tests were performed in a double-walled soundproof cabin with dimmed lights (American National Standards Institute, 1999).
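As a consistency check on the recording parameters above, the effective sampling rate implied by 1024 digital sampling points over an 85.33ms epoch can be derived in a few lines (a sketch, not the acquisition software's code; values are taken from the text):

```python
# Consistency check (a sketch, not the acquisition software): the effective
# sampling rate implied by 1024 points over an 85.33ms epoch, and the number
# of samples falling in the 15ms pre-stimulus window.
n_points = 1024
epoch_ms = 85.33
fs_hz = n_points / (epoch_ms / 1000.0)            # ~12 kHz effective sampling rate
pre_stim_samples = int(round(15.0 / 1000.0 * fs_hz))
print(round(fs_hz), pre_stim_samples)             # 12000 180
```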
The speech-ABR is composed of seven peaks, i.e., V, A, C, D, E, F, and O. According to the source-filter model of speech acoustics, the sound source consists of the vibration of the vocal folds reacting to airflow from the lungs, and the filter is considered the collection of all parts involved in vocal production, such as the vocal tract, oral cavity, tongue, lips, and jaw. In modeling the brainstem as a mediator between the acoustic properties of speech and cortical processing streams, the transient components of speech-ABR (e.g., the V, A, C, and O waves) and F1 (i.e., the first formant, a component of the sustained FFR) belong to the "filter class" as part of the "where stream" in speech processing. The waves D, E, and F (i.e., the transient FFR) and F0 (i.e., the fundamental frequency, a component of the sustained FFR), however, are considered the "source class" and part of the "what stream" in speech processing (Kraus & Nicol, 2005). In terms of the role of the transient peaks of speech-ABR, waves V and A represent the onset of sound at the brainstem (specifically, the lateral lemniscus/inferior colliculus), wave C is regarded as a response to the vowel onset, and wave O is a response to the sound termination. Peaks D, E, and F, as the source response, reflect vibrations of the vocal folds and are involved in encoding periodicity. The inter-peak intervals between the D, E, and F waves correspond to the F0 wavelength of the utterance, and small high-frequency fluctuations between these waves correspond to the frequency of the first formant (F1) in the filter class (Kraus & Nicol, 2005).

Data processing
Two replications of the speech-ABR waves, consisting of the onset (V and A), consonant-to-vowel transition (C), offset (O), and three sustained frequency-following response (FFR) peaks (D, E, and F), were visually marked by two independent experts. Overall, there was complete agreement between the reviewers in marking the seven speech-ABR peaks. After marking wave V, the lowest point of the subsequent negative slope was considered wave A. The following waves were C, D, E, F, and O, which were labeled as the deepest negative peaks at their expected latencies. To measure neural synchronization to the stimulus onset, the V-A inter-peak latency, V-A peak-to-trough amplitude, and V-A slope were analyzed. In addition to the temporal analysis, a spectral analysis of F0 and F1 was performed on the sustained portion of the speech-ABR using the Brainstem Toolbox (Skoe & Kraus, 2010) under MATLAB v.8.1 (MathWorks, Natick, MA). To evaluate the spectral composition of the response, a fast Fourier transform (FFT) analysis of the response was carried out, with zero padding, in a range of approximately 11.4-40.5ms. The magnitude of the frequency representation over the stimulus F0 (103-121 Hz) and F1 (454-720 Hz) was measured by taking the average of the amplitudes over the specified frequency ranges (Dhar et al., 2009).
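The onset and spectral measures described above can be sketched in a few lines of Python. This is an illustrative reimplementation, not the Brainstem Toolbox; the zero-padded FFT length and the example latencies and amplitudes below are assumptions, not values from the study:

```python
import numpy as np

# Illustrative sketch (not the Brainstem Toolbox): the V-A onset measures and
# the band-averaged FFR spectral amplitudes described above. The FFT length
# and example values are assumptions.
def va_measures(v_lat_ms, v_amp_uv, a_lat_ms, a_amp_uv):
    """Return V-A duration (ms), peak-to-trough amplitude (uV), and steepness (uV/ms)."""
    duration = a_lat_ms - v_lat_ms
    amplitude = v_amp_uv - a_amp_uv
    # The physical V-to-A deflection is negative-going; we report its steepness.
    slope = amplitude / duration
    return duration, amplitude, slope

def mean_band_amplitude(signal, fs_hz, lo_hz, hi_hz, n_fft=4096):
    """Zero-padded FFT, then mean spectral magnitude over [lo_hz, hi_hz]."""
    spec = np.abs(np.fft.rfft(signal, n=n_fft)) * 2.0 / len(signal)
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs_hz)
    band = (freqs >= lo_hz) & (freqs <= hi_hz)
    return spec[band].mean()
```

For instance, a wave V at 6.6 ms / 0.12 µV and a wave A at 7.5 ms / -0.18 µV (hypothetical values) would give a 0.9 ms duration, a 0.30 µV peak-to-trough amplitude, and roughly 0.33 µV/ms steepness; a response dominated by a ~110 Hz component yields a larger mean amplitude in the F0 band (103-121 Hz) than in the F1 band (454-720 Hz).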

Statistical analysis
All statistical analyses were performed using SPSS Statistics 26.0 at a significance level of 0.05 or better. Data were assessed for normality using the Kolmogorov-Smirnov (K-S) test. Given the normal distribution of data in all measures (p ≥ 0.068), parametric statistical tests were applied for data analysis. An analysis of variance (ANOVA) test with a family-wise error controlling method (i.e., Bonferroni correction) was conducted to compare the two groups on different parameters of the speech-ABR test (e.g., the latency, amplitude, and spectral characteristics). In both the monolingual and multilingual groups, a repeated-measures ANOVA was carried out to compare the speech-ABR parameters in silence and noise conditions. The F-values, p-values, estimations of effect size (partial η²), and observed power were reported for the statistical analyses. A bivariate correlational analysis was also used to determine the relationship between multilingual experience and speech-ABR parameters in silence.
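A minimal sketch of this analysis pipeline on simulated data might look like the following (the group means, SDs, and number of compared parameters are made up for illustration; the real analysis was run in SPSS):

```python
import numpy as np
from scipy import stats

# Illustrative sketch of the analysis pipeline above, on simulated data
# (group means/SDs and the number of compared parameters are assumptions).
rng = np.random.default_rng(0)
mono = rng.normal(7.5, 0.3, 16)   # e.g., wave D latency (ms), monolinguals
multi = rng.normal(7.2, 0.3, 19)  # multilinguals

# Normality: K-S test against a normal fitted to the sample
_, p_norm = stats.kstest(mono, "norm", args=(mono.mean(), mono.std(ddof=1)))

# Between-group comparison; with two groups, one-way ANOVA is equivalent
# to an independent-samples t-test (F = t^2).
F, p = stats.f_oneway(mono, multi)

# Bonferroni family-wise correction across k compared parameters
k = 10
p_bonf = min(p * k, 1.0)

# With only two conditions (silence vs. noise), a repeated-measures ANOVA
# reduces to a paired t-test on the same participants:
silence = rng.normal(7.2, 0.3, 19)
noise = silence + rng.normal(0.5, 0.2, 19)  # noise delays latencies
t_rm, p_rm = stats.ttest_rel(silence, noise)
```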

Results
The latency (neural timing), amplitude (neural magnitude), and spectral components of the FFR (e.g., F0 and F1 amplitudes) were compared between the monolingual and multilingual groups in silence (Fig 1A and 1B) and noise (Fig 1C) conditions.

The impact of noise on speech-ABR parameters
In each group, speech-ABR parameters were compared between silence and noise conditions. Background noise led to significant degradation of all parameters in the two groups (e.g., delayed latency and reduced amplitude), except for the C wave amplitude in the monolingual group (Table 2) and the C and F wave amplitudes in the multilingual group (Table 3).

Discussion
In this study using speech-ABR, we aimed to investigate subcortical auditory processing in young multilinguals compared to monolinguals. Briefly, 1) multilingual participants showed shorter wave D latency and V-A duration and a sharper V-A slope relative to the monolinguals. 2) No significant difference was found between the two groups in the noise condition. Noise presentation, however, led to the degradation of the majority of speech-ABR components in both groups. 3) The total multilingual proficiency score was associated with the V-A slope. The L2 AoA also was correlated with wave D latency, V-A duration, and V-A slope. In the following, these findings are discussed in turn.

The impact of multilingualism on speech-ABR components
In our study, the shorter duration and increased slope of the V-A complex, as well as the shorter wave D latency, in multilinguals compared to monolinguals demonstrate that subcortical plasticity induced by multilingualism can drive improved processing of transient filter (e.g., V-A complex) and transient FFR (e.g., D wave) responses at the subcortical level. In our previous study with a similar research design, the findings (i.e., shorter latency of the transient waves V, C, D, and F) also were evidence of improved neural synchrony/timing in bilinguals relative to monolingual young adults (Koravand et al., 2019). A few differences between the current study and our past study may point to subtle differences between multilinguals and bilinguals in neural timing, which requires further study. In addition, in two previous studies using two 170ms syllables "ba" and "ga", stronger encoding of the F0 was reported in child or adolescent bilinguals compared to monolinguals (Krizman et al., 2012, 2014). These studies were able to detect such a difference in the sustained FFR (F0) because they used speech stimuli with a longer duration (i.e., 170ms). In contrast, we were able to characterize the impact of multilingualism on the transient components of speech-ABR by using a shorter speech stimulus (i.e., 40ms), which allowed us to mark seven distinct subcortical waves (V to O).
Considering the neural mechanisms underlying bilingualism/monolingualism, current evidence supports the idea that learning a new language is associated with devoting more frontal resources to language processing, which are involved in the competition between the two or more languages. In the long term, along with increased bilingual experience, some brain regions are functionally remodeled (i.e., functional plasticity). Accumulating neuroimaging evidence has found that increased bilingual experience contributes to gradually less reliance on cerebral regions and networks implicated in cognitive control (e.g., the anterior cingulate cortex and dorsolateral prefrontal cortices (DLPFC)) and further recruitment of posterior and subcortical regions, which are involved in perceptual and motor functions (Bialystok, 2017; DeLuca, Rothman, Bialystok & Pliatsikas, 2019; Grundy et al., 2017). The "bilingual anterior-to-posterior and subcortical shift (BAPSS)" is a model proposed based on this efficient brain recruitment in bilinguals. In light of this model, the reorganization of functional networks given long-term bi-/multilingualism drives substantial subcortical plasticity (Grundy et al., 2017), which may contribute to increased neural synchronization in bi-/multilinguals.

The impact of background noise on speech-ABR measures
In this study, the components of speech-ABR were compared between the multilingual and monolingual groups during white noise presentation at +10 dB SNR. We found no significant difference between the two groups, as both showed significant degradation of the majority of speech-ABR measures. Thus, the degradation of all components (e.g., waves' latencies and amplitudes, V-A onset measures, and F0 and F1 amplitudes) was observed, except for the wave C amplitude in monolinguals and the wave C and F amplitudes in multilinguals. This finding was consistent with our past study using a similar design on young bilinguals (Koravand et al., 2019). A recent study in normal-hearing adults using 40, 50, and 170ms speech stimuli with a background noise of two-talker babble at +10 dB SNR also reported delayed latencies and reduced amplitudes of all waves (except for the wave O amplitude) (BinKhamis et al., 2019). This study, however, demonstrated no effect of noise on the F0. The negative impact of background noise on waves' latencies and amplitudes is in line with earlier studies on speech-ABR in noise using 40 and 170ms speech stimuli, with some differences in terms of the affected latencies and/or amplitudes (Parbery-Clark, Marmel, Bair & Kraus, 2011; Russo, Nicol, Musacchia & Kraus, 2004; Song, Nicol & Kraus, 2011). For instance, in the Parbery-Clark et al. (2011) study using the binaural presentation of a 170ms syllable /da/, longer latencies of all peaks and reduced amplitudes of onset peaks were reported in noise compared to the quiet condition, which might be influenced by the more robust responses obtained with binaural presentation (Skoe & Kraus, 2010). In terms of the effect of background noise on sustained FFR measures (e.g., F0 and F1), the findings of studies are contradictory. Whereas both our current and past studies (Koravand et al., 2019) using a 40ms syllable /da/ show a reduced magnitude of the F0 and F1 in noise in monolingual, bilingual, and multilingual participants, no such effect was observed in monolingual young adults in three studies using a 170ms or 250ms syllable /da/ (BinKhamis et al., 2019; Li & Jeng, 2011; Song, Skoe, Banai & Kraus, 2011). In addition, using a 300ms syllable /a/ in young monolingual adults, Al Osman, Giguère, and Dajani (2016) and Prévost, Laroche, Marcoux, and Dajani (2013) reported an enhanced FFR F0 magnitude in noise compared to the quiet condition. The discrepancy among studies over the noise effect on spectral components predominantly results from the speech stimulus duration, namely, the Fourier transform bin width. The frequency resolution of the Fourier transform equals 1/duration, in which a smaller number represents a higher frequency resolution (e.g., 1/300ms = 3.33 Hz frequency resolution in the Prévost et al. (2013) study with a longer speech stimulus) and a larger number reflects a lower frequency resolution (e.g., 1/30ms = 33.33 Hz frequency resolution in the Russo, Nicol, Musacchia, and Kraus (2004) study with a shorter speech stimulus). This difference suggests that, in studies using shorter speech stimuli, the amplitude at the bin where the response at the F0 is measured is strongly affected by surrounding noise, which leaks into the bin and intervenes in the amplitude calculation (Prévost et al., 2013). Overall, a review of these studies demonstrates to what extent the stimulus duration contributes to the impact of noise on speech-ABR spectral magnitudes.
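The frequency-resolution argument above reduces to Δf = 1/duration, which can be made concrete in a couple of lines (a sketch; the durations are those reported for the cited studies):

```python
# Sketch of the frequency-resolution argument: the FFT bin width is the
# reciprocal of the analyzed duration, so longer stimuli resolve the F0
# (~100 Hz here) more finely.
def fft_bin_width_hz(duration_ms):
    return 1000.0 / duration_ms  # duration in ms -> bin width in Hz

prevost = fft_bin_width_hz(300)  # 300ms stimulus -> ~3.33 Hz resolution
russo = fft_bin_width_hz(30)     # ~30ms analysis window -> ~33.33 Hz resolution
```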

Association between transient measures and language experience
It is noteworthy that, considering the high variability among bi-/multilinguals in their language experience, and given that the L2 AoA is continuously and dynamically modulated by L2 learning history and language experiences (Gullifer et al., 2018; Luk & Bialystok, 2013), "bilingual experience" is defined as a gradient and composite measure consisting of three primary features: the L2 AoA as a static factor, and the L2 proficiency and language usage (i.e., the amount of daily use of both languages) as two dynamic factors (Luk & Bialystok, 2013; Sulpizio, Del Maschio, Del Mauro, Fedeli & Abutalebi, 2020). In this study, the LEAP-Q was used to rate the language experience and proficiency of the multilingual participants. The correlational analysis exhibited a negative association between the total multilingual proficiency score and the V-A slope; namely, those with higher multilingual proficiency showed a sharper V-A slope. A higher L2 AoA also was correlated with a shorter wave D latency and V-A duration, as well as a sharper V-A slope. Our findings can be interpreted as showing that multilingual experience can modulate subcortical auditory timing processing, especially in brain regions associated with the V-A onset measures (i.e., the lateral lemniscus/inferior colliculus) (Kraus & Nicol, 2005). Whereas few studies have examined the relationship between bilingual/multilingual experience and speech processing at the subcortical level, accumulating evidence underscores the link between bilingual experience and brain measures at the cortical level (Jafari et al., 2021). In this regard, it has been found that simultaneous bilinguals show stronger brain functional connectivity relative to sequential bilinguals (Berken, Chai, Chen, Gracco & Klein, 2016), and that the impact of AoA is modulated by language proficiency and language usage (Sulpizio et al., 2020). Greater diversity in daily social language use also is correlated with enhanced brain functional connectivity in areas involved in language and cognitive control, which may contribute to enhanced cognitive flexibility in response to novel stimuli (Gullifer et al., 2018). Further studies are necessary to determine the association between various aspects of bi-/multilingual experience and electrophysiological markers of speech processing at the subcortical level.
In our study, participants had expressive competency (proficiency and fluency) in at least three languages, including differences in exposed languages in L1, L2, and L3. In the Skoe et al. (2017) study using a 170ms syllable /da/, early bilingual adults with diversity in both L1 (e.g., English, Mandarin, Spanish, Tamil, and Telugu) and L2 (e.g., English, French, Mongolian, Portuguese, Punjabi, Runyankore, Spanish, and the Fuzhou dialect of Chinese) showed more robust FFRs to the F0 compared to English-language monolinguals. This finding supported the idea that bilingual experience modulates the CANS in a similar way and primes the brain to respond to the F0, irrespective of the languages of exposure (Skoe et al., 2017). The findings of an fMRI study on young multilingual adults (Dutch, French, and English) using several language tasks also were evidence of overlapping regions of activation for different languages (Vingerhoets, Van Borsel, Tesink, van den Noort, Deblaere, Seurinck, Vandemaele & Achten, 2003). The findings of these studies may point to the notion that, regardless of diversity in languages of exposure, bi-/multilingual experience can lead to subcortical and cortical brain plasticity similarly. Nonetheless, given the limited number of studies in this area, further studies are necessary to draw a conclusion about differences between bilinguals and multilinguals in biomarkers of brain plasticity, as well as the impact of diversity in languages of exposure.
Strengths of this study include being the first to seek an auditory biomarker of multilingualism-associated subcortical plasticity, as well as the correlational analysis conducted to determine the link between evidence of subcortical plasticity and multilingual experience. We, however, did not objectively quantify language proficiency and language usage in the multilingual group. In addition, we did not use background noise with speech content, and our experimental setup was confined to a short speech stimulus (i.e., a 40ms syllable /da/), which did not allow us to characterize likely changes in sustained FFR measures (e.g., F0 and F1).

Conclusions
In this study, we aimed to investigate the impact of multilingualism on subcortical auditory processing in response to speech stimuli in silence and background noise. The multilingual group showed superiority in timing processing, in which the wave D latency and the V-A complex duration were shorter, and the V-A slope was sharper, compared to the monolingual group in silence. In the noise condition, most speech-ABR components degraded, and no significant difference was found between the two groups. In the correlational analysis, a significant relationship was found between some speech-ABR measures and multilingual experience, in which the total proficiency score was correlated with the V-A slope. The L2 AoA was also associated with wave D latency, V-A duration, and V-A slope. Our findings support stronger neural timing (neural synchrony) in multilinguals compared to monolinguals at the subcortical level, which is also corroborated by the self-report of multilingual experience. Replication of our findings with longer speech stimuli, to mark the potential multilingual-associated enhancement in sustained FFR measures (e.g., F0 and F1), as well as quantification of language usage and language proficiency through standard assessments, are recommended. Future studies are also suggested to investigate whether the number of languages acquired, as well as the extent of lexical differences within the languages spoken, can influence electrophysiological responses to speech stimuli at subcortical and cortical levels, and whether they are associated with various aspects of cognitive performance.
Fig 1C compares the grand average of speech-ABR between silence and noise conditions in the monolingual (Fig 1C1) and multilingual (Fig 1C2) groups.

Correlation between findings
A significant relationship was observed between the total proficiency score (the average proficiency score for speaking, understanding, and reading in three languages, obtained by the LEAP-Q) and the V-A slope (r = -0.583, p = 0.011) in silence (Fig 2A). A significant association also was found between the L2 age of acquisition (AoA) and each of the wave D latency (r = 0.548, p = 0.012), V-A duration (r = 0.462, p = 0.047), and V-A slope (r = 0.482, p = 0.043) in silence (Fig 2B-2D).

Fig 1.
Fig 1. Comparison between the two groups in the grand average of speech-ABR. A) The response average in silence in the two groups. B) The wave D latency (B1) and the V-A duration (B2) were shorter, and the V-A slope (B3) was sharper, in the multilingual versus monolingual group, corresponding with sections A1 and A2, respectively. C) In the noise condition, the response significantly dropped in both groups. Graphs show means ± 2 SE. Asterisks indicate *p < 0.05 or **p < 0.01.

Fig 2.
Fig 2. Correlation between multilingual experience and speech-ABR parameters in silence, including: A) a negative correlation between the total proficiency score in three languages and the V-A slope, and a positive relationship between L2 AoA and each of the wave D latency (B), V-A duration (C), and V-A slope (D). ABR: auditory brainstem response; AoA: age of acquisition; L2: second language.

Table 1.
Descriptive data showing L1, L2, and L3 languages, combinations of languages, and AoA and the proficiency level for L2 and L3 in the multilingual group

Table 2.
Comparison between speech ABR parameters in silence and noise conditions in the monolingual group

Table 3.
Comparison between speech ABR parameters in silence and noise conditions in the multilingual group