
Didn't hear that coming: Effects of withholding phonetic cues to code-switching

Published online by Cambridge University Press:  31 January 2020

Alice Shen*, Susanne Gahl and Keith Johnson

Affiliation: University of California, Berkeley

*Address for correspondence: Alice Shen, E-mail: azshen@berkeley.edu

Abstract

Code-switching has been found to incur a processing cost in auditory comprehension. However, listeners may have access to anticipatory phonetic cues to code-switches (Piccinini & Garellek, 2014; Fricke et al., 2016), thus mitigating switch cost. We investigated effects of withholding anticipatory phonetic cues on code-switched word recognition by splicing English-to-Mandarin code-switches into unilingual English sentences. In a concept monitoring experiment, Mandarin–English bilinguals took longer to recognize code-switches, suggesting a switch cost. In an eye tracking experiment, the average proportion of all participants' looks to pictures corresponding to sentence-medial code-switches decreased when cues were withheld. Acoustic analysis of stimuli revealed tone-specific pitch contours before English-to-Mandarin code-switches, consistent with previous work on tonal coarticulation. We conclude that withholding anticipatory phonetic cues can negatively affect code-switched word recognition: therefore, bilingual listeners use phonetic cues in processing code-switches under normal conditions. We discuss the implications of tonal coarticulation for mechanisms underlying phonetic cues to code-switching.

Information

Type
Research Article
Creative Commons License: CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
Copyright © The Author(s) 2020

Fig. 1. Splicing auditory stimuli. The speaker recorded two sentence frames per experimental item: unilingual English sentences were recorded twice, and code-switched sentences were additionally recorded as unilingual English sentences. Target words were then cut from the unilingual or code-switched sentence frame and spliced into the fully English sentence frame.


Table 1. Speaker self-rated proficiency.


Table 2. Mean participant age of acquisition and self-rated proficiency. Standard deviations are in parentheses.


Table 3. Mean raw reaction times (ms), as a function of switch, splice, and position. Standard deviations are in parentheses.


Fig. 2. Log-transformed mean reaction times, by position, switch, and splicing conditions. Vertical lines represent standard errors.


Fig. 3. Empirical data (points) and model fit (lines) for looks to the target picture, across all participants and trials.


Table 4. Number of Mandarin words with tones 1–4 in each experiment.


Fig. 4. Looks to sentence-medial code-switched (Mandarin) tone 2 targets.


Fig. 5. Pitch contours for sentence-medial unspliced stimuli in Experiment 2, by tone. Contours for code-switched stimuli are labeled by tone, and unilingual stimuli are labeled as English.

Supplementary material

Shen et al. supplementary material (PDF, 3.7 MB)