
Prediction in challenging situations: Most bilinguals can predict upcoming semantically-related words in their L1 source language when interpreting

Published online by Cambridge University Press: 25 April 2022

Yiguang Liu
Affiliation:
Department of Linguistics, Zhejiang University, Hangzhou, China; Research Center for Applied Mathematics and Machine Intelligence, Zhejiang Lab, Hangzhou, China
Florian Hintz
Affiliation:
Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
Junying Liang
Affiliation:
Department of Linguistics, Zhejiang University, Hangzhou, China
Falk Huettig*
Affiliation:
Department of Linguistics, Zhejiang University, Hangzhou, China; Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands; Centre for Language Studies, Radboud University, Nijmegen, Netherlands
* Address for correspondence: Falk Huettig, Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD Nijmegen, The Netherlands. E-mail: Falk.Huettig@mpi.nl

Abstract

Prediction is an important part of language processing. An open question is to what extent people predict language in challenging circumstances. Here we tested the limits of prediction by asking bilingual native speakers of Dutch to interpret Dutch sentences into English. In two visual world experiments, we recorded participants' eye movements to co-present visual objects while they carried out consecutive and simultaneous interpreting tasks. Most participants showed anticipatory eye movements to semantically-related upcoming target words in their L1 source language during both consecutive and simultaneous interpretation. During simultaneous interpretation, however, a quarter of participants did not move their eyes at all, an extremely unusual behaviour in visual world studies. Overall, the findings suggest that most people predict in the source language even in challenging interpreting situations. Further work is required to understand why a substantial subset of individuals made no (anticipatory) eye movements during simultaneous interpretation.

Information

Type
Research Article
Creative Commons
Creative Commons License: CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
Copyright © The Author(s), 2022. Published by Cambridge University Press

Table 1a. Descriptive results of the NART and Peabody tests in Experiment 1


Table 1b. Correlations between self-rating scores (reading, speaking, writing, understanding spoken language), NART score, and Peabody score in Experiment 1


Fig. 1. Example display for the target noun "apple" with unrelated distractors.


Fig. 2. Trial design of Experiment 1. Each trial began with a central fixation dot presented for 2000 ms. Next, a display of four objects (one target, three distractors) appeared, and playback of the sentence started; the visual display was timed to precede the onset of the spoken verb by one second. After the offset of the spoken sentence, participants were instructed to initiate their interpretation. The visual display remained in view until the end of the trial. A diagram of the consecutive interpreting task used in Experiment 1 is also shown.
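
For readers who want the timing logic made concrete, the following is a minimal sketch of one Experiment 1 trial, assuming PsychoPy and hypothetical stimulus files and timing fields; it illustrates the described design and is not the authors' actual experiment script.

    # Minimal sketch of one consecutive-interpreting trial (hypothetical code,
    # not the authors' script). Assumes PsychoPy and pre-measured verb onsets.
    from psychopy import core, sound, visual

    win = visual.Window(size=(1024, 768), color="white", units="pix")

    def run_trial(display_image, audio_file, verb_onset_in_audio):
        # verb_onset_in_audio: seconds from sentence onset to verb onset
        # 1. Central fixation dot for 2000 ms
        fixation = visual.Circle(win, radius=6, fillColor="black", lineColor="black")
        fixation.draw()
        win.flip()
        core.wait(2.0)

        # 2. Show the four-object display; it must precede the spoken verb by
        #    one second, so audio starts (1.0 - verb_onset_in_audio) s later.
        display = visual.ImageStim(win, image=display_image)
        display.draw()
        win.flip()
        core.wait(max(0.0, 1.0 - verb_onset_in_audio))

        # 3. Play the Dutch sentence while the display stays in view
        sentence = sound.Sound(audio_file)
        sentence.play()
        core.wait(sentence.getDuration())

        # 4. Consecutive interpreting: the participant speaks only after
        #    sentence offset; the display remains until the end of the trial.
        core.wait(5.0)  # hypothetical response window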


Fig. 3. Eye-tracking results of Experiment 1. The proportion of fixations to the target object (solid lines) and to the averaged distractor objects (dashed lines) over time for the predictable (green) and nonpredictable (red) conditions. The onset of the target noun in the spoken sentence was at time zero. C_speech_onset = onset of the spoken sentence (M = -2035 ms, SD = 150), C_verb_onset = onset of the verb in the spoken sentence (M = -1481 ms, SD = 160), C_speech_offset = offset of the spoken sentence (M = 449 ms, SD = 122), P_speech_onset = onset of the interpreted sentence (M = 1708 ms, SD = 419), P_verb_onset = onset of the verb in the interpreted sentence (M = 2521 ms, SD = 561), P_target_onset = onset of the target noun in the interpreted sentence (M = 3272 ms, SD = 691), P_speech_offset = offset of the interpreted sentence (M = 4066 ms, SD = 1005).
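
As an illustration of how fixation-proportion curves like these can be derived, here is a hedged sketch that bins sample-level gaze data relative to target-noun onset and computes proportions per condition; the column names and the 50-ms bin size are assumptions, not the authors' pipeline.

    # Hypothetical computation of fixation-proportion curves (pandas sketch).
    import pandas as pd

    # One row per eye-tracking sample: trial, condition ('predictable' or
    # 'nonpredictable'), time (ms relative to target-noun onset), and roi
    # ('target', 'd1', 'd2', 'd3', or 'none').
    samples = pd.read_csv("samples.csv")

    samples["bin"] = (samples["time"] // 50) * 50            # 50-ms time bins
    samples["on_target"] = samples["roi"] == "target"
    samples["on_distractor"] = samples["roi"].isin(["d1", "d2", "d3"])

    props = (samples.groupby(["condition", "bin"])
                    .agg(p_target=("on_target", "mean"),
                         p_distractor=("on_distractor", "mean"))
                    .reset_index())
    props["p_distractor"] /= 3   # average over the three distractor objects
    print(props.head())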


Table 2a. Descriptive results of the NART and Peabody tests in Experiment 2


Table 2b. Correlations between self-rating scores (reading, speaking, writing, understanding spoken language), NART score, and Peabody score in Experiment 2


Fig. 4. Trial design of Experiment 2. Each trial began with a central fixation dot presented for 2000 ms. Next, a display of four objects (one target, three distractors) appeared, and playback of the sentence started; the visual display was timed to precede the onset of the spoken verb by one second. While hearing the spoken sentence, participants were asked to initiate their interpretation as soon as possible and to finish it within 2000 ms after sentence offset. The visual display remained in view until the end of the trial. A diagram of the simultaneous interpreting task used in Experiment 2 is also shown.
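
The simultaneous variant differs from the Experiment 1 sketch above only in when the spoken response occurs; a hedged sketch of that timing, again with hypothetical names:

    # Hypothetical timing for the simultaneous phase (not the authors' script).
    from psychopy import core, sound

    def simultaneous_phase(audio_file):
        sentence = sound.Sound(audio_file)
        sentence.play()                    # the participant interprets while
        core.wait(sentence.getDuration())  # the sentence is still playing
        core.wait(2.0)                     # interpretation must finish within
                                           # 2000 ms after sentence offset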


Fig. 5. Eye-tracking results of Experiment 2. The proportion of fixations to the target object (solid lines) and to the averaged distractor objects (dashed lines) over time for the predictable (green) and nonpredictable (red) conditions. The shaded grey areas surrounding the lines represent 95% confidence intervals for each object and condition. The onset of the target noun in the spoken sentence was at time zero. C_speech_onset = onset of the spoken sentence (M = -2035 ms, SD = 147), C_verb_onset = onset of the verb in the spoken sentence (M = -1480 ms, SD = 158), C_speech_offset = offset of the spoken sentence (M = 448 ms, SD = 122), P_speech_onset = onset of the interpreted sentence (M = -955 ms, SD = 662), P_verb_onset = onset of the verb in the interpreted sentence (M = 152 ms, SD = 854), P_target_onset = onset of the target noun in the interpreted sentence (M = 1147 ms, SD = 608), P_speech_offset = offset of the interpreted sentence (M = 1909 ms, SD = 616).


Table 3. Linear mixed-effects model output for the analysis of eye gaze (log-transformed fixation ratios) in the two experiments (Experiment 1, consecutive interpreting [CI] vs. Experiment 2, simultaneous interpreting [SI]), the predictable and nonpredictable conditions, and participants' NART and PPVT (Peabody) scores.
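
For orientation, a model of this general shape could be fitted as follows; this is a minimal statsmodels sketch under assumed column names and a simplified random-effects structure (random intercepts by participant only), not the model reported in the paper.

    # Hypothetical mixed-model fit (statsmodels sketch, simplified structure).
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("gaze.csv")  # assumed trial-level data with a
                                  # log-transformed fixation ratio per trial

    model = smf.mixedlm(
        "log_ratio ~ experiment * condition + nart + ppvt",
        data=df,
        groups=df["participant"],   # random intercepts by participant; crossed
    )                               # item effects would need e.g. lme4 in R
    result = model.fit()
    print(result.summary())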


Table 4. Linear mixed-effects model output for the analysis of eye gaze (log-transformed fixation ratios) in the two present experiments (Experiment 1, CI vs. Experiment 2, SI), compared with Experiment 1 of Hintz et al. (2017).