
First language effects on incidental vocabulary learning through bimodal input

A multisite, preregistered, and close replication of Malone (2018)

Published online by Cambridge University Press:  27 June 2024

The TwiLex Group
Affiliation:
Yingzhao Chen (National University of Singapore), Jianwu Gao (Capital Normal University), Eva-Maria Hirzinger-Unterrainer (University of Innsbruck), Bronson Hui (University of Maryland, College Park), Benjamin Kremmel (University of Innsbruck), Peilin Li (The Hong Kong Polytechnic University), Shuang Ma (Capital Normal University), Ryo Maie (Tohoku University), Eva Puimège (KU Leuven), John Rogers (The Hong Kong Polytechnic University), and Micheline Wilson (University of Maryland, College Park)

Abstract

Despite accumulating evidence that reading-while-listening (RWL) facilitates vocabulary learning, accounts of why such bimodal input is effective remain largely unexamined. In this article, we report a close replication of Malone (2018), purposefully manipulating participants’ native language (L1) background to shed light on whether the auditory component of RWL promotes spoken-written form mappings. One hundred and eighty-eight English learners from Austria, Belgium, Hong Kong, and Beijing read, or read and listened to, four stories containing target words for learning. They then completed two surprise vocabulary tests and two assessments of working memory capacity. Of the initial study’s findings, we replicated only the correlation between working memory capacity and the form recognition test. Our manipulation, however, revealed an important role of L1 background in the effectiveness of RWL for form recognition knowledge. We discuss the implications for RWL research.

Information

Type
Replication Study
Creative Commons
CC BY-NC-SA 4.0
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike licence (http://creativecommons.org/licenses/by-nc-sa/4.0), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the same Creative Commons licence is used to distribute the re-used or adapted article and the original article is properly cited. The written permission of Cambridge University Press must be obtained prior to any commercial use.
Copyright
© The Author(s), 2024. Published by Cambridge University Press

Table 1. Comparison between the initial and the current replication study.


Figure 1. Screenshot of the form-recognition posttest for the Beijing site.


Figure 2. Screenshot of the form-meaning posttest for the Beijing site.


Figure 3. Screenshot of the cloze test for the Beijing site.


Figure 1. Descriptive summary of participants’ performance on the vocabulary tests. Note: The error bars show 95% confidence intervals.


Figure 2. Cohen’s d of each group difference. Note: The error bars show 95% confidence intervals.


Table 2. Results of testing informative hypotheses for the vocabulary tests.


Figure 3. Correlation between the vocabulary tests and working memory scores. Note: The error bars indicate 95% credible intervals.


Table 3. Parameter estimates from GLMM for the form recognition test.


Figure 4. Model-based marginal means of the participants’ performance on the form recognition test (left panel) and conditional means as a function of working memory scores (right panel).


Table 4. Parameter estimates from GLMM for the form-meaning connection test.


Figure 5. Model-based marginal means of the participants’ performance on the form-meaning connection test (left panel) and conditional means as a function of working memory scores (right panel).