
Examining the effectiveness of bilingual subtitles for comprehension: An eye-tracking study

Published online by Cambridge University Press:  12 December 2022

Andi Wang*
Affiliation:
National Research Centre for Foreign Language Teaching Materials, School of English and International Studies, Beijing Foreign Studies University, Beijing, China
Ana Pellicer-Sánchez
Affiliation:
IOE, UCL’s Faculty of Education and Society, University College London, London, UK
*Corresponding author. E-mail: andi.wang@bfsu.edu.cn

Abstract

The present study examined the relative effectiveness of bilingual subtitles for L2 viewing comprehension compared with other subtitling types. Learners’ allocation of attention to the image and to the subtitles/captions in the different viewing conditions, as well as the relationship between attention and comprehension, was also investigated. A total of 112 Chinese learners of English watched an English documentary clip in one of four conditions (bilingual subtitles, captions, L1 subtitles, no subtitles) while their eye movements were recorded. The results revealed that bilingual subtitles were as beneficial as L1 subtitles for comprehension, and both of these conditions outscored captions and no subtitles. Participants using bilingual subtitles spent significantly more time processing the L1 lines than the L2 lines. The L1 lines of bilingual subtitles were processed for significantly longer than those of L1 subtitles, whereas the L2 lines were processed for significantly less time than those of captions. No significant relationship was found between processing time and comprehension for either the L1 or the L2 lines of bilingual subtitles.

Type
Research Article
Copyright
© The Author(s), 2022. Published by Cambridge University Press


Supplementary material

Wang and Pellicer-Sánchez supplementary material: Appendix (File, 89.2 KB)