Published online by Cambridge University Press: 14 May 2019
This research report examines the occurrence of listener visual cues during nonunderstanding episodes and investigates raters’ sensitivity to those cues. Nonunderstanding episodes (n = 21) and length-matched understanding episodes (n = 21) were taken from a larger dataset of video-recorded conversations between second language (L2) English speakers and a bilingual French-English interlocutor (McDonough, Trofimovich, Dao, & Abashidze, 2018). Episode videos were analyzed for the occurrence of listener visual cues, such as head nods, blinks, facial expressions, and holds. Videos of the listener’s face were manipulated to create three rating conditions: clear voice/clear face, distorted voice/clear face, and clear voice/blurred face. Raters from the same speech community (N = 66) were each assigned to one of the three conditions and asked to assess the listener’s comprehension. Results revealed differences in the occurrence of listener visual cues between understanding and nonunderstanding episodes. In addition, raters gave lower ratings of listener comprehension when they had access to the listener’s visual cues.
Funding for this study was provided through a grant awarded to the first two authors by the Social Sciences and Humanities Research Council of Canada (435-2105-1206). We would like to thank Dave Dufour, the research assistant who carried out the communicative tasks with the participants, and the research assistants who monitored the eye-tracking system: Phung Dao, Malene Bodington, and Emily Sheepy. We also appreciate the hard work of Yang Gao and Ashley Montgomery, who designed the rating stimuli and administered the rating tasks, and of the research assistants who helped with data transcription and coding: Elissa Allaw, Helene Bramwell, Diana Chojczak, Anne Chretien, Emilie Ladouceur, Rachael Lindberg, Dana Martin, Florina Sylla, and Pakize Uludag.