
How first- and second-language emotion words influence emotion perception in Swedish–English bilinguals

Published online by Cambridge University Press:  19 January 2024

Marie-France Champoux-Larsson*
Affiliation:
Department of Psychology and Social Work, Mid Sweden University, Östersund, Sweden; Department of Psychology, University of Chicago, Chicago, USA
Erik C. Nook
Affiliation:
Department of Psychology, Princeton University, Princeton, USA
Corresponding author: Marie-France Champoux-Larsson; E-mail: mfclarsson@gmail.com

Abstract

Emotional experiences are often dulled in one's second language. We tested whether emotion concepts are more strongly associated with first language (L1) than second language (L2) emotion words. Participants (140 L1-Swedish–L2-English bilinguals) saw a facial expression of an emotion (cue) followed by a target, which could either be another facial expression, an L1 emotion word, or an L2 emotion word. Participants indicated whether the cue and target represented the same or different emotions as fast as possible. Participants were faster and more accurate in both the L1 and L2 word conditions compared to the face condition. However, no significant differences emerged between the L1 and L2 word conditions, suggesting that emotion concepts are not more strongly associated with L1 than L2 emotion words. These results replicate prior research showing that L1 emotion words speed facial emotion perception and provide initial evidence that words (not only first language words) shape emotion perception.

Information

Type
Research Article
Creative Commons
Creative Commons License - CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
Copyright © The Author(s), 2024. Published by Cambridge University Press

Figure 1 Study design. Note. This figure illustrates examples of congruent and incongruent trials in each condition (i.e., Face, L1 Word, L2 Word). On each trial, participants saw a fixation cross, a facial expression of emotion (the cue), a blank-screen inter-stimulus interval (ISI), either another facial expression of emotion or an emotion word in Swedish or English (the target), and a blank-screen inter-trial interval (ITI). Participants indicated via button press, as fast as possible, whether the emotions of the cue and target matched. Half of the trials in each condition were congruent (i.e., the cue and target expressed the same emotion) and half were incongruent (i.e., they expressed different emotions). This resulted in a 2 [Congruence: congruent vs. incongruent] × 3 [Context: Face, L1 word, L2 word] design assessing how each condition affects the speed and accuracy of emotion perception.


Figure 2 Overall Reaction Times and Accuracy. Note. Panel A displays reaction times (RT) for each condition. Main effects indicated that participants were faster on congruent than incongruent trials and on both L1 and L2 word trials than face trials. Preregistered ANOVAs showed that the difference between congruent and incongruent reaction times was larger for L1 word trials than for face trials, indicating that emotional faces prime congruent L1 emotion words more than congruent facial expressions, replicating prior work. Contrary to hypotheses, however, the impact of congruence was similar for L1 and L2 words, indicating comparable effects for both L1 and L2 emotion words. Panel B displays sensitivity (d′), a signal-detection measure of accuracy, for each condition. Participants were less able to correctly identify cue–target matches in the face condition than in both the L1 and L2 word conditions. Error bars represent 95% confidence intervals adjusted for within-subject comparisons.


Figure 3 Relations with Self-Reported L2 Proficiency and LexTALE Scores. Note. Panel A presents relations between reaction time (RT) and self-reported L2 proficiency (non-centered scores) for each condition. Preregistered analyses show that higher L2 proficiency is related to faster responding on congruent and incongruent trials in the L1 and L2 word conditions (dark orange, light orange, dark red, and pink lines). Panel B presents relations between RT and objectively assessed L2 proficiency (non-centered scores) using the LexTALE test. Results show that increasing LexTALE scores correlated with reductions in the difference between Face-Incongruent (light green) and Face-Congruent (dark green) trials. That is, individuals with lower L2 proficiency showed a greater tendency for facial expressions to speed responses to congruent facial expressions. Panels C and D present relations between sensitivity (d′), a signal-detection measure of accuracy, and both self-reported and LexTALE L2 proficiency for face (green), L1 word (orange), and L2 word (red) trials. Both analyses show that increased L2 proficiency is related to higher overall sensitivity. No interaction with the context conditions emerged. Grey-shaded regions represent 95% confidence intervals.

Supplementary material: File

Champoux-Larsson and Nook supplementary material (709.6 KB)