
Facial cues to anger affect meaning interpretation of subsequent spoken prosody

Published online by Cambridge University Press:  19 March 2024

Caterina Petrone
Affiliation:
CNRS, LPL, UMR 7309, Aix-Marseille Université, Aix-en-Provence, France
Francesca Carbone
Affiliations:
CNRS, LPL, UMR 7309, Aix-Marseille Université, Aix-en-Provence, France; School of Psychology, University of Kent, Canterbury, UK
Nicolas Audibert*
Affiliation:
Laboratoire de Phonétique et Phonologie, CNRS & Sorbonne Nouvelle, Paris, France
Maud Champagne-Lavau
Affiliation:
CNRS, LPL, UMR 7309, Aix-Marseille Université, Aix-en-Provence, France
Corresponding author: Nicolas Audibert; Email: nicolas.audibert@sorbonne-nouvelle.fr

Abstract

In everyday life, visual information often precedes auditory information, thereby influencing how the latter is evaluated (e.g., seeing somebody’s angry face leads us to expect them to speak to us angrily). Using the cross-modal affective paradigm, we investigated the influence of facial gestures when the subsequent acoustic signal is emotionally unclear (neutral, or produced with a limited repertoire of cues to anger). Auditory stimuli spoken with angry or neutral prosody were presented in isolation or preceded by pictures showing emotionally related or unrelated facial gestures (angry or neutral faces). In two experiments, participants rated the valence and emotional intensity of the auditory stimuli only. These stimuli were created from acted speech taken from movies, delexicalized via speech synthesis and then manipulated so that their global spectral characteristics were either partially preserved or degraded. All participants relied on facial cues when the auditory stimuli were acoustically impoverished; however, only a subgroup of participants used angry faces to interpret subsequent neutral prosody. Listeners are thus sensitive to facial cues when evaluating what they are about to hear, especially when the auditory input is less reliable. These results extend findings on face perception to the auditory domain and confirm inter-individual variability in how different sources of emotional information are weighed.
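The trial structure described above (a facial prime followed by an auditory target, rated on valence and emotional intensity) can be illustrated with a short sketch. Below is a minimal Python example using PsychoPy; the file names, prime duration and response scale are illustrative assumptions, not the authors’ actual parameters.

```python
# A minimal sketch of one trial in the cross-modal affective paradigm described
# above, written with PsychoPy. File names, durations and the response scale
# are illustrative assumptions, not the authors' actual parameters.
from psychopy import visual, core, sound, event

win = visual.Window(color="grey", units="pix")

face = visual.ImageStim(win, image="angry_face.png")      # visual prime (hypothetical file)
utterance = sound.Sound("neutral_prosody.wav")            # auditory target (hypothetical file)
prompt = visual.TextStim(win, text="Valence? (1 = very negative ... 7 = very positive)")

# 1) Show the facial prime for a fixed duration (assumed here to be 1 s).
face.draw()
win.flip()
core.wait(1.0)

# 2) Clear the screen and play the auditory stimulus.
win.flip()
utterance.play()
core.wait(utterance.getDuration())

# 3) Collect a valence rating on the auditory stimulus only.
prompt.draw()
win.flip()
keys = event.waitKeys(keyList=[str(i) for i in range(1, 8)])
valence = int(keys[0])

win.close()
```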

Information

Type
Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press

Figure 1. Spectrogram, waveform, smoothed f0 contour (in blue), intensity contour (in yellow) and TextGrid for the sentence Rends-le moi ‘Give it back to me’ in the (a) original, (b) morphing− and (c) morphing+ conditions. TextGrids contain the orthographic transcription (tier 1), the IPA transcription (tier 2) and the phonological annotation of the f0 contour within the French ToBI system (tier 3). Dashed lines indicate segmental boundaries. The three conditions match in their intensity contours, f0 contours and phonological annotation (LHi LH* L-L%). The example sentence consists of one Intonational Phrase containing one Accentual Phrase (the basic prosodic unit in French, composed of an early LHi rise and a late LH* rise; Jun & Fougeron, 2000).
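Contours like those in Figure 1 can be extracted in Python via Parselmouth, a Python interface to Praat. The sketch below is illustrative only: the file name is a hypothetical placeholder, and the analysis settings are Praat defaults rather than necessarily those used in the study.

```python
# A minimal sketch for extracting f0 and intensity contours like those shown
# in Figure 1, using Parselmouth (a Python interface to Praat).
import parselmouth

snd = parselmouth.Sound("rends_le_moi.wav")     # hypothetical stimulus file

pitch = snd.to_pitch()                          # default Praat pitch settings
f0 = pitch.selected_array["frequency"]          # f0 in Hz (0 where unvoiced)
f0_times = pitch.xs()

intensity = snd.to_intensity()
db = intensity.values[0]                        # intensity contour in dB
db_times = intensity.xs()
```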


Table 1. Means and standard deviations (in parentheses) of prosodic parameters for utterances from the original set and for stimuli in the morphing+ and morphing− conditions
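A summary of this kind can be computed from a long-format data file with one row per stimulus. The sketch below uses pandas; the file name and column names are hypothetical placeholders, not the measures actually reported in Table 1.

```python
# A sketch of how condition-wise means and standard deviations of prosodic
# parameters (as in Table 1) could be computed. File and column names are
# hypothetical.
import pandas as pd

df = pd.read_csv("prosodic_measures.csv")   # one row per stimulus; a 'condition'
                                            # column: original / morphing+ / morphing-

summary = (df.groupby("condition")[["mean_f0", "f0_range", "duration"]]
             .agg(["mean", "std"])
             .round(2))
print(summary)
```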


Figure 2. Illustration of the experimental paradigm.


Figure 3. Boxplots of valence (a) and intensity (b) scores across MORPHING, split by SPOKEN PROSODY and FACE.


Figure 4. Boxplots of valence (a) and intensity (b) scores across SPOKEN PROSODY, split by CLUSTER and FACE.
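Faceted boxplots of this kind (one panel per cluster, boxes split by face) can be drawn with seaborn. The sketch below is illustrative only; the data file and column names are hypothetical, and the same pattern applies to the MORPHING split in Figure 3.

```python
# A sketch of boxplots like those in Figures 3 and 4, drawn with seaborn.
# The data file and column names are hypothetical placeholders.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

ratings = pd.read_csv("ratings.csv")    # one row per trial: cluster, face,
                                        # spoken_prosody, valence, intensity

g = sns.catplot(data=ratings, kind="box",
                x="spoken_prosody", y="valence",
                hue="face", col="cluster")
g.set_axis_labels("Spoken prosody", "Valence score")
plt.show()
```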

Supplementary material
Petrone et al. supplementary material (File, 83.1 KB)