
Representations of facial expressions since Darwin

Published online by Cambridge University Press:  28 April 2022

David Perrett*
Affiliation:
School of Psychology and Neuroscience, University of St Andrews, St Mary's Quad, St Andrews, Fife KY16 9JP, UK
*Corresponding author. E-mail: dp@st-andrews.ac.uk

Abstract

Darwin's book on expressions of emotion was one of the first publications to include photographs (Darwin, The Expression of the Emotions in Man and Animals, 1872). The inclusion of expression photographs meant that readers could form their own opinions and could, like Darwin, survey others for their interpretations. As such, the images provided an evidence base and an ‘open source’. Since Darwin, increases in the representativeness and realism of emotional expressions have come from the use of composite images, colour, multiple views and dynamic displays. Research on understanding emotional expressions has been aided by the use of computer graphics to interpolate parametrically between different expressions and to extrapolate exaggerations. This review tracks the developments in how emotions are illustrated and studied and considers where to go next.

Information

Type
Review
Creative Commons
CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
Copyright © The Author(s), 2022. Published by Cambridge University Press

Figure 1. Images of health and sickness from the nineteenth and twenty-first centuries. (a) Galton's composite photographs of ‘health’ – a combination of 23 Royal Engineers, and ‘sickness’ – combinations of six and nine cases of tubercular disease (Galton, 1883). (b) Composite images of 22 individuals 2 hours after an injection of a placebo (left) or a bacterial endotoxin (right). Note the subtle change in expression after the toxin. Reproduced from Axelsson et al. (2018), Proceedings of the Royal Society B: Biological Sciences published under creative commons. Permissions for reproduction were obtained from https://www.copyright.com/. The author's permission was provided by email.
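The Galton-style compositing used in both panels amounts to a pixel-wise mean of face images that have first been warped to a common landmark template. A minimal sketch of that averaging step, assuming alignment has already been done (function and variable names here are illustrative, not from the original studies):

```python
import numpy as np

def composite(aligned_faces):
    """Pixel-wise mean of shape-aligned face images.

    `aligned_faces` has shape (n_faces, height, width) or
    (n_faces, height, width, channels), each image assumed
    already warped to a common landmark template.
    """
    return np.mean(np.asarray(aligned_faces, dtype=float), axis=0)

# Two tiny 2x2 greyscale "faces": idiosyncratic detail averages out.
faces = np.array([[[0.0, 1.0], [1.0, 0.0]],
                  [[1.0, 0.0], [0.0, 1.0]]])
avg = composite(faces)  # every pixel is 0.5
```

Averaging cancels features peculiar to individual sitters while any signal shared across the set, such as the subtle sickness expression in panel (b), survives in the composite.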


Figure 2. Composite faces posing happy, fearful and disgusted expressions. Row 1: a composite of images of 35 males posing with a neutral expression (0%) and a happy expression (100%). The differences in shape, colour and texture between the neutral and happy face images are used to transform the neutral image in 25% steps, creating images in which the happy expression emerges at gradually increasing intensity. The 125% and 150% images extrapolate the series, exaggerating the happy expression in caricature by 25% and 50%. Row 2: as row 1 for the expression of fear. Row 3: as row 1 for the expression of disgust. Composite images and emotion transforms produced by the author.
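The transform in this caption is, at heart, linear interpolation along the neutral-to-expressive difference, with extrapolation past 100% giving the caricatures. A hedged sketch, assuming faces are encoded as feature vectors (in practice shape, colour and texture are transformed jointly; `transform_expression` and the single-vector encoding are illustrative, not the author's actual pipeline):

```python
import numpy as np

def transform_expression(neutral, expressive, alpha):
    """Shift `neutral` toward (or beyond) `expressive` by fraction `alpha`.

    alpha = 0.25 ... 1.0 reproduces the 25-100% interpolation steps;
    alpha = 1.25 or 1.5 extrapolates, caricaturing the expression
    as in the 125% and 150% images.
    """
    neutral = np.asarray(neutral, dtype=float)
    expressive = np.asarray(expressive, dtype=float)
    return neutral + alpha * (expressive - neutral)

# A hypothetical 1-D feature (e.g. mouth-corner height):
# 0.0 in the neutral face, 1.0 in the full happy pose.
half_happy = transform_expression([0.0], [1.0], 0.5)   # [0.5]
caricature = transform_expression([0.0], [1.0], 1.5)   # [1.5]
```

The same one-line formula covers both interpolation and caricature: only the sign and magnitude of `alpha` change.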


Figure 3. Happy to angry facial expression continuum. Five steps are illustrated, progressing from 100% happy to 100% angry. The central image is ambiguous, showing characteristics of both happiness and anger. The upper part of the figure illustrates the categorical boundary between the images being categorised as angry or happy before training (see text). The lower section illustrates that, post-training, the boundary is shifted such that more ambiguous expressions are classified as happy. Reproduced from figure 1 in Penton-Voak et al. (2013) Psychological Science, 24, 688–697 with permission from the author.


Figure 4. A face with and without additional diagnostic colour information for the emotion of happiness. With the augmented colour information, the images were easier to classify as happy. Reproduced under a Creative Commons licence, cropping the original image to show only the face pair, from figure S6 of Benitez-Quiroz et al. (2018) Proceedings of the National Academy of Sciences, 115, 3581–3586, with permission from the author.


Figure 5. 3D faces varying in apparent trustworthiness. Frontal and half-profile views of male and female 3D head models varying in apparent personality. The head models were constructed by averaging together the 3D surface shape and texture of male and female faces separately (middle row). A collection of 118 faces (male = 50, female = 68) was rated for how trustworthy each looked while being rotated to reveal its 3D structure. For each gender, an average 3D head shape was formed from the faces that appeared high in trustworthiness; separately, an average was formed from those that appeared low in trustworthiness. These two averages defined a trustworthiness trajectory in 3D shape space for men and for women. Male and female composite faces were then transformed in shape along this trajectory to decrease apparent trustworthiness (top row) or to increase it (bottom row). Methods for averaging and transforming have been presented elsewhere (Holzleitner et al., 2014). 3D head models and apparent trait transforms produced by the author.
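The trustworthiness trajectory described here works by the same vector arithmetic: the difference between the high- and low-rated averages defines a direction in shape space, and composites are shifted along it. A sketch under the same feature-vector assumption (names and the toy 2-feature space are illustrative, not the published method):

```python
import numpy as np

def trait_trajectory(high_rated, low_rated):
    """Direction in face-shape space from the low- to the high-
    trustworthiness composite (each input: n_faces x n_features)."""
    high = np.mean(np.asarray(high_rated, dtype=float), axis=0)
    low = np.mean(np.asarray(low_rated, dtype=float), axis=0)
    return high - low

def shift_along(face, trajectory, beta):
    """Move a face along the trajectory: beta > 0 raises apparent
    trustworthiness, beta < 0 lowers it."""
    return np.asarray(face, dtype=float) + beta * np.asarray(trajectory)

# Toy 2-feature shape space.
traj = trait_trajectory([[2.0, 0.0]], [[0.0, 0.0]])   # [2.0, 0.0]
less_trusty = shift_along([1.0, 1.0], traj, -0.5)     # [0.0, 1.0]
```

Because the trajectory is derived separately per gender, each composite is shifted along its own gender-specific direction.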


Figure 6. Disgust expression modified by context. An isolated facial expression of disgust was placed in a ‘disgust’ context (left) or in a ‘pride’ context (right). While the disgust expression was accurately categorised as negatively valenced in the disgust context, it was never categorised as having a negative valence in the pride context. Reproduced from figure 4a, Aviezer et al. (2008) Psychological Science, 19, 724–732 with permission from the author. Permissions for reproduction were obtained from https://www.copyright.com/. The author's permission was provided by email.


Figure 7. Comparing the representation of three expressions for one European (left) and one Chinese participant (right). The mouth region is more informative for the European and the eye region is more informative for the Chinese participant. Reproduced from Movie S2, Jack et al. (2012) Proceedings of the National Academy of Sciences, 109, 7241–7244 with permission from the author. For the dynamic movie see http://www.pnas.org/lookup/suppl/doi:10.1073/pnas.1200155109/-/DCSupplemental/sm02.avi.