
Regularized Multilevel Multinomial Regression for Select-All-That-Apply Responses and High-Dimensional Predictors with Applications to Perception of Facial Expressions

Published online by Cambridge University Press:  22 April 2026

Nathaniel E. Helwig*
Affiliation: Psychology and Statistics, University of Minnesota Twin Cities, USA
Keding Chen
Affiliation: Psychology, University of Minnesota Twin Cities, USA
Stephen J. Guy
Affiliation: Computer Science & Engineering, University of Minnesota Twin Cities, USA
Sofia Lyford-Pike
Affiliation: Otolaryngology-Head & Neck Surgery, University of Minnesota Twin Cities, USA
*Corresponding author: Nathaniel E. Helwig; Email: helwig@umn.edu

Abstract

This article develops an analysis pipeline for quantifying mouth shape variation and relating it to the emotions perceived from facial expressions. We use open-source data containing ratings from 802 fairgoers on 27 smile-like expressions. Each rater was given a list of seven emotions (happy, sad, anger, contempt, fear, surprise, and disgust) and asked to select all of the words that best described the facial expression. To develop a generalizable method for quantifying mouth shape variation, we leverage statistical shape analysis techniques to parameterize each mouth’s shape in terms of 30 systematically placed landmarks that outline the upper and lower lips. Furthermore, we demonstrate that a three-dimensional representation of these landmark coordinates produces an interpretable feature set that outperforms the original and full-dimensional feature sets in terms of predictive performance. To connect the mouth shape features to the emotion ratings, we develop a nonparametric multinomial regression model that is capable of shrinkage and selection with high-dimensional predictors. Our results demonstrate that the proposed method can produce easily interpretable model predictions that enhance our understanding of the ways in which subtle variations in mouth shape affect the perception of a facial expression.
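The pipeline summarized above (landmark coordinates reduced to a low-dimensional shape representation, then connected to emotion categories by a regression with shrinkage and selection) can be illustrated with generic tools. The following is a minimal sketch only, not the authors' implementation: it uses scikit-learn's PCA and an L1-penalized multinomial logistic regression in place of the article's multilevel model, and the landmark data are simulated placeholders.

```python
# Illustrative sketch (not the authors' method): project flattened 2-D landmark
# coordinates onto 3 principal components, then fit a lasso-penalized
# multinomial logistic regression to emotion labels.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_smiles, n_landmarks = 27, 30
X = rng.normal(size=(n_smiles, 2 * n_landmarks))  # simulated (x, y) landmarks
y = np.arange(n_smiles) % 7                       # placeholder emotion labels (7 categories)

pca = PCA(n_components=3)                         # 3-D shape representation
scores = pca.fit_transform(X)

# The L1 penalty performs shrinkage and selection on the shape features
model = LogisticRegression(penalty="l1", solver="saga", C=1.0, max_iter=5000)
model.fit(scores, y)
probs = model.predict_proba(scores)               # per-emotion probabilities per smile
```

The article's actual model additionally accommodates select-all-that-apply responses and rater-level random effects, which this single-label sketch does not capture.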

Information

Type
Application and Case Studies - Original
Creative Commons
Creative Commons License - CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2026. Published by Cambridge University Press on behalf of Psychometric Society
Figure 1 Reproduction of Figure 1 from Helwig et al. (2017).

Table 1 Number of raters and emotion probability estimates for each smile.

Figure 2 The $L = 30$ landmarks for selected smiles.

Figure 3 The mouth shapes created by the landmarks for selected smiles.

Figure 4 The uncentered principal component scores for the 27 smiles.

Figure 5 The mouth shapes created by varying principal component scores.

Figure 6 Box plots showing the mean absolute error for each combination of random effects variance (rows) and feature set (columns). Within each panel, the box plots display the performance for each penalty and tuning method across 10 replications.

Figure 7 Explained multinomial deviance paths for each feature set and penalty. Within each panel, a vertical solid/dashed line denotes the value of $\lambda $ that minimizes the AIC/GCV.

Figure 8 Variable importance indices for each emotion and penalty.

Figure 9 Predicted probability of perceiving each emotion for the 27 smiles.

Figure 10 Two mechanisms for improving the perceptions of smile 7.

Figure 11 Violin plots of the random effects by rater’s gender and emotion. Made using the vioplot package (Adler et al., 2025).

Supplementary material: Helwig et al. supplementary material (File, 19 MB)