
Enhancing Empathic Accuracy: Penalized Functional Alignment Method to Correct Temporal Misalignment in Real-Time Emotional Perception

Published online by Cambridge University Press:  05 September 2025

Linh H. Nghiem
Affiliation:
School of Mathematics and Statistics, University of Sydney, Sydney, NSW, Australia
Jing Cao
Affiliation:
Department of Statistics and Data Science, Southern Methodist University, Dallas, TX, USA
Chrystyna D. Kouros
Affiliation:
Department of Psychology, Southern Methodist University, Dallas, TX, USA
Chul Moon*
Affiliation:
Department of Statistics and Data Science, Southern Methodist University, Dallas, TX, USA
*
Corresponding author: Chul Moon; Email: chulm@smu.edu

Abstract

Empathic accuracy (EA) is the ability to accurately understand another person’s thoughts and feelings, which is crucial for social and psychological interactions. Traditionally, EA is assessed by comparing a perceiver’s moment-to-moment ratings of a target’s emotional state with the target’s own self-reported ratings at corresponding time points. However, misalignments between these two sequences are common due to the complexity of emotional interpretation and individual differences in behavioral responses. Conventional methods often ignore or oversimplify these misalignments, for instance by assuming a fixed time lag, which can introduce bias into EA estimates. To address this, we propose a novel alignment approach that captures a wide range of misalignment patterns. Our method leverages the square-root velocity framework to decompose emotional rating trajectories into amplitude and phase components. To ensure realistic alignment, we introduce a regularization constraint that limits temporal shifts to ranges consistent with human perceptual capabilities. This alignment is efficiently implemented using a constrained dynamic programming algorithm. We validate our method through simulations and real-world applications involving video and music datasets, demonstrating its superior performance over traditional techniques.
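As an illustrative sketch of the approach described in the abstract, the code below discretizes two rating trajectories, maps them to their square-root velocity (SRVF) representations, and aligns them with a dynamic program whose warping path is confined to a band, mimicking the constraint that limits temporal shifts to perceptually plausible ranges. The function names and the band-style constraint are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def srvf(f, t):
    # Square-root velocity transform: q(t) = sign(f'(t)) * sqrt(|f'(t)|)
    df = np.gradient(f, t)
    return np.sign(df) * np.sqrt(np.abs(df))

def banded_dtw(q1, q2, band):
    # Constrained dynamic program: the warping path is restricted to
    # |i - j| <= band samples, a discrete stand-in for the penalty that
    # keeps temporal shifts within a human-plausible window.
    n, m = len(q1), len(q2)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - band), min(m, i + band) + 1):
            cost = (q1[i - 1] - q2[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Example: a perceiver trace that lags the target by half a second
t = np.linspace(0, 2 * np.pi, 100)
q_target = srvf(np.sin(t), t)
q_perceiver = srvf(np.sin(t - 0.5), t)
d_wide = banded_dtw(q_target, q_perceiver, band=20)    # shift fits in the band
d_narrow = banded_dtw(q_target, q_perceiver, band=1)   # band too tight to absorb it
```

Because a wider band admits a superset of warping paths, `d_wide` can never exceed `d_narrow`; choosing the band (here, the stand-in for the warping limit $\nu$) trades alignment flexibility against the risk of over-warping.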

Information

Type
Application and Case Studies - Original
Creative Commons
Creative Commons License: CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of Psychometric Society

Figure 1 Example of real-time EA data collection procedure.


Figure 2 Illustration of different relations between a target’s rating, a perceiver’s latent rating, and the perceiver’s observed rating. Discrepancy A denotes the difference between the target’s rating and the perceiver’s observed rating. Disagreement B captures the inconsistency between the target’s rating and the perceiver’s latent rating, which is the true focus of EA. Misalignment C refers to the divergence between the perceiver’s observed rating and their latent rating, often due to distortions in expressing their internal judgment. Discrepancy A can arise from both Disagreement B and Misalignment C. Most conventional EA methods mistakenly assess Discrepancy A, thereby conflating measurement error with genuine empathic inaccuracy.


Figure 3 (a) An example of misaligned rating sequences between a perceiver and a target. The solid red line represents the target’s ratings and the black dashed line the perceiver’s ratings. (b) Aligned ratings for the perceiver. The green dashed line shows the 6-second delay adjustment, and the purple dashed line shows the aligned ratings using the penalized SRVF representation.


Figure 4 Example target and perceiver’s emotion ratings of Devlin et al. (2014). (Left): target x (solid), perceiver’s observed response y (dash), and estimated perceiver’s response $\hat {y} = y \circ \hat {\gamma }$ (dot dash). (Right): estimated warping function $\hat {\gamma }$.


Table 1 Performance of different alignment methods in the simulation studies under different warping limits $\eta $, as measured by $d_a$ between the estimated perceiver $\hat {y}$ and the true latent perceiver $a$, and by the $(10\times )$ bias of the estimated correlation between the true latent perceiver and the target.


Figure 5 The penalized SRVF alignment results under 21 different true warping limits $\eta \in \{0,0.8,\ldots ,8,\ldots ,15.2, 16\}$ seconds when the upper limit of alignment is $\nu =8$ seconds. The red dotted line in the mean bias plot marks the unbiased level.


Table 2 Comparison results based on $d_a$ between the estimated perceiver $\hat {y}$ and the true latent perceiver $a$, and on the $(10\times )$ bias of the estimated correlation between the true latent perceiver and the target.


Figure 6 Boxplots for the estimated amount of warping, as measured by the Fisher–Rao metric between the identity warping $\gamma _{id}$ and the warping function estimated by the unpenalized and penalized SRVF methods, with $\nu = 8$ seconds for each video.
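The amount-of-warping metric referenced in this caption can be sketched numerically: for warping functions, the Fisher–Rao distance reduces to the arc length between square-root-slope representations, so the distance from a warping $\gamma$ to the identity $\gamma_{id}$ is $\arccos\!\big(\int_0^1 \sqrt{\dot\gamma(t)}\,dt\big)$. The code below is a minimal discretization under that standard formula; it is not the authors' implementation.

```python
import numpy as np

def fisher_rao_to_identity(gamma, t):
    # Distance d(gamma, gamma_id) = arccos( integral of sqrt(gamma'(t)) dt ),
    # since the SRSF of the identity warping is constant 1.
    dgamma = np.gradient(gamma, t)
    g = np.sqrt(np.clip(dgamma, 0.0, None))       # warpings are non-decreasing
    inner = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(t))  # trapezoidal rule
    return np.arccos(np.clip(inner, -1.0, 1.0))

t = np.linspace(0.0, 1.0, 200)
d_id = fisher_rao_to_identity(t, t)        # identity warping: distance ~ 0
d_sq = fisher_rao_to_identity(t ** 2, t)   # genuine warping: distance > 0
```

A distance near zero means a perceiver's trace needed essentially no time warping; the boxplots in Figure 6 compare how much warping each method invokes per video.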


Figure 7 Results for EA estimates in the social EA study.


Table 3 Estimated coefficients for Trait Positive Affect as a predictor of EA as measured by different alignment methods.


Figure 8 Boxplots of the metrics for the average amount of warping (top row) and goodness of fit (bottom row) of the estimated models for the three sets of music recordings. The penalized SRVF alignment was conducted using the threshold $\nu = 8$ seconds.


Figure 9 Results for parameter estimates in the music EA study.

Supplementary material

Nghiem et al. supplementary material (File, 560.8 KB)