
AUTOMATED DETECTION OF EMOTION IN CENTRAL BANK COMMUNICATION: A WARNING

Published online by Cambridge University Press:  21 February 2025

Nicole Baerg*
Affiliation:
Department of Government, University of Essex, Colchester, UK
Carola Binder
Affiliation:
School of Civic Leadership, University of Texas, Austin, TX, USA
Corresponding author: Nicole Baerg; Email: nicole.baerg@essex.ac.uk

Abstract

Central banks have increased their official communications. Previous literature measures complexity, clarity, tone and sentiment. Less explored is the use of fact versus emotion in central bank communication. We test a new method for classifying factual versus emotional language, applying a pretrained transfer learning model, fine-tuned with manually coded, task-specific and domain-specific data sets. We find that large language models outperform traditional models on some occasions; however, the results depend on a number of choices. We therefore caution researchers against depending solely on such models, even for tasks that appear similar. Our findings suggest that central bank communications are not only technically but also subjectively difficult to understand.

Information

Type
Research Article
Creative Commons
Creative Commons License - CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of National Institute Economic Review

Table 1. Human annotation of central bank statements for fact and feeling


Table 2. Performance metrics for FACT vs. FEEL classification with coder 1 gold standard


Table 3. Performance metrics for FACT vs. FEEL classification with coder 2 gold standard


Table 4. Performance metrics for FACT vs. FEEL classification with consensus ‘high’ certainty


Table 5. Performance metrics for FACT vs. FEEL classification with consensus ‘low’ certainty
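Tables 2–5 report performance metrics for the FACT vs. FEEL classifiers against different gold standards (individual coders and consensus annotations). As a minimal sketch of how such metrics are typically computed for a binary classification task, the example below calculates precision, recall and F1 for the FEEL class. The labels here are invented for illustration and are not the paper's data.

```python
# Hypothetical example: precision, recall and F1 for a binary
# FACT vs. FEEL classifier, scored against a human-coded gold
# standard (e.g. coder 1). Labels below are invented.

def binary_metrics(gold, predicted, positive="FEEL"):
    """Return (precision, recall, F1) for the positive class."""
    tp = sum(1 for g, p in zip(gold, predicted) if g == positive and p == positive)
    fp = sum(1 for g, p in zip(gold, predicted) if g != positive and p == positive)
    fn = sum(1 for g, p in zip(gold, predicted) if g == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Invented gold-standard codes and model predictions
gold = ["FACT", "FEEL", "FACT", "FEEL", "FACT", "FEEL"]
pred = ["FACT", "FEEL", "FEEL", "FEEL", "FACT", "FACT"]

p, r, f1 = binary_metrics(gold, pred)
print(f"precision={p:.2f} recall={r:.2f} F1={f1:.2f}")
```

Because the paper evaluates against several gold standards (coder 1, coder 2, and high- and low-certainty consensus), the same function would simply be re-run with each gold-label set; the point of Tables 2–5 is that these choices of reference annotation change the apparent model ranking.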