
Measuring the Quality of Answers in Political Q&As with Large Language Models

Published online by Cambridge University Press:  16 July 2025

R. Michael Alvarez
Affiliation:
Division of the Humanities and Social Sciences, California Institute of Technology, Pasadena, CA, USA.
Jacob Morrier*
Affiliation:
Division of the Humanities and Social Sciences, California Institute of Technology, Pasadena, CA, USA.
Corresponding author: Jacob Morrier; Email: jmorrier@caltech.edu

Abstract

This article proposes a new approach for measuring the quality of answers in political question-and-answer sessions. We assess the quality of an answer based on how easily and accurately it can be recognized among a random set of candidate answers given the question’s text. This measure reflects the answer’s relevance and depth of engagement with the question. Drawing a parallel with semantic search, we can implement this approach by training a language model on the corpus of observed questions and answers without additional human-labeled data. We showcase and validate our methodology within the context of the Question Period in the Canadian House of Commons. Our analysis reveals that while some answers only have a weak semantic connection to questions, suggesting some evasion or obfuscation, they are generally at least moderately relevant, far exceeding what we would expect from random replies. We also find meaningful correlations between the quality of answers and the party affiliation of the members of Parliament asking the questions.
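The measure described in the abstract can be sketched in miniature: given embeddings of a question, its observed answer, and a set of random candidate answers, score the answer by its cosine similarity to the question and by how easily it is recognized (ranked first) among the candidates. The sketch below is illustrative only, assuming embeddings from a trained biencoder are already available; the toy vectors and function names are not from the article.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def answer_quality(question_emb, true_answer_emb, distractor_embs):
    """Score the observed answer by (i) its cosine similarity to the
    question and (ii) its rank among random candidate answers
    (1 = recognized most easily)."""
    candidates = [true_answer_emb] + list(distractor_embs)
    sims = [cosine_similarity(question_emb, c) for c in candidates]
    true_sim = sims[0]
    rank = 1 + sum(1 for s in sims[1:] if s > true_sim)
    return true_sim, rank

# Toy 3-dimensional "embeddings" standing in for biencoder outputs.
q = [0.9, 0.1, 0.0]
a_true = [0.8, 0.2, 0.1]   # relevant answer: semantically close to the question
a_rand1 = [0.0, 1.0, 0.0]  # unrelated random replies
a_rand2 = [0.1, 0.0, 1.0]

sim, rank = answer_quality(q, a_true, [a_rand1, a_rand2])
# Here the true answer is far more similar to the question than the
# random replies, so it ranks first among the candidates.
```

In practice the embeddings would come from a Sentence-BERT-style biencoder trained on the corpus of observed question–answer pairs, as the article describes; the scoring logic above is unchanged regardless of how the vectors are produced.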

Information

Type
Article
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of The Society for Political Methodology
Figure 1 Biencoder architecture.

Figure 2 Sentence-BERT encoder architecture.

Figure 3 Distribution of the cosine similarity between questions and answers.

Figure 4 Validity of the cosine similarity between questions and answers.

Table 1 Five exchanges with the lowest cosine similarity between questions and answers.

Table 2 Five exchanges with the highest cosine similarity between questions and answers.

Figure 5 Average cosine similarity between questions and answers and count by reply category.

Figure 6 Average cosine similarity between questions and answers by party and legislature.

Supplementary material

Alvarez and Morrier supplementary material (File, 3.3 MB)