
Motivated reasoning about artificial intelligence in public policy: comparative evidence from Germany

Published online by Cambridge University Press:  27 March 2025

Sebastian Hemesath
Affiliation:
Department of European Social Research, Institute of Political Science, Saarland University, Saarbrücken, Germany
Markus Tepe*
Affiliation:
Department of Political Science, SOCIUM Research Center Inequality and Social Policy, University of Bremen, Bremen, Germany
*
Corresponding author: Markus Tepe; Email: markus.tepe@uni-bremen.de

Abstract

This study tests whether citizens’ evaluations of the performance of artificial intelligence (AI) in public policies are subject to motivated reasoning. Specifically, we test whether respondents’ preferences for AI regulation or their subjective attitudes toward AI are sources of motivated reasoning across use cases that differ in nature, complexity, safety-criticality and normative considerations: AI in municipal services, self-driving cars and recidivism prediction. Experimental results from two preregistered studies conducted among German citizens reveal that subjective attitudes toward AI cause substantial and robust motivated reasoning across all three policy domains. Regulatory preferences are only a selective source of motivated reasoning about AI in public policy. Overall, the results point to the cognitive limitations of strategies that attempt to objectify the benefits of AI without considering the context of the application domain. Politicians and policymakers need to consider these limitations in their attempts to increase citizens’ appreciation of AI in public policy.

Information

Type
Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial licence (http://creativecommons.org/licenses/by-nc/4.0), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original article is properly cited. The written permission of Cambridge University Press must be obtained prior to any commercial use.
Copyright
© The Author(s), 2025. Published by Cambridge University Press.

Figure 1. Satisfaction with municipal service.

Note: Darker gray marks the numerically correct answer. Text marked in gray was not shown to respondents.

Figure 2. Safety approval of new cars.

Note: Darker gray marks the numerically correct answer. Text marked in gray was not shown to respondents.

Figure 3. Predicting recidivism among incarcerated persons.

Note: Darker gray marks the numerically correct answer. Text marked in gray was not shown to respondents.

Figure 4. AI regulation preference and AI attitudes.


Figure 5. Evaluation of an AI in allocating parking permits (Study 1).

Note: GLM estimates with robust SE clustered at the respondent level (Appendix Table 1).

Figure 6. Evaluation of AI in self-driving cars’ safety (Study 2).

Note: GLM estimates with robust SE clustered at the respondent level (Appendix Table 1).

Figure 7. Evaluation of an AI in predicting recidivism (Study 2).

Note: GLM estimates with robust SE clustered at the respondent level (Appendix Table 1).
Supplementary material

Hemesath and Tepe supplementary material (File, 1.1 MB)