
The Politics of Using AI in Policy Implementation: Evidence from a Field Experiment

Published online by Cambridge University Press:  10 March 2026

Yotam Margalit
Affiliation:
Department of Political Science, Tel-Aviv University, Tel Aviv, Israel; Department of Political Economy, King’s College London, London, UK
Shir Raviv*
Affiliation:
Department of Political Science, Tel-Aviv University, Tel Aviv, Israel
Corresponding author: Shir Raviv; Email: shirraviv@tauex.tau.ac.il

Abstract

The use of AI by government agencies in guiding important decisions (for example, on policing, welfare, education) has triggered backlash and demands for greater public input in AI regulation. Yet it remains unclear what such input would reflect: general attitudes towards new technologies, personal experience with AI, or learning about its implications. We study this question experimentally by tracking the attitudes of over 1,500 workers whose task assignments were randomly determined by either a human or an AI ‘boss’, with task content and valence also randomized. Across a three-wave panel, we find that personal experience with AI-as-boss affected workers’ job performance but not their attitudes on using AI in public decision making. In contrast, exposure to information about the technology produced significant attitudinal change, even when it conflicted with participants’ prior disposition or direct experience. The results highlight the promise of incorporating public input into AI governance.

Information

Type
Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial licence (https://creativecommons.org/licenses/by-nc/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original article is properly cited. The written permission of Cambridge University Press or the rights holder(s) must be obtained prior to any commercial use.
Copyright
© The Author(s), 2026. Published by Cambridge University Press

Figure 1. Exposure to new information – trajectories of preferences. Note: Three possible patterns of attitudinal change that may result from exposure to new information about AI and its implications. The vertical axis indicates the probability of favoring AI algorithms in policy implementation.


Figure 2. Experiencing ADM – trajectories of preferences. Note: Three possible patterns of opinion change that could result from interacting directly with AI. The vertical axis indicates the probability of favoring AI in policy decisions.


Figure 3. Experimental design.


Figure 4. Screen capture of the reply message: a negative experience with ADM.


Figure 5. Attitudes towards AI in public policy, pre-treatment. We measured responses on a seven-point scale and then classified them into five categories: strongly oppose (1), oppose (2–3), indifferent (4), support (5–6), and strongly support (7). The distribution calculation includes the indifferent category. Figure SI-1 shows the full distribution.


Table 1. Effects of experience on attitudes


Figure 6. Effects of experience on behaviors. The figure shows the predicted score for each behavioral outcome based on intention-to-treat (ITT) analyses that regress it on a binary indicator for ADM, an indicator for the type of experience with the decision maker, and their interaction. Models also control for the informational treatments. The thin and thick error bars represent the 90 and 95 per cent confidence intervals around the estimate, respectively. Estimates and standard errors are also reported. The full results are reported in Table SI-7.


Table 2. Effects of information on attitudes


Figure 7. Effects of exposure to information treatments. The figure shows the results of OLS regressions. The independent variables are indicators for positive information on AI, negative information on AI, or placebo information about fashion. The left panel shows the estimated treatment effects alongside the effects associated with key covariates, excluding the pre-treatment outcome. The right panel shows the treatment effects estimated separately for each policy domain. Models control for pre-treatment covariates and pre-treatment outcomes, as well as indicators for the experience and valence of information. We limited the sample to workers assigned to the human treatments to ensure a clean comparison. See Table SI-9 for the full results. Thin bars represent 95 per cent CIs; thick bars represent 90 per cent CIs.


Figure 8. Treatment effects by predispositions. The figure shows the predicted factor-analysis (FA) score of responses to the eight items in Wave 3, based on the interaction between the information treatment and predispositions. Error bars show 95 per cent confidence intervals. The model controls for the decision-maker and experience treatments, the pre-treatment outcome (as a continuous measure), and demographic covariates. Column 5 in Table SI-11 shows the full results. Data points correspond to individual raw observations.


Figure 9. Most persuasive comments among workers exposed to information about AI. The figure shows the percentage of individuals in the positive and negative treatment groups (left and right panels, respectively) who cited each comment as the most persuasive. The comments were identified based on key phrases extracted from participants’ open-ended responses.

Supplementary material: File

Margalit and Raviv supplementary material (File, 624 KB)
Supplementary material: Link

Margalit and Raviv Dataset
