
Boosting intelligence analysts’ judgment accuracy: What works, what fails?

Published online by Cambridge University Press:  01 January 2023

David R. Mandel*
Affiliation:
Intelligence, Influence and Collaboration Section, Toronto Research Centre, Defence Research and Development Canada
Christopher W. Karvetski
Affiliation:
BlackSwan Technologies Ltd
Mandeep K. Dhami
Affiliation:
Department of Psychology, Middlesex University

Abstract

A routine part of intelligence analysis is judging the probability of alternative hypotheses given available evidence. Intelligence organizations advise analysts to use intelligence-tradecraft methods such as Analysis of Competing Hypotheses (ACH) to improve judgment, but such methods have not been rigorously tested. We compared evidence evaluation and judgment accuracy between a group of intelligence analysts who were recently trained in ACH and used it on a probability judgment task and a second group from the same cohort who were neither trained in ACH nor asked to use any specific method. Although the ACH group assessed information usefulness better than the control group, the control group was slightly more accurate (and coherent) than the ACH group. Both groups, however, exhibited suboptimal judgment and were susceptible to unpacking effects. Although ACH failed to improve accuracy, we found that recalibration and aggregation methods substantially improved accuracy. Specifically, mean absolute error (MAE) in analysts’ probability judgments decreased by 61% after first coherentizing their judgments (a process that ensures judgments respect the unitarity axiom) and then aggregating them. The findings cast doubt on the efficacy of ACH, and show the promise of statistical methods for boosting judgment quality in intelligence and other organizations that routinely produce expert judgments.
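The coherentize-then-aggregate procedure mentioned above can be sketched as follows. This is a minimal illustration, not the authors' exact implementation: here "coherentizing" is taken to mean rescaling each analyst's judgments over a set of mutually exclusive, exhaustive hypotheses so they sum to 1 (unitarity), and "aggregating" is an equal-weight average across analysts. The example data are hypothetical.

```python
def coherentize(judgments):
    """Rescale one analyst's probability judgments so they sum to 1
    (i.e., respect the unitarity axiom)."""
    total = sum(judgments)
    if total == 0:
        # Degenerate case: fall back to a uniform distribution.
        return [1.0 / len(judgments)] * len(judgments)
    return [p / total for p in judgments]


def aggregate(all_judgments):
    """Coherentize each analyst's judgments, then average across
    analysts with equal weights, yielding one coherent distribution."""
    coherent = [coherentize(j) for j in all_judgments]
    n = len(coherent)
    k = len(coherent[0])
    return [sum(row[i] for row in coherent) / n for i in range(k)]


# Three hypothetical analysts judging four hypotheses; raw judgments
# need not sum to 1 (the first analyst's sum to 1.2).
analysts = [
    [0.6, 0.3, 0.2, 0.1],
    [0.5, 0.25, 0.15, 0.1],
    [0.4, 0.4, 0.1, 0.1],
]
pooled = aggregate(analysts)
print(pooled)  # a single coherent distribution summing to 1
```

Because rescaling removes each analyst's incoherence before pooling, the aggregate is guaranteed coherent regardless of how incoherent the raw inputs were, which is one reason this two-step order matters.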

Information

Type
Research Article
Creative Commons
Creative Commons License - CC BY
The authors license this article under the terms of the Creative Commons Attribution 3.0 License.
Copyright
Copyright © The Authors [2018] This is an Open Access article, distributed under the terms of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Table 1: Informational features of experimental task. Values represent cue likelihoods.

Figure 1: Accuracy of probability judgments by group size and aggregation method.

Figure 2: Probability of improvement achieved by increasing group size by one member. Bars show 95% confidence intervals from 1,000 bootstrap samples. The reference line shows the probability of improvement by chance.

Supplementary material: Mandel et al. supplementary material (File, 3.5 KB)