National security is one of many fields where experts make vague probability assessments when evaluating high-stakes decisions. This practice has always been controversial, and it is often justified on the grounds that making probability assessments too precise could bias analysts or decision makers. Yet these claims have rarely been subjected to rigorous testing. In this paper, we translate behavioral concerns about probabilistic precision into falsifiable hypotheses, which we evaluate through survey experiments involving national security professionals. Contrary to conventional wisdom, we find that decision makers responding to quantitative probability assessments are less willing to support risky actions and more receptive to gathering additional information. Yet we also find that when respondents estimate probabilities themselves, quantification magnifies overconfidence, particularly among low-performing assessors. These results hone wide-ranging concerns about probabilistic precision into a specific and previously undocumented bias that training may be able to correct.