
Beyond analytic bounds: Re-evaluating predictive power in risky decision models

Published online by Cambridge University Press:  27 November 2024

Or David Agassi
Affiliation:
The Faculty of Data and Decision Sciences, Technion – Israel Institute of Technology, Haifa, Israel
Ori Plonsky*
Affiliation:
The Faculty of Data and Decision Sciences, Technion – Israel Institute of Technology, Haifa, Israel
Corresponding author: Ori Plonsky; Email: plonsky@technion.ac.il

Abstract

Research in behavioral decision-making has produced many models of decision under risk. To improve our understanding of choice under risk, it is essential to perform rigorous model comparisons over large sets of decision settings to find which models are most useful. Recently, such large-scale comparisons have produced conflicting conclusions: A variant of cumulative prospect theory (CPT) was the best model in a study by He, Analytis, and Bhatia (2022), whereas variants of the model BEAST were the best in two choice prediction competitions. This study delves into these contradictions to identify and explore their underlying reasons. We replicate and extend the analysis by He et al., this time incorporating BEAST, which was previously excluded because it cannot be estimated analytically. Our results show that while CPT excels on systematically hand-crafted tasks, BEAST, originally designed for broader decision-making contexts, matches or even surpasses CPT’s performance when choice tasks are randomly selected and predictions are made for new, unknown decision makers. This success of BEAST, a model very different from classical decision models in that it does not assume, for example, subjective transformations of outcomes and probabilities, calls into question previous conclusions concerning the underlying psychological mechanisms of choice under risk. Our results challenge the field to move beyond established evaluation techniques and highlight the importance of an inclusive approach toward nonanalytic models, like BEAST, to achieve more objective insights into decision-making behavior.

Information

Type
Empirical Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of Society for Judgment and Decision Making and European Association for Decision Making

Figure 1 Average prediction error (MSE) for in-sample individuals, for the (a) mixed domain and (b) gain domain. Bar colors indicate whether each model uses nonlinear payoff and probability transformations. Arrows mark the relative ranking of BEAST (thin arrow) and AdaBEAST (thick arrow).


Figure 2 Prediction error (MSE) for in-sample individuals, by dataset. Each bar represents the prediction error for one of the models, with BEAST and AdaBEAST highlighted. Names of the datasets follow He et al. (2022), and background colors correspond to the domain of the choice tasks.


Figure 3 Average prediction error (MSE) for out-of-sample individuals in the (a) mixed domain and (b) gain domain. Bar colors indicate whether each model uses nonlinear payoff and probability transformations. Arrows mark the relative ranking of BEAST (thin arrow) and AdaBEAST (thick arrow).

Supplementary material

Agassi and Plonsky supplementary material (File, 700 KB)