
The effectiveness of imperfect weighting in advice taking

Published online by Cambridge University Press:  01 January 2023

Peter Bednarik*
Affiliation:
International Institute for Applied Systems Analysis (IIASA), Schlossplatz 1, A-2361 Laxenburg, Austria
Thomas Schultze
Affiliation:
Institute of Psychology, Georg-August-University Goettingen
Correspondence concerning this paper should be addressed to bednarik@iiasa.ac.at

Abstract

We investigate decision making in the judge-advisor system, in which one person, the “judge”, wants to estimate some quantity and is given advice by another person. The question is how to combine the judge’s initial estimate and that of the advisor in order to obtain the best expected outcome. A previous approach compared two frequently applied strategies: taking the average, or choosing the better estimate. In most situations, averaging produced the better estimates. However, that approach neglected a third strategy that judges frequently use, namely taking a weighted mean of the judge’s initial estimate and the advice. We compare the performance of averaging and choosing to that of weighting in a theoretical analysis. If the judge can detect ability differences between judge and advisor without error, a straightforward calculation shows that weighting outperforms both other strategies. More interestingly, after introducing errors in the perception of ability differences, we show that such imperfect weighting may or may not be the optimal strategy. The relative performance of imperfect weighting compared to averaging or choosing depends on the size of the actual ability differences as well as on the magnitude of the perception error. However, for a sizeable range of ability differences and errors, weighting is preferable to averaging, and even more so to choosing. Our analysis expands previous research by showing that weighting, even when imperfect, is an appropriate advice-taking strategy, and by identifying the circumstances under which judges benefit most from applying it.
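As a rough illustration of the three strategies compared in the abstract, the following sketch simulates a judge and an advisor with unbiased, independent Gaussian errors and compares the mean squared error of choosing, averaging, and inverse-variance weighting. All parameter values are hypothetical, and “choosing” is idealized here as always picking the truly more accurate source; this is an illustrative simulation, not the paper’s analysis.

```python
import random

random.seed(42)

def simulate(sigma_judge, sigma_advisor, n_trials=100_000, truth=100.0):
    """Compare three advice-taking strategies by mean squared error.

    Assumes independent, unbiased Gaussian errors for judge and advisor;
    'choosing' is idealized: it always picks the more accurate source.
    """
    # Weight on the judge's estimate from inverse-variance weighting:
    # the less accurate source gets the smaller weight.
    w = sigma_advisor**2 / (sigma_judge**2 + sigma_advisor**2)
    mse = {"choosing": 0.0, "averaging": 0.0, "weighting": 0.0}
    for _ in range(n_trials):
        judge = random.gauss(truth, sigma_judge)
        advisor = random.gauss(truth, sigma_advisor)
        better = judge if sigma_judge <= sigma_advisor else advisor
        estimates = {
            "choosing": better,
            "averaging": (judge + advisor) / 2,
            "weighting": w * judge + (1 - w) * advisor,
        }
        for name, est in estimates.items():
            mse[name] += (est - truth) ** 2 / n_trials
    return mse

mse = simulate(sigma_judge=10.0, sigma_advisor=5.0)
# When the ability difference is known without error, weighting
# should produce the smallest mean squared error of the three.
```

With these example variances the theoretical MSEs are 25 (choosing the advisor), 31.25 (averaging), and 20 (weighting), which matches the abstract’s claim that error-free weighting dominates both alternatives.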

Information

Type
Research Article
Creative Commons
The authors license this article under the terms of the Creative Commons Attribution 3.0 License.
Copyright
Copyright © The Authors [2015] This is an Open Access article, distributed under the terms of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.

Figure 1: Plots of the relative improvement r in accuracy (i.e., reduction of variance) as a function of the ability ratio m after considering the advisor’s advice, using three different methods: choosing the better estimate (red, solid), averaging both estimates equally (blue, dotted), and weighting the estimates according to the ability ratio (green, dashed). Since r measures the change in variance relative to the initial estimate, r < 1 means an improvement, while r > 1 means a worsening of the initial estimate. Both axes are on a logarithmic scale.


Figure 2: Contour plots of the relative efficiency k of averaging vs. weighting (a) and of choosing vs. weighting (b). The two methods are equally efficient along the thick black lines, where k = 1. In the green region weighting is more efficient, while in the blue region averaging (a) or choosing (b) is more efficient. Again, efficiency is measured as the reduction of variance relative to the initial estimate: if weighting reduces variance more than averaging/choosing, it is more efficient. Contour lines represent steps of 10%, i.e., k = 0.6, 0.7, …, 1.4, 1.5.


Figure 3: Comparing weighting to the combination of choosing and averaging.