The weighing of evidence and the formation of belief are basic elements of human thought. The question of how to evaluate evidence and assess confidence has been addressed from a normative perspective by philosophers and statisticians; it has also been investigated experimentally by psychologists and decision researchers. One of the major findings that has emerged from this research is that people are often more confident in their judgments than is warranted by the facts.

Overconfidence is not limited to lay judgment or laboratory experiments. The well-publicized observation that more than two-thirds of small businesses fail within 4 years (Dun & Bradstreet, 1967) suggests that many entrepreneurs overestimate their probability of success (Cooper, Woo, & Dunkelberg, 1988). With some notable exceptions, such as weather forecasters (Murphy & Winkler, 1977), who receive immediate frequentistic feedback and produce realistic forecasts of precipitation, overconfidence has been observed in judgments of physicians (Lusted, 1977), clinical psychologists (Oskamp, 1965), lawyers (Wagenaar & Keren, 1986), negotiators (Neale & Bazerman, 1990), engineers (Kidd, 1970), and security analysts (Staël von Holstein, 1972). As one critic described expert prediction, expert judgment is “often wrong, but rarely in doubt.”
Overconfidence is common, but it is not universal. Studies of calibration have found that with very easy items, overconfidence is eliminated and underconfidence is often observed (Lichtenstein, Fischhoff, & Phillips, 1982). Furthermore, studies of sequential updating have shown that posterior probability estimates commonly exhibit conservatism or underconfidence (Edwards, 1968).