Published online by Cambridge University Press: 05 January 2013
On any given policy issue, one is likely to find economists offering professional opinions on all sides, many of them backed by quantitative models. Though our discipline is in places as quantitative and mathematically deep as many of the physical sciences, we do not ordinarily resolve important policy issues even with the most difficult and intriguing of our mathematical tools. Yet economists often speak as if their models and conclusions were imprecise only in the sense that a structural engineer's finite-element model of a beam is imprecise: the model is a finite-dimensional approximation to an infinite-dimensional ideal model, and the ideal model itself ignores certain random imperfections in the beam. The public and noneconomist users of economic advice understand that uncertainty about an economic model is not so straightforward, and they therefore rightly take the professional opinions of economists who pretend otherwise with many grains of salt.
The problem is not simply that our best models are too sophisticated for the layman to understand. David Freedman (1985), a prominent statistician, has recently examined in a series of papers some actual applications of statistical method in economics and emerged with broad and scathing criticisms. While there are effective counterarguments to some of Freedman's criticisms, they cannot be made within the classical statistical framework of most econometrics textbooks, or within the profession's conventional rhetorical style of presenting controversial opinions in the guise of assumptions supposedly drawn from "theory." Quantitatively oriented scientists outside the social sciences who make a serious effort to understand economic research will often share Freedman's reaction.