
A Contractarian Solution to the Experimenter’s Regress

Published online by Cambridge University Press:  01 January 2022

Abstract

Debiasing procedures are experimental methods aimed at correcting errors arising from the experimenter’s cognitive biases. We discuss two such methods, the predesignation rule and randomization, showing to what extent they are open to the experimenter’s regress: there is no metarule proving that, once the procedure has been implemented, the experimental data are actually free of bias. We claim that these procedures are nonetheless defensible from a contractarian perspective, since they provide a warrant of the impartiality of the experiment: for prima facie acceptance of a result, we need only proof that it has not been intentionally manipulated.

Type
General Philosophy of Science
Copyright
Copyright © The Philosophy of Science Association


Footnotes

This article was funded by Spanish Ministry of Science research grant FFI2011-28835. María Jiménez-Buedo and Jesús Zamora made very valuable suggestions at every stage in the development of this article.
