
Can the Biomedical Research Cycle be a Model for Political Science?

Published online by Cambridge University Press: 28 December 2016

Abstract

In sciences such as biomedicine, researchers and journal editors are well aware that progress in answering difficult questions generally requires movement through a research cycle: research on a topic or problem progresses from pure description, through correlational analyses and natural experiments, to phased randomized controlled trials (RCTs). In biomedical research all of these research activities are valued and find publication outlets in major journals. In political science, however, a growing emphasis on valid causal inference has led to the suppression of work early in the research cycle. This potentially myopic emphasis on just one aspect of the cycle reduces incentives both for the discovery of new types of political phenomena and for more careful, efficient, transparent, and ethical research practices. Political science should recognize the significance of the research cycle and develop distinct criteria to evaluate work at each of its stages.

Type: Reflections Symposium

Copyright: © American Political Science Association 2016

