Like a fashion trend traveling from New York to the heartland, soul-searching about causality has made its way from empirical research in economics to empirical research in political science with the usual lag. Gone are the days when it was enough to have a nice theory, a conditional correlation, and some rhetoric about the implausibility of competing explanations, all while implying but assiduously avoiding the “c” word. Editors, reviewers, and search committees are beginning to look for more explicit and careful empirical treatments of causality. That is, following a definition of causality tracing back to Mill (1843), researchers are expected to lay out a set of possible outcomes, or counterfactuals, generated by a set of determinants, and to demonstrate that, holding all determinants but one constant, manipulating that remaining determinant is associated with a specific change in the outcome, which can then be deemed a causal effect (Heckman 2005).
It does not take much soul-searching to realize, however, that the observational studies that make up the vast majority of empirical explorations in comparative politics are deeply flawed when held up to the experimental ideal. Where does this leave us? At one extreme lies the view that if we cannot run randomized field experiments, or perhaps survey experiments, we should do nothing. At the opposite extreme lies the view that such a standard would remake comparative politics into an arid subfield of program evaluation, turning a blind eye to the interesting and important questions that animated the field in its golden era (definitions vary).