
Omitted Variables, Countervailing Effects, and the Possibility of Overadjustment*

Published online by Cambridge University Press:  04 November 2016

Abstract

The effect that conditioning on an additional covariate has on confounding bias depends, in part, on covariates that are unobserved. We characterize the conditions under which the interaction between a covariate that is available for conditioning and one that is not can affect bias. When the confounding effects of two covariates, one of which is observed, are countervailing (in opposite directions), conditioning on the observed covariate can increase bias. We demonstrate this possibility analytically and then show that these conditions are not rare in actual data. We also consider whether balance tests or sensitivity analysis can be used to justify the inclusion of an additional covariate. Our results indicate that neither provides protection against overadjustment.
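The abstract's central claim can be made concrete with a small simulation. The Python sketch below is a minimal illustration, not material from the article: the data-generating process, coefficient values, and sample size are assumptions chosen purely to exhibit countervailing confounding. An observed covariate X1 and an unobserved covariate X2 confound the treatment-outcome relationship in opposite directions, so their biases roughly cancel in the unadjusted regression; conditioning on X1 alone then moves the estimate away from the true effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
tau = 1.0  # true treatment effect (assumed for this illustration)

# Assumed data-generating process: X1 is observed, X2 is unobserved.
# X1 confounds positively and X2 confounds negatively (countervailing).
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
d = 1.0 * x1 + 1.0 * x2 + rng.normal(size=n)             # treatment
y = tau * d + 1.0 * x1 - 1.0 * x2 + rng.normal(size=n)   # outcome

def ols(y, regressors):
    """OLS coefficients of y on the given regressors plus an intercept."""
    X = np.column_stack([np.ones(len(y))] + list(regressors))
    return np.linalg.lstsq(X, y, rcond=None)[0]

tau_unadjusted = ols(y, [d])[1]        # coefficient on d, no controls
tau_adjusted = ols(y, [d, x1])[1]      # coefficient on d, controlling for X1

print(f"true effect:          {tau:.3f}")
print(f"unadjusted estimate:  {tau_unadjusted:.3f}")   # close to 1.0
print(f"adjusted for X1 only: {tau_adjusted:.3f}")     # roughly 0.5
```

With these assumed coefficients, the two confounding channels cancel when neither covariate is controlled, so the unadjusted estimate is nearly unbiased; adjusting for X1 alone removes only one channel and leaves the estimate biased, illustrating the overadjustment possibility the abstract describes.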

Type
Original Articles
Copyright
© The European Political Science Association 2016 


Footnotes

*

Kevin A. Clarke is an Associate Professor in the Department of Political Science, University of Rochester, Rochester, NY 14627-0146 (kevin.clarke@rochester.edu). Brenton Kenkel is an Assistant Professor in the Department of Political Science, Vanderbilt University, Nashville, TN 37203 (brenton.kenkel@vanderbilt.edu). Miguel R. Rueda is an Assistant Professor in the Department of Political Science, Emory University, Atlanta, GA 30233 (miguel.rueda@emory.edu). A previous version of this paper was presented at the 27th Annual Summer Meeting of the Society for Political Methodology, and the authors thank the participants for their comments. The authors also thank Jake Bowers, John Jackson, Michael Peress, the editor, and two anonymous reviewers for helpful comments and discussion. Brad Smith provided excellent research assistance. Errors remain the authors' own. To view supplementary material for this article, please visit https://doi.org/10.1017/psrm.2016.46
