Bias Amplification and Bias Unmasking

  • Joel A. Middleton, Marc A. Scott, Ronli Diakow and Jennifer L. Hill
Abstract

In the analysis of causal effects in non-experimental studies, conditioning on observable covariates is one way to try to reduce unobserved confounder bias. However, a developing literature has shown that conditioning on certain covariates may increase bias, and the mechanisms underlying this phenomenon have not been fully explored. We add to the literature on bias-increasing covariates by first introducing a way to decompose omitted variable bias into three constituent parts: bias due to an unobserved confounder, bias due to excluding observed covariates, and bias due to amplification. This leads to two important findings. Although instruments have been the primary focus of the bias amplification literature to date, we identify the fact that the popular approach of adding group fixed effects can lead to bias amplification as well. This is an important finding because many practitioners think that fixed effects are a convenient way to account for any and all group-level confounding and are at worst harmless. The second finding introduces the concept of bias unmasking and shows how it can be even more insidious than bias amplification in some cases. After introducing these new results analytically, we use constructed observational placebo studies to illustrate bias amplification and bias unmasking with real data. Finally, we propose a way to add bias decomposition information to graphical displays for sensitivity analysis to help practitioners think through the potential for bias amplification and bias unmasking in actual applications.
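To make the instrument case concrete, the following is a minimal simulation sketch (not the authors' replication code; the linear data-generating process, coefficient values, and names such as ols_treatment_coef are illustrative assumptions). The true treatment effect is 1; an unobserved confounder U biases the naive regression, and adding the instrument-like covariate X, which predicts treatment but affects the outcome only through it, amplifies that bias.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical linear data-generating process (for illustration only):
#   U: unobserved confounder of treatment T and outcome Y
#   X: instrument-like covariate -- predicts T, affects Y only through T
u = rng.normal(size=n)
x = rng.normal(size=n)
t = x + u + rng.normal(size=n)          # treatment
y = 1.0 * t + u + rng.normal(size=n)    # outcome; true effect of T is 1

def ols_treatment_coef(outcome, covariates):
    """OLS coefficient on the treatment (first covariate), intercept included."""
    design = np.column_stack([np.ones(len(outcome))] + covariates)
    beta, *_ = np.linalg.lstsq(design, outcome, rcond=None)
    return beta[1]

tau_naive = ols_treatment_coef(y, [t])         # biased by U alone (about 1.33)
tau_amplified = ols_treatment_coef(y, [t, x])  # bias amplified (about 1.50)

print("true effect        = 1.000")
print(f"Y ~ T     estimate = {tau_naive:.3f}")
print(f"Y ~ T + X estimate = {tau_amplified:.3f}")
```

Under these assumed coefficients, conditioning on X removes treatment variation that has nothing to do with U, so the confounder accounts for a larger share of what remains; the omitted-variable bias rises from roughly 0.33 to 0.50, which is the amplification pattern the decomposition above formalizes.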

Corresponding author
e-mail: joel.middleton@berkeley.edu
Footnotes

Edited by Prof. R. Michael Alvarez

Authors’ note: For replication files, see Middleton (2016). Supplementary Materials for this article are available on the Political Analysis Web site.

References
Angrist, J. D., Imbens, G., and Rubin, D. 1996. Identification of causal effects using instrumental variables. Journal of the American Statistical Association 91(434):444–55.
Angrist, J. D., and Pischke, J. 2009. Mostly harmless econometrics. Princeton, NJ: Princeton University Press.
Austin, P., Grootendorst, P., and Anderson, G. 2007. A comparison of the ability of different propensity score models to balance measured variables between treated and untreated subjects: a Monte Carlo study. Statistics in Medicine 26:734–53.
Breen, R., Karlson, K., and Holm, A. 2013. Total, direct, and indirect effects in logit and probit models. Sociological Methods and Research 42(2):164–91.
Brookhart, M., Sturmer, T., Glynn, R., Rassen, J., and Schneeweiss, S. 2010. Confounding control in healthcare database research. Medical Care 48:S114–20.
Bhattacharya, J., and Vogt, W. 2007. Do instrumental variables belong in propensity scores? NBER Working Paper 343, National Bureau of Economic Research, MA.
Carnegie, N. B., Hill, J., and Harada, M. 2014a. Assessing sensitivity to unmeasured confounding using simulated potential confounders. Unpublished manuscript.
Carnegie, N. B., Hill, J., and Harada, M. 2014b. Package: TreatSens. http://www.R-project.org.
Clarke, K. A. 2005. The phantom menace. Conflict Management and Peace Science 22:341–52.
Clarke, K. A. 2009. Return of the phantom menace. Conflict Management and Peace Science 26:46–66.
Cole, S. R., Platt, R. W., Schisterman, E. F., Chu, H., Westreich, D., Richardson, D., and Poole, C. 2010. Illustrating bias due to conditioning on a collider. International Journal of Epidemiology 39(2):417–20.
D'Agostino, R. Jr. 1998. Propensity score methods for bias reduction in the comparison of a treatment to a non-randomized control group. Statistics in Medicine 17:314–16.
Ding, P., and Miratrix, L., 2014. To adjust or not to adjust? Sensitivity analysis of M-bias and butterfly-bias. Journal of Causal Inference 2:217.
Dunning, T., and Nilekani, J. 2013. Ethnic quotas and political mobilization: caste, parties, and distribution in Indian village councils. American Political Science Review 107:35–56.
Freedman, D. A. 2008. Randomization does not justify logistic regression. Statistical Science 23(2):237–49.
Frisell, T., Oberg, S., Kuja-Halkola, R., and Sjolander, A. 2012. Sibling comparison designs: bias from non-shared confounders and measurement error. Epidemiology 23(5):713–20.
Greene, W. H. 2000. Econometric analysis, 4th ed. Upper Saddle River, NJ: Prentice Hall.
Greenland, S. 2002. Quantifying biases in causal models: classical confounding vs. collider-stratification bias. Epidemiology 14:300–306.
Hill, J. 2007. Discussion of research using propensity-score matching: comments on "A critical appraisal of propensity-score matching in the medical literature between 1996 and 2003" by Peter Austin. Statistics in Medicine 27(12):2055–61.
Heckman, J., and Robb, R. 1985. Alternative methods for estimating the impact of interventions. In Longitudinal analysis of labor market data, eds. Heckman, J. J. and Singer, B. Cambridge, UK: Cambridge University Press.
Heckman, J., and Robb, R. 1986. Alternative methods for solving the problem of selection bias in evaluating the impact of treatments on outcomes. In Drawing inferences from self-selected samples, ed. Wainer, H. New Jersey: Lawrence Erlbaum Associates.
Imbens, G. W. 2003. Sensitivity to exogeneity assumptions in program evaluation. American Economic Review 93(2):126–32.
Lechner, M. 2001. Identification and estimation of causal effects of multiple treatments under the conditional independence assumption. In Econometric evaluations of active labor market policies in Europe, eds. Lechner, M. and Pfeiffer, F., Heidelberg: Physica.
Liu, W., Brookhart, M. A., Schneeweiss, S., Mi, X., and Setoguchi, S. 2012. Implications of M-bias in epidemiologic studies: a simulation study. American Journal of Epidemiology 176:938–48.
Middleton, J. A. 2016. Replication data for: bias amplification and bias unmasking. http://dx.doi.org/10.7910/DVN/UO5WQ4, Harvard Dataverse.
Myers, J.A., Rassen, J. A., Gagne, J. J., Huybrechts, K. F., Schneeweiss, S., Rothman, K. J., Joffe, M. M., and Glynn, R. J. 2011. Effects of adjusting for instrumental variables on bias and precision of effect estimates. American Journal of Epidemiology 174(11):1213–22.
Pearl, J. 2000. Causality. New York, NY: Cambridge University Press.
Pearl, J. 2009. Myth, confusion, and science in causal analysis. Technical report.
Pearl, J. 2010. On a class of bias-amplifying variables that endanger effect estimates. Proceedings of UAI, pp. 417–24.
Pearl, J. 2011. Invited commentary: Understanding bias amplification. American Journal of Epidemiology 174(11):1223–27.
Rosenbaum, P. R. 2002. Observational studies. New York, NY: Springer.
Rosenbaum, P. R., and Rubin, D. B. 1983. Assessing sensitivity to an unobserved binary covariate in an observational study with binary outcome. Journal of the Royal Statistical Society Series B (Methodological) 45:212–18.
Rubin, D. B. 1974. Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology 66:688–701.
Rubin, D. B. 1978. Bayesian inference for causal effects: the role of randomization. The Annals of Statistics 6(1):34–58.
Rubin, D. B. 2002. Using propensity scores to help design observational studies: application to the tobacco litigation. Health Services and Outcomes Research Methodology 2:169–88.
Shaw, D. R., Green, D. P., Gimpel, J. G., and Gerber, A. S. 2012. Do robotic calls from credible sources influence voter turnout or vote choice? Evidence from a randomized field experiment. Journal of Political Marketing 11(4):231–45.
Schisterman, E. F., Cole, S. R., and Platt, R. W. 2009. Overadjustment bias and unnecessary adjustment in epidemiologic studies. Epidemiology 20:488–95.
Steiner, P. M., and Kim, Y. 2016. The mechanisms of omitted variable bias: bias amplification and cancellation of offsetting biases. Unpublished manuscript.
Sjölander, A. 2009. Propensity scores and M-structures. Statistics in Medicine 28:1416–20.
Sobel, M. E. 2006. What do randomized studies of housing mobility demonstrate? Causal inference in the face of interference. Journal of the American Statistical Association 101:1398–407.
Wooldridge, J. 2009. Should instrumental variables be used as matching variables? Unpublished manuscript.
Wyss, R., Lunt, M., Brookhart, M. A., Glynn, R. J., and Stürmer, T. 2014. Reducing bias amplification in the presence of unmeasured confounding through out-of-sample estimation strategies for the disease risk score. Journal of Causal Inference 2(2):131–46.
VanderWeele, T. J. 2015. Explanation in causal inference: methods for mediation and interaction. New York, NY: Oxford University Press.
VanderWeele, T. J., and Arah, O. A. 2011. Unmeasured confounding for general outcomes, treatments, and confounders: bias formulas for sensitivity analysis. Epidemiology 22(1):42–52.
Supplementary materials

  • Middleton et al. supplementary material: Supplementary Material, PDF (114 KB)
  • Middleton et al. supplementary material: Appendix (12 KB)