What Can Instrumental Variables Tell Us About Nonresponse in Household Surveys and Political Polls?

Published online by Cambridge University Press:  29 January 2019

Coady Wing*
Affiliation:
Indiana University, School of Public and Environmental Affairs, 1315 East Tenth Street, Room 339A, Bloomington, IN 47405, USA. Email: cwing@indiana.edu

Abstract

This paper introduces an instrumental variables framework for analyzing how external factors that affect survey response rates can also affect the composition of the sample of respondents. The method may be useful for studying survey representativeness and for assessing the effectiveness of conventional corrections for survey nonresponse bias.

The paper applies the method to data collected in the 2011 Swiss Electoral Study (SES), in which survey participation incentives were randomly assigned across members of the original survey sample. The empirical analysis shows that the incentives increased response rates substantially. Estimates of a new instrumental variable parameter called the Complier Average Survey Response (CASR) suggest that the incentives induced participation among people with more nationalist political opinions than those who would have participated without the incentives. Weighting the respondent data to match the covariate distribution in the target population did not account for the discrepancy in attitudes between the two groups, suggesting that the weights would not succeed in removing nonresponse bias.
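To make the Wald-type logic behind the CASR parameter concrete, the sketch below shows one way it could be computed. It is a minimal illustration, not the paper's replication code: the function name and the variables z (randomized incentive assignment), d (response indicator), and y (survey outcome, observed only for respondents) are assumptions introduced here for exposition.

```python
# Minimal sketch of a Wald-type estimator for a complier average survey
# response (CASR), assuming a randomized binary incentive. All variable and
# function names are illustrative, not taken from the paper.
import numpy as np

def casr_wald(z, d, y):
    """Return (complier mean, always-responder mean) under monotonicity.

    z : 0/1 array, randomized incentive assignment
    d : 0/1 array, survey response indicator
    y : outcome array, observed only when d == 1
    """
    z = np.asarray(z, dtype=float)
    d = np.asarray(d, dtype=float)
    # Missing outcomes for nonrespondents drop out of the estimator via y * d.
    y = np.nan_to_num(np.asarray(y, dtype=float))

    # First stage: effect of the incentive on the response rate.
    first_stage = d[z == 1].mean() - d[z == 0].mean()
    # Reduced form: effect of the incentive on the observed outcome y * d.
    reduced_form = (y * d)[z == 1].mean() - (y * d)[z == 0].mean()

    # Wald ratio recovers the mean outcome among incentive compliers
    # (in the spirit of Imbens and Rubin 1997; Abadie 2003).
    complier_mean = reduced_form / first_stage
    # Respondents in the no-incentive arm are always-responders when there
    # are no defiers.
    always_responder_mean = y[(z == 0) & (d == 1)].mean()
    return complier_mean, always_responder_mean
```

Comparing the complier mean with the always-responder mean is one way to see whether incentive-induced respondents hold different opinions from people who would have responded anyway, which is the comparison the abstract describes.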

Type
Articles
Copyright
Copyright © The Author(s) 2019. Published by Cambridge University Press on behalf of the Society for Political Methodology. 

Footnotes

Author’s note: Thanks are due to Doug Wolf, Peter Steiner, John Mullahy, Austin Nichols, Vivian Wong, Ted Joyce, Oliver Lipps, Seth Freedman, Alex Hollingsworth, Jeanette Samyn, and Patrick Carlin who provided helpful comments on early drafts of the paper. Comments from the editor and reviewers also improved the paper substantially. Replication files for the results presented in the paper are available as Wing (2018) at doi:10.7910/DVN/ILTOGF.

Contributing Editor: R. Michael Alvarez

References

Abadie, Alberto. 2003. Semiparametric instrumental variable estimation of treatment response models. Journal of Econometrics 113(2):231–263.
Achen, Christopher H. 1986. The statistical analysis of quasi-experiments. Berkeley, CA: University of California Press.
Angrist, Joshua D. 2004. Treatment effect heterogeneity in theory and practice. The Economic Journal 114(494):C52–C83.
Angrist, Joshua D., and Pischke, Jorn-Steffen. 2009. Mostly harmless econometrics: An empiricist's companion. Princeton, NJ: Princeton University Press.
Angrist, Joshua D., Imbens, Guido W., and Rubin, Donald B. 1996. Identification of causal effects using instrumental variables. Journal of the American Statistical Association 91(434):444–455.
Ansolabehere, Stephen, and Hersh, Eitan. 2012. Validation: What big data reveal about survey misreporting and the real electorate. Political Analysis 20(4):437–459.
Bertanha, Marinho, and Imbens, Guido W. 2014. External validity in fuzzy regression discontinuity designs. Technical report, National Bureau of Economic Research.
Brehm, John. 1994. Stubbing our toes for a foot in the door? Prior contact, incentives and survey response. International Journal of Public Opinion Research 6(1):45–63.
Brehm, John O. 1993. The phantom respondents: Opinion surveys and political representation. Ann Arbor, MI: University of Michigan Press.
Czajka, John L., and Beyler, Amy. 2016. Declining response rates in federal surveys: Trends and implications. Mathematica Policy Research.
Davern, Michael, Rockwood, Todd H., Sherrod, Randy, and Campbell, Stephen. 2003. Prepaid monetary incentives and data quality in face-to-face interviews: Data from the 1996 Survey of Income and Program Participation incentive experiment. The Public Opinion Quarterly 67(1):139–147.
De Leeuw, Edith, and De Heer, Wim. 2002. Trends in household survey nonresponse: A longitudinal and international comparison. In Survey nonresponse, ed. Groves, R. M. et al. New York: John Wiley & Sons, pp. 41–54.
Dillman, Don A., Smyth, Jolene D., and Christian, Leah Melani. 2014. Internet, phone, mail, and mixed-mode surveys: The tailored design method. Hoboken, NJ: John Wiley & Sons.
Doody, Michele Morin, Sigurdson, Alice S., Kampa, Diane, Chimes, Kathleen, Alexander, Bruce H., Ron, Elaine, Tarone, Robert E., and Linet, Martha S. 2003. Randomized trial of financial incentives and delivery methods for improving response to a mailed questionnaire. American Journal of Epidemiology 157(7):643–651.
Gaines, Brian J., Kuklinski, James H., and Quirk, Paul J. 2006. The logic of the survey experiment reexamined. Political Analysis 15(1):1–20.
Gelman, Andrew, Goel, Sharad, Rivers, Douglas, and Rothschild, David. 2016. The mythical swing voter. Quarterly Journal of Political Science 11(1):103–130.
Groves, Robert M., and Peytcheva, Emilia. 2008. The impact of nonresponse rates on nonresponse bias: A meta-analysis. Public Opinion Quarterly 72(2):167–189.
Groves, Robert M., Singer, Eleanor, and Corning, Amy. 2000. Leverage-saliency theory of survey participation: Description and an illustration. The Public Opinion Quarterly 64(3):299–308.
Groves, Robert M., Couper, Mick P., Presser, Stanley, Singer, Eleanor, Tourangeau, Roger, Acosta, Giorgina Piani, and Nelson, Lindsay. 2006. Experiments in producing nonresponse bias. Public Opinion Quarterly 70(5):720–736.
Groves, Robert M., Fowler, Floyd J. Jr, Couper, Mick P., Lepkowski, James M., Singer, Eleanor, and Tourangeau, Roger. 2011. Survey methodology. Hoboken, NJ: John Wiley & Sons.
Hausman, Jerry A. 1978. Specification tests in econometrics. Econometrica 46(6):1251–1271.
Heckman, James. 1974. Shadow prices, market wages, and labor supply. Econometrica 42(4):679–694.
Heckman, James. 1979. Sample selection bias as a specification error. Econometrica 47(1):153–161.
Heckman, James, and Navarro-Lozano, Salvador. 2004. Using matching, instrumental variables, and control functions to estimate economic choice models. Review of Economics and Statistics 86(1):30–57.
Imbens, Guido W., and Angrist, Joshua D. 1994. Identification and estimation of local average treatment effects. Econometrica 62(2):467–475.
Imbens, Guido W., and Rubin, Donald B. 1997. Estimating outcome distributions for compliers in instrumental variables models. The Review of Economic Studies 64(4):555–574.
Kalton, Graham, and Kasprzyk, Daniel. 1986. The treatment of missing survey data. Survey Methodology 12(1):1–16.
Kohut, Andrew, Keeter, Scott, Doherty, Carroll, Dimock, Michael, and Christian, Leah. 2012. Assessing the representativeness of public opinion surveys. The Pew Research Center. Available online at http://assets.pewresearch.org/wp-content/uploads/sites/5/legacy-pdf/Assessing%20the%20Representativeness%20of%20Public%20Opinion%20Surveys.pdf.
Lacy, Dean. 2001. A theory of nonseparable preferences in survey responses. American Journal of Political Science 45(2):239–258.
Lipps, Oliver, and Pekari, Nicolas. 2016. Sample representation and substantive outcomes using web with and without incentives compared to telephone in an election survey. Journal of Official Statistics 32(1):165–186.
Little, Roderick J. A. 1982. Models for nonresponse in sample surveys. Journal of the American Statistical Association 77(378):237–250.
Little, Roderick J. A. 1988. A test of missing completely at random for multivariate data with missing values. Journal of the American Statistical Association 83(404):1198–1202.
Little, Roderick J. A. 1993. Post-stratification: A modeler's perspective. Journal of the American Statistical Association 88(423):1001–1012.
Little, Roderick J. A., and Rubin, Donald B. 2014. Statistical analysis with missing data, vol. 333. Hoboken, NJ: John Wiley & Sons.
Maddala, Gangadharrao S. 1983. Limited-dependent and qualitative variables in econometrics, vol. 3. New York: Cambridge University Press.
Meyer, Bruce D., Mok, Wallace K. C., and Sullivan, James X. 2015. Household surveys in crisis. The Journal of Economic Perspectives 29(4):199–226.
Mishra, Vinod, Barrere, Bernard, Hong, R., and Khan, S. 2008. Evaluation of bias in HIV seroprevalence estimates from national household surveys. Sexually Transmitted Infections 84(Suppl 1):i63–i70.
Morgan, Stephen L., and Winship, Christopher. 2014. Counterfactuals and causal inference. New York: Cambridge University Press.
Rebitzer, James B., and Taylor, Lowell J. 2011. Extrinsic rewards and intrinsic motives: Standard and behavioral approaches to agency and labor markets. Handbook of Labor Economics 4:701–772.
Rubin, Donald B. 1976. Inference and missing data. Biometrika 63(3):581–592.
Singer, Eleanor, and Ye, Cong. 2013. The use and effects of incentives in surveys. The ANNALS of the American Academy of Political and Social Science 645(1):112–141.
Sniderman, Paul M. 2011. The logic and design of the survey experiment. In Cambridge handbook of experimental political science, ed. Druckman, J. N. et al. New York: Cambridge University Press, pp. 102–114.
Sniderman, Paul M., Brody, Richard A., and Tetlock, Phillip E. 1993. Reasoning and choice: Explorations in political psychology. New York: Cambridge University Press.
Steeh, Charlotte G. 1981. Trends in nonresponse rates, 1952–1979. Public Opinion Quarterly 45(1):40–57.
Wing, Coady. 2018. Replication data for: What can instrumental variables tell us about nonresponse in household surveys and political polls? https://doi.org/10.7910/DVN/ILTOGF, Harvard Dataverse, V1.