
Assessing the Impact of Non-Random Measurement Error on Inference: A Sensitivity Analysis Approach

Published online by Cambridge University Press:  16 January 2017


Many commonly used data sources in the social sciences suffer from non-random measurement error, understood as mis-measurement of a variable that is systematically related to another variable. We argue that studies relying on potentially suspect data should take the threat this poses to inference seriously and address it routinely in a principled manner. In this article, we aid researchers in this task by introducing a sensitivity analysis approach to non-random measurement error. The method can be used for any type of data or statistical model, is simple to execute, and straightforward to communicate. This makes it possible for researchers to routinely report the robustness of their inference to the presence of non-random measurement error. We demonstrate the sensitivity analysis approach by applying it to two recent studies.
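The article does not spell out its procedure here, but the general logic of a sensitivity analysis for non-random measurement error can be sketched as follows: posit a mis-measurement process whose severity depends on another variable, re-estimate the model of interest across a grid of severity values, and report how the estimate moves. The sketch below is a hypothetical illustration of that logic, not the authors' exact method; the variable names, the error model (noise scaled by `|z|`), and the severity grid `gamma` are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: regressor x is observed with error whose magnitude
# depends on another variable z -- i.e., "non-random" measurement error.
n = 1_000
z = rng.normal(size=n)                  # variable driving the error
x_true = rng.normal(size=n)             # latent, correctly measured regressor
y = 1.0 + 0.5 * x_true + rng.normal(scale=0.5, size=n)

def ols_slope(x, y):
    """Slope coefficient from a bivariate OLS fit of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

# Sensitivity analysis: sweep the severity of the z-dependent error and
# track how the estimated slope moves away from its error-free value.
for gamma in [0.0, 0.5, 1.0, 2.0]:
    noise = rng.normal(scale=1.0, size=n) * (1.0 + gamma * np.abs(z))
    x_obs = x_true + noise              # mis-measured regressor
    print(f"gamma={gamma:.1f}  slope={ols_slope(x_obs, y):+.3f}")
```

Reading the output as a sensitivity curve shows at what severity of non-random error the substantive conclusion (here, a positive slope) would no longer hold; this is the kind of robustness statement the abstract argues researchers should report routinely.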

Original Articles
© The European Political Science Association 2017 




Max Gallop is a Lecturer in the Department of Government and Public Policy, University of Strathclyde, 16 Richmond St., Glasgow G1 1XQ. Simon Weschle is a Junior Research Fellow at the Carlos III-Juan March Institute, Calle Madrid 135, Building 18, 28903 Getafe, Madrid. For their helpful comments and suggestions, the authors thank Florian Hollenbach, Kosuke Imai, Jack Paine, Jan Pierskalla, Michael Ward, Natalie Jackson, Nils Weidmann, participants of the 2014 Annual Summer Meeting of the Society for Political Methodology at the University of Georgia, and the PSRM reviewers and editors. Supplementary material for this article is available online.


Supplementary material

Gallop and Weschle supplementary material: Online Appendix (PDF)
Gallop and Weschle Dataset (link)