Published online by Cambridge University Press: 16 January 2017
Many commonly used data sources in the social sciences suffer from non-random measurement error, understood as mis-measurement of a variable that is systematically related to another variable. We argue that studies relying on potentially suspect data should take the threat this poses to inference seriously and address it in a principled manner. In this article, we aid researchers in this task by introducing a sensitivity analysis approach to non-random measurement error. The method can be used for any type of data or statistical model, is simple to execute, and is straightforward to communicate. This makes it possible for researchers to routinely report the robustness of their inferences to the presence of non-random measurement error. We demonstrate the sensitivity analysis approach by applying it to two recent studies.
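To illustrate the logic the abstract describes, the following is a minimal sketch (not the authors' published procedure) of a sensitivity analysis for non-random measurement error: we contaminate a regressor with error whose strength `delta` depends systematically on the outcome, re-estimate a simple OLS model at each value of `delta`, and inspect how the coefficient of interest moves. All variable names and the data-generating process are hypothetical and chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical data-generating process: y = 1 + 0.5*x + noise
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + rng.normal(size=n)

def ols_slope(x_obs, y_obs):
    """Slope coefficient from an OLS regression of y_obs on x_obs (with intercept)."""
    X = np.column_stack([np.ones_like(x_obs), x_obs])
    beta, *_ = np.linalg.lstsq(X, y_obs, rcond=None)
    return beta[1]

# Sensitivity sweep: mis-measure x with error that is systematically
# related to y (non-random measurement error), varying its strength delta.
results = {}
for delta in [0.0, 0.25, 0.5, 1.0]:
    x_obs = x + delta * (y - y.mean()) + rng.normal(scale=0.1, size=n)
    results[delta] = ols_slope(x_obs, y)

for delta, b in results.items():
    print(f"delta={delta:.2f}  estimated slope={b:.3f}")
```

At `delta=0` the estimate recovers the true slope; as `delta` grows, the estimate drifts away from it, showing how a researcher could report the range of error strengths under which a substantive conclusion survives.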
Max Gallop is a Lecturer, Department of Government and Public Policy, University of Strathclyde, 16 Richmond St., Glasgow G1 1XQ (firstname.lastname@example.org). Simon Weschle is a Junior Research Fellow, Carlos III-Juan March Institute, Calle Madrid 135, Building 18, 28903 Getafe, Madrid (email@example.com). The authors are grateful for helpful comments and suggestions from Florian Hollenbach, Kosuke Imai, Jack Paine, Jan Pierskalla, Michael Ward, Natalie Jackson, Nils Weidmann, participants of the 2014 Annual Summer Meeting of the Society for Political Methodology at the University of Georgia, and the PSRM reviewers and editors. To view supplementary material for this article, please visit https://doi.org/10.1017/psrm.2016.53