Luke W. Miratrix, Jasjeet S. Sekhon, Alexander G. Theodoridis and Luis F. Campos
"Worth Weighting? How to Think About and Use Weights in Survey Experiments”
Political Analysis Volume 25 Issue 1
Selection committee: Pablo Barberá (LSE), Jennifer Pan (Stanford), and Jeff Gill (American University)
On behalf of the prize committee, I'm delighted to announce the winner of the Society for Political Methodology's 2019 Miller Prize for the best paper published in Political Analysis. This year the prize goes to the article "Worth Weighting? How to Think About and Use Weights in Survey Experiments" by Luke W. Miratrix, Jasjeet S. Sekhon, Alexander G. Theodoridis, and Luis F. Campos (https://doi.org/10.1017/pan.2018.1). Please join us in congratulating the authors on this excellent piece of scholarship. The abstract appears below.
The popularity of online surveys has increased the prominence of using sampling weights to enhance claims of representativeness. Yet much uncertainty remains regarding how these weights should be employed in survey experiment analysis: should they be used? If so, which estimators are preferred? We offer practical advice, rooted in the Neyman–Rubin model, for researchers working with survey experimental data. We examine simple, efficient estimators, and give formulas for their biases and variances. We provide simulations that examine these estimators as well as real examples from experiments administered online through YouGov. We find that for examining the existence of population treatment effects using high-quality, broadly representative samples recruited by top online survey firms, sample quantities, which do not rely on weights, are often sufficient. Sample average treatment effect (SATE) estimates did not differ substantially from their weighted counterparts, and they avoided the substantial loss of statistical power that accompanies weighting. When precise estimates of population average treatment effects (PATE) are essential, we analytically show that poststratifying on survey weights and/or covariates highly correlated with outcomes is a conservative choice. While we show substantial gains in simulations, we find limited evidence of them in practice.
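To make the abstract's contrast concrete, the sketch below compares an unweighted difference in means (the SATE estimate) with a poststratified estimate that reweights within-cell effects by population shares derived from survey weights. All variable names and the simulated data-generating process are illustrative assumptions, not the authors' data or exact estimators.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical survey-experiment data (assumption: 4 poststratification
# cells with fixed survey weights, treatment effect varying across cells).
n = 2000
strata = rng.integers(0, 4, size=n)           # poststratification cell
w = np.array([0.5, 1.0, 1.5, 3.0])[strata]    # survey weight per respondent
t = rng.integers(0, 2, size=n)                # randomized treatment indicator
tau = 0.2 + 0.1 * strata                      # true effect differs by cell
y = strata + tau * t + rng.normal(0, 1, n)    # observed outcome

# Unweighted difference in means: estimates the SATE.
sate = y[t == 1].mean() - y[t == 0].mean()

# Poststratified estimator: within-cell differences in means, combined
# using weight-implied population shares -- one route toward the PATE.
shares = np.array([w[strata == s].sum() for s in range(4)]) / w.sum()
cell_effects = np.array([
    y[(strata == s) & (t == 1)].mean() - y[(strata == s) & (t == 0)].mean()
    for s in range(4)
])
pate_ps = (shares * cell_effects).sum()

print(f"SATE estimate: {sate:.3f}")
print(f"Poststratified PATE estimate: {pate_ps:.3f}")
```

In this toy setup the two estimates diverge because the weights up-weight cells with larger effects; with weights close to uniform, or effects that are homogeneous across cells, the two would coincide, which is the situation the abstract describes for high-quality online samples.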