
A Pairwise Comparison Framework for Fast, Flexible, and Reliable Human Coding of Political Texts

  • DAVID CARLSON and JACOB M. MONTGOMERY
Abstract

Scholars are increasingly utilizing online workforces to encode latent political concepts embedded in written or spoken records. In this letter, we build on past efforts by developing and validating a crowdsourced pairwise comparison framework for encoding political texts that combines the human ability to understand natural language with the ability of computers to aggregate data into reliable measures while ameliorating concerns about the biases and unreliability of non-expert human coders. We validate the method with advertisements for U.S. Senate candidates and with State Department reports on human rights. The framework we present is very general, and we provide free software to help applied researchers interact easily with online workforces to extract meaningful measures from texts.
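
The aggregation step described in the abstract can be illustrated with a Bradley-Terry-style model, in which each document has a latent score and the probability that one document is chosen over another in a pairwise comparison depends on the difference between their scores. The R sketch below is a minimal illustration under that assumption only: the advertisement identifiers and worker judgments are simulated, and it is not the authors' software or estimation model.

# Minimal Bradley-Terry-style aggregation of pairwise comparisons (illustrative only;
# the document names and judgments below are hypothetical, not the authors' data or code).
set.seed(42)
docs  <- paste0("ad_", 1:10)      # hypothetical Senate ad identifiers
theta <- rnorm(10)                # true latent positions (unknown in practice)

# Simulate 300 crowdsourced comparisons: a worker sees two ads and picks the one
# that appears, for example, more negative in tone.
pairs <- t(replicate(300, sample(1:10, 2)))
p_win <- plogis(theta[pairs[, 1]] - theta[pairs[, 2]])
y     <- rbinom(300, 1, p_win)    # 1 if the first ad in the pair is chosen

# Design matrix: +1 for the first document in each pair, -1 for the second.
X <- matrix(0, nrow = 300, ncol = 10, dimnames = list(NULL, docs))
X[cbind(1:300, pairs[, 1])] <-  1
X[cbind(1:300, pairs[, 2])] <- -1

# Fit a logistic Bradley-Terry model; coefficients are estimated latent scores,
# identified only up to location, so the first document serves as the baseline.
fit <- glm(y ~ X[, -1] - 1, family = binomial)
round(coef(fit), 2)

A simple fit like this treats every worker as equally reliable; addressing the abstract's concern about biased or unreliable non-expert coders would require a richer specification.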

Corresponding author
David Carlson is a PhD candidate, Washington University in St. Louis, Department of Political Science, Campus Box 1063, One Brookings Drive, St. Louis, MO 63130-4899 (carlson.david@wustl.edu).
Jacob M. Montgomery is an Associate Professor, Washington University in St. Louis, Department of Political Science, Campus Box 1063, One Brookings Drive, St. Louis, MO 63130-4899 (jacob.montgomery@wustl.edu).
Footnotes

We thank Burt Monroe, John Freeman, and Brandon Stewart for providing comments on a previous version of this paper. We are indebted to Ryden Butler, Dominic Jarkey, Jon Rogowski, Erin Rossiter, and Michelle Torres for their assistance with this project. We particularly wish to thank Matt Dickenson for his programming assistance. We also appreciate the assistance in the R package development from David Flasterstein, Joseph Ludmir, and Taishi Muraoka. We are grateful for the financial support provided by the Weidenbaum Center on the Economy, Government, and Public Policy. Finally, we wish to thank the partner-workers at Amazon’s Mechanical Turk who make this research possible.

Supplementary materials

  • Carlson et al. Dataset (dataset)
  • Carlson and Montgomery supplementary material 1 (PDF, 496 KB)