A Pairwise Comparison Framework for Fast, Flexible, and Reliable Human Coding of Political Texts

Published online by Cambridge University Press:  05 September 2017

DAVID CARLSON, Washington University in St. Louis
JACOB M. MONTGOMERY, Washington University in St. Louis

David Carlson is a PhD candidate, Washington University in St. Louis, Department of Political Science, Campus Box 1063, One Brookings Drive, St. Louis, MO 63130-4899 (carlson.david@wustl.edu).
Jacob M. Montgomery is an Associate Professor, Washington University in St. Louis, Department of Political Science, Campus Box 1063, One Brookings Drive, St. Louis, MO 63130-4899 (jacob.montgomery@wustl.edu).

Abstract

Scholars are increasingly utilizing online workforces to encode latent political concepts embedded in written or spoken records. In this letter, we build on past efforts by developing and validating a crowdsourced pairwise comparison framework for encoding political texts that combines the human ability to understand natural language with the ability of computers to aggregate data into reliable measures while ameliorating concerns about the biases and unreliability of non-expert human coders. We validate the method with advertisements for U.S. Senate candidates and with State Department reports on human rights. The framework we present is very general, and we provide free software to help applied researchers interact easily with online workforces to extract meaningful measures from texts.

Type
Research Article
Copyright
Copyright © American Political Science Association 2017

Footnotes

We thank Burt Monroe, John Freeman, and Brandon Stewart for providing comments on a previous version of this paper. We are indebted to Ryden Butler, Dominic Jarkey, Jon Rogowski, Erin Rossiter, and Michelle Torres for their assistance with this project. We particularly wish to thank Matt Dickenson for his programming assistance. We also appreciate the assistance in the R package development from David Flasterstein, Joseph Ludmir, and Taishi Muraoka. We are grateful for the financial support provided by the Weidenbaum Center on the Economy, Government, and Public Policy. Finally, we wish to thank the partner-workers at Amazon’s Mechanical Turk who make this research possible.

Supplementary material

Carlson et al. dataset (link)
Carlson and Montgomery supplementary material (PDF, 496 KB)