
When Experts Disagree: Response Aggregation and its Consequences in Expert Surveys

Published online by Cambridge University Press:  16 November 2018

René Lindstädt
School of Law and Politics, Cardiff University, Cardiff, UK
Sven-Oliver Proksch
Cologne Center for Comparative Politics, University of Cologne, Germany
Jonathan B. Slapin*
Department of Government, University of Essex, UK
*Corresponding author. Email:


Political scientists use expert surveys to assess the latent features of political actors. Experts, however, are unlikely to be equally informed about all actors or to assess them all equally well. The literature acknowledges variance in measurement quality but pays little attention to the implications of uncertainty for aggregating responses. We discuss the nature of the measurement problem in expert surveys. We then propose methods to assess experts' ability to judge where actors stand and to aggregate expert responses. We examine the effects of aggregation for a prominent survey in the literature on party politics and EU integration. Using a Monte Carlo simulation, we demonstrate that it is better to aggregate expert responses using the median or modal response rather than the mean.
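The intuition behind the abstract's aggregation claim can be illustrated with a minimal Monte Carlo sketch. This is not the authors' simulation; the scale, noise levels, and expert counts below are illustrative assumptions. A minority of poorly informed experts who effectively guess at random pulls the mean away from the true position, while the median and mode remain anchored by the well-informed majority.

```python
import random
import statistics

def simulate(n_trials=2000, true_pos=5, scale=(1, 7), seed=42):
    """Compare mean, median, and modal aggregation of noisy expert ratings.

    Hypothetical setup: 10 well-informed experts rate a party's position
    with small Gaussian error; 3 poorly informed experts guess uniformly
    on the scale. Returns the average absolute error of each aggregator.
    """
    rng = random.Random(seed)
    err = {"mean": 0.0, "median": 0.0, "mode": 0.0}
    lo, hi = scale
    for _ in range(n_trials):
        # Well-informed experts: small error, clipped to the rating scale.
        good = [min(max(round(rng.gauss(true_pos, 0.5)), lo), hi)
                for _ in range(10)]
        # Poorly informed experts: uniform guesses across the scale.
        bad = [rng.randint(lo, hi) for _ in range(3)]
        ratings = good + bad
        err["mean"] += abs(statistics.mean(ratings) - true_pos)
        err["median"] += abs(statistics.median(ratings) - true_pos)
        err["mode"] += abs(statistics.mode(ratings) - true_pos)
    return {k: v / n_trials for k, v in err.items()}

errors = simulate()
```

Under these assumptions the median and modal errors come out well below the mean's, because the uniform guessers (scale midpoint 4, true position 5) bias the mean but rarely shift the middle or most frequent rating.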

Research Notes
Copyright © The European Political Science Association 2018



Supplementary material: Lindstädt et al. supplementary material (PDF, 216.7 KB) and dataset.