Social scientists have increasingly turned to expert judgments to generate data for difficult-to-measure concepts, but gaining access to, and responses from, highly expert informants can be costly and challenging. We examine how informant selection and post-survey response aggregation influence the validity and reliability of measures built from informant observations. We draw upon three surveys with parallel questions about candidate characteristics to examine the trade-off between expanding the size of the local informant pool and the pool's level of expertise. We find that a “wisdom-of-crowds” effect trumps the benefits associated with the expertise of individual informants when the size of the rater pool is modestly increased. We demonstrate that the benefits of expertise are best realized by prescreening potential informants for expertise rather than by post-survey weighting by expertise.
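The two aggregation strategies contrasted in the abstract can be sketched in a few lines: pooling informant ratings with a simple mean (the "wisdom-of-crowds" approach) versus weighting each rating by the informant's expertise. This is an illustrative sketch, not the authors' actual estimator; the ratings and `expertise` scores below are hypothetical.

```python
# Minimal sketch comparing unweighted and expertise-weighted aggregation
# of informant ratings. All data here are hypothetical.

def aggregate(ratings, weights=None):
    """Return the mean rating; a weighted mean if weights are given."""
    if weights is None:
        return sum(ratings) / len(ratings)
    return sum(r * w for r, w in zip(ratings, weights)) / sum(weights)

# Hypothetical ratings of one candidate trait from five local informants.
ratings = [4, 5, 3, 4, 5]
# Hypothetical expertise scores for the same five informants.
expertise = [1, 3, 1, 2, 3]

simple_mean = aggregate(ratings)               # pooled "crowd" estimate
weighted_mean = aggregate(ratings, expertise)  # post-survey expertise weighting

print(simple_mean, weighted_mean)  # 4.2 4.5
```

With a modestly larger pool, the unweighted mean tends to stabilize on its own, which is the intuition behind the paper's finding that prescreening informants beats post-hoc weighting.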