
1 - Defining hard-to-survey populations

Published online by Cambridge University Press: 05 September 2014

Roger Tourangeau (Westat Research Organisation, Maryland)
Brad Edwards (Westat Research Organisation, Maryland)
Timothy P. Johnson (University of Illinois, Chicago)
Kirk M. Wolter (University of Chicago)
Nancy Bates (US Census Bureau)

Summary

Introduction

This book is about populations that are hard to survey in different ways. It focuses on populations of people rather than establishments or institutions. In an era of falling response rates for surveys (Brick & Williams, 2013; Curtin, Presser, & Singer, 2005; de Leeuw & de Heer, 2002), it may seem that all household populations are hard to survey, but some populations present special challenges that make them harder to survey than the general population. Some of these hard-to-survey populations are rare; others are hidden; some are difficult to find or contact; still others are unlikely to cooperate with survey requests. This chapter distinguishes the major challenges that make populations hard to survey and reviews attempts to quantify how hard to survey different populations are.

One way to classify the various sources of difficulty is by the survey operation they affect. In this chapter, we distinguish populations that are hard to sample, those whose members are hard to identify, those that are hard to find or contact, those whose members are hard to persuade to take part, and those whose members are willing to take part but nonetheless hard to interview. These distinctions reflect the main steps in many surveys. First, a sample is selected. Often, the next operation is identifying members of the target population, for example, through screening interviews. Then, the sample members must be found and contacted. Once contact is made, sample members have to be persuaded to do the survey. And, finally, the willing respondents have to have whatever abilities are needed to provide the requested data, or special steps have to be taken to accommodate them. As we shall see, with any given population, problems can arise with each of these operations, making the population hard to survey. And, as will become clear, some hard-to-survey populations present combinations of several kinds of trouble.
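The classification above, difficulties keyed to the survey operation they affect, can be sketched as a small taxonomy. Everything below (the `Difficulty` categories, the `Population` class, and the example) is an illustrative invention for exposition, not notation from the chapter:

```python
from dataclasses import dataclass, field
from enum import Enum

# The five sources of difficulty the chapter distinguishes, keyed to the
# survey operation they affect. Names here are hypothetical illustrations.
class Difficulty(Enum):
    HARD_TO_SAMPLE = "sampling"            # e.g., rare populations
    HARD_TO_IDENTIFY = "screening"         # e.g., hidden populations
    HARD_TO_FIND_OR_CONTACT = "contact"    # e.g., highly mobile persons
    HARD_TO_PERSUADE = "recruitment"       # e.g., reluctant respondents
    HARD_TO_INTERVIEW = "interviewing"     # e.g., language or ability barriers

@dataclass
class Population:
    name: str
    difficulties: set[Difficulty] = field(default_factory=set)

    def is_hard_to_survey(self) -> bool:
        # A population is hard to survey if any survey operation is affected.
        return bool(self.difficulties)

# A single population can combine several kinds of trouble:
homeless = Population(
    "persons experiencing homelessness",
    {Difficulty.HARD_TO_SAMPLE, Difficulty.HARD_TO_FIND_OR_CONTACT},
)
```

The point of the sketch is only that the categories are independent dimensions: a population may present any subset of them, which is why the chapter treats them operation by operation.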

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2014


References

Abraham, K. G., Helms, S., & Presser, S. (2009). How social processes distort measurement: the impact of survey nonresponse on estimates of volunteer work in the United States. American Journal of Sociology, 114(4), 1129–65.
Abraham, K. G., Maitland, A., & Bianchi, S. M. (2006). Nonresponse in the American Time Use Survey: who is missing from the data and how much does it matter? Public Opinion Quarterly, 70(5), 676–703.
Ardilly, P., & Le Blanc, D. (2001). Sampling and weighting a survey of homeless persons: a French example. Survey Methodology, 27(1), 109–18.
Bates, N., & Mulry, M. H. (2011). Using a geographic segmentation to understand, predict, and plan for census and survey mail nonresponse. Journal of Official Statistics, 27(4), 601–18.
Blakely, E. J., & Snyder, M. G. (1997). Fortress America: Gated Communities in the United States. Washington, DC: The Brookings Institution.
Blumberg, S. J., & Luke, J. V. (2012). Wireless Substitution: Early Release of Estimates from the National Health Interview Survey, July–December 2011. Atlanta, GA: Centers for Disease Control and Prevention, National Center for Health Statistics.
Brick, J. M., Montaquila, J. M., Han, D., & Williams, D. (2012). Improving response rates for Spanish speakers in two-phase mail surveys. Public Opinion Quarterly, 76(4), 721–32.
Brick, J. M., & Williams, D. (2013). Reasons for increasing nonresponse in U.S. household surveys. ANNALS of the American Academy of Political and Social Science, 645, 36–59.
Brown, J. J., Diamond, I. D., Chambers, R. L., Buckner, L. J., & Teague, A. D. (1999). A methodological strategy for a one-number census in the U.K. Journal of the Royal Statistical Society: Series A (Statistics in Society), 162(2), 247–67.
Bruce, A., & Robinson, J. G. (2003). Tract Level Planning Database with Census 2000 Data. Washington, DC: US Census Bureau.
Coleman, J. S. (1958–59). Relational analysis: the study of social organizations with survey methods. Human Organization, 17(4), 28–36.
Couper, M. P., & Ofstedal, M. B. (2009). Keeping in contact with mobile sample members. In Lynn, P. (ed.), Methodology of Longitudinal Surveys (pp. 183–203). Chichester: John Wiley & Sons.
Curtin, R., Presser, S., & Singer, E. (2005). Changes in telephone survey nonresponse over the past quarter century. Public Opinion Quarterly, 69(1), 87–98.
de Leeuw, E. D., & de Heer, W. (2002). Trends in household survey nonresponse: a longitudinal and international comparison. In Groves, R., Dillman, D. A., Eltinge, J. L., & Little, R. J. A. (eds.), Survey Nonresponse (pp. 41–54). New York: John Wiley & Sons.
Edwards, S., Fraser, S., & King, H. (2011). CHIS 2009 Methodology Series: Report 2 – Data Collection Methods. Los Angeles, CA: UCLA Center for Health Policy Research.
Goodman, L. A. (2011). Comment: on respondent-driven sampling and snowball sampling in hard-to-reach populations and snowball sampling not in hard-to-reach populations. Sociological Methodology, 41(1), 347–53.
Groves, R. M., & Couper, M. P. (1998). Nonresponse in Household Surveys. New York: John Wiley & Sons.
Groves, R. M., Couper, M. P., Presser, S., Singer, E., Tourangeau, R., Acosta, G. P. et al. (2006). Experiments in producing nonresponse bias. Public Opinion Quarterly, 70(5), 720–36.
Groves, R. M., Presser, S., & Dipko, A. (2004). The role of topic interest in survey participation decisions. Public Opinion Quarterly, 68(1), 2–31.
Groves, R. M., Singer, E., & Corning, A. (2000). Leverage-salience theory of survey participation. Public Opinion Quarterly, 64(3), 299–308.
Heckathorn, D. D. (1997). Respondent-driven sampling: a new approach to the study of hidden populations. Social Problems, 44(2), 174–79.
Heckathorn, D. D. (2007). Extensions of respondent-driven sampling: analyzing continuous variables and controlling for differential recruitment. In Xie, Y. (ed.), Sociological Methodology (pp. 151–207). Boston, MA: Blackwell.
Heckathorn, D. D. (2011). Comment: snowball versus respondent-driven sampling. Sociological Methodology, 41(1), 355–66.
Horrigan, M., Moore, W., Pedlow, S., & Wolter, K. (1999). Undercoverage in a large national screening survey for youths? In Joint Statistical Meetings Proceedings, Survey Research Methods Section (pp. 570–75). Alexandria, VA: American Statistical Association.
Judkins, D., DiGaetano, R., Chu, A., & Shapiro, G. (1999). Coverage in screening surveys at Westat. In Joint Statistical Meetings Proceedings, Survey Research Methods Section (pp. 581–86). Alexandria, VA: American Statistical Association.
Kalton, G. (2009). Methods for oversampling rare subpopulations in social surveys. Survey Methodology, 35(2), 125–41.
Kalton, G., & Anderson, D. W. (1986). Sampling rare populations. Journal of the Royal Statistical Society: Series A (General), 149(1), 65–82.
Keeter, S., Smith, G., Kennedy, C., Turakhia, C., Schulman, M., & Brick, J. M. (2008). Questionnaire and fieldwork challenges in a probability sample survey of Muslim Americans.
Kreuter, F., McCulloch, S. K., Presser, S., & Tourangeau, R. (2011). The effects of asking filter questions in interleafed versus grouped format. Sociological Methods and Research, 40(1), 88–104.
Lee, S., Mathiowetz, N. A., & Tourangeau, R. (2007). Measuring disability in surveys: consistency over time and across respondents. Journal of Official Statistics, 23(2), 163–84.
Lepkowski, J., & Couper, M. P. (2002). Nonresponse in the second wave of longitudinal household surveys. In Groves, R., Dillman, D. A., Eltinge, J. L., & Little, R. J. A. (eds.), Survey Nonresponse (pp. 259–72). New York: John Wiley & Sons.
Lohr, S. L., & Rao, J. N. K. (2000). Inference from dual frame surveys. Journal of the American Statistical Association, 95(449), 271–80.
Martin, E. A. (1999). Who knows who lives here? Within-household disagreements as a source of survey coverage error. Public Opinion Quarterly, 63(2), 220–36.
Moore, J. C. (1988). Self-proxy response status and survey response quality. Journal of Official Statistics, 4(2), 155–72.
Passel, J. S. (2006). The size and characteristics of the unauthorized migrant population in the U.S.: estimates based on the March 2005 Current Population Survey. Pew Hispanic Center Research Report. Washington, DC: Pew Hispanic Center.
Robinson, J. G., Johanson, C., & Bruce, A. (2007, July–August). The Planning Database: Decennial Data for Historical, Real-time, and Prospective Analysis. Paper presented at the 2007 Joint Statistical Meetings, Salt Lake City, UT.
Shapiro, G., Diffendal, G., & Cantor, D. (1993). Survey undercoverage: major causes and new estimates of magnitude. In Proceedings of the 1993 U.S. Bureau of the Census Annual Research Conference. Washington, DC: US Department of Commerce.
Skidmore, S., Barrett, K., Wright, D., & Gardner, J. (2012). Conducting Surveys with Proxies: Evaluating a Standardized Measure of Need. Working Paper. Princeton, NJ: Mathematica Policy Research.
Steeh, C., Kirgis, N., Cannon, B., & DeWitt, J. (2001). Are they really as bad as they seem? Nonresponse rates at the end of the twentieth century. Journal of Official Statistics, 17(2), 227–47.
Sudman, S., Sirken, M. G., & Cowan, C. D. (1988). Sampling rare and elusive populations. Science, 240, 991–96.
Tourangeau, R., Groves, R. M., & Redline, C. D. (2010). Sensitive topics and reluctant respondents: demonstrating a link between nonresponse bias and measurement error. Public Opinion Quarterly, 74(3), 413–32.
Tourangeau, R., Kreuter, F., & Eckman, S. (2012). Motivated underreporting in screening surveys. Public Opinion Quarterly, 76(3), 453–69.
Tourangeau, R., Rips, L. J., & Rasinski, K. (2000). The Psychology of Survey Response. Cambridge: Cambridge University Press.
Tourangeau, R., Shapiro, G., Kearney, A., & Ernst, L. (1997). Who lives here? Survey undercoverage and household roster questions. Journal of Official Statistics, 13(1), 1–18.
US Census Bureau (2006). Design and Methodology: Current Population Survey. Technical Paper 66. Washington, DC: US Census Bureau and US Bureau of Labor Statistics.
West, J., Denton, K., & Germino Hausken, E. (2000). America's Kindergartners (NCES 2000–070). Washington, DC: US Department of Education, National Center for Education Statistics.
