Q&A on the use of Google Consumer Surveys for social science research, with Lie Philip Santoso, Robert Stein, and Randy Stevenson

There’s a great deal of discussion these days about survey and polling methodology.  A series of close and surprising elections worldwide, including Brexit, the Colombian peace referendum, and the 2016 U.S. presidential election, has led many to wonder about the state of the field.

Traditional approaches to surveying and polling the public are being questioned.  Response rates to traditional telephone surveys have declined dramatically, which has led to a great deal of research on other ways of contacting potential survey respondents, including reaching them on mobile phones, using robo-calls, and surveying people online.  New methods for sampling and conducting surveys, online polls and surveys in particular, are being developed and implemented, and researchers are constantly on the lookout for new ways to collect opinion and behavior data from the public, as well as for new ways to conduct larger-scale experiments.

One of these new survey methods is Google Consumer Surveys, and it presents researchers with a potentially useful tool for their work.  Recently, in Political Analysis we published a paper by Lie Philip Santoso, Robert Stein, and Randy Stevenson, “Survey Experiments with Google Consumer Surveys:  Promise and Pitfalls for Academic Research in Social Science” (free access until the end of May 2017).  Their paper takes a close look at the use of Google Consumer Surveys for academic research, in particular how it might be an effective platform for survey experiments.

I recently conducted a Q&A with the authors, which provides a bit of background about their research and their results.  The questions are mine, the answers are theirs.  My thanks to the authors for participating in this Q&A!

– R. Michael Alvarez, Editor, Political Analysis

1. What are Google Consumer Surveys?

Google Consumer Surveys (GCS) is a new survey tool developed by Google. Much like the familiar “pay wall” that asks users to pay for certain kinds of premium content, Google has created a “survey wall” that asks users to take a short survey in order to access such content. These surveys consist of one to ten short questions and are created and added to a pool of active surveys by researchers. The specific survey a user sees is randomly selected, with some demographic targeting as responses come in.

2. What are the strengths of using GCS for social science research, and the weaknesses?

Strengths of using GCS for Social Science research:

1.    GCS is significantly less expensive than other online survey platforms for surveys with a small number of questions (i.e., 10 questions or fewer). The cost structure makes the 10-question survey particularly attractive (several times less expensive than any other offering currently available), and it is also this length that appears to offer the most flexibility and usefulness to social scientists.

2.    In our paper, we argue that the platform should be particularly attractive for survey experimenters. The process of randomly matching respondents to surveys (net of demographic targeting) produces good balance across different treatment groups (we show this in our paper for both Google’s “inferred demographics” and self-reported demographic and opinion variables).  Likewise, although the information on respondents is limited, GCS provides some of it for free (e.g., respondent demographics such as age, gender, and income inferred from browsing histories and locations, as well as response times), which allows for both explorations of treatment effect heterogeneity and the inclusion of manipulation checks.  The sketch below illustrates how a simple balance check on the inferred demographics might look in practice.
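To make the balance point concrete, here is a minimal sketch, our own illustration rather than code from the paper, of checking covariate balance across randomly assigned treatment arms using the inferred demographics GCS returns. The file name and column names (gcs_responses.csv, treatment_arm, inferred_age_bracket, and so on) are hypothetical placeholders.

    # Minimal balance-check sketch (illustrative only, not the authors' code).
    # Assumes a hypothetical CSV export with one row per respondent.
    import pandas as pd
    from scipy.stats import chi2_contingency

    responses = pd.read_csv("gcs_responses.csv")  # hypothetical file name

    for covariate in ["inferred_age_bracket", "inferred_gender", "inferred_income"]:
        # Cross-tabulate each inferred demographic against the treatment arm;
        # under successful random assignment the arms should look alike.
        table = pd.crosstab(responses[covariate], responses["treatment_arm"])
        chi2, p_value, dof, _ = chi2_contingency(table)
        print(f"{covariate}: chi2 = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")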

Weaknesses of using GCS for Social Science research:

1.    While we find little evidence that GCS’s non-randomly selected panel is significantly less representative (after demographic targeting) than other non-random internet samples (e.g., from opt-in panels), it does differ from these other options in that it relies on the survey wall to sample respondents. This environment interrupts respondents’ browsing activities with survey questions and “coerces” them to provide a response before allowing them to proceed to their target content. Because respondents have not agreed beforehand to participate in the survey, they may choose not to respond (which leads to a high rate of non-response) or decline to engage with the questions by not reading them carefully. When they do this with respect to treatment questions in a survey experiment, there is a possibility that the treatment is not delivered and the experimental effects will be weakened.

2.    Google is keenly aware of the imposition the survey wall places on potential GCS respondents. Given the likely low level of motivation of such respondents, Google has concluded that it is essential for survey questions to be short and simple. This imposes significant restrictions on how researchers can construct their questions. Although there are ways to work around this limitation, it is difficult for researchers to faithfully implement the exact wording of a given question.

3. Which types of social science research are GCS well-suited for?

Our starting assumption is that most social scientists are interested in learning about the causal connections between some set of treatment variables and some set of outcomes. Thus, each must adopt some identification strategy that allows them to give an estimated association a causal interpretation.  The most common strategy is selection on observables, which often requires controlling for a large number of variables.  Given the short format of Google Consumer Surveys, it seems less likely to be useful for researchers relying on such an identification strategy.

In contrast, researchers using randomized assignment of treatments as their identification strategy can avoid having to control for a large number of potential confounders, so GCS may be more attractive for these kinds of studies.  Further, we argue there is room in the 10-question format (and given the free information provided) to also explore treatment effect heterogeneity, often a critical aim of experimental studies.
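As an illustration of what exploring heterogeneity can look like in practice, here is a minimal sketch, again our own and not from the paper, that interacts a randomized treatment indicator with one of the inferred demographics GCS provides; the variable names (outcome, treated, inferred_income) are hypothetical.

    # Heterogeneity sketch (hypothetical variable names, not the authors' code).
    import pandas as pd
    import statsmodels.formula.api as smf

    responses = pd.read_csv("gcs_responses.csv")  # hypothetical GCS export

    # The treated:C(inferred_income) coefficients indicate whether the average
    # treatment effect differs across the inferred income brackets.
    model = smf.ols("outcome ~ treated * C(inferred_income)", data=responses).fit()
    print(model.summary())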

GCS is also sufficiently inexpensive that it can be a useful tool in teaching survey methods, since students can afford to design and field a short survey of several hundred respondents for no more than a typical lab fee.

Overall, GCS could allow scholars with limited research budgets to make progress on research that would not otherwise be possible, and to better teach survey methods. Even for well-funded scholars, it could provide an affordable way to run a wider variety of short pretests before committing to a longer instrument; for survey experimenters, GCS is likely sufficient for their final analyses.

4. Thinking about the evolution of survey methodologies over the past decade, how do you imagine that researchers will be conducting surveys and polls in the next decade?

The evolution of survey methodology will continue to move away from live telephone interviewing toward web-based surveys.  As gaps in access to the internet, and to the devices that connect individuals to the web, narrow among the general population, the lower costs and timeliness of web-based surveys will make them increasingly popular among scholars, market researchers, and the media. Moreover, web-based surveys have significant advantages over live telephone surveys, including higher response rates, access to hard-to-reach populations, and reduced ‘social desirability’ effects that live telephone interviews have on the reliability of survey responses.  Web-based surveys also enable the researcher to present a wider range of content to the survey respondent (e.g., video and still pictures) that better measures the researcher’s explanatory variables.

5. Why did you and your colleagues decide to study the GCS survey methodology?  Are you using it in your own research?

Our work involves designing and fielding a wide variety of different kinds of surveys across the world, and we have increasingly turned to web-based solutions. As such, we are very interested in understanding both the strengths and weaknesses of different web-based survey methods (and sampling strategies). When we first discovered GCS, it clearly provided a very low-cost and quick means of at least testing new questions, but its unusual sampling strategy was worrisome and the limited number of questions seemed too restrictive. One of us, however, began to use it to run quick question-wording experiments (mostly as a pre-testing exercise), and it quickly became apparent that this use played to the strengths of the platform, which sparked our interest in a wider investigation.

Since we have come to understand GCS’s strengths and limitations, we use it in our own work both for survey experimentation and for projects in which we need to do a lot of pre-testing and design work.  The short turnaround with GCS (6-8 hours) and low price point make it really useful when one is designing a survey with multiple collaborators (all with lots of different ideas) and needs to test lots of different possibilities. Further, it allows one to build questions from a sequence of tests (i.e., test one idea, modify it, test again, etc.) rather than having to do everything in a one-shot “pilot study”.

For example, in a recent project about racial identity and candidate selection, one of us (and his collaborators) went through exactly this process, testing different survey questions using GCS.  The result was a well-vetted set of questions and survey experiments that we will field as part of a longer survey on another web-based survey platform (Toluna).

Finally, we are using GCS in the classroom (specifically an undergraduate policy analysis class).  Here, we received a small teaching grant to fund students fielding their own surveys using GCS.

6. Given all of the discussion about the accuracy of polling in the recent U.S. presidential election, do you think that online polling methods like GCS might be the future of public opinion polling?

It seems reasonable to expect that the use of online polling in elections and for other measures of public opinion will continue to expand, given its comparative advantages over live telephone interviewing as described in Question 4. In fact, in the 2016 election there were nearly as many Internet surveys as traditional, live-interview telephone surveys (90 vs 96 national polls). Compare this to the 2012 election in which there were 26 online surveys and more than 100 live-interview national polls.

Of course, there are big challenges remaining for online polling. First is the simple fact that most internet surveys do not sample randomly from the population of interest, though, when one accounts for non-response, neither do live telephone interview polls. In both cases, researchers use extensive statistical modeling to correct for non-representative samples, though the state of the art for this kind of weighting is less definitive than most researchers understand (see the thread on Andy Gelman’s blog). Second, even if the samples in online polls match those from live interviews on the kinds of demographics typically measured (and used in weighting), there are likely unmeasured variables that affect whether individuals sign up for (and persist as part of) online panels and that may also affect the usual foci of public opinion polls (e.g., political interest). Certainly, there needs to be more work identifying such potentially confounding variables if internet (panel-based) surveys become the workhorse of public opinion polling.
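To make the weighting point concrete, the following is a minimal sketch of simple cell-based post-stratification: each demographic cell is weighted by its population share relative to its sample share. This is our illustration rather than anything from the paper, and the file names, cell variables (age_bracket, gender), and the 0/1 approve item are hypothetical; full multilevel regression with post-stratification is considerably more involved.

    # Simple post-stratification sketch (illustrative only).
    import pandas as pd

    sample = pd.read_csv("gcs_responses.csv")            # hypothetical survey export
    population = pd.read_csv("census_cell_shares.csv")   # hypothetical: age_bracket, gender, pop_share

    # Share of each age-by-gender cell in the sample.
    cells = (sample.groupby(["age_bracket", "gender"]).size() / len(sample))
    cells = cells.rename("sample_share").reset_index()

    # Weight = population share / sample share for the respondent's cell.
    cells = cells.merge(population, on=["age_bracket", "gender"])
    cells["weight"] = cells["pop_share"] / cells["sample_share"]

    weighted = sample.merge(cells[["age_bracket", "gender", "weight"]], on=["age_bracket", "gender"])
    # Weighted estimate of a 0/1 opinion item (e.g., approval).
    estimate = (weighted["approve"] * weighted["weight"]).sum() / weighted["weight"].sum()
    print(f"Post-stratified estimate: {estimate:.3f}")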

With regard to GCS specifically, its “river” sampling strategy is potentially even more problematic than panel sampling (though recent work on multilevel modeling with post-stratification has shown a lot of promise even with widely non-representative samples), so we do not generally recommend it for generic public opinion polling.  Instead, we have advocated its use mainly as a platform for conducting survey experiments.

In sum, we believe that web-based surveys and online polling are here to stay and will play an important role in the next generation of public opinion research. Despite lingering concerns about online samples (usually drawn from opt-in panels), the lower cost, versatile format, and speed of data collection are significant advantages that are hard for most researchers to ignore. However, it is still too early to dismiss the relevance of traditional live-interview modes of polling.

Bios:

Lie Philip Santoso is a PhD Candidate in Political Science at Rice University. His research focuses on comparative political behavior, public opinion, and survey methods.

Robert Stein is the Lena Gohlman Fox Professor of Political Science at Rice University.  His research focuses on election sciences, public policy and political behavior.

Randy Stevenson is a Professor of Political Science at Rice University. His research focuses on comparative political behavior in western democracies, including comparative economic voting and comparative political interest, participation, and knowledge.
