
Why People Vote: Estimating the Social Returns to Voting

Published online by Cambridge University Press:  20 October 2014


Abstract

This article measures the social rewards and sanctions associated with voting. A series of survey experiments shows that information about whether a person votes directly affects how favorably that person is viewed. Importantly, the study also compares the rewards and sanctions associated with voting to other activities, including the decisions to recycle, volunteer and return one’s library books on time. It presents a behavioral test of the consequences of non-voting and finds that individuals are willing to take costly action in a dictator game to reward political participation. Finally, it shows that survey measures of social norms about voting are correlated with county-level voter turnout. The study adds to the growing literature documenting the important influence of social concerns on turnout and other political choices.

Copyright
© Cambridge University Press 2014 

Why do people participate in mass politics, despite the obvious fact that any individual’s actions have roughly zero chance of being decisive in any meaningful political context? Theories that emphasize the social and psychological benefits of political participation have received greater attention in recent years.Footnote 1 In these accounts, political participation is motivated by social pressure, norm compliance and other concerns apart from the potential policy benefits for the individual that are associated with participating. The turn to social and psychological accounts follows from the extreme difficulty that theories based on private instrumental returns have in explaining why individuals undertake costly action for collective goals despite the absence of substantial private benefits.

Although the contention that social pressure and norms may partly explain mass participation is intuitively plausible, the empirical foundation of this perspective remains largely unexplored. Recent research suggests that social dynamics – perhaps most notably the anticipation of social sanctions – can substantially affect behaviors including political participation.Footnote 2 Two key unanswered questions include whether information about people’s political participation affects how others evaluate them and whether variation in participation can be attributed to anticipating these social consequences. In this article, we document the existence of social rewards and sanctions associated with voting behavior and link these social forces to differences in observed participation.

First, we measure and calibrate the degree to which political participation affects people’s opinions of others. We present a series of survey experiments that provides causal evidence that people respond directly to information about voting behavior when forming social evaluations of others. We place the magnitude of these effects into context by comparing the social consequences of voting behavior to those associated with other pro-social activities for which the expected private benefits of action are low, including the decisions to recycle, volunteer and return one’s library books on time. We find that the social implications of the decision to vote tend to be similar in magnitude to those associated with these other decisions. This suggests that, although turnout decisions are associated with social consequences, these consequences are neither uniquely powerful, nor confined to the political realm.

Secondly, we show that individuals are willing to sanction and reward voting behavior even when doing so is personally costly. In an experimental dictator game, participants give more money to an individual who is randomly assigned to be described as having voted than to one randomly assigned to be described as not voting. This provides behavioral validation of the differences in social evaluations expressed in surveys. Of note, this willingness to undertake costly action to support a norm implies that norms may be sustained despite the fact that expressing praise or disapprobation of others is not always in an observer’s immediate self-interest. While our experimental evidence does not directly show that concern for social evaluations has a material effect on voting or other political participation, we take a tentative step toward examining this linkage below. We present evidence from a national survey that included an item that measures social norms about voting. In that analysis, we find that county-level (validated) turnout is correlated with average norms in the county. Turnout is higher in places where the belief that failing to participate will generate social sanctions is more prevalent. This finding is consistent with a prominent role for social evaluations in turnout, and is the first we are aware of to document a correlation between survey measures of norms and validated (rather than self-reported) turnout.

Our findings also have several general implications for understanding political behavior. First, we add to the growing literature documenting the important influence of social concerns on political choices. Classical writers often suggest that the psychological influence of perceptions of social standards is rooted in individuals’ sensitivity to how others assess their behavior, a characteristic that is sometimes considered a basic element of human nature. For example, in The Theory of Moral Sentiments, Adam Smith observes that:

Nature, when she formed man for society, endowed him with an original desire to please, and an original aversion to offend his brethren. She taught him to feel pleasure in their favourable, and pain in their unfavourable regard. She rendered their approbation most flattering and most agreeable to him for its own sake; and their disapprobation most mortifying and most offensive.Footnote 3

Whether sensitivity to social opinion is intrinsic or formed by social forces, we contribute to the project of documenting how social influences can shape behavior.

Our second contribution is that we provide a mechanism for continued compliance with apparent social norms. Prior field experiments have found that raising the visibility of the decision to vote increases participation.Footnote 4 We provide evidence suggesting that the effects may be driven by the accurate anticipation of social sanctions or rewards. Individuals do distinguish between those who vote and those who do not, even at a personal cost. In light of this evidence, it is not surprising that making one’s voting behavior more public increases turnout. Such publicity increases the likelihood that these social sanctions will be realized.

A final and related contribution is that the provision of social rewards and sanctions for political choices provides an explanation of the development and persistence of norms regarding political participation.Footnote 5 Although some empirical evidence shows that psychological factors such as a sense of civic duty predict political participation independently of social pressures,Footnote 6 this analysis leaves open the possibility that a sense of duty itself stems from an inference that it is one’s duty to avoid behavior that one associates with social scorn (such as failing to vote). Indeed, as the broader literature on norm enforcement suggests, selective rewards for norm compliance are the means by which norms become engrained.Footnote 7 We come to recognize behaviors that are rewarded as desirable, so we behave that way even when those means of immediate norm enforcement are withdrawn.Footnote 8

THE SOCIAL CONSEQUENCES OF TURNOUT

Early rational choice models of voting behavior focused on the instrumental benefits derived from voting (for example, the policy benefits associated with one’s preferred candidate winning) as explanations for why people do or do not vote.Footnote 9 Given the extremely small chance that one’s vote will be pivotal in deciding an election outcome, however, standard rational choice models cannot explain the fact that many people do, in fact, vote.Footnote 10 Consequently, other benefits – such as the utility derived from performing one’s civic duty and turning out on election day – became a central component of the ‘calculus of voting.’Footnote 11

Recently, Gerber, Green and Larimer found evidence that promising to reveal whether a voter participated to others in the voter’s community increased turnout; this effect was distinct from the increase in turnout associated with simply reminding voters that participating is a civic duty.Footnote 12 This finding supports the argument that the anticipation of social consequences can affect participation, and has since been replicated in other field experimental contexts.Footnote 13

Importantly, the effects of anticipating social sanctions are not likely to be confined to situations in which individuals are experimentally treated with a message that threatens to divulge their voting behavior. Researchers find that individuals believe that others monitor their behavior and can discern their thoughts and feelings at much higher rates than is actually the case. For example, people vastly overestimate the number of people who notice when their behavior deviates from social norms, as well as other people’s ability to detect when they are lying.Footnote 14 Thus many people may believe that others either explicitly monitor (or are at least aware of) their actions or can infer their feelings and beliefs from observing their behavior – for example, by noticing that they fail to mention having voted or demur when the topic of voting is raised in conversation.

We fielded a survey in December 2010 focused on the act of voting. Approximately 65 per cent of respondents reported that one or more of their family, friends, neighbors, co-workers or boss would probably or almost certainly ask whether they planned to vote in the upcoming 2012 election. Fully 27 per cent reported that one or more of those individuals would almost certainly ask if they had voted.Footnote 15 A large proportion of the population therefore anticipates that others with whom they regularly interact will ask about their participation. If people assume that their voting behavior can be observed or inferred by others, their decision to vote is open to the influence of how those others (most likely those with whom they have personal or other forms of social/network ties) will evaluate that choice – the topic of our inquiry here.

Indeed, prior work finds that voting behavior is, in part, a product of a pattern of conforming to the expectations of others in one’s social network,Footnote 16 which supports the claim that the decision to vote is shaped by the prospect of social consequences.Footnote 17 Similarly, there is evidence that individuals inflate their rates of participation for reasons of social desirability.Footnote 18 However, little work has directly demonstrated that these social consequences are real, rather than imagined.Footnote 19 Further, no research has examined whether the social consequences associated with voting behavior are uniquely large or if, instead, they are comparable to those associated with other behavioral decisions.

EXPERIMENTAL INVESTIGATIONS OF THE SOCIAL CONSEQUENCES OF VOTING

The central question we wish to address is whether the perception that abstention from voting results in social sanctions is accurate. Are people correct to expect others to evaluate them less favorably if they fail to vote? We also benchmark the social rewards/sanctions associated with voting to other behaviors. We do this using a series of experiments. The first set of experiments was included on the 2009 CCES.Footnote 20 The second set of experiments uses a convenience sample of US residents recruited using Amazon.com’s Mechanical Turk (MTurk) interface. These different recruitment and experiment delivery methods offer distinct advantages.

The CCES, which is administered by YouGov/Polimetrix, is an internet-based survey that uses a combination of sampling and matching techniques to account for the fact that opt-in internet survey respondents may differ from the general population on factors such as political interest. This process is designed to approximate a random digit-dialing sample.Footnote 21 The CCES therefore both approximates a relatively representative sample and offers a rich set of measures of demographics and other characteristics that we can use in our analysis.

The MTurk population, by contrast, is a convenience sample that appears to be more representative of the larger US population than student samples, but is still not wholly representative of the US population (for example, it is younger, fewer respondents are homeowners and a greater share of respondents report no religious affiliation).Footnote 22 However, because participants in the MTurk sample have not taken an extensive political survey prior to participating in our experiments, there is less concern about priming political considerations with prior survey content.Footnote 23 The MTurk interface also makes it feasible to allocate monetary bonuses to participants, a feature we use to gather behavioral measures of costly decision making in one set of experiments. Demographics for both samples are presented in online appendix Table A1.

Vignette Experiments

Our first evidence comes from a pair of Vignette Experiments (n=731) described in Panel A of Table 1. We presented CCES respondents with descriptions of different individuals; each description included either two or three pieces of information about the individual.Footnote 24 Each vignette was of the form: ‘Suppose you just met someone and learned the following information about them: they [treatment].’Footnote 25 Vignette order was randomly assigned for each respondent. Immediately after each vignette, respondents were asked to rate their level of agreement with three statements about the hypothetical individual on a seven-point scale ranging from strongly agree to strongly disagree: (1) ‘My overall impression of this person is positive,’ (2) ‘I think this person is responsible’ and (3) ‘I respect this person.’ These items were taken from a larger social distance questionnaire used by social psychologists to measure respondents’ broad social evaluations of an individual.Footnote 26 For each vignette, responses to these items were combined into an additive index and rescaled to range from 0 to 1; higher values correspond to more socially favorable evaluations of the hypothetical individual.Footnote 27
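To make the construction of this outcome measure concrete, the sketch below (not the authors’ code) shows one way such an index could be computed. The column names and the specific rescaling from the 1–7 response scale are illustrative assumptions; the article specifies only an additive index rescaled to run from 0 to 1.

```python
# Minimal sketch of building a 0-1 social evaluation index from three
# 7-point agreement items. Column names and the 1-7 coding are assumptions.
import pandas as pd

def social_evaluation_index(df, items=("impression_positive", "responsible", "respect")):
    """Average the three items (assumed coded 1-7, higher = stronger agreement)
    and rescale so the index runs from 0 to 1."""
    raw = df[list(items)].mean(axis=1)
    return (raw - 1) / 6

# Hypothetical responses for two vignettes
ratings = pd.DataFrame({"impression_positive": [7, 2],
                        "responsible": [6, 3],
                        "respect": [7, 2]})
print(social_evaluation_index(ratings))  # higher values = more favorable evaluation
```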

Table 1 Experimental Designs and Question Wording

Note: participants were randomly assigned to one behavior from each of the twelve pairs. These behaviors were randomly assigned to two blocks of six behaviors, which participants were asked to rank. Question wording: ‘[Imagine someone new was moving into your neighborhood/Now imagine a different person was moving into your neighborhood.] Please rank these six pieces of information about the person moving in from 1 to 6 by dragging them into the box on the right. Ranking an item at the top (1) means the piece of information would make your neighbors most look forward to having the person live in the neighborhood. Ranking an item at the bottom (6) means the piece of information would make your neighbors least look forward to having this person live in the neighborhood.’

In each vignette, the respondent was told with equal probability either (a) no information about the individual’s turnout behavior or that the individual (b) always, (c) usually or (d) never voted in presidential elections. The two other pieces of information presented in each vignette are listed in Table 1. Each characteristic was assigned independently of the others (that is, simple random assignment was employed). The vignette design offers two benefits. First, by simultaneously providing people with multiple pieces of information about the individual, we are able to ascertain whether information about turnout matters when the evaluator has other information about the individual. Secondly, information about turnout was sometimes not provided, which allows us to identify asymmetries in the social consequences of this behavior being revealed, relative to knowing nothing about it. For example, relative to knowing nothing about someone’s voting behavior, learning that the person always votes might improve social evaluations substantially, whereas learning that they never vote might not worsen them much.
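A schematic of this assignment procedure, using the first vignette’s attribute pairs from Table 1 and assuming uniform probabilities, might look as follows. It is illustrative rather than a reproduction of the authors’ survey code.

```python
# Illustrative sketch of simple (independent) random assignment of vignette
# characteristics: 4 turnout conditions x 2 education conditions x 2 tax
# conditions = 16 cells. Wording is paraphrased; probabilities assumed uniform.
import random

TURNOUT = [None,                                        # no turnout information
           "always votes in presidential elections",
           "usually votes in presidential elections",
           "never votes in presidential elections"]
EDUCATION = ["has a college diploma", "has a high school diploma"]
TAXES = ["pays their taxes on time", "pays their taxes late"]

def draw_vignette(rng=random):
    # Each characteristic is drawn independently of the others
    return {"turnout": rng.choice(TURNOUT),
            "education": rng.choice(EDUCATION),
            "taxes": rng.choice(TAXES)}

print(draw_vignette())
```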

To analyze the results of the Vignette Experiments we present a series of regression results in Table 2. We specify the social evaluation index described above as the dependent variable and include indicators for the treatments as independent variables.Footnote 28 The omitted (reference) category for the turnout treatment is the condition in which the respondent was not provided with any information about the individual’s turnout behavior. Thus the statistically significant coefficient of −0.122 on ‘Vote (1=Never)’ in Column 1 means that an individual described as never voting is evaluated 0.122 units less favorably than an individual for whom turnout information was not provided. Learning that an individual always votes resulted in a 0.07 unit more favorable evaluation. Thus the net effect of finding out that someone always, rather than never, votes is about 0.192 units (0.73 standard deviations; p<0.01, all p-values are two-tailed unless otherwise noted) on the social evaluation scale. We obtain a similar result in Column 2, where the difference between the coefficients for always voting and never voting is 0.213 units (0.80 standard deviations; p<0.01).
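A stylized version of this specification appears below. It uses simulated data and assumed variable names, so the coefficients are not the article’s estimates; it is meant only to illustrate the indicator coding, the omitted reference category and the always-versus-never contrast reported in the text. The article’s models additionally include survey weights and demographic controls, which are omitted here.

```python
# Sketch of a Table 2-style regression: OLS of the 0-1 evaluation index on
# treatment indicators, omitting the "no turnout information" condition as the
# reference category, with robust SEs and a test of the always-vs-never contrast.
# Data are simulated; variable names are assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 731
vote = rng.choice(["none", "always", "usually", "never"], n)
taxes_on_time = rng.integers(0, 2, n)
eval_index = np.clip(0.6 + 0.07 * (vote == "always") - 0.12 * (vote == "never")
                     + 0.1 * taxes_on_time + rng.normal(0, 0.25, n), 0, 1)

dummies = pd.get_dummies(pd.Series(vote)).drop(columns="none")  # reference = no info
X = sm.add_constant(pd.concat([dummies, pd.Series(taxes_on_time, name="taxes_on_time")],
                              axis=1).astype(float))
model = sm.OLS(eval_index, X).fit(cov_type="HC2")  # heteroskedasticity-robust SEs
print(model.params)

# Net effect of always rather than never voting (the analogue of the 0.192-unit quantity)
contrast = np.zeros(X.shape[1])
contrast[list(X.columns).index("always")] = 1.0
contrast[list(X.columns).index("never")] = -1.0
print(model.t_test(contrast))
```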

Table 2 Social Evaluations Analysis, Vignette Experiments

Note: experiment is described in Table 1, Panel A. Results from OLS regression models (weighted). Robust standard errors in brackets. For voting behavior indicators, the excluded category is no mention of this behavior. Coefficients for demographic controls (age, age-squared, race, sex, education, income, income missing, religious attendance, political interest, trust in government, indicators for ideology and party identification) are suppressed. *significant at 5 per cent; **significant at 1 per cent (two-tailed tests). Source: 2009 CCES.

Before comparing the effects of information about turnout behavior with those associated with other behaviors, we note two interesting points about how participants responded to the turnout information. First, the differences between the ‘usually votes’ and ‘always votes’ coefficients in Columns 1 and 2 are modest (tests of the difference in coefficients yield p=0.056 and 0.284, respectively). This suggests that voting in every presidential election yields only small social benefits relative to voting in most presidential elections. Secondly, relative to knowing nothing about an individual’s turnout behavior, the positive social benefits of always voting appear to be somewhat smaller than the negative consequences of never voting.Footnote 29 This difference may arise because, absent learning that someone does not vote, people may, on average, presume that the person does vote.

Comparing the effects of finding out that an individual always rather than never votes in presidential elections to the effects of information about other behaviors provides some context. In Column 1, we find that the 0.192 unit positive effect of always rather than never voting is somewhat smaller than the 0.245 (p<0.01) unit effect of finding out a person pays their taxes on time rather than late (p>0.10 for test of difference in differences, DID), but substantially larger than the slightly negative −0.037 unit (p>0.10) effect of finding out that an individual has a college, rather than a high school, diploma (p<0.01 for test of DID). In Column 2, we find that the 0.213 (p<0.01) positive effect of always rather than never voting is substantially larger than the 0.129 (p<0.01) effect of finding out that someone recycles rather than does not (p=0.013 for test of DID). We also find that the social consequences of turnout are similar in size to (and statistically indistinguishable from) the 0.180 (p<0.01) unit effect of keeping informed about current events (p=0.37 for test of DID).

One question is whether this difference in evaluations is confined to only those who are already active in politics. (In the conclusion, we also separately examine geographic differences in social evaluations.) In additional analysis (available upon request) we estimate separate models for respondents with high levels of interest in politics (those who say they follow what is going on in government and public affairs ‘most of the time’ – 55 per cent of the weighted sample) and those with lower levels of political interest. We also estimate models separately for those who reported voting in 2008 (83 per cent of the weighted sample) and those who did not. We find that the hypothetical individual’s voting behavior more strongly affects evaluations among those who are more engaged with politics. However, even those who report lower levels of interest or did not vote in 2008 evaluate those who never vote less favorably than those who always vote.

For example, in models that restrict the sample to those who report high levels of interest in politics, we find that the effects of describing an individual as always, rather than never, voting are 0.211 and 0.234 in the first and second vignettes, respectively. The comparable estimates restricting the sample to respondents who did not report high levels of political interest are 0.134 and 0.180, respectively (p<0.01 for all effects). In models restricting the sample to 2008 voters, the difference between the always and never voting conditions is 0.191 and 0.240 in the first and second vignettes, respectively (p<0.01 for both), while for non-voters (17 per cent of the weighted sample) the comparable estimates are 0.142 (p=0.07) and 0.079 (p=0.35). Thus even individuals with low levels of political interest who did not vote in 2008 are inclined to sanction an individual for not voting, albeit not to the same degree as individuals who are more politically interested and who did vote in 2008.

We also consider the possibility that the effects of the voting behavior treatments were moderated by the other treatments by adding interactions between the voting treatments and each of the other treatments to the models reported in Table 2. In the first experiment, the p-value associated with a test of the joint significance of the interactions between the voting behaviors and the education treatment is 0.283; for the tax return interactions this value is 0.257. In the second experiment, the joint significance of the interactions between the voting treatments and the current events treatment is 0.198; for the recycling interactions the p-value is 0.076. The social penalties associated with never – rather than always or usually – voting were weaker when the target was described as always, rather than never, recycling. This implies that in some cases, people may be more willing to forgive a failure to vote if the target is described as conforming to social norms in other ways. However, it is important to note that this dynamic was not consistent across the experiments.

Behavior-ranking Experiments

The second set of experiments we conducted for this analysis is the behavior-ranking experiments (n=198), described in Panel B of Table 1. One half of the subjects we recruited using the MTurk interface were randomly assigned to participate in these experiments. Respondents were asked to rank characteristics that a hypothetical neighbor might have. Specifically, in each experiment they were asked to rank six items in terms of how appealing they thought their neighbors would find them: ‘Ranking an item at the top (1) means the piece of information would make your neighbors most look forward to having the person live in the neighborhood. Ranking an item at the bottom (6) means the piece of information would make your neighbors least look forward to having this person live in the neighborhood.’Footnote 30

Each subject completed two of these ranking experiments. The six items used in each experiment were randomly drawn, without replacement, from a larger list of twelve trait domains with two response options in each domain. For example, one of the domains was the hypothetical neighbor’s job, which could be either doctor or personal injury lawyer. The political domain of most interest to us is voting behavior (always votes in presidential elections or never votes in presidential elections). Other traits (for example, recycling, paying taxes on time, tipping behavior, donating to charity) were included to allow a comparison to the effect of finding out someone’s voting behavior.

Items were randomly ordered in each ranking experiment, and domains were not reused across the two experiments completed by a given respondent (for example, doctor or personal injury lawyer would appear in one experiment, but not the other). The purpose of this experiment was to provide respondents with the opportunity to evaluate the indirect importance (because respondents were asked to rank how others would view these characteristics) of political traits relative to a broad range of non-political characteristics. The ranking set-up also requires individuals to choose the relative importance of different items, because no two items could be given the same ranking. As with the outcome measure in the Vignette Experiments, we view these rankings as a proxy for the relative social desirability of different characteristics and actions.

We focus on differences in evaluations of behaviors in each of the twelve behavioral domains. These differences in mean rankings are shown in Figure 1.Footnote 31 (Mean rankings for each of the twenty-four behaviors are presented in online appendix Figure A1.)

Fig. 1 Differences in mean rankings of behaviors for each domain, Mechanical Turk behavior-ranking experiments

Note: bars are bootstrapped 95 per cent confidence intervals.

The difference in mean ranking for turnout (always minus never) is 1.72. This difference is substantially larger than the mean difference in the ranking of the two ‘favorite color’ descriptors – characteristics we included because we posited that they should not systematically alter evaluations. The turnout difference is similar in size to (and statistically indistinguishable from; p>0.10 for tests of DID) those associated with physique (1.85), occupation (1.90), tipping behavior (1.99) and whether the individual obeys speed limits (1.99). The difference in rankings for turnout is significantly smaller than (p<0.01 for tests of DID) differences in the domains of smoking (2.62), donating to charity (2.63), recycling (2.81) and being organized or messy (3.57). The difference in mean ranking for turnout is also modestly smaller than that of paying taxes on time rather than late (2.26; p=0.036 for test of DID). This supports the claim that people believe that information about turnout behavior – in this case, finding out that someone always or never votes in presidential elections – shapes how their neighbors are likely to evaluate that person. Importantly, these ranking experiments also indicate that the expected rewards and sanctions associated with this dimension of behavior are similar to those associated with other behaviors and characteristics, but that people expect their neighbors to view some behaviors (for example, smoking) as more important.
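For readers who want to reproduce this kind of comparison, the following sketch illustrates the bootstrap logic behind the confidence intervals in Figure 1 using fabricated rankings; the group sizes, coding direction and number of bootstrap draws are all assumptions rather than the authors’ settings.

```python
# Sketch of a percentile bootstrap for the gap in mean rankings between the
# two behaviors in a domain (e.g., always vs. never votes). Data are fabricated;
# because each respondent saw only one behavior per domain, the two groups are
# resampled independently.
import numpy as np

rng = np.random.default_rng(1)
ranks_a = rng.integers(1, 7, size=100)   # ranks (1-6) given to one behavior
ranks_b = rng.integers(1, 7, size=100)   # ranks given to the paired behavior

def bootstrap_gap_ci(a, b, n_boot=2000, alpha=0.05):
    """Percentile CI for mean(a) - mean(b), resampling respondents with replacement."""
    gaps = np.empty(n_boot)
    for i in range(n_boot):
        gaps[i] = (rng.choice(a, size=a.size).mean()
                   - rng.choice(b, size=b.size).mean())
    return np.quantile(gaps, [alpha / 2, 1 - alpha / 2])

print(ranks_a.mean() - ranks_b.mean(), bootstrap_gap_ci(ranks_a, ranks_b))
```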

The vignette and behavior-ranking experiments each provide a way to compare the social consequences of voting behavior to the social consequences of other non-political behaviors and characteristics. Taken together, the findings lead to two important conclusions. First, decisions about whether to vote appear to have social consequences – people who vote are evaluated more favorably than those who do not. Secondly, although the relative effect of the decision to vote compared to other behaviors varies somewhat across the experimental designs and samples, it is not anomalously large.Footnote 32 Instead, consonant with moderate observed turnout rates, the effect of voting is similar in size to the effects of non-political behaviors like returning library books on time and keeping in shape. Although failing to turn out may lead to negative social consequences that are large enough to affect decisions about whether to vote, our findings suggest that this is one of many behaviors that can affect one’s social standing (rather than an exceptional violation of a sacrosanct social duty). By contrast, failing to pay one’s taxes on time, which is a violation of the law, garners more substantial social sanctions, and individuals appear to believe their neighbors care more about a potential neighbor’s neatness than whether the potential neighbor votes.

BEHAVIORAL VALIDATION OF THE EFFECT OF DIFFERENCES IN SOCIAL EVALUATIONS DUE TO VOTING

The analysis discussed thus far suggests that individuals evaluate others on the basis of their voting behavior. However, one concern about our measures of the differences in social evaluations of those who do and do not vote is that they are costless to express. Thus it is unclear whether the less favorable evaluations of those who vote infrequently would carry over to environments in which making such distinctions is costly to the individual. For this reason, we also conducted an additional set of experiments to obtain behavioral validation of these patterns of survey responses.

Those MTurk subjects not randomly assigned to the previously discussed ranking experiment were instead assigned to the allocation experiments (n=195), in which respondents were given three $0.50 bonuses. They were then offered the opportunity to anonymously share none, part or all of each of the three bonuses with another MTurk worker described by one of three traits.Footnote 33 This is analogous to a standard dictator game, which has been used in other contexts to measure the preferences of actors toward others.Footnote 34 The three allocation decisions were presented in random order. In one case the other worker’s color preference was described (randomly assigned as Red or Green); in another, their recycling behavior (Yes or No); and in another, their voting behavior (Always or Never).Footnote 35 Any money the respondent chose to keep was paid to them directly through the MTurk payment interface; any portion they shared was given to another MTurk user with the trait described.Footnote 36 Because choosing to differentiate among individuals in these experiments requires the participant to give up money, any differences in evaluations we find in this context are less likely to be ephemeral. As in the behavior-ranking experiments, we use the color preference manipulation as a benchmark because, ex ante, we believe it should not generate systematic differences in evaluations.

We note that this setting introduces a particularly high bar for discerning behavioral differences: individuals in the allocation experiments were anonymous and their behavior was kept private from all other MTurk users. Thus, unlike in other circumstances where the failure to enforce a social norm might itself risk social sanction, here individuals were anonymous members of a crowd whose behavior was unobservable to anyone other than the researcher, and even the researcher did not know the person’s (non-MTurk) identity.Footnote 37

Results from these experiments appear in Table 3. Column 1 of Panel A displays the difference in the proportion of respondents allocating any of their $0.50 reward to another MTurk user for the two available descriptions for each trait domain (Red v. Green; Recycle v. Not Recycle; Always Vote v. Never Vote). Column 2 of Panel A then reports the difference in the average amount respondents shared across each of the two treatments. For both quantities of interest, we also display 95 per cent confidence intervals, which we calculate using a bootstrap technique because of the relatively small sample sizes in these experiments. Panel B provides the raw allocation data for each of the six treatment conditions: proportion allocating in Column 1 and the average amount allocated in Column 2.

Table 3 Results of Allocation Experiments

Note: cell entries are raw averages by condition. Source: Mechanical Turk Allocation Experiment.

The results show that individuals are no more likely to allocate money to an individual who prefers Red to Green – the difference in the probability of allocating (Column 1) is a paltry 1.9 percentage points (p=0.78) and the difference in average contributions (Column 2) is less than 1 cent (p=0.71). By contrast, those who are described as recycling are 27.0 percentage points (95 per cent confidence interval is 13.6 to 40.1; p<0.01) more likely to garner a monetary contribution than those who are not, which is associated with an average difference in contributions of a little more than 6 cents (2.95 to 10.04; p<0.01). Finally, those who are described as always rather than never voting are 20.4 percentage points (7.1 to 33.4; p<0.01) more likely to receive an allocation, and the difference in the average amount given is a little less than 5 cents (1.61 to 7.80; p<0.01).
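The sketch below illustrates how the Table 3 quantities and their bootstrap intervals can be computed; the allocation amounts are fabricated and the confidence level and number of draws are assumptions.

```python
# Sketch of the allocation-experiment quantities: difference in the share of
# respondents giving anything, and in the average amount given, across two
# randomly assigned recipient descriptions, with percentile bootstrap CIs.
# Amounts (in cents, out of a 50-cent bonus) are fabricated.
import numpy as np

rng = np.random.default_rng(2)
give_always = rng.integers(0, 51, size=98)   # amounts given to an "always votes" recipient
give_never = rng.integers(0, 51, size=97)    # amounts given to a "never votes" recipient

def boot_ci(stat, a, b, n_boot=2000, alpha=0.05):
    draws = [stat(rng.choice(a, a.size), rng.choice(b, b.size)) for _ in range(n_boot)]
    return np.quantile(draws, [alpha / 2, 1 - alpha / 2])

def prop_diff(a, b):
    return (a > 0).mean() - (b > 0).mean()   # Column 1: share allocating anything

def mean_diff(a, b):
    return a.mean() - b.mean()               # Column 2: average amount allocated

print("Difference in P(give):", prop_diff(give_always, give_never),
      boot_ci(prop_diff, give_always, give_never))
print("Difference in mean amount:", mean_diff(give_always, give_never),
      boot_ci(mean_diff, give_always, give_never))
```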

These data add support to the findings from the survey experiments presented above. First, individuals distinguish between those who vote and those who do not (and those who recycle and those who do not), even when doing so requires the experimental participant to do so at some (monetary) cost to herself and her actions are not observed by anyone other than the researcher.Footnote 38 Secondly, the relative differences across the domains of turnout and recycling are similar to those we found in the survey context using the MTurk sample (as are the null results for color preference). This suggests that the behavior-ranking experiments do not inflate the apparent social desirability of political choices relative to non-political ones.

SUPPLEMENTARY ANALYSIS: AGGREGATE EVIDENCE LINKING NORMS AND TURNOUT

The experimental work presented above establishes that voting behavior carries real social rewards and sanctions, the mechanism assumed in previous field experimental work that promises to reveal citizens’ turnout behavior to others. Our survey and behavioral experiments provide direct support for the notion that anticipating social sanctions for not voting is warranted. However, we do not directly show that concern for social evaluations has an important effect on actual political participation.

We therefore take a complementary step by examining the association between beliefs about norm violation and county-level turnout. To conduct this analysis we combine data on county-level turnout in the 2000 presidential election with survey data about norms culled from the 2000 National Annenberg Election Survey (NAES).Footnote 39 The norms item asked the respondent whether ‘you strongly agree, somewhat agree, somewhat disagree or strongly disagree’ with the statement ‘If I do not vote, my family and friends are disappointed in me.’ We use responses to this item, with greater agreement scored as more positive, to measure whether individuals expect others to sanction (reward) them for failing to adhere (adhering) to norms of appropriate behavior concerning voting.

To assess whether variation in this attitude is correlated with observed differences in turnout across counties, we calculate average responses to the survey item for each county covered by the NAES sample. Once merged with information about population, turnout in the 2000 presidential election and other county-population characteristics, we have observations from 1,835 unique counties. Our measure of turnout is the county-level proportion of the voting-age population (VAP) that cast a ballot in 2000 (mean=0.53). In addition to the survey measure of norms and the turnout measure, we collected a wide range of measures of other factors that might reasonably explain turnout. The first column of Table 4 (Column 0) provides summary statistics for all model variables. Column 1 of Table 4 then presents the results of an ordinary least squares (OLS) regression analysis in which we simply take the county-level average of the voting norms question and use it to predict VAP turnout at the county level, with a set of control variables. We use this measure of norms at the broader community level, rather than individual respondents’ subjective assessments of community norms, because we view it as more plausibly exogenous to the respondent’s voting behavior, and because our outcome of interest is county-level validated turnout.Footnote 40 In this analysis, the coefficient on the norms variable is statistically significant (p=0.011).

Table 4 Observational Analysis: NAES Data Matched to County-Level Voting-Age Population Turnout Records

Note: Column 0: means with standard deviations in brackets. Variables prefixed with MRP are county estimates from multi-level model with post-stratification; see text and online appendix for details. Columns 1–7: OLS regression coefficients with robust standard errors in brackets, clustered at the state level. * significant at 10 per cent; ** significant at 5 per cent; *** significant at 1 per cent (two-tailed tests). Source: 2000 NAES.

A key issue with this analysis, however, is measurement error. In counties with few survey respondents (at the extreme, only one), our measures of average citizen attitudes will necessarily be less reliable than in counties where we have more survey observations. To account for the greater imprecision of estimates from counties with fewer observations, we take three different approaches. First, we conduct a weighted analysis in which greater weight is given to counties with more respondents. Secondly, we employ multilevel regression with post-stratification (MRP)Footnote 41 to generate estimates of county-level norms that account for sampling variability across counties. The details of the MRP procedure are described in the online appendix, but the key insight is that by combining information about the average responses of different demographic groups to the survey questions with census data on the distribution of those groups across counties, we can generate county-level norm estimates that are less affected by sampling variability. Thirdly, we partition the analysis by the number of observations per county, presenting results separately for the 513 counties with five or more survey observations.
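The sketch below conveys the MRP logic in a deliberately stripped-down form: a multilevel model with county random intercepts is fit to simulated survey responses using a single demographic predictor, cell means are predicted for every county-by-group cell, and the cells are then weighted by fabricated census counts. All variable names, the single predictor and the data are assumptions; the authors’ actual procedure is documented in their online appendix.

```python
# Highly simplified sketch of multilevel regression with post-stratification
# (MRP) for a county-level norm estimate. Everything here (data, the single
# education predictor, cell counts) is fabricated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 3000
survey = pd.DataFrame({
    "norm": rng.normal(2.5, 1.0, n),                              # norms item (simulated)
    "educ": rng.choice(["hs", "some_college", "college"], n),
    "county": rng.choice([f"c{i}" for i in range(200)], n),
})

# Step 1: multilevel model of the norms item with county random intercepts
ml = smf.mixedlm("norm ~ C(educ)", survey, groups=survey["county"]).fit()
fe = ml.fe_params           # fixed effects (Intercept + education contrasts)
re = ml.random_effects      # county-specific intercept deviations

def cell_mean(county, educ):
    # Fixed-effect prediction for the education group plus the county intercept
    pred = fe["Intercept"]
    key = f"C(educ)[T.{educ}]"
    if key in fe.index:
        pred += fe[key]
    return pred + re[county].iloc[0]

# Step 2: post-stratify cell predictions by (fabricated) census counts
cells = pd.MultiIndex.from_product(
    [sorted(survey["county"].unique()), sorted(survey["educ"].unique())],
    names=["county", "educ"]).to_frame(index=False)
cells["pred"] = [cell_mean(c, e) for c, e in zip(cells["county"], cells["educ"])]
cells["census_n"] = rng.integers(100, 10_000, len(cells))

mrp_norm = cells.groupby("county").apply(
    lambda d: np.average(d["pred"], weights=d["census_n"]))
print(mrp_norm.head())      # MRP-style county-level norm estimates
```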

The results of these analyses, all of which were estimated using OLS regression, appear in Columns 2–7 of Table 4. To make comparisons of magnitudes easier across specifications, we standardize both norms measures (the simple county mean and the MRP) to have a mean of 0 and a standard deviation of 1. Additionally, to account for the correlation among observations at the state level, we cluster standard errors within states. In Column 2, the estimates are weighted by the square root of the number of observations in each county (that is, a county with four survey observations receives twice the weight of a county with only a single observation). Using this specification, a two-standard-deviation increase in the social norm measure is correlated with a statistically significant 1.0 percentage point increase in turnout, or about 1.9 per cent relative to average county turnout. In Column 3, we use the MRP-derived estimates of county-level norms (that more efficiently account for sampling variability), and the correlation is larger: a two-standard-deviation increase in responses to the social sanction measure is associated with an increase in turnout of 4.6 percentage points, or about 8.7 per cent relative to average county turnout.
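A minimal sketch of this kind of specification is given below. It uses simulated county data and assumed column names, treats statsmodels’ WLS weights as a stand-in for the authors’ exact weighting scheme, and omits the county-level controls included in the article.

```python
# Sketch of a Column 2-style county regression: turnout on a standardized norm
# measure, counties weighted by the square root of their number of survey
# respondents, standard errors clustered by state. Data and names are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n_counties = 1835
counties = pd.DataFrame({
    "turnout": rng.uniform(0.3, 0.8, n_counties),                  # VAP turnout share
    "norm_mean": rng.normal(2.5, 0.5, n_counties),                 # county mean of norms item
    "n_resp": rng.integers(1, 60, n_counties),                     # survey respondents per county
    "state": rng.choice([f"s{i}" for i in range(48)], n_counties),
})

# Standardize the norm measure to mean 0, SD 1 so coefficients are comparable
counties["norm_std"] = ((counties["norm_mean"] - counties["norm_mean"].mean())
                        / counties["norm_mean"].std())

wls = smf.wls("turnout ~ norm_std", data=counties,
              weights=np.sqrt(counties["n_resp"])).fit(
    cov_type="cluster", cov_kwds={"groups": counties["state"]})
print(wls.summary())

# A two-standard-deviation shift in the norm measure corresponds to twice the coefficient
print("Two-SD change in predicted turnout:", 2 * wls.params["norm_std"])
```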

Columns 4 through 7 assess the robustness of this relationship. In Columns 4 and 5 we add state fixed effects, which diminishes the magnitude and statistical significance of the county-weighted analysis (Column 4), but substantially increases the size of the MRP coefficient.Footnote 42 In Columns 6 and 7 we continue to employ state fixed effects, but also restrict our analysis to the 513 counties for which we have five or more survey observations in an effort to attenuate measurement error in the survey measure. Doing so noticeably increases the estimated size of the social sanction measure coefficient in the weighted analysis (Column 6), but has a small effect on the MRP-based estimated coefficient (Column 7).Footnote 43 Per the Column 6 specification, a two-standard-deviation increase in the family and friends item is associated with a 1.8 percentage point increase in turnout.

Overall, these findings are consistent with the claim that social norms (and the potential costs of deviating from them) are important explanations of differences in participatory behavior.Footnote 44 At the same time, these results should be viewed with caution. They are based on observational data and are vulnerable to many threats to inference, including concerns about aggregation bias, omitted variables and reverse causality. Nevertheless, taken together with the experimental work presented above and the field experimental evidence from prior work, they add to a growing body of evidence that social norms influence participatory decisions.

DISCUSSION

The present study is an important step toward improving our understanding of the role of social evaluations in shaping political behavior. In particular, we make two main contributions to identifying the social determinants of political participation.

First, we show that decisions about whether to vote have demonstrable social consequences. Using experimental designs that rule out confounding factors, we find that people evaluate individuals who never vote less favorably than those who always vote. Notably, these effects are not confined to respondents who report particularly high levels of interest in politics, although they are more pronounced for those who are more interested. Secondly, the results of the behavioral dictator game experiments provide important corroborating evidence by showing that these differences in evaluations persist, and are acted upon, even when making such distinctions is personally costly. Overall, these results suggest that individuals render judgments that will tend to reinforce the social pressure to vote in many places.

We also compare the social consequences of voting behavior to those associated with non-political behaviors. This explicitly situates turnout behavior, and the individuals who are being evaluated, in the broader context of social behavior. Our analysis shows that the social consequences of turnout behavior are similar in size to the effects of non-political behaviors such as volunteering in the community and returning library books on time.Footnote 45 Scholars have increasingly looked to social consequences as a way to explain decisions to participate in political activities where instrumental benefits are likely to be substantially outweighed by instrumental costs. Although our findings validate the claim that these social consequences are likely to play a role in political decisions, they also suggest that these social consequences are not disproportionately large in the political arena. Indeed, just as researchers have shown that increasing the social visibility of voting can increase turnout, experimental interventions that rely on social pressure by emphasizing the compliance of others with a norm have been shown to increase recycling rates and reduce energy consumption.Footnote 46 This experimental literature therefore suggests that social scorn and esteem are powerful motivators across a range of behavioral domains in which instrumental benefits are unlikely to explain individuals’ willingness to undertake costly action. Heretofore, however, the social importance of political choices relative to other choices has not been documented. The relative size of the social rewards of voting and other pro-social behavior is also consistent with the intuition that social factors explain why some (but not all) people vote, as well as the fact that rates of voting vary considerably across electoral contexts that may have different social expectations about participation (for example, presidential versus off-year races).Footnote 47 It may also be useful in future work to examine the possibility that other powerful predictors of participation in the survey context (for example, age, income, education) may operate in part by affecting the social expectations about participation among one’s peers. Thus social norms may be a mediator of some other canonical predictors of participation, although this article does not fully explore this potential mechanism.

Our experimental analysis has its shortcomings. Although we provide evidence that learning whether or not someone is a voter affects how the individual is assessed and treated, our experiments do not provide any direct evidence of precisely how such information is disclosed to others or direct evidence that concerns about such disclosure affect actual patterns of turnout. That said, it is easy to list ways in which information about political participation is made known to others, and survey evidence shows that people believe others will learn about their voting activities. People often divulge that they vote or engage in other public political activities. This is to be expected if they believe, as the evidence presented here suggests, that it may boost their social standing, even if only minimally.Footnote 48 However, a serious investigation of these understudied segments of the pathway from social norms to behavioral effects is a sensible next step in solidifying our understanding of how social opinion affects voting and other political choices.

It is important to consider, for example, whether the same social consequences exist outside of this setting. Although we cannot say definitively that the same results would be found outside the survey context, the findings from our allocation experiments, and the results from field experiments that attempt to increase social pressure, suggest that the specter of social consequences affects political decisions.

We also do not know what broader inferences people may have made based on the information provided in the experiments. For example, it is possible that voting behavior is treated as a heuristic or proxy for other desirable qualities. Similarly, people may assume that an individual who is informed about current events or volunteers is also socially engaged in other ways (for example, that they vote). Despite this ambiguity, our findings suggest that information about these behaviors serves as a social cue and matters even when multiple pieces of information are provided about a given individual (as in the vignette experiments). Future work could expand the vignettes to include more informationFootnote 49 or directly examine how cues about political behavior affect other beliefs about a person by, for example, asking people how likely they think it is that a person who always votes also recycles. Most importantly, if people use voting as a proxy for other desirable behaviors, this does not undermine the argument that people are inclined to vote for social reasons. Our evidence suggests that individuals may be motivated to vote because they anticipate that the failure to do so will be viewed as undesirable – either because others value the specific act of voting or because they see a failure to vote as a signal of broader personal deficiencies.

The present study can be extended and improved in a number of other ways. For example, as Campbell has argued, it is important to understand the origins of differences in the social consequences of political decisions.Footnote 50 Failing to turn out may have particularly substantial consequences in communities with high levels of ‘political capital’, but next to none in others.Footnote 51 The small sample size of our dictator game participants (n=195) makes it difficult to assess whether these findings were moderated by individual characteristics such as ‘political capital’. Thus fully exploring such variation is an important avenue for subsequent research. We do, however, have suggestive evidence that the social rewards of voting are more pronounced in places where turnout is higher.Footnote 52 In particular, the data presented in Figure 2 show that differences in the evaluations of individuals described as voting rather than not are larger in counties where turnout is higher.Footnote 53 Relative to an average difference in evaluations across all counties of around 1.7, being in the highest turnout counties rather than the lowest turnout counties increases the magnitude of differences in social evaluations by about 0.5 (p<0.05 in OLS regression), or about 30 per cent of the mean difference in evaluations. Thus, not only is turnout higher in counties where individuals believe others are more likely to evaluate them negatively if they do not vote (Table 4), but the data displayed in Figure 2 also show that individuals in high turnout counties view participating as more important when forming evaluations of others.Footnote 54 Cumulatively, our analysis also suggests a mechanism for the over-time persistence of differences in norms about appropriate behavior. We find that individuals evaluate those who fail to comply with their ideals of appropriate behavior less favorably. When such social sanctioning mechanisms are present, norms of appropriate behavior are far more likely to be replicated in subsequent generations, thereby explaining persistent over-time differences in patterns of political and social interactions.Footnote 55

Fig. 2 Effect of county turnout on differences in evaluations of voters and non-voters, CCES single-item experiments

Note: in OLS regression (n=20) predicting the difference in social evaluations across county turnout quantiles (1 to 20), coefficient=0.028 and SE=0.011. Average number of observations per county turnout quantile is 25.3.

The larger point made in this article is that social forces are likely to play a role in many political choices, and that these forces deserve a place in models predicting how individuals confront those choices. Apart from the political consequences of political choices, forming accurate models of behavior may require researchers to understand the social consequences of political decisions for the individual. These consequences may be much more significant than many of the more familiar and instrumental reasons used to explain how people choose. More generally, incorporating social concerns into models of political choice provides a way to integrate explanations of political behavior with more general models of human decision making. Doing so ultimately provides a means to better address normative questions such as understanding democratic choice in the face of strong social pressure.

Footnotes

*

Yale University, Department of Political Science, Institution for Social and Policy Studies (email: alan.gerber@yale.edu); Yale University, Department of Political Science, Institution for Social and Policy Studies (email: gregory.huber@yale.edu); Loyola University Chicago, Department of Political Science (email: ddoherty@luc.edu); University of Mississippi, Department of Political Science (email: cdowling@olemiss.edu). A previous version of this article was presented at the 2010 meeting of the American Political Science Association; other versions were presented at Harvard, UC-Berkeley, Notre Dame and Emory. Earlier versions of the article were circulated under the titles ‘Social Judgments and Political Participation: Estimating the Consequences of Social Rewards and Sanctions for Voting’ and ‘The Social Benefits of Voting and Co-partisanship: Evidence from Survey Experiments’. We thank Seth Hill and Mary McGrath for assistance. An online appendix is available at http://dx.doi.org/doi:10.1017/S0007123414000271

1 For example, Abrams, Iversen, and Soskice 2010; Campbell 2006; Coleman 1990; Gerber, Green, and Larimer 2008; Putnam 2001; Rolfe 2012.

3 Smith 1976 [1759], 212.

5 See, for example, Campbell 2006, Ch. 7; Coleman 1990; Key and Munger 1959.

8 An account based on social influence also suggests that patterns of correlated behavior may emerge not because of differences in information or circumstances across groups, but because of peer influence within a group.

10 Palfrey and Rosenthal 1985.

11 For example, Aldrich 1993. Other models have considered subjective estimates of pivotality (Blais 2000), expressive benefits to voting (Coleman 1990; Knack 1992) and group-based accounts for participation (Edlin, Gelman, and Kaplan 2007). If one’s peers overlap with one’s group identity, the last account is consistent with our notion of social consequences as an important explanation of turnout.

12 Gerber, Green, and Larimer 2008.

13 Gerber, Green, and Larimer 2010; Panagopoulos 2010.

14 Elaad 2003; Gilovich, Medvec, and Savitsky 2000.

15 Exact question wording: ‘Looking ahead to the 2012 election, do you think any of the following individuals [parent; child; friend; neighbor; co-worker; boss or supervisor] will call, email, or speak to you in person to ask you if you plan to vote?’ Response options: Almost certainly will not; Probably will not; Probably will; Almost certainly will. The survey was conducted over the internet by YouGov/Polimetrix. YouGov interviewed 3,507 respondents who had taken both waves of the 2010 CCES. These interviews were then matched on gender, age, race, education, party identification, ideology and political interest down to a sample of 3,000 to produce the final dataset. YouGov then weighted the matched set of survey respondents to known marginals for the citizen population of the United States aged 25+ from the 2008 American Community Survey. Thus the final weighted dataset (n=3,000) is meant to be representative of the US adult population (aged 25 and over). For more details, see Gerber et al. (2010).

16 Abrams, Iversen, and Soskice 2010, 229.

17 Related work suggests that individuals may be influenced to vote by their social networks (Huckfeldt, Mendez, and Osborn 2004; Knack 1992; Rolfe 2012). What is unclear, however, is the mechanism by which social connections encourage participation. It could be, for example, that the density of one’s social network is correlated with personal characteristics, such as personality (for example, extroversion) or socioeconomic status, that themselves affect turnout. Our experimental evidence that individuals are evaluated less favorably when they do not vote suggests one means by which network effects arise – a fear of negative social sanctions for failing to participate. If individuals associate with others who have shared norms (either in terms of high or low rewards to participation), then the similarity in their behavior may be sustained by these social pressures.

18 For example, Della Vigna et al. 2014; Karp and Brockington 2005; Katosh and Traugott 1981; Silver, Anderson, and Abramson 1986; Vavreck 2007.

19 For a recent notable exception, see Abrams, Iversen, and Soskice (2010). They document a relationship between beliefs about the social consequences of participation and individual-level reported turnout. However, as Karp and Brockington (2005) show, over-reporting of turnout appears to be more frequent in places where expectations about participation are greater. This suggests that the effect of norms may be overstated in analysis that employs reported (rather than validated/observed) turnout.

20 Ansolabehere 2009.

21 Specifically, the final survey sample is constructed by drawing a target population sample that is representative of the general population on a variety of characteristics (for example, gender, age, race, income, education, state of residence, party identification) based on cases in the 2005–07 American Community Survey, November 2008 Current Population Survey Supplement and the 2007 Pew Religious Life Survey. After administering the CCES to more respondents than required, YouGov/Polimetrix used matching techniques to select cases that most closely match cases in the target sample. Weights were then calculated to adjust the final sample to reflect the national public on the demographic and other characteristics listed above. We use these weights in the analysis we present below. In additional analysis (reported in the online appendix) we find that unweighted analysis yields similar results. For more detailed information on this type of survey and sampling technique, see Vavreck and Rivers (2008). For critiques, see Chang and Krosnick (2009) and Malhotra and Krosnick (2007). More broadly, see AAPOR Executive Council Task Force (2010) on the strengths and limitations of using opt-in surveys, especially those fielded on the internet.

22 Berinsky, Huber, and Lenz 2012.

23 The survey was fielded 15–17 December 2010. Respondents were paid $0.50 to participate. The text of the Mechanical Turk request read: ‘This survey will ask you a series of questions about you and your feelings about current events and politics. The survey is here: [URL]. Once you finish the survey you will be provided with a code. To get paid, please enter the code below and click “Submit”. DO NOT CLOSE THIS WINDOW WHILE YOU ARE TAKING THE SURVEY. Payment is auto-approved in 5 days.’

24 The total n for our CCES module was 800 respondents. The final n is 731 because we restrict our analysis to individuals with no missing cases for any of the outcome measures or control variables used below. We discuss tests of balance in note 28 below. The survey also included two other similar vignettes that did not include any information about turnout behavior. Instead, they focused on partisanship. These results are not reported in this article.

25 This is the exact text for the first vignette. The exact text for subsequent vignettes was ‘Now imagine you met a different person and learned the following information about them: they [treatment].’

27 For each vignette, the alpha reliability coefficient of the three items exceeded 0.90.
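For reference, the alpha reliability coefficient reported here is the standard Cronbach’s alpha for the k = 3 evaluation items:

$$\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_{i}}}{\sigma^{2}_{X}}\right),$$

where $\sigma^{2}_{Y_{i}}$ is the variance of item $i$ and $\sigma^{2}_{X}$ is the variance of the summed three-item scale; values above 0.90 indicate that the three evaluation items can reasonably be combined into a single index.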

28 These regression models also control for demographic and other pre-treatment measures of respondent characteristics to reduce our standard errors. Specifically, we control for age, age-squared, race (White, Black, Hispanic and Other), sex, education, income, income missing, religious attendance, political interest, trust in government, indicators for ideology (five categories) and party identification (seven categories). These controls also provide a way to account for imbalances in pre-treatment variables. Predicting treatment assignment to one of the sixteen experimental conditions (four voting conditions × two other conditions × two other conditions) in a multinomial logit model using these covariates as predictors, in Vignette 1 we find statistically significant imbalances on the race Other indicator (p<0.001), religious attendance (p=0.017), income (p=0.025) and income missing (p<0.001). For Vignette 2 we find that the indicators for Black and race Other (p<0.001) significantly predict treatment assignment. For all other covariates, p>0.05. All CCES analysis uses weights. Full regression results displaying all covariates are presented in online appendix Table A2. Results are robust to analysis excluding either weights or these control variables (see online appendix Table A3).
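As a rough illustration of this type of balance check – not the authors’ code, and using simulated data with hypothetical variable names in place of the CCES pre-treatment measures – the multinomial logit diagnostic can be sketched as follows:

```python
# Minimal sketch of a randomization/balance check: regress the assigned
# experimental cell (1 of 16) on pre-treatment covariates with a multinomial
# logit and inspect the covariate p-values. Simulated data; hypothetical names.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 731
df = pd.DataFrame({
    "condition": rng.integers(0, 16, n),   # random assignment to 16 cells
    "age": rng.integers(18, 90, n),
    "female": rng.integers(0, 2, n),
    "income": rng.integers(1, 13, n),
    "relig_attend": rng.integers(1, 7, n),
})
df["age_sq"] = df["age"] ** 2

X = sm.add_constant(df[["age", "age_sq", "female", "income", "relig_attend"]])
balance = sm.MNLogit(df["condition"], X).fit(disp=False)
print(balance.summary())  # significant covariates would signal imbalance
```

In the article’s analysis the full control set and survey weights enter the outcome regressions themselves; this sketch only illustrates the balance diagnostic described in the note.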

29 The difference in the absolute value (magnitude) of these coefficients is not statistically significant in Column 1 (p=0.238), but is in Column 2 (p<0.001).

30 There is also a subtle difference in the wording of the treatments across the two experiments. In the vignette experiments the respondent is asked how a behavior would affect her own assessment, whereas in the behavior-ranking experiments the respondent speculates about how behaviors would be viewed by neighbors. This does not seem to produce important differences in how the subject responds, but it is an important theoretical distinction. More broadly, both scenarios involve evaluating relative ‘strangers’. It is possible that when evaluating individuals with closer social ties, the same evaluations do not take place. Perhaps, for example, greater knowledge of the individual – and reasons for not voting – would result in less sanctioning behavior. However, some work finds that in other contexts in which social pressure is possible, individuals appear more responsive to the evaluations of those they perceive as more able to sanction them (Gerber et al. 2013). As further evidence, in the analysis presented below in Section 4, we employ a measure of the perceived costs of norm violations among those with whom one likely shares a social network.

31 Because of the relatively small sample size, we estimated bootstrapped confidence intervals for this analysis, as well as for all other analysis from the MTurk studies we describe here. For tests of difference in differences we calculate bootstrapped standard errors and use these standard errors to perform unpaired t-tests.
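A minimal sketch of this bootstrap procedure, using simulated placeholder data rather than the MTurk evaluations (all values and group labels below are illustrative), might look like the following:

```python
# Bootstrap a difference-in-differences across two treatment contrasts,
# then form an unpaired t-style test from the bootstrapped standard error.
# All data are simulated placeholders, not the study's data.
import numpy as np

rng = np.random.default_rng(1)
vote_always = rng.normal(0.62, 0.25, 120)   # evaluations, 'always votes'
vote_never  = rng.normal(0.45, 0.25, 118)   # evaluations, 'never votes'
recyc_yes   = rng.normal(0.60, 0.25, 117)   # evaluations, 'recycles'
recyc_no    = rng.normal(0.50, 0.25, 121)   # evaluations, 'does not recycle'

def resample_mean(x):
    """Mean of one bootstrap resample drawn with replacement."""
    return rng.choice(x, size=x.size, replace=True).mean()

reps = 2000
boot = np.array([
    (resample_mean(vote_always) - resample_mean(vote_never))
    - (resample_mean(recyc_yes) - resample_mean(recyc_no))
    for _ in range(reps)
])

point = ((vote_always.mean() - vote_never.mean())
         - (recyc_yes.mean() - recyc_no.mean()))
se_boot = boot.std(ddof=1)                     # bootstrapped standard error
ci_95 = np.percentile(boot, [2.5, 97.5])       # bootstrapped 95% CI
print(point, se_boot, ci_95, point / se_boot)  # last term: t-style statistic
```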

32 Some of the variation in findings across studies may stem from differences in sample characteristics between the CCES and MTurk populations.

33 The allocation experiments were introduced with this text: ‘For each $0.50 bonus you can keep the additional $0.50 for yourself, or choose to share some or all of it with another Mechanical Turk worker. On the next three pages we will describe three different people who you can share your bonuses with. In all cases, whatever money you keep for yourself will be allocated to you as a bonus payment and the remainder will be paid as a bonus to another Mechanical Turk worker. No matter how much money you choose to keep for yourself, your decision will remain private. The other Mechanical Turk worker will not know who made the decision about whether or not to allocate him/her this money.’

34 We note that in our experiment, subjects could reward individuals for their behavior, but we did not explicitly allow individuals to sanction (for example, by default giving the individual a reward and then allowing the subject to remove it if she wanted to do so). For a general discussion of various experimental games that measure social norms, see Camerer and Fehr (2004). For specific examples of the use of dictator games, see Andreoni and Miller (2002), Charness and Rabin (2002), Fehr and Schmidt (1999), and Korenok, Millner, and Razzolini (2008). For another example of a dictator game using MTurk subjects, see Raihani and Bshary (2012), who note that dictators gave away amounts similar to those in previous studies, suggesting that the anonymity of the online dictator game did not alter giving behavior. A trust game could also be used to measure social norms concerning voting. Such an approach, however, would be more difficult to execute in the MTurk environment.

35 The standard dictator game is a one-shot game. However, ‘[s]ome experimenters have repeated the game, but changed recipients every round’ (Engel 2011, 592), as we do here.

36 Those recipients were identified in a separate survey we conducted on MTurk.

37 The fact that subjects are aware that the researcher observes their contributions does leave open the possibility of experimental demand effects, whereby subjects place more importance on voting and recycling because they think they are supposed to do so. However, our experimental design helps mitigate this possibility because our subjects were only ‘treated’ with one type of each trait. In other words, a subject who is told that an individual ‘always’ votes does not know that the counterfactual is an individual who ‘never’ votes. Because subjects are not exposed to both conditions (that is, we employ a between-subjects design), they are unlikely to make an explicit comparison between ‘always’ and ‘never’ voters and conclude that they should give more money to the individual who always votes because that is what the researcher would like to see. If there is some general inclination to give money simply because subjects are being studied, it should work against our observing differences across treatment conditions.

38 Among those who allocated any money to someone described as voting, the average allocation is about $0.18, while for those who gave any money to someone who recycled, the average allocation is approximately $0.22. These amounts represent, respectively, 36 and 44 per cent of the money allocated to the experimental participant to keep or give away.
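The percentages follow directly from the $0.50 endowment:

$$\frac{\$0.18}{\$0.50} \approx 36\ \text{per cent}, \qquad \frac{\$0.22}{\$0.50} \approx 44\ \text{per cent}.$$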

39 The NAES has two important features for this undertaking: a question about norms answered by a relatively large sample (approximately 9,500 respondents) and an identifier for county of residence.

40 In analysis assessing the direct relationship between individuals’ assessments of local voting norms and their own voting behavior, we find that a one-standard deviation increase in the ‘friends and family disappointed’ item is associated with a 4.3 percentage point increase in retrospective self-reported turnout. With county fixed effects, the coefficient is 4.2 percentage points. In the NAES sample, base reported turnout was 83 per cent, so this one-standard deviation increase in the ‘friends and family disappointed’ item corresponds to roughly a 5 per cent relative increase in turnout. Results are available upon request.
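The 5 per cent figure is the implied relative change:

$$\frac{4.3\ \text{percentage points}}{83\ \text{per cent base turnout}} \approx 0.052 \approx 5\ \text{per cent}.$$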

41 See, for example, Lax and Phillips 2009; Park, Gelman, and Bafumi 2004.

42 In each state fixed effects specification we must exclude observations from states with only a single eligible county.

43 This is consistent with the argument that the MRP estimates of county-level norms already account for the greater measurement volatility in counties with fewer survey respondents.
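The intuition can be seen in the standard partial-pooling approximation for a varying-intercept multilevel model of the kind underlying MRP (an illustration of the general mechanism, not the exact specification used here):

$$\hat{\theta}_{j} \approx \frac{\dfrac{n_{j}}{\sigma^{2}_{y}}\,\bar{y}_{j} + \dfrac{1}{\sigma^{2}_{\theta}}\,\mu}{\dfrac{n_{j}}{\sigma^{2}_{y}} + \dfrac{1}{\sigma^{2}_{\theta}}},$$

where $\bar{y}_{j}$ is the raw mean of the norm item among the $n_{j}$ respondents in county $j$, $\mu$ is the higher-level (state or national) mean, and $\sigma^{2}_{y}$ and $\sigma^{2}_{\theta}$ are the within- and between-county variances. As $n_{j}$ falls, the estimate is pulled toward $\mu$, so sparsely sampled counties are shrunk toward the common mean rather than left to their noisy raw means.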

44 A related explanation of the observed cross-sectional differences in the relationship between norms about voting in a county and actual turnout in the county is that individuals who hold these norms are more likely to be in social networks that promote voting (see Rolfe 2012; Huckfeldt, Mendez, and Osborn 2004). Our analysis showing the correlation between the social rewards of voting and levels of participation suggests that one mechanism by which these network effects may persist is through the provision of negative (positive) social rewards for failing to vote (voting).

45 We note that one potential source of bias in this comparison is that individuals were described as ‘always’ or ‘never’ voting, but, for example, as either recycling or not recycling. It is possible, although we view it as unlikely, that the more extreme always/never voting treatments produced larger effects than we would have obtained had we simply described the person as voting or not voting (as in the other conditions).

48 For example, almost 80 per cent of respondents to a module fielded as part of the post-election 2008 CCES reported sharing their vote choices with at least one other person, and over 30 per cent reported sharing their choices with more than ten people (authors’ analysis, details available upon request).

49 For example, we did not benchmark the social consequences of the decision to vote against those of income levels, religious beliefs and attendance, or respecting private morality (for example, being faithful to a spouse). Research suggests that income is an important source of information in the evaluation of others (Christopher and Schlenker 2000; Christopher et al. 2005) and therefore may provide a useful point of comparison for benchmarking the social consequences of political behaviors. More generally, respondents could be given more detailed information about a target individual to assess more thoroughly the extent to which political behavior ‘stands out’ as a social cue. Although this does not excuse the limitation, such potential exclusion restriction violations are endemic in survey experiments. We do have some evidence, however, that providing additional information about the individual described in the vignette may attenuate – but not erase – the effects of described voting behavior on how they are evaluated. Near the end of the CCES survey (approximately 5 minutes after the vignette experiments) we included a series of similar, but simplified, experiments that asked respondents to evaluate a hypothetical individual described as engaging in a single behavior (rather than multiple behaviors; for example, ‘Suppose you just met someone and learned the following information about them: they recycle’). Evaluations were made using the same three-item battery. The pattern of results (available upon request) was similar to that found in the (multi-behavior) vignette experiments, although the effect magnitudes in these single-behavior experiments were somewhat larger.

50 Campbell 2006.

51 For example, Coleman 1990.

52 It is possible that other factors may condition the association between social sanctions and turnout behavior. For example, being a member of a ‘political minority’ may make people more (or less) susceptible to social sanction concerns (Karpowitz et al. 2011; Huckfeldt and Sprague 1995).

53 This analysis relies on a series of additional experiments included in the CCES, similar to the vignette experiments, but in which only a single characteristic of the hypothetical individual was manipulated (see note 49). For each level of 2008 county turnout (the x-axis) we calculated differences between the average evaluations of those assigned to the ‘Always’ votes condition and the average evaluation of those in the ‘Never’ votes condition (n=506).

54 Of note, as we report in the text above, we also find that less politically interested individuals and those who did not vote in 2008 express more negative (and statistically distinguishable) evaluations of non-voters than of voters, suggesting that it is not only voters who reward participation.

55 For example, Putnam 2001.

References

AAPOR Executive Council Task Force. 2010. Research Synthesis: AAPOR Report on Online Panels. Public Opinion Quarterly 74:711–781.
Abrams, Samuel, Iversen, Torben, and Soskice, David. 2010. Informal Social Networks and Rational Voting. British Journal of Political Science 41:229–257.
Aldrich, John H. 1993. Rational Choice and Turnout. American Journal of Political Science 37:246–278.
Andreoni, James, and Miller, John. 2002. Giving According to GARP: An Experimental Test of the Consistency of Preferences for Altruism. Econometrica 70:737–753.
Ansolabehere, Stephen. 2009. Cooperative Congressional Election Study, 2009: Common Content. [Computer File] Release 2: 2 March 2010. Cambridge, MA: Massachusetts Institute of Technology (producer).
Berinsky, Adam J., Huber, Gregory A., and Lenz, Gabriel S. 2012. Evaluating Online Labor Markets for Experimental Research: Amazon.com’s Mechanical Turk. Political Analysis 20:351–368.
Blais, Andre. 2000. To Vote or Not to Vote? Pittsburgh, PA: University of Pittsburgh Press.
Bond, Robert M., Fariss, Christopher J., Jones, Jason J., Kramer, Adam D. I., Marlow, Cameron, Settle, Jaime, and Fowler, James H. 2012. A 61-Million-Person Experiment in Social Influence and Political Mobilization. Nature 489:295–298.
Camerer, Colin F., and Fehr, Ernst. 2004. Measuring Social Norms and Preferences Using Experimental Games: A Guide for Social Scientists. In Foundations of Human Sociality: Economic Experiments and Ethnographic Evidence from Fifteen Small-Scale Societies, edited by Joseph Henrich, Robert Boyd, Samuel Bowles, Colin Camerer, Ernst Fehr and Herbert Gintis, 55–95. Oxford: Oxford University Press.
Campbell, David E. 2006. Why We Vote. Princeton, NJ: Princeton University Press.
Chang, Linchiat, and Krosnick, Jon A. 2009. National Surveys via RDD Telephone Interviewing Versus the Internet: Comparing Sample Representativeness and Response Quality. Public Opinion Quarterly 73:641–678.
Charness, Gary, and Rabin, Matthew. 2002. Understanding Social Preferences with Simple Tests. Quarterly Journal of Economics 117:817–869.
Christopher, Andrew N., Morgan, Ryan D., Marek, Pam, Troisi, Jordan D., Jones, Jason R., and Reinhar, David F. 2005. Affluence Cues and First Impressions: Does it Matter How the Affluence was Acquired? Journal of Economic Psychology 26:187–200.
Christopher, Andrew N., and Schlenker, Barry R. 2000. The Impact of Perceived Material Wealth and Perceiver Personality on First Impressions. Journal of Economic Psychology 21:1–19.
Coleman, James S. 1990. Foundations of Social Theory. Cambridge, MA: Belknap Press.
Della Vigna, Stefano, List, John A., Malmendier, Ulrike, and Rao, Gautam. 2014. Voting to Tell Others. NBER Working Paper Series 19832. Available from http://www.nber.org/papers/w19832, accessed 25 March 2014.
Downs, Anthony. 1957. An Economic Theory of Democracy. New York: Addison Wesley.
Edlin, Aaron, Gelman, Andrew, and Kaplan, Noah. 2007. Voting as a Rational Choice: Why and How People Vote to Improve the Well-Being of Others. Rationality and Society 19:293–314.
Elaad, Eitan. 2003. Effects of Feedback on the Overestimated Capacity to Detect Lies and the Underestimated Ability to Tell Lies. Applied Cognitive Psychology 17:349–363.
Ellickson, Robert. 1991. Order Without Law: How Neighbors Settle Disputes. Cambridge, MA: Harvard University Press.
Engel, Christoph. 2011. Dictator Games: A Meta Study. Experimental Economics 14:583–610.
Fehr, Ernst, and Schmidt, Klaus M. 1999. A Theory of Fairness, Competition, and Cooperation. Quarterly Journal of Economics 114:817–868.
Gerber, Alan S., Green, Donald P., and Larimer, Christopher W. 2008. Social Pressure and Voter Turnout: Evidence from a Large-Scale Field Experiment. American Political Science Review 102:33–48.
Gerber, Alan S., Green, Donald P., and Larimer, Christopher W. 2010. An Experiment Testing the Relative Effectiveness of Encouraging Voter Participation by Inducing Feelings of Pride or Shame. Political Behavior 32:409–412.
Gerber, Alan S., Huber, Gregory A., Doherty, David, and Dowling, Conor M. 2013. Is There a Secret Ballot? Ballot Secrecy Perceptions and their Implications for Voting Behaviour. British Journal of Political Science 43:77–102.
Gerber, Alan S., Huber, Gregory A., Doherty, David, Dowling, Conor M., and Schwartzberg, Nicole. 2009. Using Battleground States as a Natural Experiment to Test Theories of Voting. Paper presented at the Annual Meeting of the American Political Science Association, Toronto, Canada, 3–6 September.
Gerber, Alan S., Huber, Gregory A., Doherty, David, Dowling, Conor M., and Hill, Seth J. 2010. Perceptions of the Voting Experience. Datafile, Yale University. Available from http://huber.research.yale.edu, accessed 14 June 2013.
Gilovich, Thomas, Medvec, Victoria Husted, and Savitsky, Kenneth. 2000. The Spotlight Effect in Social Judgment: An Egocentric Bias in Estimates of the Salience of One’s Own Actions and Appearance. Journal of Personality and Social Psychology 78:211–222.
Graziano, William G., Bruce, Jennifer, Sheese, Brad E., and Tobin, Renee M. 2007. Attraction, Personality, and Prejudice: Liking None of the People Most of the Time. Journal of Personality and Social Psychology 93:565–582.
Huckfeldt, Robert J., Morehouse Mendez, J., and Osborn, T. 2004. Disagreement, Ambivalence, and Engagement: The Political Consequences of Heterogeneous Networks. Political Psychology 25:65–95.
Huckfeldt, Robert J., and Sprague, John. 1995. Citizens, Politics, and Social Communication: Information and Influence in an Election Campaign. New York: Cambridge University Press.
Karp, Jeffrey A., and Brockington, David. 2005. Social Desirability and Response Validity: A Comparative Analysis of Overreporting Voter Turnout in Five Countries. Journal of Politics 67:825–840.
Karpowitz, Christopher F., Monson, J. Quin, Nielson, Lindsay, Patterson, Kelly D., and Snell, Steven A. 2011. Political Norms and the Private Act of Voting. Public Opinion Quarterly 75:659–685.
Katosh, John P., and Traugott, Michael W. 1981. The Consequences of Validated and Self-Reported Voting Measures. Public Opinion Quarterly 45:519–535.
Key, V.O., and Munger, Frank. 1959. Social Determinism and Electoral Decision. In American Voting Behavior, edited by Eugene Burdick and Arthur Brodbeck, 281–299. Glencoe, IL: Free Press.
Knack, Stephen. 1992. Civic Norms, Social Sanctions, and Voter Turnout. Rationality and Society 4:133–156.
Korenok, Oleg, Millner, Edward L., and Razzolini, Laura. 2008. Experimental Evidence on Inequality Aversion: Dictators Give to Help the Less Fortunate. Working paper. Richmond: Virginia Commonwealth University.
Lax, Jeffrey R., and Phillips, Justin H. 2009. How Should We Estimate Public Opinion in the States? American Journal of Political Science 53:107–121.
Malhotra, Neil, and Krosnick, Jon A. 2007. The Effect of Survey Mode and Sampling on Inferences About Political Attitudes and Behavior: Comparing the 2000 and 2004 ANES to Internet Surveys with Nonprobability Samples. Political Analysis 15:286–324.
Mann, Christopher B. 2010. Is There Backlash to Social Pressure? A Large-Scale Field Experiment on Voter Mobilization. Political Behavior 32:387–407.
Nickerson, David W. 2008. Is Voting Contagious? Evidence from Two Field Experiments. American Political Science Review 102:49–57.
Nolan, Jessica M., Schultz, P. Wesley, Cialdini, Robert B., Goldstein, Noah J., and Griskevicius, Vladas. 2008. Normative Social Influence is Underdetected. Personality and Social Psychology Bulletin 34:913–923.
Palfrey, Thomas R., and Rosenthal, Howard. 1985. Voter Participation and Strategic Uncertainty. American Political Science Review 79:62–78.
Panagopoulos, Costas. 2010. Affect, Social Pressure and Prosocial Motivation: Field Experimental Evidence of the Mobilizing Effects of Pride, Shame and Publicizing Voting Behavior. Political Behavior 32:369–386.
Park, David K., Gelman, Andrew, and Bafumi, Joseph. 2004. Bayesian Multilevel Estimation with Poststratification: State-Level Estimates from National Polls. Political Analysis 12:375–385.
Putnam, Robert D. 2001. Bowling Alone: The Collapse and Revival of American Community. New York: Simon & Schuster.
Raihani, Nichola J., and Bshary, Redouan. 2012. A Positive Effect of Flowers Rather Than Eye Images in a Large-Scale, Cross-Cultural Dictator Game. Proceedings of the Royal Society B 279:3556–3564.
Rolfe, Meredith. 2012. Voter Turnout: A Social Theory of Political Participation. Cambridge: Cambridge University Press.
Schultz, P. Wesley. 1999. Changing Behavior with Normative Feedback Interventions: A Field Experiment on Curbside Recycling. Basic & Applied Social Psychology 21:25–36.
Silver, Brian D., Anderson, Barbara A., and Abramson, Paul R. 1986. Who Overreports Voting? American Political Science Review 80:613–624.
Sinclair, Betsy, McConnell, Margaret, and Michelson, Melissa R. 2013. Local Canvassing and Social Pressure: The Efficacy of Grassroots Voter Mobilization. Political Communication 30:42–57.
Smith, Adam. 1976 (1759). The Theory of Moral Sentiments, edited by E.G. West. Indianapolis: Liberty Fund, Inc.
Snyder, Mark, and Haugen, Julie A. 1994. Why Does Behavioral Confirmation Occur? A Functional Perspective on the Role of the Perceiver. Journal of Experimental Social Psychology 30:218–246.
Vavreck, Lynn. 2007. The Exaggerated Effects of Advertising on Turnout: The Dangers of Self-Reports. Quarterly Journal of Political Science 2:325–343.
Vavreck, Lynn, and Rivers, Douglas. 2008. The 2006 Cooperative Congressional Election Study. Journal of Elections, Public Opinion and Parties 18:355–366.
Table 1 Experimental Designs and Question Wording

Table 2 Social Evaluations Analysis, Vignette Experiments

Fig. 1 Differences in mean rankings of behaviors for each domain, Mechanical Turk behavior-ranking experiments. Note: bars are bootstrapped 95 per cent confidence intervals.

Table 3 Results of Allocation Experiments

Table 4 Observational Analysis: NAES Data Matched to County-Level Voting-Age Population Turnout Records

Fig. 2 Effect of county turnout on differences in evaluations of voters and non-voters, CCES single-item experiments. Note: in an OLS regression (n=20) predicting the difference in social evaluations across county turnout quantiles (1 to 20), coefficient=0.028 and SE=0.011. The average number of observations per county turnout quantile is 25.3.

Supplementary material: Gerber Supplementary Material, Appendix (PDF, 43.5 KB), available online.