What are the consequences of including a “don't know” (DK) response option in attitudinal survey questions? Existing research, based on traditional survey modes, argues that it reduces the effective sample size without improving the quality of responses. We contend that it can have important effects not only on estimates of aggregate public opinion, but also on estimates of opinion differences between subgroups of the population who have different levels of political information. Through a pre-registered online survey experiment conducted in the United States, we find that the DK response option has consequences for opinion estimates in the present day, when most organizations rely on online panels, but mainly for respondents with low levels of political information and on low-salience issues. These findings imply that the exclusion of a DK option can matter, with implications for assessments of preference differences and our understanding of their impacts on politics and policy.
Mass media are often portrayed as having large effects on democratic politics. Media content is not simply an exogenous influence on publics and policymakers, however. There is reason to think that this content reflects publics and politics as much as—if not more than—it affects them. This letter examines those possibilities, focusing on interactions between news coverage, budgetary policy, and public preferences in the defense, welfare, and health-care domains in the United States. Results indicate that media play a largely reflective role. Taking this role into account, we suggest, leads to a fundamentally different perspective on how media content matters in politics.
Presidential elections can be forecast using information from political and economic conditions, polls, and a statistical model of changes in public opinion over time. However, these “knowns” about how to make a good presidential election forecast come with many unknowns due to the challenges of evaluating forecast calibration and communication. We highlight how incentives may shape forecasts, and particularly forecast uncertainty, in light of calibration challenges. We illustrate these challenges in creating, communicating, and evaluating election predictions, using The Economist and FiveThirtyEight forecasts of the 2020 election as examples, and offer recommendations for forecasters and scholars.
This chapter offers our first empirical analyses of media coverage of policy, across the various policy domains and news organizations. We first compare the aggregated “media signals” to actual changes in policy. Does aggregated coverage follow policy over time? Does this relationship vary across domains? Given the multiple measures developed in the previous chapter, this chapter also considers whether and how the measures matter for what we observe. This chapter centers on figures depicting the ebb and flow of policy and media coverage over time. In so doing, it offers the first large-scale comparison of policy change, and media coverage of policy change, across six domains over a forty-year period. Do patterns vary across newspapers? How about across media, particularly television coverage? Does it match what we see in newspapers? This chapter offers some critical diagnostics, assessing the degree to which media coverage has followed public policy; and relatedly, whether media coverage reliably includes the information citizens need to respond to policy change.
This chapter spells out how we believe the mass media cover public policy, particularly the outputs government produces. Although there is a considerable body of work detailing a range of biases in coverage and a lack of policy content, we posit that mass media can and do track trends in policy, at least in very salient policy areas that attract a lot of attention. Put differently, even as media can be biased and provide inaccurate information, there also can be a signal of important policy actions amidst the noise. News organizations have a professional and economic interest in providing it, at least up to a point. We are especially interested in media coverage of policy change. This is in part because we suppose that media often report on change in policy, not levels, much as research on news coverage of other areas, for example, economic conditions, has revealed. (Change also seems easier to measure directly.) The conceptualization and theory in this chapter guide both the measurement and analyses that follow.
Chapter 3 laid out the building blocks for our measures of the media policy signal and presented a preliminary version of that signal across newspapers, television, and social media content. We now turn to a series of refinements and robustness tests, critical checks on the accuracy of our media policy signal measures. We begin with some comparisons between crowdsourced codes and those produced by trained student coders. Assessing the accuracy of crowdsourced data is important for the dictionary-based measures in the preceding chapter and for the comparisons with machine-learning-based measures introduced in this chapter. We then turn to crowdsourced content analyses of the degree to which extracted content reflects past, present, or future changes in spending. Our measures likely reflect some combination of these spending changes, and understanding the balance of each will be important for analyses in subsequent chapters. Finally, we present comparisons of dictionary-based measures and those based on machine-learning, using nearly 30,000 human-coded sentences and random forest models to replicate that coding across our entire corpus.
Does media coverage matter for the functioning of representative democracy? Do people notice news coverage? Do they take it into account? In particular, do citizens use the information that media content conveys to update their policy preferences? These questions are the central motivation for this book. In this chapter we try to provide some answers. We begin by introducing our principal measures of public preferences from the General Social Survey. We then consider a smaller, unique body of data on public perceptions of policy change, from the American National Election Studies. These data allow us some preliminary insight into whether the public notices government spending and media coverage of government spending. The remainder of the chapter then presents results of analyses of public preferences, first to establish the effects of spending on preferences, and then to assess the role of the media signal. Results document thermostatic public responsiveness, as found in previous research, and also that news coverage is a critical mediating force.
Preceding chapters have provided evidence that media coverage frequently reflects public policy, and that public preferences respond to a combination of policy and the media “policy signal.” Those results speak to some important questions about the nature and functioning of representative democracy, we believe. A good number of questions nevertheless remain. This chapter attempts to address some of what seem to us to be the most pressing issues. First, we consider the impact that trends in media consumption have on public responsiveness. Second, we consider heterogeneity in public responsiveness to the media policy signal. Third, we reconsider the causal relationships between policy, news coverage, and the public. Fourth and finally, we investigate several of the domain-specific media effects identified in Chapter 6. Media coverage of policy matters, but to varying degrees and in different ways. We offer additional analyses here to help illuminate some of these domain-level differences in information flows.
This chapter provides an introduction to the ideas and literatures that guide the analyses that follow. We consider past work on the potential role of media coverage in representative democracy and public responsiveness.
This chapter moves from theory to practice and implements a measure of media coverage. We introduce our database of news coverage. We also describe the unique “layered dictionary” approach used to identify sentences on the direction of policy change. The focus on change in policy rather than levels is critical, and we discuss this in some detail. We also compare the application of both dictionary and supervised machine-learning approaches to content analyses of news content. This chapter is necessarily technical, but it also is an opportunity for us to introduce the methods to a broader audience. We escort readers through the various available approaches, our implementation of them, and then an assessment of the outputs they produce. We end the chapter with some substantive findings: the overall amount of coverage of policy change in newspapers and television, and the general trends in aggregated “media signals” generated by the different approaches.
This chapter reviews the findings in previous chapters and considers their implications for research on media and democracy, as well as for citizens and journalists.
Around the world, there are increasing concerns about the accuracy of media coverage. It is vital in representative democracies that citizens have access to reliable information about what is happening in government policy, so that they can form meaningful preferences and hold politicians accountable. Yet much research and conventional wisdom questions whether the necessary information is available, consumed, and understood. This study is the first large-scale empirical investigation into the frequency and reliability of media coverage in five policy domains, and it provides tools that can be exported to other areas, in the US and elsewhere. Examining decades of government spending, media coverage, and public opinion in the US, this book assesses the accuracy of media coverage, and measures its direct impact on citizens' preferences for policy. This innovative study has far-reaching implications for those studying and teaching politics as well as for reporters and citizens.