What has allowed inequalities in material resources to mount in advanced democracies? This chapter considers the role of media reporting on the economy in weakening accountability mechanisms that might otherwise have incentivized governments to pursue more equal outcomes. Building on prior work on the United States, we investigate how journalistic depictions of the economy relate to real distributional developments across OECD countries. Using sentiment analysis of economic news content, we demonstrate that the evaluative content of the economic news strongly and disproportionately tracks the fortunes of the very rich and that good (bad) economic news is more common in periods of rising (falling) income shares at the top. We then propose and test an explanation in which pro-rich biases in news tone arise from a journalistic focus on the performance of the economy in the aggregate, while aggregate growth is itself positively correlated with relative gains for the rich. The chapter’s findings suggest that the democratic politics of inequality may be shaped in important ways by the skewed nature of the informational environment within which citizens form economic evaluations.
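As a stylized illustration of the kind of relationship the chapter documents, the sketch below correlates a news-tone series with income growth at different points of the distribution. Everything here is synthetic: the series and the 0.6/0.2 loadings are invented to mimic a pro-rich tone bias, whereas the chapter's own analysis uses measured news sentiment and OECD income data.

```python
import numpy as np

# Synthetic stand-ins for the chapter's series: yearly tone of economic
# news and yearly income growth at the top and at the median.
rng = np.random.default_rng(0)
top_growth = rng.normal(size=200)      # growth in top income shares (stand-in)
median_growth = rng.normal(size=200)   # growth in median incomes (stand-in)
# Invented assumption: measured tone loads more heavily on gains at the top.
tone = 0.6 * top_growth + 0.2 * median_growth + rng.normal(size=200)

print(f"corr(tone, top growth):    {np.corrcoef(tone, top_growth)[0, 1]:.2f}")
print(f"corr(tone, median growth): {np.corrcoef(tone, median_growth)[0, 1]:.2f}")
```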
This chapter provides a primer on the logic of Bayesian updating and shows how it can be used to answer causal queries. We illustrate with applications to correlational and process-tracing inferences.
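A minimal numeric sketch of the logic, with invented probabilities: a prior belief about a case-level causal claim is updated on a piece of within-case evidence via Bayes' rule.

```python
# Hypothesis H: "X caused Y in this case." All numbers are illustrative.
prior_H = 0.5            # prior belief that H is true
p_clue_given_H = 0.8     # chance of observing the clue if H is true
p_clue_given_notH = 0.3  # chance of observing the clue if H is false

# Bayes' rule: P(H | clue) = P(clue | H) * P(H) / P(clue)
p_clue = p_clue_given_H * prior_H + p_clue_given_notH * (1 - prior_H)
posterior_H = p_clue_given_H * prior_H / p_clue
print(f"posterior that X caused Y: {posterior_H:.3f}")  # 0.727
```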
We illustrate Bayesian mixed methods with causal models through a reexamination of the models of inequality and democratization and of institutions and growth introduced in Chapter 8. We show how to use updated population models to draw both population- and case-level inferences, demonstrate situations in which learning is minimal and in which it is more substantial, and illustrate how the probative value of case-level evidence can be empirically established through model updating.
We turn to the problem of choosing between going “wide” and going “deep”: between seeking a little information on a large number of cases and studying a smaller number of cases intensively. We outline a simulation-based approach to identifying the optimal mix of breadth and depth. Simulations suggest that going deep is especially valuable where confounding is a concern, for queries about causal pathways, and where models embed strong beliefs about causal effects. We also find that there are diminishing marginal returns to each strategy and that depth often provides the greatest gains when we have cross-case evidence on only a modest number of cases.
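The sketch below gives a stylized version of such a simulation under assumptions we invent for illustration: cheap cross-case observations classify a case's causal type correctly with probability 0.7, intensive within-case study with probability 0.95, and we compare the expected posterior variance a researcher would face under a wider versus a deeper design of equal (hypothetical) cost.

```python
import numpy as np

rng = np.random.default_rng(1)
grid = np.linspace(0.01, 0.99, 99)   # grid over b, the share of positive-effect cases
prior = np.ones_like(grid) / grid.size

def expected_posterior_var(n, accuracy, b_true=0.6, sims=2000):
    """Average posterior variance on b from n signals of given accuracy."""
    out = []
    for _ in range(sims):
        types = rng.random(n) < b_true                      # each case's true type
        signals = np.where(rng.random(n) < accuracy, types, ~types)
        k = int(signals.sum())
        # P(signal = 1 | b) = accuracy * b + (1 - accuracy) * (1 - b)
        p1 = accuracy * grid + (1 - accuracy) * (1 - grid)
        post = prior * p1**k * (1 - p1)**(n - k)            # discrete Bayes update
        post /= post.sum()
        mean = (grid * post).sum()
        out.append(((grid - mean) ** 2 * post).sum())
    return float(np.mean(out))

print("wide (n=40, accuracy=0.70):", round(expected_posterior_var(40, 0.70), 4))
print("deep (n=10, accuracy=0.95):", round(expected_posterior_var(10, 0.95), 4))
```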
This chapter shows how to use causal models to inform the selection of cases for intensive analysis. We outline a procedure in which we predict the inferences that will be made when future data are found and use these predictions to inform case-selection strategies. We ask: Given a set of cases on which we already have data on X and Y, which cases will it be most advantageous to choose for more in-depth investigation? We show that the optimal case-selection strategy depends jointly on the model we start with and the causal question we seek to answer, and we draw out the implication that researchers should be wary of generic case-selection principles.
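A toy version of the idea, with invented clue probabilities: for each candidate case, compute the posterior under each possible clue realization, weight by how likely each realization is, and pick the case whose clue is expected to reduce uncertainty most.

```python
def expected_learning(prior, p_clue_if_true, p_clue_if_false):
    """Expected reduction in variance of a binary hypothesis from one clue."""
    p_clue = p_clue_if_true * prior + p_clue_if_false * (1 - prior)
    post1 = p_clue_if_true * prior / p_clue                    # clue observed
    post0 = (1 - p_clue_if_true) * prior / (1 - p_clue)        # clue absent
    prior_var = prior * (1 - prior)
    exp_post_var = (p_clue * post1 * (1 - post1)
                    + (1 - p_clue) * post0 * (1 - post0))
    return prior_var - exp_post_var

# Hypothetical candidates: clue probabilities differ by the case's X, Y values.
candidates = {"X=1, Y=1 case": (0.9, 0.4), "X=0, Y=1 case": (0.6, 0.5)}
for name, (pt, pf) in candidates.items():
    print(name, "expected learning:", round(expected_learning(0.5, pt, pf), 4))
```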
This chapter addresses the question of whether causal models can themselves be justified. We outline strategies for justifying models on the basis of prior data and so empirically grounding beliefs about the probative value of clues.
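In the simplest, purely frequency-based version of this idea (a crude stand-in for the chapter's model-updating machinery), a clue's probative value is estimated from prior cases in which the causal relationship is regarded as settled. The tallies below are invented.

```python
# Did we observe the clue in settled cases where the hypothesis held / did not?
clue_when_H = [1, 1, 0, 1, 1, 0, 1, 1]       # invented records
clue_when_notH = [0, 1, 0, 0, 0, 1, 0, 0]    # invented records

p_clue_given_H = sum(clue_when_H) / len(clue_when_H)            # 0.75
p_clue_given_notH = sum(clue_when_notH) / len(clue_when_notH)   # 0.25
# These empirically grounded likelihoods now quantify the clue's probative
# value for process tracing in new cases.
print(p_clue_given_H, p_clue_given_notH)
```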
In this concluding chapter, we summarize the key payoffs of the causal-model-based approach to causal inference, point to a set of important limitations of the approach, and sketch out what we see as next steps in strengthening model-based inference.
We describe strategies for figuring out whether a model is likely doing more harm than good and for comparing the performance of different models to one another.
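One generic diagnostic in this spirit (an illustration, not necessarily the chapter's own procedure) is to compare models by their predictive performance on held-out data; the outcomes and model predictions below are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.binomial(1, 0.7, size=100)   # held-out binary outcomes (synthetic)
p_model_a, p_model_b = 0.7, 0.5      # each model's predicted P(y = 1)

def log_loss(y, p):
    """Average negative predictive log-likelihood; lower is better."""
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

print("model A log-loss:", round(log_loss(y, p_model_a), 3))
print("model B log-loss:", round(log_loss(y, p_model_b), 3))
```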
We provide a lay-language primer on the counterfactual model of causal inference and the logic of causal models. Topics include representing causal models with causal graphs and using causal graphs to read off relations of conditional independence among variables in a causal domain.
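The sketch below reads conditional independencies off a small DAG using the moralization criterion, which is equivalent to d-separation; recent versions of networkx also ship a built-in check (nx.is_d_separator). The graph and queries are invented examples.

```python
import networkx as nx

def d_separated(G, xs, ys, zs):
    """Check Xs independent of Ys given Zs in DAG G (moralization criterion)."""
    nodes = set(xs) | set(ys) | set(zs)
    anc = set(nodes)
    for n in nodes:                      # ancestral subgraph of the query nodes
        anc |= nx.ancestors(G, n)
    H = G.subgraph(anc)
    M = nx.Graph(H.edges())              # drop edge directions
    M.add_nodes_from(H.nodes())
    for n in H.nodes():                  # "marry" parents sharing a child
        parents = list(H.predecessors(n))
        for i in range(len(parents)):
            for j in range(i + 1, len(parents)):
                M.add_edge(parents[i], parents[j])
    M.remove_nodes_from(zs)              # condition by deleting the given set
    return not any(nx.has_path(M, a, b) for a in xs for b in ys)

# Chain X -> M -> Y plus collider X -> K <- Y.
G = nx.DiGraph([("X", "M"), ("M", "Y"), ("X", "K"), ("Y", "K")])
print(d_separated(G, {"X"}, {"Y"}, {"M"}))       # True: M blocks the chain
print(d_separated(G, {"X"}, {"Y"}, {"M", "K"}))  # False: conditioning on the collider K opens a path
```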
This chapter illustrates how we can express theoretical ideas in the form of a causal model by translating three arguments from published social science research into models. We illustrate using Paul Pierson’s (1994) work on welfare-state retrenchment, Elizabeth Saunders’ (2011) research on military intervention strategies, and Adam Przeworski and Fernando Limongi’s (1997) study of the relationship between national wealth and democracy.
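As a flavor of what such a translation produces (this rendering is ours, not the book's), Przeworski and Limongi's claim that national wealth sustains democracies without causing transitions to democracy can be expressed as a DAG in which wealth points to democratic survival but not to democratic transition:

```python
import networkx as nx

# One crude rendering of Przeworski and Limongi (1997): Wealth affects the
# Survival of democracy but not the Transition to it; Transition also
# affects Survival. The absent Wealth -> Transition edge encodes the
# argument's key negative claim.
G = nx.DiGraph([("Wealth", "Survival"), ("Transition", "Survival")])
assert not G.has_edge("Wealth", "Transition")
print(sorted(G.edges()))
```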
This chapter extends the analysis from Chapter 7 to multi-case settings and demonstrates how we can use the approach to undertake mixed-method analysis. We show how, when analyzing multiple cases, we can update our theory from the evidence and then use our updated theory to draw both population- and case-level inferences. While single-case process tracing is entirely theory-informed, mixed-data inference is thus also “data”-informed. We show how the approach can integrate information across an arbitrary mix of data structures, such as “thin” data on causes and outcomes in many cases and “thicker” process evidence on a subset of those cases.
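A toy version of such integration, under a three-type model we invent for illustration: units are “always-Y=1” types (share fixed at 0.2), “beneficial” types (Y = X, share theta, the quantity of interest), or “never” types. A within-case clue K appears with probability 0.9 for beneficial types and 0.2 for always types. Thin cases contribute P(Y | X, theta), clue cases contribute P(Y, K | X, theta), and a single posterior combines both kinds of evidence.

```python
import numpy as np

ALWAYS = 0.2                                  # fixed share of always-Y=1 types
theta = np.linspace(0.005, 0.79, 200)         # grid over share of beneficial types
log_post = np.zeros_like(theta)               # flat prior over theta

thin_cases = [(1, 1), (1, 1), (1, 0), (0, 0)]   # (X, Y) observations, invented
deep_cases = [(1, 1, 1), (1, 1, 0)]             # (X, Y, K) observations, invented

def p_y1(x, th):                              # P(Y = 1 | X = x, theta)
    return ALWAYS + th * x

for x, y in thin_cases:                       # "thin" cross-case evidence
    log_post += np.log(p_y1(x, theta) if y == 1 else 1 - p_y1(x, theta))

for x, y, k in deep_cases:                    # X=1, Y=1 cases with a clue
    pk_b, pk_a = (0.9, 0.2) if k == 1 else (0.1, 0.8)
    # P(Y=1, K | X=1, theta): beneficial or always types can produce Y=1
    log_post += np.log(theta * pk_b + ALWAYS * pk_a)

post = np.exp(log_post - log_post.max())
post /= post.sum()
print("posterior mean of theta:", round(float((theta * post).sum()), 3))
```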
We connect the literature on causal models to qualitative inference strategies used in process tracing. The chapter outlines a procedure for drawing case-level causal inferences from a causal model and within-case evidence. We also show how a key result from the causal-models literature provides a condition for when the observation of a node in a causal model (a “clue”) may be (or certainly will not be) informative, and we extract a set of implications for process-tracing methods.
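A numeric illustration of the informativeness condition, with invented numbers: a clue shifts beliefs about “X caused Y” only if its probability differs across the hypotheses, which is exactly what fails when the clue node is d-separated from the case's causal type given what has already been observed.

```python
def posterior(prior, p_clue_given_H, p_clue_given_notH):
    """Posterior on H after observing the clue, by Bayes' rule."""
    p_clue = p_clue_given_H * prior + p_clue_given_notH * (1 - prior)
    return p_clue_given_H * prior / p_clue

print(posterior(0.4, 0.9, 0.3))   # informative clue: belief moves to 0.667
print(posterior(0.4, 0.5, 0.5))   # d-separated clue: belief stays at 0.4
```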