Regression models are perhaps the most common form of data analysis used to evaluate alternative explanations for outcomes of interest to quantitatively oriented social scientists. In the past 50 years, a remarkable variety of regression models have been developed by statisticians. Accordingly, most major data analysis software packages allow for regression estimation of the relationships between interval and categorical variables, in cross sections and longitudinal panels, and in nested and multilevel patterns. In this chapter, however, we restrict our attention to ordinary least squares (OLS) regression, focusing mostly on the regression of an interval-scaled variable on a binary causal variable. As we will show, the issues are complicated enough for these models, and it is our knowledge of how least squares models work that allows us to explain this complexity. In addition, nearly all of the insight that can be gained from a deep examination of OLS models carries over to more complex regression models because the identification and heterogeneity issues that generate the complexity apply in analogous fashion to all regression-type models.
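To fix ideas, the following sketch (simulated data, not drawn from the text) shows the simplest case discussed above: an OLS regression of an interval-scaled outcome on a binary causal variable, where the coefficient on the binary variable is the quantity of interest.

```python
import numpy as np

# Hypothetical simulated data: a binary causal variable d and an
# interval-scaled outcome y with a constant effect of 2.0 built in.
rng = np.random.default_rng(0)
n = 1000
d = rng.integers(0, 2, size=n)            # binary cause (0 = control, 1 = treatment)
y = 10 + 2.0 * d + rng.normal(0, 1, n)    # interval-scaled outcome

# OLS via least squares: the slope on d estimates the effect.
X = np.column_stack([np.ones(n), d])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print(beta[1])  # close to the true effect of 2.0
```

With a binary regressor and no other covariates, this coefficient is identical to the difference in mean outcomes between the two groups.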
In this chapter, we present least squares regression from three different perspectives: (1) regression as a descriptive modeling tool, (2) regression as a parametric adjustment technique for estimating causal effects, and (3) regression as a matching estimator of causal effects. We give more attention to the third of these three perspectives on regression than is customary in methodological texts because this perspective allows one to understand the others from a counterfactual perspective.
In this chapter, we present the basic conditioning strategy for the estimation of causal effects. We first provide an account of the two basic implementations of conditioning – balancing the determinants of the cause of interest and adjusting for other causes of the outcome – using the language of “back-door paths.” After explaining the unique role that collider variables play in systems of causal relationships, we present what has become known as the back-door criterion for sufficient conditioning to identify a causal effect. To bring the back-door criterion into alignment with related guidance based on the potential outcome model, we then present models of causal exposure, introducing the treatment assignment and treatment selection literature from statistics and econometrics. We conclude with a discussion of the identification and estimation of conditional average causal effects by conditioning.
Conditioning and Directed Graphs
In Section 1.5, we introduced the three most common approaches for the estimation of causal effects, using language from the directed graph literature: (1) conditioning on variables that block all back-door paths from the causal variable to the outcome variable, (2) using exogenous variation in an appropriate instrumental variable to isolate covariation in the causal variable and the outcome variable, and (3) establishing the exhaustive and isolated mechanism that intercepts the effect of the causal variable on the outcome variable and then calculating the causal effect as it propagates through the mechanism.
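Strategy (1) can be illustrated with a small simulation (an assumed setup, not an example from the text): a confounder C lies on a back-door path from the cause D to the outcome Y, and conditioning on C blocks that path.

```python
import numpy as np

# Simulated causal system: C -> D and C -> Y open a back-door path
# between D and Y; the true effect of D on Y is 1.0.
rng = np.random.default_rng(1)
n = 5000
c = rng.normal(0, 1, n)
d = (c + rng.normal(0, 1, n) > 0).astype(float)   # causal exposure depends on C
y = 1.0 * d + 2.0 * c + rng.normal(0, 1, n)

# Unconditional contrast: the back-door path is open, so the estimate is biased.
naive = y[d == 1].mean() - y[d == 0].mean()

# Conditioning on C (here, including it as a regressor) blocks the path.
X = np.column_stack([np.ones(n), d, c])
adjusted = np.linalg.lstsq(X, y, rcond=None)[0][1]
print(naive, adjusted)  # naive is inflated; adjusted is near 1.0
```

The contrast between the two estimates is the essence of the conditioning strategy: the adjusted estimate recovers the causal effect only because C suffices to block all back-door paths in this constructed example.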
Do charter schools increase the test scores of elementary school students? If so, how large are the gains in comparison to those that could be realized by implementing alternative educational reforms? Does obtaining a college degree increase an individual's labor market earnings? If so, is this particular effect large relative to the earnings gains that could be achieved only through on-the-job training? Did the use of a butterfly ballot in some Florida counties in the 2000 presidential election cost Al Gore votes? If so, was the number of miscast votes sufficiently large to have altered the election outcome?
At their core, these types of questions are simple cause-and-effect questions of the form, Does X cause Y? If X causes Y, how large is the effect of X on Y? Is the size of this effect large relative to the effects of other causes of Y?
Simple cause-and-effect questions are the motivation for much research in the social, demographic, and health sciences, even though definitive answers to cause-and-effect questions may not always be possible to formulate given the constraints that researchers face in collecting data and evaluating alternative explanations. Even so, there is reason for optimism about our current and future abilities to effectively address cause-and-effect questions. Over the past four decades, a counterfactual model of causality has been developed and refined, and as a result a unified framework for the prosecution of causal questions is now available.
In this chapter, we introduce the foundational components of the potential outcome model. We first discuss causal states, the relationship between potential and observed outcome variables, and the usage of the label “counterfactual” to refer to unobserved potential outcomes. We introduce average causal effects and then discuss the assumption of causal effect stability, which is maintained explicitly in most applications that use the potential outcome model. We discuss simple estimation techniques and demonstrate the importance of considering the relationship between the potential outcomes and the process of causal exposure. We conclude by extending our presentation to over-time potential outcome variables for one or more units of analysis, as well as causal variables that take on more than two values.
Defining the Causal States
The counterfactual framework for observational data analysis presupposes the existence of well-defined causal states to which all members of the population of interest could be exposed. As we will show in the next section, causal effects are then defined based on comparisons of outcomes that would result from exposure to alternative causal states. For a binary cause, the two states are usually labeled treatment and control. When a many-valued cause is analyzed, the convention is to refer to the alternative states as alternative treatments.
Although these labels are simple, the assumed underlying states must be very carefully defined so that the contribution of an empirical analysis based upon them is clear.
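The logic of well-defined causal states can be made concrete with a hypothetical illustration (simulated values, not the book's data): each unit has one potential outcome per state, only one of which is ever observed, and nonrandom exposure breaks the naive observed-data contrast.

```python
import numpy as np

# Each unit has two potential outcomes: y0 (control state) and y1
# (treatment state); individual-level effects are heterogeneous.
rng = np.random.default_rng(2)
n = 10000
y0 = rng.normal(10, 1, n)
y1 = y0 + rng.normal(2, 1, n)
ate = (y1 - y0).mean()                   # average causal effect, ~2.0

# Suppose units with high y1 select into treatment. The naive contrast
# of observed outcomes then diverges from the average causal effect.
d = y1 > np.median(y1)                   # nonrandom causal exposure
naive = y1[d].mean() - y0[~d].mean()
print(ate, naive)                        # naive is biased upward
```

The gap between `ate` and `naive` is exactly the point stressed above: the definition of the effect rests on the two potential outcomes, while estimation must confront the process of causal exposure.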
With an Extended Example of a Weighted Regression Alternative to Matching
In the last chapter, we argued that traditional regression estimators of causal effects have substantial weaknesses, especially when individual-level causal effects are heterogeneous in ways that are not explicitly parameterized. In this chapter, we will introduce weighted regression estimators that solve these problems by appropriately averaging individual-level heterogeneity across the treatment and control groups using estimated propensity scores. In part because of this capacity, weighted regression estimators are now at the frontier of causal effect estimation, alongside the latest matching estimators that are also designed to properly handle such heterogeneity.
In the long run, we expect that weighted regression estimators will prove to be a common choice among alternative conditioning procedures that are used to estimate causal effects. In fact, we expect that weighted regression estimators will be used more frequently than the matching estimators presented in Chapter 5 when there is good overlap in the distributions of adjustment variables across the treatment and control groups. We have four primary reasons for this prediction, each of which we will explain in this chapter. First, weighted regression estimators allow the analyst to adopt the spirit of matching, and the clear thinking that it promotes, within a mode of data analysis that utilizes widely available software and that is familiar to most social scientists.
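A minimal inverse-probability-weighting sketch conveys the core idea (this is an assumed setup, not the book's code, and for simplicity the propensity score is treated as known rather than estimated):

```python
import numpy as np

# Effects vary with a covariate c, and exposure depends on c through
# the propensity score p; the true average effect is 1.0.
rng = np.random.default_rng(3)
n = 20000
c = rng.normal(0, 1, n)
p = 1 / (1 + np.exp(-c))                 # propensity score (known here)
d = (rng.uniform(size=n) < p).astype(float)
y = (1.0 + 0.5 * c) * d + c + rng.normal(0, 1, n)

# Inverse-probability weights reweight each group to the full population,
# averaging the heterogeneous effects correctly.
w = d / p + (1 - d) / (1 - p)
ate_ipw = (np.average(y[d == 1], weights=w[d == 1])
           - np.average(y[d == 0], weights=w[d == 0]))
print(ate_ipw)  # near the true average effect of 1.0
```

In practice the weights are built from an estimated propensity score, and the weighted contrast can equivalently be computed as a weighted least squares regression of the outcome on the cause, which is what makes the approach feel familiar to most social scientists.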
What role should the counterfactual approach to observational data analysis play in causal analysis in the social sciences? Some scholars see its elaboration as a justification for experimental methodology as an alternative to observational data analysis. We agree that by laying bare the challenges that confront causal analysis with observational data, the counterfactual approach does indirectly support experimentation as an alternative to observation. But, because experiments are often (perhaps usually) infeasible for most of the causal questions that practicing social scientists appear to want to answer, this implication, when considered apart from others, is understandably distressing.
We see the observational data analysis methods associated with the potential outcome model, motivated using directed graphs, as useful tools that can help to improve the investigation of causal relationships within the social sciences, especially when experiments are infeasible. Accordingly, we believe that the methods associated with the counterfactual approach complement and extend older approaches to causal analysis with observational data by shaping the goals of an analysis, requiring explicit consideration of individual-level heterogeneity of causal effects, encouraging a wider consideration of available identification strategies, and clarifying standards for credible interpretations.
In this chapter, we first shore up our presentation of the counterfactual approach by considering several critical perspectives on its utility. We weigh in with the arguments that we find most compelling, and it will not be surprising to the reader that we find these objections less serious than do those who have formulated them.
Social scientists have recognized for decades that the best explanations for how causes bring about their effects must specify in empirically verifiable ways the causal pathways between causes and their outcomes. This valuation of depth of causal explanation applies to the counterfactual tradition as well. Accordingly, it is widely recognized that a consistent estimate of a counterfactually defined causal effect of D on Y may not qualify as a sufficiently deep causal account of how D affects Y, based on the standards that prevail in a particular field of study.
In this chapter, we first discuss the dangers of insufficiently deep explanations of causal effects, reconsidering the weak explanatory power of some of the natural experiments discussed already in Chapter 9. We then consider the older literature on intervening variables in the social sciences as a way to introduce the mechanism-based estimation strategy proposed by Pearl (2009). In some respects, Pearl's approach is completely new, and it shows in a novel and sophisticated way how intervening variables can be used to identify causal effects, even when unblocked back-door paths between a causal variable and an outcome variable are present. In other respects, however, Pearl's approach is refreshingly familiar, as it helps to clarify the appropriate usage of intervening variables when attempting to deepen the explanation of a causal claim.
Independent of Pearl's important work, a diverse group of social scientists has recently argued for the importance of mechanisms to all explanation in social science research.
In this chapter, we consider how analysts can proceed when no observed variables are available to point-identify and then estimate causal effects using the procedures explained in prior chapters. We discuss three complementary approaches. First, we will review the early literature on selection-bias adjustment, which shows clearly how distributional assumptions about unobservables, when harnessed from within a structural model, can identify causal parameters of interest. Point identification and estimation utilizing this strategy were common before it became clear to researchers in the 1990s how rarely the required distributional assumptions were warranted for their applications. The more recent selection-bias adjustment literature offers less restrictive semiparametric methods, but it also reemphasizes the relative value of instrumental variables in contrast to distributional assumptions for unobservables.
We will then consider the exact opposite strategy. Rather than place strong and usually untestable assumptions on the distributional characteristics of unobservables, the set-identification approach asks what can be learned about particular causal parameters by asserting only weak but defendable assumptions about unobserved variables. Instead of attempting to point-identify average causal effects, the set-identification approach suggests that it is more credible to try to limit the interval within which an average treatment effect must fall.
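The interval logic of set identification can be sketched for a bounded outcome (a Manski-style no-assumptions bound on simulated data; the numbers are illustrative, not from the text):

```python
import numpy as np

# Outcome y is bounded in [0, 1]; each unit's unobserved potential
# outcome is replaced by its worst and best possible values.
rng = np.random.default_rng(4)
n = 10000
d = rng.integers(0, 2, n).astype(bool)
y = rng.uniform(0, 1, n)

pr = d.mean()
ey1_lo = pr * y[d].mean() + (1 - pr) * 0.0   # missing Y(1) set to 0
ey1_hi = pr * y[d].mean() + (1 - pr) * 1.0   # missing Y(1) set to 1
ey0_lo = (1 - pr) * y[~d].mean() + pr * 0.0
ey0_hi = (1 - pr) * y[~d].mean() + pr * 1.0
ate_bounds = (ey1_lo - ey0_hi, ey1_hi - ey0_lo)
print(ate_bounds)  # an interval of width 1 that must contain the ATE
```

With no assumptions at all, the bounds always have width one for a unit-interval outcome; the set-identification literature then shows how weak but defendable assumptions narrow this interval.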
Finally, we will consider the related approach known as sensitivity analysis. Here, the analyst offers an estimate based on the provisional maintenance of an identifying assumption – most often, ignorability or selection on the observables – and then assesses the extent to which the estimate would vary as violations of the identifying assumption increase in severity.
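A toy version of this exercise (an assumed setup, using a hypothetical point estimate and a simple additive bias term) shows how a conclusion is stress-tested as the violation grows in severity:

```python
import numpy as np

# Start from an estimate that maintains ignorability, then subtract a
# hypothetical confounding bias delta of increasing severity.
point_estimate = 2.0                      # estimate under ignorability (assumed)
deltas = np.linspace(0.0, 3.0, 7)         # increasing severity of the violation
adjusted = point_estimate - deltas
for delta, est in zip(deltas, adjusted):
    sign = "positive" if est > 0 else "not positive"
    print(f"bias={delta:.1f}: estimate={est:+.1f} ({sign})")
```

The substantive question is then whether a bias large enough to overturn the result (here, one exceeding 2.0) is plausible given what is known about the unobservables.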