The majority opinion of the Supreme Court establishes precedent, but separate opinion writing affords the justices the ability to expound upon it or express their disagreement with the ruling or its logic. We broaden the exploration of separate opinion writing to consider how decisions and case features at the moment of granting cert shape justices’ decisions to engage in nonconsensual behavior. We also sharpen the focus on external actors to consider the nature of amici curiae. Through an empirical study of Supreme Court cases between 1986 and 1993, we find that aspects of the agenda-setting stage affect justices’ decisions at the litigation stage. In addition, we find that the number of briefs and the diversity of organized interests impacted by the case are particularly relevant to justices. The decision to write a separate opinion is the product of internal and external factors over the full course of a case’s history.
Interest group ideology is theoretically and empirically critical in the study of American politics, yet our measurement of this key concept is lacking both in scope and time. By leveraging network science and ideal point estimation, we provide a novel measure of ideology for amicus curiae briefs and organized interests with accompanying uncertainty estimates. Our Amicus Curiae Network scores cover more than 12,000 unique groups and more than 11,000 briefs across 95 years, providing the largest and longest measure of organized interest ideologies to date. Substantively, the scores reveal that: interests before the Court are ideologically polarized, despite variance in their coalition strategies; interests that donate to campaigns are more conservative and balanced than those that do not; and amicus curiae briefs were more common from liberal organizations until the 1980s, with ideological representation virtually balanced since then.
Engaged pluralism entails active interaction, debate, and learning from each other. I argue that individuals need to undertake the challenges arising from engaged pluralism to ensure a healthy, vibrant disciplinary future and a thriving democracy. I consciously extend the term “engagement” to apply not only to understanding across sub-disciplines and different grounds of knowledge, but also to addressing research to the needs of society. There are golden opportunities centered around the benefits of a more open, rigorous, and contentious science that can be maximized through focused engagement around methodologies and methods. In short, two primary themes encapsulate my views on where our discipline should be heading. First, the pursuit of engaged methodological pluralism in our scholarship is critical. Second, supporting democratic principles and civic engagement, which is at the core of the American Political Science Association and has continued, in ebbs and flows, throughout the discipline’s life, is necessary.
In the study of social processes, the presence of unobserved heterogeneity is a regular concern. It should be particularly worrisome for the statistical analysis of networks, given the complex dependencies that shape network formation combined with the restrictive assumptions of related models. In this paper, we demonstrate the importance of explicitly accounting for unobserved heterogeneity in exponential random graph models (ERGM) with a Monte Carlo analysis and two applications that have played an important role in the networks literature. Overall, these analyses show that failing to account for unobserved heterogeneity can have a significant impact on inferences about network formation. The proposed frailty extension to the ERGM (FERGM) generally outperforms the ERGM in these cases, and does so by relatively large margins. Moreover, our novel multilevel estimation strategy has the advantage of avoiding the problem of degeneration that plagues the standard MCMC-MLE approach.
We introduce the conditional frailty model, an event history model that separates and accounts for both event dependence and heterogeneity in repeated events processes. Event dependence and heterogeneity create within-subject correlation in event times, thereby violating the assumptions of standard event history models. Simulations show the advantage of the conditional frailty model. Specifically, they demonstrate the model's ability to disentangle the sources of within-subject correlation as well as the gains in both efficiency and bias of the model when compared to the widely used alternatives, which often produce conflicting conclusions. Two substantive political science problems illustrate the usefulness and interpretation of the model: state policy adoption and terrorist attacks.
In contrast to conventional studies on campaign finance, which focus on the aggregate effect of money on the vote, we propose a more general dynamic model based on temporally disaggregated data. The model is supported by the substantive understanding that at different stages of the campaign process candidates have different goals, and their expenditures should have different effects on the final election outcome. Using Achen's (1986) framework of quasi experiments, the model includes dynamic “assignment equations” and “outcome equations”, which address the problem of nonrandom assignment. A final vote equation is derived in which the coefficients of period-specific incumbent expenditures are constrained by an Almon polynomial. Empirical estimation provides evidence for a three-stage dynamic campaign process.
Estimators within the Cox family are often used to estimate models for repeated events. Yet, there is much we still do not know about the performance of these estimators. In particular, we do not know how they perform given time dependence, different censoring rates, and a varying number of events and sample sizes. We use Monte Carlo simulations to demonstrate the performance of a variety of popular semi-parametric estimators as these data aspects change and under conditions of event dependence and heterogeneity, both, or neither. We conclude that the conditional frailty model outperforms other standard estimators under a wide array of data-generating processes, and data limitations rarely alter its performance.
When, how, and under what conditions can individual legislators affect presidential appointments? Since the early 1900s, the senatorial norm of the blue slip has played a key role in the confirmation process of federal district and appeals court judges, and it is an important aspect of the individual prerogative that characterizes senatorial behavior more broadly. We analyze newly available blue slips, covering the historical period 1933–1960. We show that the blue slip functioned in this era most often to support and expedite nominations, indicating that senators used this device to shape the nominations agenda in this period. Additionally, we analyze the factors that contributed to an individual senator's decision to support or oppose a nominee, or return a blue slip at all, finding that senators were more likely to return positive blue slips when the Judiciary Committee chair was not a coalition ally. We argue that while blue slips did at times provide an early warning for poor nominees, they more often offered a means by which senators ensured that their desired nominees were confirmed swiftly. The positive role of the blue slip demonstrates that this device protected the individual prerogatives of senators, allowing them a degree of agenda-setting authority with regard to nominees in the weak parties era.
We compare and contrast the network formation of interest groups across industry and issue area. We focus on membership interest groups, which by virtue of representing the interests of voluntary members face particular organizational and maintenance constraints. To reveal their cooperative behavior we build a network dataset based on cosigner status to United States Supreme Court amicus curiae briefs and analyze it with exponential random graph models and multidimensional scaling. Our methodological approach culminates in a clear and compact spatial representation of network similarities and differences. We find that while many of the same factors shape membership networks, religious, labor, and political organizations do not share the same structure as each other or as the business, civic, and professional groups.
We began this book by suggesting that scholars in the social sciences are often interested in how processes – whether political, economic, or social – change over time. Throughout, we have emphasized that although many of our theories discuss that change, often our empirical models do not give the concept of change the same pride of place. Time series elements in data are often treated as a nuisance – something to cleanse from otherwise meaningful information – rather than part and parcel of the data-generating process that we attempt to describe with our theories.
We hope this book is an antidote to this thinking. Social dynamics are crucial to all of the social sciences. We have tried to provide some tools to model and therefore understand some of these social dynamics. Rather than treat temporal dynamics as a nuisance or a problem to be ameliorated, we have emphasized that the diagnosis, modeling, and analysis of those dynamics are key to the substance of the social sciences. Knowing a unit root exists in a series tells us something about the data-generating process: shocks to the series permanently shift the series, integrating into it. Graphing the autocorrelation functions of a series can tell us whether there are significant dynamics at one lag (i.e., AR(1)) or at more lags (e.g., an AR(3)). Again, this tells us something about the underlying nature of the data: how long does an event hold influence?
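A minimal sketch of this diagnostic idea, using Python's statsmodels rather than any software discussed in the book; the AR coefficient of 0.6 and the series length are arbitrary illustration values:

```python
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess
from statsmodels.tsa.stattools import acf, pacf

# Simulate an AR(1) series: y_t = 0.6 * y_{t-1} + e_t
np.random.seed(42)
ar1 = ArmaProcess(ar=[1, -0.6], ma=[1]).generate_sample(nsample=500)

# The autocorrelation function decays geometrically for an AR(1);
# the partial autocorrelation function cuts off after the first lag.
print(np.round(acf(ar1, nlags=5), 2))
print(np.round(pacf(ar1, nlags=5), 2))
```

For a higher-order process such as an AR(3), the partial autocorrelations would instead remain nonzero through the third lag, which is exactly the kind of information the graphs convey about how long an event holds influence.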
The substance of these temporal dynamics is even more important when thinking about the relationships between variables.
The first class of time series models we investigate is the class of univariate ARMA (autoregressive moving average) models. In the Appendix, we show how to gain significant insights into the dynamics of difference equations – the basis of time series econometrics – by simply solving them and plotting solutions over time. By stipulating a model based on our verbal theory and deriving its solution, we can note the conditions under which the processes we model return to equilibrium.
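As an illustration of that procedure (a Python sketch with arbitrary coefficient values, not code from the Appendix), iterating a first-order difference equation shows directly when the process returns to equilibrium:

```python
import numpy as np

def difference_path(a, c, y0, periods=20):
    """Iterate the first-order difference equation y_t = a*y_{t-1} + c."""
    path = [y0]
    for _ in range(periods):
        path.append(a * path[-1] + c)
    return np.array(path)

# When |a| < 1 the process returns to its equilibrium c / (1 - a);
# when |a| >= 1 shocks never die out and the path diverges.
stable = difference_path(a=0.5, c=1.0, y0=10.0)
explosive = difference_path(a=1.1, c=1.0, y0=10.0)
print(stable[-1], 1.0 / (1 - 0.5))   # converges toward 2.0
print(explosive[-1])                  # grows without bound
```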
In the series of models discussed in this chapter, we turn this procedure around. We begin by studying the generic forms of patterns that could be created by particular datasets. We then analyze the data to see what dynamics are present in the data-generating process, which induce the underlying structure of the data. As a modeling process, ARMA models were perfected by Box and Jenkins (1970), who were attempting to come up with a better way than extrapolation or smoothing to predict the behavior of systems. Indeed, their method of examining the structures in a time series, filtering them from the data, and leaving a pure stochastic series improved predictive (i.e., forecasting) ability. Box-Jenkins modeling became quite popular, and as Kennedy notes, “for years the Box-Jenkins methodology was synonymous with time series analysis” (Kennedy, 2008, 297).
The intuition behind Box-Jenkins modeling is straightforward. Time series data can be composed of multiple temporal processes.
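A compact sketch of that identify-estimate-diagnose cycle in Python's statsmodels (the simulated ARMA(1,1) coefficients are arbitrary, and this is an illustration rather than the book's own example):

```python
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

# Simulate an ARMA(1,1) series, fit the corresponding model, and then check
# that the residuals are white noise -- the Box-Jenkins notion of filtering
# the temporal structure out of the data until only a pure stochastic series remains.
np.random.seed(7)
y = ArmaProcess(ar=[1, -0.5], ma=[1, 0.3]).generate_sample(nsample=400)

fit = ARIMA(y, order=(1, 0, 1)).fit()
print(fit.params)                            # estimated AR and MA coefficients
print(acorr_ljungbox(fit.resid, lags=[10]))  # Ljung-Box test: no remaining autocorrelation
```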
In Chapter 1 we discussed the distinction between strongly and weakly restricted time series models. A weakly restricted model uses techniques such as those we studied in Chapter 2, where one primarily infers from the data the structure of the data-generating process by assessing the AR and MA components of an observed univariate series. Extending the weakly restricted approach to multivariate models, which we do in subsequent chapters, leads to the use of vector autoregression (VAR) and error correction models (ECMs). Important modeling choices, such as how many lags of a variable to include, are inferred from the data rather than specified before the analysis. Recall as well that the quasi-experimental approach uses weakly restricted models, highlighting the problem of specification uncertainty.
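To make the weakly restricted idea concrete, here is a hypothetical sketch in Python's statsmodels: the variable names ("approval", "sentiment") and the simulated data are invented for illustration, but the key step, choosing the lag length from the data by an information criterion rather than assuming it in advance, is the point:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Two artificial series standing in for, say, presidential approval and
# consumer sentiment; differenced so that both are stationary.
rng = np.random.default_rng(0)
levels = pd.DataFrame(rng.standard_normal((200, 2)).cumsum(axis=0),
                      columns=["approval", "sentiment"])
data = levels.diff().dropna()

model = VAR(data)
print(model.select_order(maxlags=8).selected_orders)  # lag choice by AIC, BIC, etc.
results = model.fit(maxlags=8, ic="aic")              # lags inferred from the data
print(results.summary())
```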
In this chapter we discuss strongly restricted time series modeling, which assumes that we know much more about the functional forms of our data-generating process. Making these strong assumptions about a time series' functional form and proceeding directly to testing hypotheses about the relationships between variables encompass what we term the “time series regression tradition.” This approach is popular and widely used. It is appropriate whenever an analyst can comfortably and ably make the strong assumptions required for the technique.
We provide an overview of the basic components of time series regression models and explore tests for serial correlation in the residuals, which provide guidance to analysts regarding various types of serial correlation.
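A hedged illustration of this workflow in Python's statsmodels: the data are simulated with AR(1) errors (ρ = 0.7, an arbitrary value), so the Durbin-Watson and Breusch-Godfrey diagnostics should both signal first-order serial correlation in the residuals:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

# A time series regression of y on x, followed by residual diagnostics.
rng = np.random.default_rng(1)
n, rho = 200, 0.7
x = rng.standard_normal(n)
e = np.zeros(n)
for t in range(1, n):
    e[t] = rho * e[t - 1] + rng.standard_normal()   # AR(1) errors
y = 1.0 + 2.0 * x + e

ols = sm.OLS(y, sm.add_constant(x)).fit()
print(durbin_watson(ols.resid))              # well below 2 under positive AR(1) errors
print(acorr_breusch_godfrey(ols, nlags=2))   # LM statistic, p-value, F statistic, p-value
```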
The analysis of time series data is a vast enterprise. With this fact in mind, the previous chapters introduced the core concepts and analytic tools that form a foundational understanding of time series analysis. This chapter presents four more advanced topics: fractional integration, heterogeneity, forecasting, and estimating and modeling with unknown structural breaks. Although by no means an exhaustive list, the topics presented in this chapter represent concerns of the contemporary literature: they extend some of the previously discussed concepts, provide additional means of evaluating time series models, and are a means through which time series analysis can inform policy.
Fractional integration is an extension of the preceding discussion of unit roots and of tests for unit roots. The first few chapters assumed that our time series data were stationary, but we subsequently showed that this need not be the case; as a result, tests for unit roots or an integrated series were presented in detail in Chapter 5. However, as intuition may suggest, it may not always be the case in practice that every series can be appropriately characterized as either stationary or integrated, as shocks may enter the series, persist for a nontrivial amount of time, and eventually dissipate. In such a case, the series is neither stationary nor integrated, because the shocks do not rapidly exit the series, nor do they persist indefinitely.
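One way to see what “fractional” means is through the weights of the fractional difference operator (1 − L)^d. The short Python sketch below computes them from the standard binomial recursion; the value d = 0.4 is an arbitrary choice for illustration:

```python
import numpy as np

def frac_diff_weights(d, n_weights):
    """Weights of the fractional difference operator (1 - L)^d."""
    w = [1.0]
    for k in range(1, n_weights):
        w.append(w[-1] * (k - 1 - d) / k)   # binomial recursion
    return np.array(w)

# d = 0 gives (1, 0, 0, ...): no differencing, the stationary case.
# d = 1 gives (1, -1, 0, ...): the ordinary first difference, the unit-root case.
# 0 < d < 1 gives slowly decaying weights: shocks persist but eventually fade.
print(np.round(frac_diff_weights(0.4, 8), 3))
```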
Our work has several motivations. We think that longitudinal analysis provides infinitely more insight than does examining any one slice of time. As we show throughout the book, longitudinal analysis is essential for the study of normatively important problems such as democratic accountability and international conflict. Given the importance of dynamic analysis in answering new questions and providing new answers to old questions, we want to get more social scientists thinking in dynamic terms. Time series is one of the most useful tools for dynamic analysis, and our goal is to provide a more accessible treatment for this approach. We are also motivated by the burgeoning supply of new social science time series data. Sometimes this creates the opposite problem of having too much data and needing to figure out how to analyze it, but that is a problem we gladly embrace. The proliferation of new social science data requires techniques that are designed to handle complexity, and time series analysis is one of the most applicable tools. The incorporation of time series analysis into standard statistical packages such as Stata and R, as well as the existence of specialized packages such as RATS and EViews, provides an additional motivation because it enables more scholars to easily use time series in their work.
We have found over our years of teaching time series that, although many social science students have the brain power to learn time series methods, they often lack the training and motivation to use the most well-known books on the topic.
Thus far, all of our models assumed that our data are stationary. A stationary series does not have statistical properties that depend on time. All shocks and past values in a stationary series eventually lose their influence on the value of the variable today. A stationary stochastic process is defined such that
• A stochastic process is stationary if the mean and variance are constant over time and the covariance between two time points depends only on the distance of the lag between the two time periods and not on the actual time that the covariances are computed.
• In other words, if a time series is stationary, its mean, variance, and autocovariance (at various lags) remain the same, no matter when we measure them; these conditions are written out formally below.
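In notation, this is the standard statement of covariance (weak) stationarity; the symbols μ, σ², and γ_k are not defined in the excerpt above and are introduced here only to make the conditions explicit:

```latex
\begin{aligned}
E[y_t] &= \mu && \text{for all } t, \\
\operatorname{Var}(y_t) &= \sigma^2 && \text{for all } t, \\
\operatorname{Cov}(y_t,\, y_{t-k}) &= \gamma_k && \text{for all } t \text{ and each lag } k.
\end{aligned}
```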
Why should analysts care if variables are stationary? Econometric problems may occur when we run a regression with variables that are not stationary. For example, in the Box-Jenkins identification stage, because of nonstationarity, we may fail to diagnose a higher order AR process. We need to diagnose and correctly account for the characteristics of the data-generating process.
Several other issues arise with nonstationary data, which we discuss in this and the following chapters. At a basic level, nonstationary data violate the invertibility condition for the value of φ (the AR process in our ARMA model) and bias our estimate of φ (that is, the extent to which past values of the dependent variable influence the current value).
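A brief simulation sketch, using Python's statsmodels and the augmented Dickey-Fuller test covered in Chapter 5, of how a unit-root series and a stationary AR(1) series behave under a unit root test; the AR coefficient of 0.5 and the random seed are arbitrary:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

# A random walk (unit root) versus a stationary AR(1) driven by the same shocks.
rng = np.random.default_rng(3)
shocks = rng.standard_normal(500)
random_walk = np.cumsum(shocks)

stationary = np.zeros(500)
for t in range(1, 500):
    stationary[t] = 0.5 * stationary[t - 1] + shocks[t]

print(adfuller(random_walk)[1])   # large p-value: cannot reject the unit root
print(adfuller(stationary)[1])    # small p-value: reject the unit root
```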
The material in this appendix is aimed at readers interested in the mathematical underpinnings of time series models. As with any statistical method, one can estimate time series models without such foundational knowledge. But the material here is critical for any reader who is interested in going beyond applying existing “off the shelf” models and conducting research in time series methodology.
Many social theories are formulated in terms of changes in time. We conceptualize social processes as mixes of time functions. In so doing, we use terms such as trend and cycle. A trend usually is a function of the form α × t, where α is a constant and t is a time counter, a series of natural numbers that represents successive time points. When α is positive (negative), the trend is steadily increasing (decreasing). The time function sin(αt) could be used to represent a social cycle, as could a positive constant times a negative integer raised to the time counter: α(−1)^t. In addition, we argue that social processes experience sequences of random shocks and make assumptions about the distributions from which these shocks are drawn. For instance, we often assume that processes repeatedly experience a shock, ε_t, drawn independently across time from a normal distribution with mean zero and unit variance.
Social processes presumably are a combination of these trends, cycles, and shocks.
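A small Python sketch of such a combination; the coefficient choices (a trend slope of 0.05, a 12-period cycle with amplitude 1.5) are arbitrary illustration values rather than anything taken from the text:

```python
import numpy as np

# Compose a series from the three ingredients described above:
# a linear trend, a cycle, and independent standard-normal shocks.
t = np.arange(200)
trend = 0.05 * t
cycle = 1.5 * np.sin(2 * np.pi * t / 12)
shocks = np.random.default_rng(5).standard_normal(200)

y = trend + cycle + shocks
print(y[:5])
```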