Our work has several motivations. We think that longitudinal analysis provides far more insight than does examining any one slice of time. As we show throughout the book, longitudinal analysis is essential for the study of normatively important problems such as democratic accountability and international conflict. Given the importance of dynamic analysis in answering new questions and providing new answers to old questions, we want to get more social scientists thinking in dynamic terms. Time series is one of the most useful tools for dynamic analysis, and our goal is to provide a more accessible treatment for this approach. We are also motivated by the burgeoning supply of new social science time series data. Sometimes this abundance creates the opposite problem of having too much data and needing to figure out how to analyze it, but that is a problem we gladly embrace. The proliferation of new social science data requires techniques that are designed to handle complexity, and time series analysis is one of the most applicable tools. The incorporation of time series analysis into standard statistical packages such as Stata and R, as well as the existence of specialized packages such as RATS and EViews, provides an additional motivation because it enables more scholars to easily use time series in their work.
We have found over our years of teaching time series that, although many social science students have the brain power to learn time series methods, they often lack the training and motivation to use the most well-known books on the topic.
The material in this appendix is aimed at readers interested in the mathematical underpinnings of time series models. As with any statistical method, one can estimate time series models without such foundational knowledge. But the material here is critical for any reader who is interested in going beyond applying existing “off the shelf” models and conducting research in time series methodology.
Many social theories are formulated in terms of changes in time. We conceptualize social processes as mixes of time functions. In so doing, we use terms such as trend and cycle. A trend usually is a function of the form α × t, where α is a constant and t is a time counter, a series of natural numbers that represents successive time points. When α is positive (negative), the trend is steadily increasing (decreasing). The time function sin(αt) could be used to represent a social cycle, as could a positive constant times negative one raised to the power of the time counter: α(−1)^t. In addition, we argue that social processes experience sequences of random shocks and make assumptions about the distributions from which these shocks are drawn. For instance, we often assume that processes repeatedly experience a shock, ε_t, drawn independently across time from a normal distribution with mean zero and unit variance.
Social processes presumably are a combination of these trends, cycles, and shocks.
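To make this concrete, the following sketch simulates one such combination of a trend, a two-period cycle, and normally distributed shocks. The particular coefficients (a trend slope of 0.5 and a cycle amplitude of 2) are arbitrary choices for illustration, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200
t = np.arange(T)  # the time counter: 0, 1, 2, ...

trend = 0.5 * t                    # alpha * t with alpha = 0.5 (steadily increasing)
cycle = 2.0 * (-1.0) ** t          # alpha * (-1)^t: a cycle that alternates each period
shocks = rng.normal(0.0, 1.0, T)   # epsilon_t ~ N(0, 1), independent across time

# The observed series is presumed to mix all three components.
y = trend + cycle + shocks
```

Plotting `y` against `t` would show the upward drift of the trend, the sawtooth of the cycle, and the jitter of the shocks superimposed on one another.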
In Chapter 1 we discussed the distinction between strongly and weakly restricted time series models. A weakly restricted model uses techniques such as those we studied in Chapter 2, where one primarily infers from the data the structure of the data-generating process by assessing the AR and MA components of an observed univariate series. Extending the weakly restricted approach to multivariate models, which we do in subsequent chapters, leads to the use of vector autoregression (VAR) and error correction models (ECMs). Important modeling choices, such as how many lags of a variable to include, are inferred from the data rather than specified before the analysis. Recall as well that the quasi-experimental approach uses weakly restricted models, highlighting the problem of specification uncertainty.
In this chapter we discuss strongly restricted time series modeling, which assumes that we know much more about the functional forms of our data-generating process. Making these strong assumptions about a time series' functional form and proceeding directly to testing hypotheses about the relationships between variables encompass what we term the “time series regression tradition.” This approach is popular and widely used. It is appropriate whenever an analyst can comfortably and ably make the strong assumptions required for the technique.
We provide an overview of the basic components of time series regression models and explore tests for serial correlation in the residuals, which provide guidance to analysts regarding various types of serial correlation.
We began this book by suggesting that scholars in the social sciences are often interested in how processes – whether political, economic, or social – change over time. Throughout, we have emphasized that although many of our theories discuss that change, often our empirical models do not give the concept of change the same pride of place. Time series elements in data are often treated as a nuisance – something to cleanse from otherwise meaningful information – rather than part and parcel of the data-generating process that we attempt to describe with our theories.
We hope this book is an antidote to this thinking. Social dynamics are crucial to all of the social sciences. We have tried to provide some tools to model and therefore understand some of these social dynamics. Rather than treat temporal dynamics as a nuisance or a problem to be ameliorated, we have emphasized that the diagnosis, modeling, and analysis of those dynamics are key to the substance of the social sciences. Knowing a unit root exists in a series tells us something about the data-generating process: shocks to the series permanently shift the series, integrating into it. Graphing the autocorrelation functions of a series can tell us whether there are significant dynamics at one lag (i.e., an AR(1)) or for more lags (e.g., an AR(3)). Again, this tells us something about the underlying nature of the data: how long does an event hold influence?
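The contrast between permanent and fading shocks can be seen in a short simulation. Below, a stationary AR(1) with coefficient 0.5 (an illustrative value) is compared with a unit-root random walk; the sample autocorrelations of the former die out quickly, while those of the latter remain near one.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 500
eps = rng.normal(size=T)

# Stationary AR(1): y_t = 0.5 * y_{t-1} + eps_t -- shocks decay geometrically.
ar = np.zeros(T)
for i in range(1, T):
    ar[i] = 0.5 * ar[i - 1] + eps[i]

# Unit root (random walk): y_t = y_{t-1} + eps_t -- every shock is permanent.
rw = np.cumsum(eps)

def acf(x, lag):
    """Sample autocorrelation of x at the given lag."""
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

# At lag 10 the AR(1) autocorrelation is near zero; the random walk's is near 1.
print(acf(ar, 10), acf(rw, 10))
```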
The substance of these temporal dynamics is even more important when thinking about the relationships between variables.
The analysis of time series data is a vast enterprise. With this fact in mind, the previous chapters introduced the core concepts and analytic tools that form a foundational understanding of time series analysis. This chapter presents four more advanced topics: fractional integration, heterogeneity, forecasting, and estimating and modeling with unknown structural breaks. Although by no means an exhaustive list, the topics presented in this chapter represent concerns of the contemporary literature: they extend some of the previously discussed concepts, provide additional means of evaluating time series models, and are a means through which time series analysis can inform policy.
Fractional integration is an extension of the preceding discussion of unit roots and of tests for unit roots. The first few chapters assumed that our time series data were stationary, but we subsequently showed that this need not be the case; as a result, tests for unit roots or an integrated series were presented in detail in Chapter 5. However, as intuition may suggest, in practice not every series can be appropriately characterized as either stationary or integrated: shocks may enter the series, persist for a nontrivial amount of time, and eventually dissipate. In such a case, the series is neither stationary nor integrated, because the shocks neither rapidly exit the series nor persist indefinitely.
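One way to see this intermediate behavior is through the weights of the fractional differencing operator (1 − L)^d, computed here with the standard recursion. For 0 < d < 1 the weights decay hyperbolically, so shocks fade far more slowly than in a stationary ARMA model but do not persist forever as they would with d = 1. The value d = 0.4 below is an arbitrary illustrative choice.

```python
import numpy as np

def frac_diff_weights(d, n):
    """First n weights of (1 - L)^d expanded in powers of the lag operator L.

    Uses the recursion w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k.
    """
    w = np.zeros(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

# For d = 0.4 the weights shrink slowly in magnitude: long memory,
# but not the infinite persistence of an integrated (d = 1) series.
w = frac_diff_weights(0.4, 50)
```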
UNDERSTANDING UNIVARIATE PROCESSES
The first class of time series models we investigate are univariate models called ARMA (autoregressive moving average) models. In the Appendix, we show how to gain significant insights into the dynamics of difference equations – the basis of time series econometrics – by simply solving them and plotting solutions over time. By stipulating a model based on our verbal theory and deriving its solution, we can note the conditions under which the processes we model return to equilibrium.
In the series of models discussed in this chapter, we turn this procedure around. We begin by studying the generic forms of patterns that could be created by particular datasets. We then analyze the data to see what dynamics are present in the data-generating process, which induce the underlying structure of the data. As a modeling process, ARMA models were perfected by Box and Jenkins (1970), who were attempting to come up with a better way than extrapolation or smoothing to predict the behavior of systems. Indeed, their method of examining the structures in a time series, filtering them from the data, and leaving a pure stochastic series improved predictive (i.e., forecasting) ability. Box-Jenkins modeling became quite popular, and as Kennedy notes, “for years the Box-Jenkins methodology was synonymous with time series analysis” (Kennedy, 2008, 297).
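A small simulation illustrates the kind of signature Box-Jenkins identification looks for. An MA(1) process has a theoretical autocorrelation of θ/(1 + θ²) at lag 1 and zero thereafter; that sharp cutoff in the sample ACF is what points the analyst toward an MA component. The coefficient θ = 0.8 is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 5000
eps = rng.normal(size=T)

# MA(1): y_t = eps_t + 0.8 * eps_{t-1}.  Theoretical ACF:
# rho(1) = 0.8 / (1 + 0.8**2) ~ 0.49, and rho(k) = 0 for all k >= 2.
y = eps[1:] + 0.8 * eps[:-1]

def acf(x, lag):
    """Sample autocorrelation of x at the given lag."""
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

# The sample ACF should show a spike at lag 1 and a cutoff afterward.
print(round(acf(y, 1), 2), round(acf(y, 2), 2))
```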
The intuition behind Box-Jenkins modeling is straightforward. Time series data redundent can be composed of multiple temporal processes.
The study of equilibrium relationships is at the heart of time series analysis. Because cointegration provides one way to study equilibrium relationships, it is a cornerstone of current time series analysis. The original idea behind cointegration is that two series may be in equilibrium in the long run, but in the short run the two series deviate from that equilibrium. Clarke, Stewart, and Whiteley (1998, 562) explain that “cointegrated series are in a dynamic equilibrium in the sense that they tend to move together in the long run. Shocks that persist over a single period are ‘reequilibrated’ or adjusted by this cointegrating relationship.” Thus cointegration suggests a long-run relationship between two or more series that may move in quite different ways in the short run. Put a bit more formally, cointegration says that a specific combination of two nonstationary series may be stationary. We then say these two series or variables are cointegrated, and the vector that defines the stationary linear combination is called the cointegrating vector.
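A minimal sketch of this idea: two series that each contain the same stochastic trend are individually nonstationary, yet the linear combination that removes the shared trend is stationary. The specific construction below (a shared random walk, loadings of 1 and 2, so the cointegrating vector is (1, −2)) is an illustrative assumption, not an example from the text.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 1000
common = np.cumsum(rng.normal(size=T))  # shared I(1) stochastic trend

# x and y are each nonstationary because both load on the common trend...
x = common + rng.normal(size=T)
y = 2.0 * common + rng.normal(size=T)

# ...but the combination y - 2x cancels the trend, leaving a stationary
# "equilibrium error": the cointegrating vector is (1, -2).
equilibrium_error = y - 2.0 * x
```

Short-run shocks push `x` and `y` apart, but `equilibrium_error` hovers around zero, which is precisely the long-run relationship cointegration describes.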
Recall from the previous chapter that a time series is stationary when its mean and variance do not vary over or depend on time. Lin and Brannigan (2003, 153) point out that “many time series variables in the social sciences and historical studies are nonstationary since the variables typically measure the changing properties of social events over, for example, the last century or over the last x-number of months or days of observations. These variables display time varying means, variances, and sometimes autocovariances.”
Thus far, all of our models assumed that our data are stationary. A stationary series does not have statistical properties that depend on time. All shocks and past values in a stationary series eventually lose their influence on the value of the variable today. A stationary stochastic process is defined such that
• A stochastic process is stationary if the mean and variance are constant over time and the covariance between two time points depends only on the distance of the lag between the two time periods and not on the actual time at which the covariances are computed.
• In other words, if a time series is stationary, its mean, variance, and autocovariance (at various lags) remain the same, no matter when we measure them.
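The bullet points above can be checked empirically on a simulated series. For a stationary AR(1) (here with an illustrative coefficient of 0.5), the moments computed over the first half and the second half of the sample should agree up to sampling error, no matter when we measure them.

```python
import numpy as np

rng = np.random.default_rng(4)
T = 10_000
y = np.zeros(T)
for i in range(1, T):
    # Stationary AR(1) with |phi| = 0.5 < 1 and a unit-variance normal shock.
    y[i] = 0.5 * y[i - 1] + rng.normal()

# Under stationarity, windowed moments agree regardless of *when* we measure:
# the theoretical mean is 0 and the variance is 1 / (1 - 0.5**2) ~ 1.33.
first, second = y[: T // 2], y[T // 2 :]
print(first.mean(), second.mean())  # both should be near 0
print(first.var(), second.var())    # both should be near 1.33
```

Running the same check on a random walk would show the two windows drifting apart, which is exactly the time dependence stationarity rules out.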
Why should analysts care if variables are stationary? Econometric problems may occur when we run a regression with variables that are not stationary. For example, in the Box-Jenkins identification stage, because of nonstationarity, we may fail to diagnose a higher order AR process. We need to diagnose and correctly account for the characteristics of the data-generating process.
Several other issues arise with nonstationary data, which we discuss in this and the following chapters. At a basic level, nonstationary data violate the invertibility condition for the value of φ (the AR process in our ARMA model) and bias our estimate of φ (that is, the extent to which past values of the dependent variable influence the current value).
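The bias in the estimate of φ can be demonstrated with a small Monte Carlo experiment: when the true process is a unit root (φ = 1), the ordinary least squares estimate of φ is systematically below one in finite samples. The sample size of 100 and 500 replications are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(5)

def ols_phi(y):
    """OLS estimate of phi in the regression y_t = phi * y_{t-1} + e_t (no intercept)."""
    return np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])

# Monte Carlo: generate random walks (true phi = 1) and estimate phi by OLS.
# The average estimate falls below 1: a finite-sample downward bias.
estimates = []
for _ in range(500):
    y = np.cumsum(rng.normal(size=100))
    estimates.append(ols_phi(y))
print(np.mean(estimates))  # noticeably below the true value of 1
```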