Young stellar objects (YSOs) are protostars that exhibit bipolar outflows fed by accretion disks. Theories of the transition between disk and outflow often involve a complex magnetic field structure thought to be created by the disk coiling field lines at the jet base; however, due to limited resolution, these theories cannot be confirmed with observation and thus may benefit from laboratory astrophysics studies. We create a dynamically similar laboratory system by driving a $\sim$1 MA current pulse with a 200 ns rise through a $\approx$2 mm-tall Al cylindrical wire array mounted to a three-dimensional (3-D)-printed, stainless steel scaffolding. This system creates a plasma that converges on the centre axis and ejects cm-scale bipolar outflows. Depending on the chosen 3-D-printed load path, the system may be designed to push the ablated plasma flow radially inwards or off-axis to make rotation. In this paper, we present results from the simplest iteration of the load which generates radially converging streams that launch non-rotating jets. The temperature, velocity and density of the radial inflows and axial outflows are characterized using interferometry, gated optical and ultraviolet imaging, and Thomson scattering diagnostics. We show that experimental measurements of the Reynolds number and sonic Mach number in three different stages of the experiment scale favourably to the observed properties of YSO jets with $Re\sim 10^5\unicode{x2013}10^9$ and $M\sim 1\unicode{x2013}10$, while our magnetic Reynolds number of $Re_M\sim 1\unicode{x2013}15$ indicates that the magnetic field diffuses out of our plasma over multiple hydrodynamical time scales. We compare our results with 3-D numerical simulations in the PERSEUS extended magnetohydrodynamics code.
Political event data are widely used in studies of political violence. Recent years have seen notable advances in the automated coding of political event data from international news sources. Yet, the validity of machine-coded event data remains disputed, especially in the context of event geolocation. We analyze how frequently human- and machine-geocoded event data agree with an independent (ground truth) source. The events are human rights violations in Colombia. We perform our evaluation for a key, 8-year period of the Colombian conflict and in three 2-year subperiods as well as for a selected set of (non)journalistically remote municipalities. As a complement to this analysis, we estimate spatial probit models based on the three datasets. These models assume Gaussian Markov Random Field error processes; they are constructed using a stochastic partial differential equation and estimated with integrated nested Laplace approximation. The estimated models tell us whether the three datasets produce comparable predictions, underreport events in relation to the same covariates, and have similar patterns of prediction error. Together the two analyses show that, for this subnational conflict, the machine- and human-geocoded datasets are comparable in terms of external validity but, according to the geostatistical models, produce prediction errors that differ in important respects.
The diurnal feeding patterns of dairy cows affect the 24 h robot utilisation of pasture-based automatic milking systems (AMS). A decline in robot utilisation between 2400 and 0600 h currently occurs in pasture-based AMS, as cow feeding activity is greatly reduced during this time. Here, we investigate the effect of a temporal variation in feed quality and quantity on cow feeding behaviour between 2400 and 0600 h as a potential tool to increase voluntary cow trafficking in an AMS at night. The day was allocated into four equal feeding periods (0600 to 1200, 1200 to 1800, 1800 to 2400 and 2400 to 0600 h). Lucerne hay cubes (CP = 19.1%, water soluble carbohydrate = 3.8%) and oat, ryegrass and clover hay cubes with 20% molasses (CP = 11.8%, water soluble carbohydrate = 10.7%) were offered as the ‘standard’ and ‘preferred’ (preference determined previously) feed types, respectively. The four treatments were (1) standard feed offered ad libitum (AL) throughout 24 h; (2) as per AL, with preferred feed replacing standard feed between 2400 and 0600 h (AL + P); (3) standard feed offered at a restricted rate, with quantity varying between each feeding period (20:10:30:60%, respectively) as a proportion of the (previously) measured daily ad libitum intake (VA); (4) as per VA, with preferred feed replacing standard feed between 2400 and 0600 h (VA + P). Eight non-lactating dairy cows were used in a 4 × 4 Latin square design. During each experimental period, treatment cows were fed for 7 days, including 3 days habituation and 4 days data collection. Total daily intake was approximately 8% greater (P < 0.001) for the AL and AL + P treatments (23.1 and 22.9 kg DM/cow) as compared with the VA and VA + P treatments (21.6 and 20.9 kg DM/cow). The AL + P and VA treatments had 21% and 90% greater (P < 0.001) dry matter intake (DMI) between 2400 and 0600 h, respectively, compared with the AL treatment. In contrast, the VA + P treatment had similar DMI to the VA treatment. Our experiment demonstrates the ability to increase cow feeding activity at night by varying feed type and quantity, though it is possible that a penalty to total DMI may occur using VA. Further research is required to determine if the implementation of variable feed allocation on pasture-based AMS farms is likely to improve milking robot utilisation by increasing cow feeding activity at night.
Textual data are plagued by underreporting bias. For example, news sources often fail to report human rights violations. Cook et al. propose a multi-source estimator to gauge, and to account for, the underreporting of state repression events within human codings of news texts produced by the Agence France-Presse and Associated Press. We evaluate this estimator with Monte Carlo experiments, and then use it to compare the prevalence and seriousness of underreporting when comparable texts are machine coded and recorded in the World-Integrated Crisis Early Warning System dataset. We replicate Cook et al.’s investigation of human-coded state repression events with our machine-coded events, and validate both models against an external measure of human rights protections in Africa. We then use the Cook et al. estimator to gauge the seriousness and prevalence of underreporting in machine and human-coded event data on human rights violations in Colombia. We find in both applications that machine-coded data are as valid as human-coded data.
Analyzing macro-political processes is complicated by four interrelated problems: model scale, endogeneity, persistence, and specification uncertainty. These problems are endemic in the study of political economy, public opinion, international relations, and other kinds of macro-political research. We show how a Bayesian structural time series approach addresses them. Our illustration is a structurally identified, nine-equation model of the U.S. political-economic system. It combines key features of the model of Erikson, MacKuen, and Stimson (2002) of the American macropolity with those of a leading macroeconomic model of the United States (Sims and Zha, 1998; Leeper, Sims, and Zha, 1996). This Bayesian structural model, with a loosely informed prior, yields the best performance in terms of model fit and dynamics. This model 1) confirms existing results about the countercyclical nature of monetary policy (Williams 1990); 2) reveals informational sources of approval dynamics: innovations in information variables affect consumer sentiment and approval and the impacts on consumer sentiment feed-forward into subsequent approval changes; 3) finds that the real economy does not have any major impacts on key macropolity variables; and 4) concludes, contrary to Erikson, MacKuen, and Stimson (2002), that macropartisanship does not depend on the evolution of the real economy in the short or medium term and only very weakly on informational variables in the long term.
The symposium develops statistical models and methods for the study of path dependence. In this introductory essay, the connections between key areas in the path dependence and statistical literatures are illuminated, and some ways in which familiar time series and regression models embody these ideas are explained. The arguments in the symposium's articles are then summarized and compared. Finally, directions for additional, statistically grounded research on path dependence are discussed.
Systematic sampling and temporal aggregation are the practices of sampling a time series at regular intervals and of summing or averaging time series observations over a time interval, respectively. Both practices are a source of statistical error and faulty inference. The problems that systematic sampling and temporal aggregation create for the construction of strongly specified and weakly specified models are discussed. The seriousness of these problems then is illustrated with respect to the debate about superpower rivalry. The debate is shown to derive, in part, from the fact that some researchers employ highly temporally aggregated measures of U.S. and Soviet foreign policy behavior. The larger methodological lessons are that we need to devote more time to determining the natural time unit of our theories and to conducting robustness checks across levels of temporal aggregation.
Bayesian approaches to the study of politics are increasingly popular. But Bayesian approaches to modeling multiple time series have not been critically evaluated. This is in spite of the potential value of these models in international relations, political economy, and other fields of our discipline. We review recent developments in Bayesian multi-equation time series modeling in theory testing, forecasting, and policy analysis. Methods for constructing Bayesian measures of uncertainty of impulse responses (Bayesian shape error bands) are explained. A reference prior for these models that has proven useful in short- and medium-term forecasting in macroeconomics is described. Once modified to incorporate our experience analyzing political data and our theories, this prior can enhance our ability to forecast, over the short and medium terms, complex political dynamics like those exhibited by certain international conflicts. In addition, we explain how to construct contingent Bayesian forecasts that embody policy counterfactuals. The value of these new Bayesian methods is illustrated in a reanalysis of the Israeli-Palestinian conflict of the 1980s.
Cointegration was introduced to our discipline by Renée Smith and Charles Ostrom Jr. and by Robert Durr more than two decades ago at political methodology meetings at Washington University in St. Louis and Florida State University. Their articles, along with comments by Neal Beck and John T. Williams, were published in a symposium like this one in the fourth volume of Political Analysis. Keele, Lin, and Webb (2016; hereafter KLW) and Grant and Lebo (2016; hereafter GL) show how, in the years that followed, cointegration was further evaluated by political scientists, and the related idea of error correction subsequently was applied.
Have the last twenty-plus years witnessed significant progress in modeling nonstationary political time series? In some respects, the answer is yes. The present symposium represents progress in understanding equation balance, analyzing bounded variables, and decomposing short- and long-term causal effects. In these respects, KLW's and GL's articles deserve wide dissemination. But KLW and GL leave important methodological issues unresolved and do not address some critical methodological challenges. From a historical perspective, the present symposium shows that we have made relatively little progress in modeling nonstationary political time series.
We began this book by suggesting that scholars in the social sciences are often interested in how processes – whether political, economic, or social – change over time. Throughout, we have emphasized that although many of our theories discuss that change, often our empirical models do not give the concept of change the same pride of place. Time series elements in data are often treated as a nuisance – something to cleanse from otherwise meaningful information – rather than part and parcel of the data-generating process that we attempt to describe with our theories.
We hope this book is an antidote to this thinking. Social dynamics are crucial to all of the social sciences. We have tried to provide some tools to model and therefore understand some of these social dynamics. Rather than treat temporal dynamics as a nuisance or a problem to be ameliorated, we have emphasized that the diagnosis, modeling, and analysis of those dynamics are key to the substance of the social sciences. Knowing a unit root exists in a series tells us something about the data-generating process: shocks to the series permanently shift the series, integrating into it. Graphing the autocorrelation functions of a series can tell us whether there are significant dynamics at one lag (i.e., AR(1)) or for more lags (e.g., an AR(3)). Again, this tells us something about the underlying nature of the data: how long does an event hold influence?
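To make that intuition concrete, the sketch below (our illustration, not an example from the book) simulates a stationary AR(1) series alongside a random walk and plots their autocorrelation functions: shocks to the AR(1) die out quickly, while the random walk's autocorrelations decay only very slowly. The coefficient 0.5 and the series length are arbitrary, assumed values.

```python
# Sketch: compare the ACF of a stationary AR(1) with that of a random walk.
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf

rng = np.random.default_rng(42)
T = 500
shocks = rng.normal(size=T)

ar1 = np.zeros(T)    # stationary AR(1): past shocks decay geometrically
walk = np.zeros(T)   # random walk (unit root): shocks integrate into the series
for t in range(1, T):
    ar1[t] = 0.5 * ar1[t - 1] + shocks[t]
    walk[t] = walk[t - 1] + shocks[t]

fig, axes = plt.subplots(2, 1, figsize=(6, 6))
plot_acf(ar1, ax=axes[0], lags=20, title="ACF: stationary AR(1)")
plot_acf(walk, ax=axes[1], lags=20, title="ACF: random walk (unit root)")
plt.tight_layout()
plt.show()
```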
The substance of these temporal dynamics is even more important when thinking about the relationships between variables.
The first class of time series models we investigate is the class of univariate models called ARMA (autoregressive moving average) models. In the Appendix, we show how to gain significant insights into the dynamics of difference equations – the basis of time series econometrics – by simply solving them and plotting solutions over time. By stipulating a model based on our verbal theory and deriving its solution, we can note the conditions under which the processes we model return to equilibrium.
In the series of models discussed in this chapter, we turn this procedure around. We begin by studying the generic forms of patterns that could be created by particular datasets. We then analyze the data to see what dynamics are present in the data-generating process, which induce the underlying structure of the data. As a modeling process, ARMA models were perfected by Box and Jenkins (1970), who were attempting to come up with a better way than extrapolation or smoothing to predict the behavior of systems. Indeed, their method of examining the structures in a time series, filtering them from the data, and leaving a pure stochastic series improved predictive (i.e., forecasting) ability. Box-Jenkins modeling became quite popular, and as Kennedy notes, “for years the Box-Jenkins methodology was synonymous with time series analysis” (Kennedy, 2008, 297).
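A minimal sketch of that workflow follows, assuming the reader wants to reproduce it in Python with statsmodels; the simulated series and the ARMA(1,1) order are illustrative assumptions, not recommendations from the text.

```python
# Sketch of the Box-Jenkins cycle: identify/estimate an ARMA model, then
# check that the filtered residuals look like white noise.
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.arima_process import ArmaProcess
from statsmodels.stats.diagnostic import acorr_ljungbox

# Simulate an ARMA(1,1) series to stand in for observed data
# (AR coefficient 0.6, MA coefficient 0.4).
y = ArmaProcess(ar=[1, -0.6], ma=[1, 0.4]).generate_sample(nsample=300)

# Estimation: fit a candidate ARMA(p, q) model (here p = 1, q = 1).
fit = ARIMA(y, order=(1, 0, 1)).fit()
print(fit.summary())

# Diagnostic checking: residuals of an adequate model should be white noise.
print(acorr_ljungbox(fit.resid, lags=[10]))
```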
The intuition behind Box-Jenkins modeling is straightforward. Time series data can be composed of multiple temporal processes.
In Chapter 1 we discussed the distinction between strongly and weakly restricted time series models. A weakly restricted model uses techniques such as those we studied in Chapter 2, where one primarily infers from the data the structure of the data-generating process by assessing the AR and MA components of an observed univariate series. Extending the weakly restricted approach to multivariate models, which we do in subsequent chapters, leads to the use of vector autoregression (VAR) and error correction models (ECMs). Important modeling choices, such as how many lags of a variable to include, are inferred from the data rather than specified before the analysis. Recall as well that the quasi-experimental approach uses weakly restricted models, highlighting the problem of specification uncertainty.
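The following sketch illustrates the weakly restricted, multivariate case described above: a VAR whose lag length is chosen from the data rather than specified in advance. The two series are simulated placeholders (our assumption), not real political data, and the lag cap of 8 is arbitrary.

```python
# Sketch: a two-variable VAR with data-driven lag selection.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
T = 200
x = np.zeros(T)
z = np.zeros(T)
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + 0.2 * z[t - 1] + rng.normal()
    z[t] = 0.3 * z[t - 1] + rng.normal()

data = pd.DataFrame({"x": x, "z": z})
model = VAR(data)
print(model.select_order(maxlags=8).summary())  # lag length inferred from the data
results = model.fit(maxlags=8, ic="aic")        # fit with the AIC-chosen lag order
print(results.summary())
```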
In this chapter we discuss strongly restricted time series modeling, which assumes that we know much more about the functional forms of our data-generating process. Making these strong assumptions about a time series' functional form and proceeding directly to testing hypotheses about the relationships between variables encompass what we term the “time series regression tradition.” This approach is popular and widely used. It is appropriate whenever an analyst can comfortably and ably make the strong assumptions required for the technique.
We provide an overview of the basic components of time series regression models and explore tests for serial correlation in the residuals, which provide guidance to analysts regarding various types of serial correlation.
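As a concrete illustration of those residual diagnostics, the sketch below runs a time series regression on simulated data with AR(1) errors and applies the Durbin-Watson and Breusch-Godfrey tests; the variables and the AR(1) error coefficient are assumed for the example.

```python
# Sketch: OLS time series regression followed by serial correlation tests.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

rng = np.random.default_rng(1)
T = 150
x = rng.normal(size=T)
e = np.zeros(T)
for t in range(1, T):               # AR(1) errors, so the tests should flag them
    e[t] = 0.6 * e[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + e

ols = sm.OLS(y, sm.add_constant(x)).fit()
print("Durbin-Watson:", durbin_watson(ols.resid))   # values near 2 suggest no AR(1)
lm_stat, lm_pval, f_stat, f_pval = acorr_breusch_godfrey(ols, nlags=4)
print("Breusch-Godfrey p-value:", lm_pval)          # small p-value: serial correlation
```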
The analysis of time series data is a vast enterprise. With this fact in mind, the previous chapters introduced the core concepts and analytic tools that form a foundational understanding of time series analysis. This chapter presents four more advanced topics: fractional integration, heterogeneity, forecasting, and estimating and modeling with unknown structural breaks. Although by no means an exhaustive list, the topics presented in this chapter represent concerns of the contemporary literature: they extend some of the previously discussed concepts, provide additional means of evaluating time series models, and are a means through which time series analysis can inform policy.
Fractional integration is an extension of the preceding discussion of unit roots and of tests for unit roots. The first few chapters assumed that our time series data were stationary, but subsequent chapters showed that this is not necessarily the case; as a result, tests for unit roots and integrated series were presented in detail in Chapter 5. In practice, however, not every series can be appropriately characterized as either stationary or integrated: shocks may enter the series, persist for a nontrivial amount of time, and eventually dissipate. In such a case, the series is neither stationary nor integrated, because the shocks neither rapidly exit the series nor persist indefinitely.
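The sketch below illustrates the idea with the fractional differencing filter $(1-L)^d$; the value $d = 0.4$ and the series length are assumed for the example. Shocks to the simulated series decay, but far more slowly than in a stationary ARMA process.

```python
# Sketch: build a fractionally integrated series y_t = (1 - L)^{-d} e_t.
import numpy as np

def frac_diff_weights(d, n):
    """Binomial expansion weights of (1 - L)^d, computed recursively."""
    w = np.zeros(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = -w[k - 1] * (d - k + 1) / k
    return w

rng = np.random.default_rng(7)
T = 500
shocks = rng.normal(size=T)

# Weights of the inverse filter (1 - L)^{-d} are obtained by passing -d.
w_inv = frac_diff_weights(-0.4, T)
y = np.array([np.dot(w_inv[: t + 1][::-1], shocks[: t + 1]) for t in range(T)])
```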
Our work has several motivations. We think that longitudinal analysis provides infinitely more insight than does examining any one slice of time. As we show throughout the book, longitudinal analysis is essential for the study of normatively important problems such as democratic accountability and international conflict. Given the importance of dynamic analysis in answering new questions and providing new answers to old questions, we want to get more social scientists thinking in dynamic terms. Time series is one of the most useful tools for dynamic analysis, and our goal is to provide a more accessible treatment for this approach. We are also motivated by the burgeoning supply of new social science time series data. Sometimes this creates the opposite problem, an abundance of data and the question of how best to analyze it, but that is a problem we gladly embrace. The proliferation of new social science data requires techniques that are designed to handle complexity, and time series analysis is one of the most applicable tools. The incorporation of time series analysis into standard statistical packages such as STATA and R, as well as the existence of specialized packages such as RATS and Eviews, provides an additional motivation because it enables more scholars to easily use time series in their work.
We have found over our years of teaching time series that, although many social science students have the brain power to learn time series methods, they often lack the training and motivation to use the most well-known books on the topic.
Thus far, all of our models assumed that our data are stationary. A stationary series does not have statistical properties that depend on time. All shocks and past values in a stationary series eventually lose their influence on the value of the variable today. A stationary stochastic process is defined such that
• A stochastic process is stationary if the mean and variance are constant over time and the covariance between two time points depends only on the distance of the lag between the two periods and not on the actual time at which the covariances are computed.
• In other words, if a time series is stationary, its mean, variance, and autocovariance (at various lags) remain the same, no matter when we measure them; these conditions are restated compactly below.
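Restated in standard notation (our formulation, not necessarily the book's), covariance stationarity of a series $y_t$ requires

```latex
\begin{aligned}
\mathbb{E}[y_t] &= \mu & &\text{for all } t,\\
\operatorname{Var}(y_t) &= \sigma^2 & &\text{for all } t,\\
\operatorname{Cov}(y_t, y_{t-k}) &= \gamma_k & &\text{for all } t \text{ and each lag } k.
\end{aligned}
```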
Why should analysts care if variables are stationary? Econometric problems may occur when we run a regression with variables that are not stationary. For example, in the Box-Jenkins identification stage, because of nonstationarity, we may fail to diagnose a higher order AR process. We need to diagnose and correctly account for the characteristics of the data-generating process.
Several other issues arise with nonstationary data, which we discuss in this and the following chapters. At a basic level, nonstationary data violate the invertibility condition for the value of φ (the AR process in our ARMA model) and bias our estimate of φ (that is, the extent to which past values of the dependent variable influence the current value).
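A common pre-estimation check for this problem is an augmented Dickey-Fuller test. The sketch below (our illustration, applied to a simulated random walk) shows the idea: a large p-value means the unit-root null cannot be rejected, signalling that the data should not be treated as stationary.

```python
# Sketch: augmented Dickey-Fuller test on a simulated random walk.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(3)
walk = np.cumsum(rng.normal(size=300))   # random walk: contains a unit root

adf_stat, pvalue, usedlag, nobs, crit, icbest = adfuller(walk)
print(f"ADF statistic = {adf_stat:.3f}, p-value = {pvalue:.3f}")
```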