What explains why some Latinos feel strongly tied to their coethnics while others do not? Demographic context is one of the most cited predictors of identity strength, but the size and direction of its effects are disputed. Geographic differences in policy environments may explain the phenomenon. We argue that high levels of immigration enforcement indirectly lead to increased feelings of ethnic linked fate by determining where and how demographic context—in this case, the size of the immigrant population—will be salient. To test this, we combine information from local immigration-enforcement data (obtained via Freedom of Information Act requests) with Latino Decisions' 2016 Collaborative Multiracial Post-Election Survey. The results suggest native-born Latinos have a stronger sense of ethnic linked fate when they live near large immigrant populations and rates of enforcement are high. When enforcement is low, the presence of immigrants has a negligible effect on native-born attitudes. Foreign-born Latinos' sense of linked fate is unaffected by policy context. These results suggest that as immigration enforcement intensifies, conservative politicians may see increased backlash, at least in certain communities, from native-born Latinos. This is because feelings of ethnic linked fate correlate with increased participation and more pro-immigrant policy stances.
Giant electromagnetic pulses (EMP) generated during the interaction of high-power lasers with solid targets can seriously degrade electrical measurements and equipment. EMP emission is caused by the acceleration of hot electrons inside the target, which produce radiation across a wide band from DC to terahertz frequencies. Improved understanding and control of EMP is vital as we enter a new era of high-repetition-rate, high-intensity lasers (e.g. the Extreme Light Infrastructure). We present recent data from the VULCAN laser facility that demonstrate how EMP can be readily and effectively reduced. Characterization of the EMP was achieved using B-dot and D-dot probes that took measurements for a range of different target and laser parameters. We demonstrate that target stalk geometry, material composition, geodesic path length, and foil surface area can all play a significant role in the reduction of EMP. A combination of electromagnetic wave and 3D particle-in-cell simulations is used to inform our conclusions about the effects of stalk geometry on EMP, providing an opportunity for comparison with existing charge separation models.
The SPICA mid- and far-infrared telescope will address fundamental issues in our understanding of star formation and ISM physics in galaxies. A particular hallmark of SPICA is the outstanding sensitivity enabled by the cold telescope, optimised detectors, and wide instantaneous bandwidth throughout the mid- and far-infrared. The spectroscopic, imaging, and polarimetric observations that SPICA will be able to collect will help in clarifying the complex physical mechanisms which underlie the baryon cycle of galaxies. In particular, (i) the access to a large suite of atomic and ionic fine-structure lines for large samples of galaxies will shed light on the origin of the observed spread in star-formation rates within and between galaxies, (ii) observations of HD rotational lines (out to ~10 Mpc) and fine structure lines such as [C ii] 158 μm (out to ~100 Mpc) will clarify the main reservoirs of interstellar matter in galaxies, including phases where CO does not emit, (iii) far-infrared spectroscopy of dust and ice features will address uncertainties in the mass and composition of dust in galaxies, and the contributions of supernovae to the interstellar dust budget will be quantified by photometry and monitoring of supernova remnants in nearby galaxies, (iv) observations of far-infrared cooling lines such as [O i] 63 μm from star-forming molecular clouds in our Galaxy will evaluate the importance of shocks to dissipate turbulent energy. The paper concludes with requirements for the telescope and instruments, and recommendations for the observing strategy.
High-quality archaeological surveys and data are vital to preservation planning and mitigation efforts. Federal and state historic preservation offices (SHPOs) are accumulating and reviewing more data at an ever-faster pace. Given the critical nature of this information, an SAA task force was charged with assessing current survey practices and concerns. Our review indicates that survey policies and archaeological standards have improved substantially over the last two decades, but SHPOs remain challenged by insufficient professional training for field archaeologists; the need to standardize and integrate new technologies in fieldwork, reporting, and review; and the sheer quantity and variety of digital data. A number of analytical tools and metrics are available to assess data quality, but states seemingly lack the time or money to evaluate how to improve existing and future survey data. We draw upon a survey of SHPOs, a review of current literature, and our own experience to assess archaeological survey quality, data utility, and durability for current and anticipated future uses. We offer suggestions on how to move forward, including consideration of an e-106 system for streamlining the transfer and exchange of digital data and upgrading current approaches to survey and planning.
Event History Modeling, first published in 2004, provides an accessible guide to event history analysis for researchers and advanced students in the social sciences. The substantive focus of many social science research problems leads directly to the consideration of duration models, and many problems would be better analyzed by using these longitudinal methods to take into account not only whether the event happened, but when. The foundational principles of event history analysis are discussed and ample examples are estimated and interpreted using standard statistical packages, such as Stata and S-Plus. Critical innovations in diagnostics are discussed, including testing the proportional hazards assumption, identifying outliers, and assessing model fit. The treatment of complicated events includes coverage of unobserved heterogeneity, repeated events, and competing risks models. The authors point out common problems in the analysis of time-to-event data in the social sciences and make recommendations regarding the implementation of duration modeling methods.
In this article, we consider how the factors driving Anglo attitudes toward immigration changed in the post-9/11 era. We argue that in the aftermath of the 9/11 attacks, the immigration issue became nationalized and framed in a threat context. In this context, acculturation fear and anti-Latino sentiment are strong predictors of restrictionist sentiment; in the pre-9/11 period, these factors had little substantive impact on Anglo attitudes. We theorize that the current climate has helped “activate” social identities, which in turn has deleterious consequences for Latinos in the United States. Using data from the 2000 and 2004 National Election Studies, we estimate a model of Anglo immigration attitudes. We show that indicators of acculturation fear, anti-Latino sentiment, and media exposure significantly relate to Anglo immigration attitudes in the post-9/11 period but not the pre-9/11 period.
Since 1990, the standard statistical approach for studying state policy adoption has been an event history analysis using binary link models, such as logit or probit. In this article, we evaluate this logit-probit approach and consider some alternative strategies for state policy adoption research. In particular, we discuss the Cox model, which avoids the need to parameterize the baseline hazard function and, therefore, is often preferable to the logit-probit approach. Furthermore, we demonstrate how the Cox model can be modified to deal effectively with repeatable and competing events, events that the logit-probit approach cannot be used to model.
Through the use of an original data set of bill initiation activity in six presidential democracies, we advance scholarly understanding of how the institutional incentives faced by legislative candidates influence representation. We extend and adapt theory, derived primarily from the experience of the U.S. Congress, demonstrating its viability, once assumed constants from the U.S. case are explicitly modeled, in quite distinct institutional contexts. In particular, we find that the focus of individual legislators on national versus parochial concerns responds to the incentives provided by the candidate selection process, general election rules, legislator career patterns, and interbranch relations.
In this chapter, we present an alternative modeling strategy to the fully parametric methods discussed in the previous chapter. Specifically, we consider the Cox proportional hazards model (Cox 1972, 1975). The Cox model is an attractive alternative to fully parametric methods because the particular distributional form of the duration times is left unspecified, although estimates of the baseline hazard and baseline survivor functions can be retrieved.
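To make the retrievability point concrete, here is a minimal sketch of a Cox fit in Python using the lifelines library and its bundled Rossi recidivism dataset; the language, library, and data are our illustrative assumptions (the book's own examples use Stata and S-Plus).

```python
# A minimal Cox proportional hazards fit; lifelines and the Rossi
# recidivism data are illustrative assumptions, not the book's examples.
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()                       # week = duration, arrest = event indicator
cph = CoxPHFitter()
cph.fit(df, duration_col="week", event_col="arrest")
cph.print_summary()                     # coefficients and hazard ratios

# No distributional form was specified, yet estimates of the baseline
# quantities can still be retrieved after estimation:
baseline_cumhaz = cph.baseline_cumulative_hazard_
baseline_surv = cph.baseline_survival_
```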
Problems with Parameterizing the Baseline Hazard
The parametric models discussed in Chapter 3 are desirable if one has good reason to expect the duration dependency to exhibit some particular form. With the exception of the restrictive exponential model, any of the distribution functions discussed in the previous chapter are “flexible” inasmuch as the hazard rate may assume a wide variety of shapes, given the constraints of the model (e.g., the Weibull or Gompertz must yield monotonic hazards). However, most theories and hypotheses of behavior are less focused on the notion of time dependency and more focused on the relationship between some outcome (the dependent variable) and covariates of theoretical interest. In our view, most research questions in social science should be chiefly concerned with getting the appropriate theoretical relationship “right” and less concerned with the specific form of the duration dependency, which can be sensitive to the form of the posited model.
Moreover, ascribing substantive interpretations to ancillary parameters (for example the p, σ, or γ terms) in fully parametric models can, in our view, be tenuous.
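The Weibull constraint mentioned above is easy to verify numerically. The sketch below, which assumes the common (λ, p) parameterization h(t) = (p/λ)(t/λ)^(p−1), shows that the Weibull hazard is monotone for any shape parameter p: rising for p > 1, flat for p = 1, falling for p < 1.

```python
# Numeric check that the Weibull hazard is monotone in t for any shape p.
import numpy as np

def weibull_hazard(t, lam, p):
    # h(t) = (p / lam) * (t / lam)**(p - 1), a common parameterization
    return (p / lam) * (t / lam) ** (p - 1)

t = np.linspace(0.1, 5.0, 50)
for p in (0.5, 1.0, 2.0):
    h = weibull_hazard(t, lam=1.0, p=p)
    diffs = np.diff(h)
    print(f"p={p}: monotone = {np.all(diffs >= 0) or np.all(diffs <= 0)}")
```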
In this chapter, we consider some important issues regarding model selection, assessment, and diagnostic methods through the use of residuals. The issues discussed in this chapter have a direct analog to methods of model selection and model diagnostics in the context of the traditional linear model. For example, issues pertaining to functional form, influential observations, and other similar topics are directly relevant to the duration model. Because most of the methods of specification analysis discussed in this chapter make use of residuals, in the next section we consider the different kinds of residuals that are retrievable from a typical duration model. Following this, we present several illustrations using residual analysis to assess various facets of the duration model. Most of the discussion in this chapter is presented in terms of the Cox model; however, diagnostic methods for parametric models are considered at the end of the chapter.
Residuals in Event History Models
The basic idea of a residual is to compare predicted and observed durations. In OLS regression, residuals are deviations of the observed values of the dependent variable from the values estimated or predicted under the regression model, that is, yi − ŷi. In event history analysis, defining a residual is more difficult because of censoring and because of issues relevant to estimation methods like maximum likelihood (in the case of parametric models) or maximum partial likelihood (in the case of the Cox model).
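As a concrete illustration, the sketch below retrieves several of the residual types discussed in this chapter from a fitted Cox model; it assumes the Python lifelines library and its Rossi dataset rather than the Stata and S-Plus tools used in the book's examples.

```python
# Retrieving residuals from a fitted Cox model (lifelines is an
# illustrative assumption; the book's examples use Stata and S-Plus).
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()
cph = CoxPHFitter().fit(df, duration_col="week", event_col="arrest")

# Martingale residuals help assess functional form; deviance residuals
# (a symmetrized transform of the martingale residuals) help identify
# outliers; Schoenfeld residuals underlie tests of proportional hazards.
martingale = cph.compute_residuals(df, kind="martingale")
deviance = cph.compute_residuals(df, kind="deviance")
schoenfeld = cph.compute_residuals(df, kind="schoenfeld")
```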
Our work on event history began in graduate school. We met as graduate students attending the Political Methodology Society's annual meeting in 1993 at Florida State University in Tallahassee, Florida. A small group of us at the meeting were interested in event history modeling and we saw its great potential for unlocking new answers to old questions and for revealing new questions in political science. We are indebted to the Political Methodology Group for bringing us together, providing a forum for us to present subsequent work, and providing ready and constructive critics and supporters. We are also indebted to our home departments for surrounding us with highly talented graduate students and interesting, stimulating colleagues. Meetings subsequent to our initial one in 1993, collaborations, and prodding from students and colleagues across the country who were interested in event history methodology, led to this manuscript.
This work has several goals. Our first goal in writing this book was to connect the methodology of event history to a core interest shared by social scientists, and indeed by scientists in fields as diverse as biostatistics and engineering: understanding the causes and consequences of change over time. Scholars are commonly interested in “events.” For example, political scientists who study international relations might investigate the occurrence of a militarized dispute, or criminologists might study instances of victimization. Events such as these connote change, and frequently this concern with events is concomitantly tied to an interest in the “history” preceding the event.
The applications of event history methods discussed to this point have all presumed that the event history process is absolutely continuous, meaning change can occur anywhere in time. Nevertheless, continuity is often belied by the data: measures of time are frequently imprecise or are recorded at coarse intervals out of practical concerns and convenience. For example, although cabinet governments may presumably fall at any time, the data used in our examples treat the termination point as occurring within a month. This implies that although we have data for processes that are continuous in nature, the data themselves are discrete. As event occurrences amass at discrete intervals, it may be more practical, and perhaps substantively natural, to consider models for discrete-time processes. In this chapter, we consider some approaches for modeling event history processes where events only occur (or are only observed) at discrete intervals.
Discrete-Time Data
Event history data for discrete-time processes generally record the dependent variable as a series of binary outcomes denoting whether or not the event of interest occurred at the observation point. To illustrate, consider the public policy data in Table 5.1. These data are from a study of state adoption of restrictive abortion policy (Brace, Hall, and Langer 1999). The event of interest is whether or not a state adopted legislation that placed restrictions on abortion rights. The starting point of the analysis is the first legislative session after the Roe v. Wade decision (1973).
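A minimal sketch of the standard estimation strategy for such data follows: stack the unit-period rows, code the dependent variable 1 in the period the event occurs and 0 otherwise, and fit a binary link model. The Python setup, column names, and toy data below are our own hypothetical illustration, not the Brace, Hall, and Langer data.

```python
# Discrete-time event history as a logit on stacked unit-period data.
# The data frame and variable names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# One row per state per legislative session; "adopted" is 1 only in the
# session in which the restrictive policy is adopted.
data = pd.DataFrame({
    "adopted":  [0, 0, 1, 0, 1, 0, 0, 0],
    "period":   [1, 2, 3, 1, 2, 1, 2, 3],   # sessions since Roe v. Wade
    "ideology": [0.2, 0.2, 0.3, 0.7, 0.8, 0.4, 0.4, 0.5],
})

# Including the period counter (or a dummy for each period) lets the
# discrete-time hazard depend on time.
model = smf.logit("adopted ~ period + ideology", data=data).fit()
print(model.summary())
```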
One of the strengths of the event history model over the traditional regression model is its ability to account for covariates that change values across the span of the observation period. In principle, inclusion of time-varying covariates (TVCs) in the event history model is straightforward. However, the use of TVCs can raise special problems, both substantively and statistically, for event history analysis. And while these problems are not exclusive to duration models, they sometimes manifest themselves differently in the context of duration analysis. In this chapter, we consider some of these problems and note that the “solutions” to them are in large part theoretical, not statistical. Additionally, we illustrate how TVCs can be readily included in each of the models discussed to this point.
Incorporating Exogenous TVCs into the Duration Model
Several researchers have variously categorized the different kinds of TVCs used in event history analysis. Kalbfleisch and Prentice (1980) provide the most thorough and widely used categorization scheme. They distinguish TVCs as either being external or internal. Further, they subcategorize external covariates as being “fixed,” “defined,” or “ancillary” (Kalbfleisch and Prentice 1980, 123). A fixed external covariate is equivalent to a time-independent covariate; i.e., one having values that are known in advance and do not change over the course of the study. Defined covariates have values that can change over time, but the time path is known in advance.
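In practice, exogenous TVCs are typically entered by splitting each unit's history into episodes over which its covariates are constant, the counting-process or (start, stop] format. The sketch below, assuming Python's lifelines library and hypothetical column names and data, shows the mechanics.

```python
# TVCs via the (start, stop] counting-process format; lifelines and the
# column names and data are illustrative assumptions.
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# One row per unit per episode of constant covariate values; "event" is
# 1 only on the episode in which the failure occurs.
long_df = pd.DataFrame({
    "id":    [1, 1, 1, 2, 2],
    "start": [0, 3, 6, 0, 4],
    "stop":  [3, 6, 9, 4, 7],
    "x":     [0.1, 0.4, 0.9, 0.2, 0.6],   # time-varying covariate
    "event": [0, 0, 1, 0, 1],
})

ctv = CoxTimeVaryingFitter()
ctv.fit(long_df, id_col="id", start_col="start", stop_col="stop", event_col="event")
ctv.print_summary()
```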
The basic logic underlying parametric event history models is to directly model the time dependency exhibited in event history data. This is easily done by specifying a distribution function for the failure times. If the researcher suspects that the risk of an event occurrence is increasing, or “rising” over time, for example, then one may specify a distribution function that accounts for such a relationship. Social scientists, and in particular, political scientists, have made use of parametric methods to understand such phenomena as coalition durations (King et al. 1990; Warwick 1992), the survival of political leaders (Bueno de Mesquita and Siverson 1995), and the duration of military conflicts (Bennett and Stam 1996). Parametric models for political analysis would seem most reasonable when there exists a strong theoretical expectation regarding the “shape” of the hazard rate (or by extension, survival times), conditional on the covariates included in the model. Under such conditions, correctly specifying the distribution function will yield slightly more precise estimates of the time dependency in the data as well as more precise estimates of covariate parameters than nonparametric approaches in small samples (Collett 1994); however, if the distribution of failure times is parameterized incorrectly (for any size sample) then the nice interpretations afforded parametric models may not hold (Bergström and Edin 1992).
We offer this cautionary note primarily because parametric methods directly specify the shape of the hazard rate: the model will impose that shape on the data even if the parameterization is wrong.
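For concreteness, a minimal parametric fit is sketched below, assuming Python's lifelines WeibullAFTFitter and its bundled Rossi dataset; as with the earlier sketches, these are our illustrative choices, not the book's own software.

```python
# A fully parametric (Weibull) fit; lifelines and the Rossi data are
# illustrative assumptions. The summary reports the ancillary shape
# parameter (rho_), which governs whether the hazard rises or falls.
from lifelines import WeibullAFTFitter
from lifelines.datasets import load_rossi

df = load_rossi()
aft = WeibullAFTFitter()
aft.fit(df, duration_col="week", event_col="arrest")
aft.print_summary()
```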
The models discussed to this point have all involved so-called “single-state” processes, or equivalently, one-way transition models. In such models, there is a singular event of interest and once the event is experienced—or once an observation fails—the observation leaves the risk set and is assumed to be no longer at risk of returning to the previously occupied state. Concomitantly, in a single-state process, we assume that an observation is only at risk of experiencing a single event; that is, the observation is not at risk of making a transition to another state. Is this a reasonable assumption? Often it is not, and, at a minimum, it is an assumption that should be tested. In this chapter, we consider some models that account for repeatable events.
Additionally, in previous applications, we did not attempt to account for the different kinds of events that could occur. Some research problems, however, may lead one to consider how observations are at risk of experiencing one of several kinds of events. Problems of this kind are sometimes referred to as multi-state processes or competing risks processes because survival times may terminate in a variety of substantively interesting ways. In this chapter, we consider event history approaches to deal with the issue of competing risks.
The issues of repeatable events and competing risks serve to highlight the greater concern of this chapter: how does one employ event history models in the face of complicated social processes?
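As a small preview of the competing-risks machinery, the sketch below computes a nonparametric cumulative incidence estimate, assuming Python's lifelines AalenJohansenFitter and hypothetical data in which units can fail from one of two event types or be censored.

```python
# Nonparametric cumulative incidence under competing risks; lifelines
# and the toy data are illustrative assumptions.
from lifelines import AalenJohansenFitter

durations  = [5, 8, 12, 3, 9, 11, 6, 14]
event_type = [1, 2, 0, 1, 2, 1, 0, 2]   # 0 = censored, 1 and 2 = competing events

ajf = AalenJohansenFitter()
ajf.fit(durations, event_type, event_of_interest=1)
print(ajf.cumulative_density_)           # cumulative incidence of event type 1
```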
The lexicon of event history analysis stems from its historical roots in biostatistics. Terms like “death,” “failure,” and “termination” are natural for analyses of medical survival data, but may seem awkward for social science analysis. In the context of medical research, survival data usually consist of longitudinal records indicating the duration of time individuals survive until death (if death is observed). In analyzing survival data, medical researchers are commonly interested in how long subjects survive before they die. The “event” is death, while the duration of time leading up to the death, the “history,” is the observed survival time. Analysts working with survival data may be interested in assessing the relationship between survival times and covariates of interest such as drug treatments.
Likewise, social scientists frequently work with “survival data,” although such data are generally not thought of in terms of survival and death. Nevertheless, much of the data social scientists use are generated from the same kinds of processes producing survival data. Concepts like “survival,” “risk,” and “failure” are directly analogous to concepts with which social scientists work. Thus, the concept of survival and the notion of survival and failure times are useful starting points to motivate event history analysis.
Event history data are, as Petersen (1995) notes, generated from failure-time processes. A failure-time process consists of units (individuals, governments, countries, dyads) observed at some natural starting point or time-of-origin.