Introduction
Over the past 20 years, US local, state, and federal agencies have implemented a wide range of measures for public health emergency preparedness; “but these efforts have not resulted in a clear picture of the nation’s preparedness owing to ambiguous and uncertain preparedness goals, a lack of agreement about what the measures should aim at and how they should be interpreted, and a weak system of accountability for producing results.”1,2
Health security is a recent concept that encompasses “activities and measures across sovereign boundaries that mitigates public health incidents to ensure the health of populations.”3 However, like preparedness, there remains limited consensus regarding the definition and scope of health security.4
In 2012, the US Centers for Disease Control and Prevention (CDC; Atlanta, Georgia USA) initiated development of the National Health Security Preparedness Index (NHSPI) for “measuring the nation’s progress in preparing for, responding to, and recovering from disasters and other large-scale emergencies that pose risks to health and well-being in the United States.”5
As a measurement tool, the NHSPI is intended to “summarize levels of preparedness achieved within individual states and for the nation as a whole” and has been used for this purpose. Health impact is implied, as the domains of the NHSPI are reported to have been “shown to be important in protecting people from the health consequences of disasters and other large-scale hazards and emergencies.”5
Validity is central to determining the utility of any scale. Validity describes a test’s ability to produce results consistent with other measures of the same characteristic, and establishing it requires external criteria.6 Predictive validity is considered to be achieved when there is significant correlation between an experimental test (eg, the NHSPI) and a reference criterion standard (eg, excess coronavirus disease 2019 [COVID-19] mortality). This study evaluated the validity of the NHSPI for predicting excess COVID-19-related mortality.
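In practice, the degree of such correlation is conventionally quantified with Pearson’s correlation coefficient; the standard formula is reproduced below for clarity (it is illustrative and is not drawn from the NHSPI documentation), where x_i denotes the index value and y_i the criterion value for jurisdiction i of n:

r = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^{2}}\,\sqrt{\sum_{i=1}^{n} (y_i - \bar{y})^{2}}}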
Methods
This study compared 2019 values for the NHSPI with rates of COVID-19-related excess mortality as reported for the 50 US states and Puerto Rico during the first six months of the pandemic (March 1 - August 31, 2020).
Data Collection
State data summaries for the 2019 NHSPI were downloaded from the 2019 Preparedness Index Data Preview website.7 Table 1 lists the 2019 NHSPI indicators grouped into six domains and 19 sub-domains.
Abbreviation: NHSPI, National Health Security Preparedness Index.
Data regarding provisional death counts and excess deaths for COVID-19 (according to state and territory) were downloaded from the CDC National Center for Health Statistics website.8 Reports were downloaded for the six-month period of March 1 - August 31, 2020. The CDC defined excess deaths as the difference between the observed numbers of deaths in specific time periods and the expected numbers of deaths in the same time periods. Estimates of excess deaths were calculated by CDC using Farrington surveillance algorithms.8 The lower and upper estimates of excess mortality were calculated, by jurisdiction, as the difference between the observed count and one of two thresholds (either the average expected count or the upper bound of the 95% prediction interval). Excess mortality rates were calculated as the number of excess deaths per 100,000 persons in each state population using 2019 US Census Bureau (Suitland, Maryland USA) data.9
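As a minimal sketch of the rate calculation described above (not of the CDC’s Farrington modeling, which produces the expected-death thresholds), the following Python example computes an excess mortality rate per 100,000; all variable names and values are hypothetical assumptions rather than reported figures.

# Minimal sketch of the excess mortality rate calculation; the expected
# (threshold) count would come from the CDC's Farrington surveillance models.
observed_deaths = 12_500       # hypothetical observed deaths, March 1 - August 31, 2020
expected_deaths = 10_400       # hypothetical expected (threshold) deaths, same period
population_2019 = 8_500_000    # hypothetical 2019 Census Bureau population estimate

excess_deaths = observed_deaths - expected_deaths
excess_rate_per_100k = excess_deaths / population_2019 * 100_000
print(f"Excess deaths: {excess_deaths}; rate: {excess_rate_per_100k:.1f} per 100,000")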
Data Analysis
Mean values for state and territorial NHSPI domain indices were compared to state and territorial rates of COVID-19-related excess mortality using multiple linear regression, including analysis of variance. Correlations between the 51 (state and territorial) NHSPI values and corresponding COVID-19 excess mortality rates were calculated using Pearson’s correlation coefficient.
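A minimal sketch of this analysis, assuming a data frame with one row per jurisdiction (n = 51), is shown below; the column names and randomly generated values are illustrative assumptions rather than the study data, and only three of the six domain indices are included for brevity.

import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data: one row per state/territory (n = 51) with placeholder
# values for the overall 2019 NHSPI, three domain indices, and the excess
# COVID-19 mortality rate per 100,000 population.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "excess_rate": rng.uniform(0.0, 120.0, 51),
    "overall_index": rng.uniform(5.5, 8.0, 51),
    "surveillance": rng.uniform(5.0, 9.0, 51),
    "community_planning": rng.uniform(4.0, 7.0, 51),
    "incident_management": rng.uniform(6.0, 9.5, 51),
})

# Pearson's correlation coefficient between the overall index and mortality rates
r, p_value = stats.pearsonr(df["overall_index"], df["excess_rate"])
print(f"Pearson r = {r:.3f} (p = {p_value:.3f})")

# Multiple linear regression of excess mortality on the domain indices,
# followed by an analysis of variance table
model = smf.ols(
    "excess_rate ~ surveillance + community_planning + incident_management",
    data=df,
).fit()
print(model.summary())
print(sm.stats.anova_lm(model, typ=2))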
Results
In general, these calculations failed to demonstrate any significant association between values of the overall NHSPI (or its six domains) and excess COVID-19 mortality rates for 50 US states and Puerto Rico.
Table 2 reveals a high degree of variance and poor correlation between values of the overall NHSPI and the low and high estimates of excess COVID-19 mortality rates for 50 US states and Puerto Rico.
Abbreviation: NHSPI, National Health Security Preparedness Index.
Table 3 likewise indicates a high degree of variance and poor correlation between values of the six individual NHSPI domains and the low and high estimates of excess COVID-19 mortality rates for 50 US states and Puerto Rico.
Abbreviation: NHSPI, National Health Security Preparedness Index.
Discussion
A research-practice gap exists across all fields of public health, including disaster-related health science.10,11 Public health has moved forward in recent years to bridge this gap. Evidence-based public health calls for knowledge of the determinants and consequences of disease, as well as the efficacy, effectiveness, and costs of interventions.12,13 And yet, despite repeated urging of public health leadership, disaster epidemiology remains chiefly concerned with etiological, rather than evaluative, hypotheses.14,15
Scale development and validation are critical to much of the work in public health. However, scale development is not a straightforward endeavor: it involves many steps, considerable specialized jargon, and work that can be costly and time consuming, often requiring complex statistical analysis. Despite the availability of a large body of technical literature on scale theory and development, many incompletely validated scales remain.16
Part of the challenge in establishing a valid scale for preparedness stems from reliance upon expert opinion, rather than a set of specific empirical observations, to reach the overarching conclusion.
Delphi consensus does not always predict future developments correctly, especially in complex forecasts (eg, disasters) that involve multiple factors (eg, 140 separate measures). The Delphi method is therefore used most successfully in forecasting single scalar indicators.
Practical application of the NHSPI is unlikely to occur if it is not directly tied to indicators of health outcome (ie, mortality). This challenge is compounded by the polysemous nature of both of the key phrases in question, “preparedness” and “health security,” making predicted health outcomes difficult to define, let alone measure. There would appear to be limited practical utility for a health-related index that has no proven association with actual health outcomes.
Study Limitations
While this study raises an important and reportable question regarding the criterion-related validity of the NHSPI, the investigation does not prove the null hypothesis. Further study is necessary to detail the association between the NHSPI and COVID-19-related mortality. Subsequent studies should also include a systematic analysis of content validity, considering the relatively ambiguous and polysemous set of definitions associated with terms such as “health security” and “preparedness.”
Conclusion
The NHSPI does not appear to be a valid predictor of excess COVID-19 mortality rates for 50 US states and Puerto Rico during the first six months of the pandemic.
Conflicts of Interest
None.