
Accuracy of Reporting Nosocomial Infections in Intensive-Care–Unit Patients to the National Nosocomial Infections Surveillance System: A Pilot Study

Published online by Cambridge University Press:  02 January 2015

T. Grace Emori,* Jonathan R. Edwards, David H. Culver, Catherine Sartor, Leonardo A. Stroud, Edward E. Gaunt, Teresa C. Horan, Robert P. Gaynes

Affiliations: Hospital Infections Program, Centers for Disease Control and Prevention, Atlanta, Georgia (all authors except E. E. Gaunt); Analytical Sciences, Inc, Durham, North Carolina (E. E. Gaunt)

*Correspondence: Hospital Infections Program, Centers for Disease Control and Prevention, Mailstop E55, Atlanta, GA 30333

Abstract

OBJECTIVE: To assess the accuracy of nosocomial infections data reported on patients in the intensive-care unit by nine hospitals participating in the National Nosocomial Infections Surveillance (NNIS) System.

DESIGN: A pilot study was done in two phases to review the charts of selected intensive-care–unit patients who had nosocomial infections reported to the NNIS System. The charts of selected high- and low-risk patients in the same cohort who had no infections reported to the NNIS System also were included. In phase I, trained data collectors reviewed a sample of charts for nosocomial infections. Retrospectively detected infections that matched with previously reported infections were deemed to be true infections. In phase II, two Centers for Disease Control and Prevention (CDC) epidemiologists reexamined a sample of charts for which a discrepancy existed. Each sampled infection either was confirmed or disallowed by the epidemiologists. Confirmed infections also were deemed to be true infections. True infections from both phases were used to estimate the accuracy of reported NNIS data by calculating the predictive value positive, sensitivity, and specificity at each major infection site and the “other sites.”
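The three accuracy measures named above follow the standard confusion-matrix definitions used in surveillance-validation studies. A minimal sketch, with hypothetical counts rather than the study's data:

```python
def accuracy_measures(tp, fp, fn, tn):
    """Surveillance-validation statistics from confusion-matrix counts.

    tp: true infections that were reported
    fp: reported "infections" that were not true infections
    fn: true infections that were not reported
    tn: non-infections that were correctly not reported
    """
    ppv = tp / (tp + fp)          # predictive value positive
    sensitivity = tp / (tp + fn)  # fraction of true infections reported
    specificity = tn / (tn + fp)  # fraction of non-infections not reported
    return ppv, sensitivity, specificity

# Hypothetical example counts, for illustration only:
ppv, sens, spec = accuracy_measures(tp=85, fp=13, fn=15, tn=980)
print(f"PPV={ppv:.0%}, sensitivity={sens:.0%}, specificity={spec:.1%}")
```

These formulas are computed separately for each major infection site, which is why the RESULTS section reports a triplet of values per site.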

RESULTS: The data collectors examined a total of 1,136 patients' charts in phase I. Among these charts were 611 infections that the study hospitals had reported to the CDC. The data collectors retrospectively matched 474 (78%) of the prospectively identified infections, but also detected 790 infections that were not reported prospectively. Phase II focused on the discrepant infections: the 137 infections that were identified prospectively and reported but not detected retrospectively, and the 790 infections that were detected retrospectively but not reported previously. The CDC epidemiologists examined a sample of 113 of the discrepant reported infections and 369 of the discrepant detected infections, and estimated that 37% of all discrepant reported infections and 43% of all discrepant detected infections were true infections. The predictive value positive for reported bloodstream infections, pneumonia, surgical-site infection, urinary tract infection, and other sites was 87%, 89%, 72%, 92%, and 80%, respectively; the sensitivity was 85%, 68%, 67%, 59%, and 30%, respectively; and the specificity was 98.3%, 97.8%, 97.7%, 98.7%, and 98.6%, respectively.

CONCLUSIONS: When the NNIS hospitals in the study reported a nosocomial infection, the infection most likely was a true infection, and they infrequently reported conditions that were not infections. The hospitals also identified and reported most of the nosocomial infections that occurred in the patients they monitored, but accuracy varied by infection site. Primary bloodstream infection was the most accurately identified and reported site. Measures that will be taken to improve the quality of the infection data reported to the NNIS System include reviewing the criteria for definitions of infections and other data fields, enhancing communication between the CDC and NNIS hospitals, and improving the training of surveillance personnel in NNIS hospitals.

Type: Original Articles
Copyright © The Society for Healthcare Epidemiology of America 1998

