Although the relationship between dyslipidaemia (DL) and coronary artery disease (CAD), and that between trace mineral intake and CAD, are each well known, the exact nature of their combined relationship remains unclear. We hypothesize that the relationship between trace mineral intake and CAD may differ depending on whether an individual has DL. The present study analysed the relationships among trace mineral intake, DL, and CAD in middle-aged and older adults living in Shika town, Ishikawa prefecture, Japan. The study included 895 residents after excluding carriers of genetic risk variants for familial hypercholesterolaemia. Trace mineral intake was evaluated using the brief-type self-administered diet history questionnaire. Interactions were observed between DL and CAD for zinc (p = 0.004), copper (p = 0.010), and manganese intake (p < 0.001) in a two-way analysis of covariance adjusted for covariates including sex, age, body mass index, and current smoking and drinking status. Multiple logistic regression analysis showed that zinc (odds ratio (OR): 0.752; 95% confidence interval (CI): 0.606, 0.934; p = 0.010), copper (OR: 0.175; 95% CI: 0.042, 0.726; p = 0.016), and manganese (OR: 0.494; 95% CI: 0.291, 0.839; p = 0.009) were significant independent variables for CAD in the dyslipidaemic group. These results suggest that DL combined with a low trace mineral intake is associated with CAD. Further longitudinal studies are required to confirm this relationship.
In this controlled study, we found that exposure to ultraviolet-C (UV-C) radiation was able to arrest the growth of selected pathogenic enteric and nonfermenting Gram-negative rods. Further studies are needed to confirm the clinical efficacy and determine optimal implementation strategies for utilizing UV-C terminal disinfection.
In this systematic literature review and meta-analysis, we did not find a statistically significant difference in readmission and treatment failure rates between home-based and facility-based OPAT. Optimal patient selection for appropriate OPAT location appears to be more important than the location itself for the best OPAT outcome.
Patients with malignant wounds suffer from both physical and psychological symptom burden. Although psychological support is needed, the impact of malignant wounds on patients’ psychological distress has been poorly investigated. We evaluated the psychological distress associated with malignant wounds in patients at the end of life.
Methods
This study used the secondary analysis of the results of a large prospective cohort study, which investigated the dying process among patients with advanced cancer in 23 palliative care units in Japan. The primary outcome of this study was the prevalence of moderate to severe psychological symptom burden, evaluated by the Integrated Palliative Care Outcome Scale (IPOS)-feeling at peace scores of 2–4. In addition, the factors affecting psychological symptoms were investigated. The quality of death was also evaluated upon death using the Good Death Scale score.
Results
Out of the total 1896 patients, 156 (8.2%) had malignant wounds. Malignant wounds were more common in women and younger patients. The breast, head, and neck were the most common primary sites. More patients with malignant wounds had IPOS-feeling at peace scores of 2–4 than patients without malignant wounds (41.0% vs. 31.3%, p = 0.024). Furthermore, psychological distress was associated with moderate to severe IPOS-pain and with the frequency of dressing changes. The presence of malignant wounds did not affect the quality of death.
Significance of results
This study showed increased psychological distress due to malignant wounds. Patients with malignant wounds require psychological support in addition to the treatment of physical symptoms for maintaining their quality of life.
Foreign body airway obstruction (FBAO) is a life-threatening emergency, and the prognosis of patients with FBAO is greatly affected by the prehospital process. Only a few large-scale studies have analyzed fire department prehospital databases.
Study Objective:
The aim of this study was to investigate whether characteristics of patients with FBAO were associated with prehospital factors and outcomes.
Methods:
In this retrospective observational study, patients transferred to hospital by the Tokyo Fire Department (Japan) for FBAO from 2017 through 2019 were included. We evaluated the associations of patient characteristics and prehospital factors with neurologically favorable survival.
Results:
Of the 2,429,175 patients, 3,807 (0.2%) had FBAO. The highest number of FBAO cases was 99 (2.6%) on January 1 (New Year’s Day), followed by 40 cases (1.1%) on January 2 and 28 cases (0.7%) on January 3. Out-of-hospital cardiac arrest (OHCA) caused by FBAO occurred in 1,644 patients (43.2%). Comparing the OHCA and non-OHCA groups, there were significant differences in age, sex, time spent at the site, and distance between the site and hospital. The incidence of cardiac arrest after FBAO was significantly lower in infants (P < .001). In total, 98.2% of patients who did not have return of spontaneous circulation (ROSC) before hospital arrival died within 30 days, a significantly higher mortality rate than among patients who had ROSC (98.2% versus 65.8%; P < .001).
Conclusions:
Among patients who did not have ROSC following FBAO upon arrival at the hospital, 98.2% died within 30 days. Thus, it is important to remove foreign bodies promptly and provide sufficient ventilation to the patient at the scene to increase the potential for ROSC. Further, more precautions should be exercised to prevent FBAO at the beginning of the year.
Even though antimicrobial days of therapy did not significantly decrease during a period of robust stewardship activities at our center, we detected a significant downward trend in antimicrobial spectrum, as measured by days of antibiotic spectrum coverage (DASC). The DASC metric may help more broadly monitor the effect of stewardship activities.
Although multiple studies have revealed that coronavirus disease 2019 (COVID-19) vaccines can reduce COVID-19–related outcomes, little is known about their impact on post–COVID-19 conditions. We performed a systematic literature review and meta-analysis on the effectiveness of COVID-19 vaccination against post–COVID-19 conditions (ie, long COVID).
Methods:
We searched PubMed, CINAHL, EMBASE, Cochrane Central Register of Controlled Trials, Scopus, and Web of Science from December 1, 2019, to April 27, 2022, for studies evaluating COVID-19 vaccine effectiveness against post–COVID-19 conditions among individuals who received at least 1 dose of Pfizer/BioNTech, Moderna, AstraZeneca, or Janssen vaccine. A post–COVID-19 condition was defined as any symptom that was present 3 or more weeks after having COVID-19. Editorials, commentaries, reviews, study protocols, and studies in the pediatric population were excluded. We calculated the pooled diagnostic odds ratios (DORs) for post–COVID-19 conditions between vaccinated and unvaccinated individuals. Vaccine effectiveness was estimated as 100% × (1 − DOR).
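The DOR-to-effectiveness conversion defined above is direct arithmetic. As a minimal sketch (the `vaccine_effectiveness` helper is our own naming; the figures are the pooled estimates this review reports in its Results), note that the CI bounds swap, because effectiveness falls as the DOR rises:

```python
# Vaccine effectiveness from a diagnostic odds ratio, per the stated
# formula VE = 100% x (1 - DOR). Helper name is illustrative, not from
# the paper; input values are the pooled estimates reported in the Results.

def vaccine_effectiveness(dor: float) -> float:
    """Return vaccine effectiveness (%) from a diagnostic odds ratio."""
    return 100.0 * (1.0 - dor)

# Pooled DOR and its 95% CI bounds for post-COVID-19 conditions
dor, ci_low, ci_high = 0.708, 0.692, 0.725

ve = vaccine_effectiveness(dor)          # point estimate
ve_high = vaccine_effectiveness(ci_low)  # upper VE bound (lower DOR)
ve_low = vaccine_effectiveness(ci_high)  # lower VE bound (higher DOR)

print(f"VE = {ve:.1f}% (95% CI, {ve_low:.1f}%-{ve_high:.1f}%)")
# prints: VE = 29.2% (95% CI, 27.5%-30.8%)
```

The printed estimate matches the 29.2% (95% CI, 27.5%–30.8%) effectiveness reported in the Results.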
Results:
In total, 10 studies with 1,600,830 individuals evaluated the effect of vaccination on post–COVID-19 conditions, of which 6 studies were included in the meta-analysis. The pooled DOR for post–COVID-19 conditions among individuals vaccinated with at least 1 dose was 0.708 (95% confidence interval (CI), 0.692–0.725) with an estimated vaccine effectiveness of 29.2% (95% CI, 27.5%–30.8%). The vaccine effectiveness was 35.3% (95% CI, 32.3%–38.1%) among those who received the COVID-19 vaccine before having COVID-19, and 27.4% (95% CI, 25.4%–29.3%) among those who received it after having COVID-19.
Conclusions:
COVID-19 vaccination both before and after having COVID-19 significantly decreased post–COVID-19 conditions for the circulating variants during the study period although vaccine effectiveness was low.
This chapter examines issues of language naming and language recognition practised by local Tibetans and scholars in the eastern Tibetosphere and discusses how and why Tibetans border their various speeches actively by naming them in various ways. It focuses on three cases: ‘Tibetic’, ‘logs-skad’, and ‘mixed language’ as separate instantiations of language recognition. Firstly, the term ‘Tibetic’ triggers controversy both amongst linguists and between linguists and the Tibetan community. Secondly, the use of the Tibetan term ‘logs-skad’ marks the recognition of unintelligible speeches to mainstream Tibetans. Thirdly, the label of ‘mixed language’ can be a crucial part of speakers’ identity. The chapter argues that linguists have a responsibility to balance their commitments to specificity with Tibetans’ practice of naming languages.
Keywords: naming, language recognition, language classification, Khams, Tibeto-Burman
Introduction
Language names in the eastern Tibetosphere have long been discussed by academics as well as Tibetans. Naming both makes and marks a boundary between a designated object and others; language names thus produce specificity and separation. In this chapter, I focus on a variety of language names and their places within systems of language classification in the eastern Tibetosphere. These names include those used by scholars and members of local communities: ‘Tibetic’, ‘logs skad ལོགས་སྐད་’, ‘Minyag’, ‘Lamo’, and ‘Selibu’. In discussing these names, I explore how these labels operate as means for Tibetans to engage in recognition and bordering; I also examine how scholars working in this area have considered language names and how they might respond to Tibetan ways of recognising and labelling languages.
Linguists generally recognise a language by giving it an independent name, that is, a glottonym. Glottonyms can be considered proper names in their meaning and function (see Zink, 1963; Katz, 1977; Van Langendonck, 2008). Numerous debates around glottonyms exist, because these labels are typically not uniformly determined, and several different names may be provided for a single language or language group, as seen in the entries of the widely used linguistic reference work, the Ethnologue (Eberhard et al., 2019).
Although chronic pain (CP) is classified as inflammatory or non-inflammatory, the involvement of fatty acid intake in this process has not yet been examined in detail. Therefore, the present study investigated whether the relationship between CP and fatty acid intake differs between high and low C-reactive protein (CRP) levels in middle-aged and elderly individuals in the Shika study. One thousand and seven males and 1216 females, with mean ages of 68⋅78 and 69⋅65 years, respectively, participated in the present study. CRP was quantified in blood samples from participants who responded to a CP questionnaire. The brief-type self-administered diet history questionnaire (BDHQ) was used to assess fatty acid intake. Interactions were observed between CP and CRP for monounsaturated fatty acid (MUFA) and eicosadienoic acid intake in a two-way analysis of covariance adjusted for sex, age, lack of exercise, lack of sleep, current smoking and drinking status, and BMI. MUFA (OR 1⋅359) and eicosadienoic acid (OR 1⋅072) were identified as significant independent variables for CP in a multiple logistic regression analysis, but only in the low CRP group. Only a high intake of MUFA and eicosadienoic acid was associated with chronic neck/shoulder/upper limb pain without elevated CRP. In psychogenic and neuropathic pain without elevated CRP, an increased intake of MUFA and of eicosadienoic acid, a member of the n-6 fatty acid family, appears to affect CP. Further longitudinal studies are needed to elucidate this relationship.
Although multiple studies revealed high vaccine effectiveness of coronavirus disease 2019 (COVID-19) vaccines within 3 months after completion of the vaccination series, long-term vaccine effectiveness has not been well established, especially since the δ (delta) variant became predominant. We performed a systematic literature review and meta-analysis of long-term vaccine effectiveness.
Methods:
We searched PubMed, CINAHL, EMBASE, Cochrane Central Register of Controlled Trials, Scopus, and Web of Science from December 2019 to November 15, 2021, for studies evaluating the long-term vaccine effectiveness against laboratory-confirmed COVID-19 or COVID-19 hospitalization among individuals who received 2 doses of Pfizer/BioNTech, Moderna, or AstraZeneca vaccines, or 1 dose of the Janssen vaccine. Long-term was defined as >5 months after the last dose. We calculated the pooled diagnostic odds ratio (DOR) with 95% confidence interval for COVID-19 between vaccinated and unvaccinated individuals. Vaccine effectiveness was estimated as 100% × (1 − DOR).
Results:
In total, 16 studies including 17,939,172 individuals evaluated long-term vaccine effectiveness and were included in the meta-analysis. The pooled DOR for COVID-19 was 0.158 (95% CI, 0.157–0.160), with an estimated vaccine effectiveness of 84.2% (95% CI, 84.0%–84.3%). Estimated vaccine effectiveness against COVID-19 hospitalization was 88.7% (95% CI, 55.8%–97.1%). Vaccine effectiveness against COVID-19 during the δ variant period was 61.2% (95% CI, 59.0%–63.3%).
Conclusions:
COVID-19 vaccines are effective in preventing COVID-19 and COVID-19 hospitalization across a long-term period for the variants circulating during the study period. More observational studies are needed to evaluate the vaccine effectiveness of a third dose of a COVID-19 vaccine, the effectiveness of mixing COVID-19 vaccines, COVID-19 breakthrough infection, and vaccine effectiveness against newly emerging variants.
We aimed to decrease the use of outpatient parenteral antimicrobial therapy (OPAT) for patients admitted for bone and joint infections (BJIs) by applying a consensus protocol to suggest oral antibiotics for BJI.
Design:
A quasi-experimental before-and-after study.
Setting:
Inpatient setting at a single medical center.
Patients:
All inpatients admitted with a BJI.
Methods:
We developed a consensus table of oral antibiotics for BJI among infectious diseases (ID) specialists. Using the consensus table, we implemented a protocol consisting of a weekly reminder e-mail and case-based discussion with the consulting ID physician. Outcomes of patients during the implementation period (November 1, 2020, to May 31, 2021) were compared with those during the preimplementation period (January 1, 2019, to October 31, 2020). Our primary outcome was the proportion of patients treated with OPAT. Secondary outcomes included length of hospital stay (LOS) and recurrence or death within 6 months.
Results:
In total, 77 patients during the preimplementation period and 22 patients during the implementation period were identified to have a BJI. During the preimplementation period, 70.1% of patients received OPAT, whereas only 31.8% of patients had OPAT during the implementation period (P = .003). The median LOS after final ID recommendation was significantly shorter during the implementation period (median 3 days versus 1 day; P < .001). We detected no significant difference in the 6-month rate of recurrence (24.7% vs 31.8%; P = .46) or mortality (9.1% vs 9.1%; P = 1.00).
Conclusions:
More patients admitted with BJIs were treated with oral antibiotics during the implementation phase of our quality improvement initiative.
Healthcare workers (HCWs) are at risk of COVID-19 due to high levels of SARS-CoV-2 exposure. Thus, effective vaccines are needed. We performed a systematic literature review and meta-analysis on COVID-19 short-term vaccine effectiveness among HCWs.
Methods:
We searched PubMed, CINAHL, EMBASE, Cochrane Central Register of Controlled Trials, Scopus, and Web of Science from December 2019 to June 11, 2021, for studies evaluating vaccine effectiveness against symptomatic COVID-19 among HCWs. To meta-analyze the extracted data, we calculated the pooled diagnostic odds ratio (DOR) for COVID-19 between vaccinated and unvaccinated HCWs. Vaccine effectiveness was estimated as 100% × (1 − DOR). We also performed a stratified analysis for vaccine effectiveness by vaccination status: 1 dose and 2 doses of the vaccine.
Results:
We included 13 studies, including 173,742 HCWs evaluated for vaccine effectiveness in the meta-analysis. The vast majority (99.9%) of HCWs were vaccinated with the Pfizer/BioNTech COVID-19 mRNA vaccine. The pooled DOR for symptomatic COVID-19 among vaccinated HCWs was 0.072 (95% confidence interval [CI], 0.028–0.184) with an estimated vaccine effectiveness of 92.8% (95% CI, 81.6%–97.2%). In stratified analyses, the estimated vaccine effectiveness against symptomatic COVID-19 among HCWs who had received 1 dose of vaccine was 82.1% (95% CI, 46.1%–94.1%) and the vaccine effectiveness among HCWs who had received 2 doses was 93.5% (95% CI, 82.5%–97.6%).
Conclusions:
The COVID-19 mRNA vaccines are highly effective against symptomatic COVID-19, even with 1 dose. More observational studies are needed to evaluate the vaccine effectiveness of other COVID-19 vaccines, COVID-19 breakthrough after vaccination, and vaccine efficacy against new variants.
Efforts to improve antimicrobial prescribing are occurring within a changing healthcare landscape, which includes the expanded use of telehealth technology. The wider adoption of telehealth presents both challenges and opportunities for promoting antimicrobial stewardship. Telehealth provides 2 avenues for remote infectious disease (ID) specialists to improve inpatient antimicrobial prescribing: telehealth-supported antimicrobial stewardship and tele-ID consultations. These 2 activities can work separately or synergistically. Studies on telehealth-supported antimicrobial stewardship have reported a reduction in inpatient antimicrobial prescribing, cost savings related to less antimicrobial use, a decrease in Clostridioides difficile infections, and improved antimicrobial susceptibility patterns for common organisms. Tele-ID consultation is associated with fewer hospital transfers, a shorter length of hospital stay, and decreased mortality. The implementation of these activities can be flexible depending on local needs and available resources, but several barriers may be encountered. Opportunities also exist to improve antimicrobial use in outpatient settings. Telehealth provides a more rapid mechanism for conducting outpatient ID consultations, and the increasing use of telehealth for routine and urgent outpatient visits presents new challenges for antimicrobial stewardship. In primary care, urgent care, and emergency care settings, unnecessary antimicrobial use for viral acute respiratory tract infections is common during telehealth encounters, as it is for face-to-face encounters. For some diagnoses, such as otitis media and pharyngitis, antimicrobials are even more frequently overprescribed via telehealth. Evidence is still lacking on the optimal stewardship strategies to improve antimicrobial prescribing during telehealth encounters in ambulatory care, but conventional outpatient stewardship strategies are likely transferable.
Further work is warranted to fill this knowledge gap.
To evaluate the frequency of antibiotic prescribing for common infections via telemedicine compared to face-to-face visits.
Design:
Systematic literature review and meta-analysis.
Methods:
We searched PubMed, CINAHL, Embase (Elsevier platform), and Cochrane CENTRAL to identify studies comparing the frequency of antibiotic prescribing via telemedicine and face-to-face visits, without restriction by publication date or language. We conducted meta-analyses of 5 infections: sinusitis, pharyngitis, otitis media, upper respiratory infection (URI), and urinary tract infection (UTI). Random-effects models were used to obtain pooled odds ratios (ORs). Heterogeneity was evaluated with the I2 estimate and the Cochran Q statistic.
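As a rough illustration of the heterogeneity metric named above, the Higgins I2 statistic can be computed from Cochran's Q. This is the standard formulation, not necessarily the exact computation used in this review, and the example values below are invented:

```python
# Higgins I^2: the percentage of variability across pooled effect
# estimates attributable to heterogeneity rather than chance.
# Standard formula: I^2 = max(0, (Q - df) / Q) x 100, with df = k - 1.
# Function name and example inputs are illustrative, not from the paper.

def i_squared(q: float, num_studies: int) -> float:
    """I^2 (%) from Cochran's Q and the number of pooled studies."""
    df = num_studies - 1
    if q <= df:
        return 0.0  # truncated at zero: no excess heterogeneity
    return 100.0 * (q - df) / q

print(f"I^2 = {i_squared(40.0, 5):.0f}%")  # Q = 40 across 5 studies
# prints: I^2 = 90%
```

Values near 0% (as for pharyngitis here) suggest little heterogeneity, while values around 90%–100% (as for sinusitis, URI, and UTI) indicate that the pooled estimate masks substantial between-study variability.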
Results:
Among 3,106 studies screened, 23 studies (1 randomized controlled study and 22 observational studies) were included in the systematic literature review. Most of the studies (21 of 23) were conducted in the United States. Studies were substantially heterogeneous, but stratified analyses revealed that providers prescribed antibiotics more frequently via telemedicine for otitis media (pooled odds ratio [OR], 1.26; 95% confidence interval [CI], 1.04–1.52; I2 = 31%) and pharyngitis (pooled OR, 1.16; 95% CI, 1.01–1.33; I2 = 0%). We detected no significant difference in the frequency of antibiotic prescribing for sinusitis (pooled OR, 0.86; 95% CI, 0.70–1.06; I2 = 91%), URI (pooled OR, 1.18; 95% CI, 0.59–2.39; I2 = 100%), or UTI (pooled OR, 2.57; 95% CI, 0.88–7.46; I2 = 91%).
Conclusions:
Telemedicine visits for otitis media and pharyngitis were associated with higher rates of antibiotic prescribing. The interpretation of these findings requires caution due to substantial heterogeneity among available studies. Large-scale, well-designed studies with comprehensive assessment of antibiotic prescribing for common outpatient infections comparing telemedicine and face-to-face visits are needed to validate our findings.
The association between fruit and vegetable consumption before and during pregnancy and offspring’s physical growth has been well reported, but no study has focused on offspring’s neurological development. We aimed to explore the association between maternal fruit and vegetable consumption before and during pregnancy and developmental delays in their offspring aged 2 years. Between July 2013 and March 2017, 23 406 women were recruited for the Tohoku Medical Megabank Project Birth and Three-Generation Cohort Study. Fruit and vegetable consumption was calculated using an FFQ, and offspring’s developmental delays were evaluated by the Ages & Stages Questionnaires, Third Edition (ASQ-3) for infants aged 2 years. Finally, 10 420 women and 10 543 infants were included in the analysis. In total, 14·9 % of children showed developmental delay when screened using the ASQ-3. Women in the highest quartile of vegetable consumption from pre-pregnancy to early pregnancy and from early to mid-pregnancy had lower odds of offspring’s developmental delays (OR 0·74; 95 % CI 0·63, 0·89 and OR 0·70; 95 % CI 0·59, 0·84, respectively) than women in the lowest quartile. Women in the highest quartile of fruit consumption from early to mid-pregnancy had lower odds of offspring’s developmental delays (OR 0·78; 95 % CI 0·66, 0·92) than women in the lowest quartile. In conclusion, high fruit and vegetable consumption before and during pregnancy was associated with a lower risk of developmental delays in offspring aged 2 years.
To develop a fully automated algorithm using data from the Veterans Affairs (VA) electronic medical record (EMR) to identify deep-incisional surgical site infections (SSIs) after cardiac surgeries and total joint arthroplasties (TJAs) for use in research studies.
Design:
Retrospective cohort study.
Setting:
This study was conducted in 11 VA hospitals.
Participants:
Patients who underwent coronary artery bypass grafting or valve replacement between January 1, 2010, and March 31, 2018 (cardiac cohort) and patients who underwent total hip arthroplasty or total knee arthroplasty between January 1, 2007, and March 31, 2018 (TJA cohort).
Methods:
Relevant clinical information and administrative code data were extracted from the EMR. The outcomes of interest were mediastinitis, endocarditis, or deep-incisional or organ-space SSI within 30 days after surgery. Multiple logistic regression analysis with a repeated regular bootstrap procedure was used to select variables and to assign points in the models. Sensitivities, specificities, positive predictive values (PPVs), and negative predictive values (NPVs) were calculated by comparison with outcomes collected by the Veterans Affairs Surgical Quality Improvement Program (VASQIP).
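The four accuracy metrics named above follow from a standard 2×2 comparison between the algorithm's flag and the VASQIP chart-review outcome. A minimal sketch, with invented counts and our own helper name (not the study's actual data or code):

```python
# Standard diagnostic accuracy metrics from a 2x2 confusion matrix:
# algorithm flag (positive/negative) vs. chart-review reference standard.
# Counts below are illustrative only; helper name is ours.

def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity, specificity, PPV, and NPV from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),  # true SSIs the algorithm flags
        "specificity": tn / (tn + fp),  # non-SSIs it correctly clears
        "ppv": tp / (tp + fp),          # flagged cases that are true SSIs
        "npv": tn / (tn + fn),          # cleared cases that are truly clean
    }

m = diagnostic_metrics(tp=30, fp=20, fn=19, tn=12931)
print({k: round(v, 3) for k, v in m.items()})
```

With rare outcomes like SSI (under 1% prevalence here), specificity and NPV are almost automatically high, which is why the Results emphasize PPV at a fixed sensitivity.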
Results:
Overall, 49 (0.5%) of the 13,341 cardiac surgeries were classified as mediastinitis or endocarditis, and 83 (0.6%) of the 12,992 TJAs were classified as deep-incisional or organ-space SSIs. With at least 60% sensitivity, the PPVs of the SSI detection algorithms after cardiac surgeries and TJAs were 52.5% and 62.0%, respectively.
Conclusions:
Considering the low prevalence rate of SSIs, our algorithms were successful in identifying a majority of patients with a true SSI while simultaneously reducing false-positive cases. As a next step, validation of these algorithms in different hospital systems with EMR will be needed.
Dietary intake modification is important for the treatment of chronic kidney disease (CKD); however, little is known about gender differences in the association between dietary intake of antioxidant vitamins and kidney function. We examined the relationship of dietary intake of antioxidant vitamins with decreased kidney function according to gender in Japanese subjects. This population-based, cross-sectional study included 936 Japanese participants aged 40 years or older. A validated brief self-administered diet history questionnaire was used to measure dietary intakes of vitamin E and its four isoforms, vitamin A, and vitamin C. Decreased kidney function was defined as an estimated glomerular filtration rate <60 ml/min/1·73 m2. A total of 498 (53·2 %) of the study participants were women. Mean age was 62·4 ± 11·3 years. Overall, 157 subjects met the criteria for decreased kidney function. In the fully adjusted model, a high vitamin E intake was inversely associated with decreased kidney function in women (odds ratio, 0·886; 95 % confidence interval, 0·786–0·998), whereas it was not in men (odds ratio, 0·931; 95 % confidence interval, 0·811–1·069). No significant association between dietary intake of vitamins A and C and decreased kidney function was observed in either women or men. Higher dietary intake of vitamin E was inversely associated with decreased kidney function in middle-aged and older women; this result may provide insight into more tailored dietary approaches to prevent CKD.
The aims of this research communication were to investigate the in vivo tissue accumulation of phytanic acid (PA) and any changes in tissue fatty acid profiles in mice. Previous in vitro studies have demonstrated that PA is a milk component with the potential to cause both beneficial effects on lipid and glucose metabolism and detrimental effects on neuronal cells. However, there is limited information about its in vivo actions. In this study, mice were fed diets containing either 0.00 or 0.05% 3RS,7R,11R-PA, the isomer found in milk and the human body. After 4 weeks, adipose tissue, liver, and brain were harvested and their fatty acid profiles were determined by gas chromatographic analysis. The results showed that PA and its metabolite pristanic acid accumulated in the adipose tissue of PA-fed mice, and that dietary PA decreased the hepatic proportions of several saturated fatty acids, such as palmitic acid, while increasing those of polyunsaturated fatty acids, including linoleic acid and docosahexaenoic acid. However, dietary PA neither accumulated in nor had a marked impact on the fatty acid profile of the brain. These results suggest that dietary PA could exert its biological activities in adipose tissue and liver, whereas the brain is relatively less affected. These data provide a basis for understanding the in vivo physiological actions of PA.
Background:
Studies of interventions to decrease rates of surgical site infections (SSIs) must include thousands of patients to be statistically powered to demonstrate a significant reduction. Therefore, it is important to develop methodology to extract data available in the electronic medical record (EMR) to accurately measure SSI rates. Prior studies have created tools that optimize sensitivity to prioritize chart review for infection control purposes. However, for research studies, positive predictive value (PPV) with reasonable sensitivity is preferred to limit the impact of false-positive results on the assessment of intervention effectiveness. Using information from the prior tools, we aimed to determine whether an algorithm using data available in the Veterans Affairs (VA) EMR could accurately and efficiently identify deep-incisional or organ-space SSIs found in the VA Surgical Quality Improvement Program (VASQIP) data set for cardiac and orthopedic surgery patients.
Methods:
We conducted a retrospective cohort study of patients who underwent cardiac surgery or total joint arthroplasty (TJA) at 11 VA hospitals between January 1, 2007, and April 30, 2017. We used EMR data recorded in the 30 days after surgery on inflammatory markers; microbiology; antibiotics prescribed after surgery; International Classification of Diseases (ICD) and Current Procedural Terminology (CPT) codes for reoperation for an infection-related purpose; and ICD codes for mediastinitis, prosthetic joint infection, and other SSIs. These metrics were used in an algorithm to determine whether a patient had a deep or organ-space SSI. Sensitivity, specificity, PPV, and negative predictive value (NPV) were calculated for the accuracy of the algorithm through comparison with 30-day SSI outcomes collected by nurse chart review in the VASQIP data set.
Results:
Among the 11 VA hospitals, there were 18,224 cardiac surgeries and 16,592 TJAs during the study period. Of these, 20,043 were evaluated by VASQIP nurses and were included in our final cohort. Of the 8,803 cardiac surgeries included, manual review identified 44 (0.50%) mediastinitis cases. Of the 11,240 TJAs, manual review identified 71 (0.63%) deep or organ-space SSIs. Our algorithm identified 32 of the mediastinitis cases (73%) and 58 of the deep or organ-space SSI cases (82%). Sensitivity, specificity, PPV, and NPV are shown in Table 1. Of the patients that our algorithm identified as having a deep or organ-space SSI, only 21% (PPV) actually had an SSI after cardiac surgery or TJA.
Conclusions:
Use of the algorithm can identify most complex SSIs (73%–82%), but other data are necessary to separate false-positive from true-positive cases and to improve the efficiency of case detection to support research questions.