To minimize loss of life, mass casualty response requires swift identification, efficient triage categorization, and rapid hemorrhage control. Current training methods remain suboptimal. Our objective was to train first responders to triage a mass casualty incident using Virtual Reality (VR) simulation and obtain their impressions of the training’s quality and effectiveness.
Methods
We trained subjects in SALT Triage and then had them respond to a terrorist bombing of a subway station in a fully immersive VR simulation. We gathered learner reactions to the VR experience and the post-encounter debriefing with a custom electronic survey.
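For context, the individual-assessment step of SALT Triage can be summarized as a short decision procedure. The Python sketch below is a simplified, illustrative rendering of that logic; the `Casualty` fields and category handling are assumptions for demonstration and are not taken from the simulator described above.

```python
from dataclasses import dataclass

@dataclass
class Casualty:
    breathing: bool               # after airway opening / lifesaving interventions
    obeys_commands: bool          # or makes purposeful movements
    peripheral_pulse: bool
    respiratory_distress: bool
    hemorrhage_controlled: bool
    minor_injuries_only: bool
    likely_to_survive: bool       # given currently available resources

def salt_category(c: Casualty) -> str:
    """Simplified SALT individual assessment; returns a triage category."""
    if not c.breathing:
        return "dead"
    stable = (c.obeys_commands and c.peripheral_pulse
              and not c.respiratory_distress and c.hemorrhage_controlled)
    if stable:
        return "minimal" if c.minor_injuries_only else "delayed"
    return "immediate" if c.likely_to_survive else "expectant"

# Example: breathing casualty with uncontrolled hemorrhage who is likely to survive
print(salt_category(Casualty(True, False, True, True, False, False, True)))  # "immediate"
```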
Results
Nearly 400 subjects experienced the VR encounter and completed evaluation surveys. Most participants (95%) recommended the experience for other first responders and rated the simulation (95%) and virtual patients (91%) as realistic. Ninety-four percent of participants rated the VR simulator as “excellent” or “good.” We observed no differences between those who owned a personal VR system and those who did not.
Conclusions
Our VR simulator (go.osu.edu/firstresponder) is an automated, customizable, fully immersive virtual reality system for training and assessing personnel in the proper response to a mass casualty incident. Participants perceived the encounter as effective for training, regardless of their prior experience with virtual reality.
There is a relative lack of research, targeted models and tools to manage beaches in estuaries and bays (BEBs). Many estuaries and bays have been highly modified and urbanised, for example through port developments and coastal revetments. This paper outlines the complications and opportunities for conserving and managing BEBs in modified estuaries. To do this, we focus on eight diverse case studies from North and South America, Asia, Europe, Africa and Australia, combined with the broader global literature. Our key findings are as follows: (1) BEBs are diverse and exist under a great variety of tide and wave conditions that differentiate them from open-coast beaches; (2) BEBs often lack statutory protection and many have already been sacrificed to development; (3) BEBs lack specific management tools and are often managed using tools developed for open-coast beaches; and (4) BEBs have the potential to become important in “nature-based” management solutions. We set the future research agenda for BEBs, which should include broadening research to cover a greater diversity of BEBs than in the past, standardising monitoring techniques, developing global databases using citizen science, and developing specific management tools for BEBs. We must recognise BEBs as unique coastal features and develop the fundamental knowledge and tools required to manage them effectively, so they can continue providing their unique ecosystem services.
Higher cardiovascular burden and peripheral inflammation are associated with small vessel vascular disease, a predominantly dysexecutive cognitive profile, and a higher likelihood of conversion to vascular dementia. The digital clock drawing test, a digitized version of a standard neuropsychological tool, is useful in identifying cognitive dysfunction related to vascular etiology. However, little is known about the specific cognitive implications of vascular risk, peripheral inflammation, and varying levels of overall brain integrity. The current study aimed to examine the effects of cardiovascular burden, peripheral inflammation, and brain integrity on digitally acquired clock drawing latency and graphomotor metrics in non-demented older adults.
Participants and Methods:
The final prospectively recruited IRB-consented participant sample included 184 non-demented older adults (age: 69±6 years, education: 16±3 years, 46% female, 94% white) who completed digital clock drawing, vascular assessment, blood draw, and brain MRI. Digital clock drawing variables of interest included total completion time (TCT), pre-first hand latency (PFHL), digit misplacement, hour hand distance from center, and clock face area (CFA). Cardiovascular burden was calculated using the revised version of the Framingham Stroke Risk Profile (FSRP-10). Peripheral inflammation was operationalized using interleukin (IL)-6, IL-8, IL-10, tumor necrosis factor alpha (TNF-a), and high sensitivity C-reactive protein (hsCRP). The brain integrity composite comprised bilateral entorhinal cortex volume, bilateral ventricular volume, and whole brain leukoaraiosis.
Results:
Over and above age and cognitive reserve, hierarchical regressions showed FSRP-10, inflammatory markers, and brain integrity explained an additional 13.3% of the variance in command TCT (p < 0.001), with FSRP-10 (p = 0.001), IL-10 (p = 0.019), and hsCRP (p = 0.019) as the main predictors in the model. FSRP-10, inflammatory markers, and brain integrity explained an additional 11.7% of the variance in command digit misplacement (p = 0.009), with findings largely driven by FSRP-10 (p < 0.001).
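The "additional variance explained" reported here is the change in R² between nested regression models. A minimal sketch of that comparison is shown below; the column names, data file, and use of ordinary least squares are illustrative assumptions, since the abstract does not specify the modeling software.

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("clock_data.csv")  # hypothetical file with one row per participant

base_cols = ["age", "cognitive_reserve"]
full_cols = base_cols + ["fsrp10", "il6", "il8", "il10", "tnf_a", "hscrp", "brain_integrity"]

y = df["command_tct"]  # total completion time, command condition
base = sm.OLS(y, sm.add_constant(df[base_cols])).fit()
full = sm.OLS(y, sm.add_constant(df[full_cols])).fit()

delta_r2 = full.rsquared - base.rsquared  # variance explained over and above age and reserve
print(f"ΔR² = {delta_r2:.3f}")
```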
Conclusions:
Overall, in non-demented older adults, subtle behavioral nuances seen in digital clock drawing metrics (i.e., total completion time and digit misplacement) are partly explained by cardiovascular burden, peripheral inflammation, and brain integrity over and above age and cognitive reserve. These nuanced behaviors on digitally acquired clock drawing may be associated with an emergent disease process or overall vulnerability.
Funding sources: Barber Fellowship; K07AG066813; R01 AG055337; R01 NR014810; American Psychological Foundation Dissertation Award; APA Dissertation Research Award
Research shows that highly educated individuals have at least 20 graphomotor features associated with clock drawing with hands set for '10 after 11' (Davoudi et al., 2021). Clock drawing features in individuals with fewer years of education, however, remain poorly understood. In the current study, we compared older adults with < 8 years of education to those with > 9 years of education on the number and pattern of graphomotor feature relationships in the clock drawing command condition.
Participants and Methods:
Participants age 65+ from the University of Florida (UF) and UF Health (N= 10,491) completed both command and copy conditions of the digital Clock Drawing Test (dCDT) as a part of a federally-funded investigation. Participants were categorized into two education groups: < 8 years of education (n= 304) and > 9 years of education (n= 10,187). Propensity score matching was then used to match participants from each subgroup (n= 266 for each subgroup) on the following demographic characteristics: age, sex, race, and ethnicity (n= 532, age= 74.99±6.21, education= 10.41±4.45, female= 42.7%, non-white= 32.0%). Network models were derived using Bayesian Structure Learning (BSL) with the hill-climbing algorithm to obtain optimal directed acyclic graphs (DAGs) from all possible solutions in each subgroup for the dCDT command condition.
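The abstract does not name the structure-learning software; the sketch below illustrates the same idea (hill-climbing search for a BIC-optimal DAG) using the pgmpy package with a hypothetical feature table.

```python
import pandas as pd
from pgmpy.estimators import HillClimbSearch, BicScore

# Hypothetical table of dCDT command-condition features, one row per participant.
# Note: BicScore assumes discrete variables; continuous graphomotor features would
# first need discretization (or a Gaussian score) in practice.
features = pd.read_csv("dcdt_command_features.csv")

search = HillClimbSearch(features)
dag = search.estimate(scoring_method=BicScore(features))  # greedy search over DAGs

print(sorted(dag.edges()))            # retained parent -> child edges
print(BicScore(features).score(dag))  # network BIC, analogous to the values reported below
```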
Results:
Both education groups retained 13 of 91 possible edges (14.29%). For the < 8 years of education group (education= 6.65±1.74, ASA= 3.08±0.35), the network included 3 clock face (CF), 7 digit, and 3 hour hand (HH) and minute hand (MH) independent, or “parent,” features connected to the retained edges (BIC= -7395.24). In contrast, the > 9 years of education group (education= 14.17±2.88, ASA= 2.90±0.46) network retained 1 CF, 6 digit, 5 HH and MH, and 1 additional parent feature representing the total number of pen strokes (BIC= -6689.92). In both groups, greater distance from the HH to the clock center was associated with greater distance from the MH to the clock center [ßz(< 8 years)= 0.73, ßz(> 9 years)= 0.76]. Groups were similar in the relationship of digit height to the distance of the digits from the CF [ßz(< 8 years)= 0.27, ßz(> 9 years)= 0.56]. Larger HH angle was associated with larger MH angle in both groups [ßz(< 8 years)= 0.28, ßz(> 9 years)= 0.23].
Conclusions:
Education groups differed in the ratio of dCDT parent feature types. Specifically, command clock production in older adults with < 8 years of education relied more heavily on CF parent features. In contrast, older adults with > 9 years of education relied more heavily on HH and MH parent features. Individuals with < 8 years of education may less frequently represent the concept of time in the clock drawing command condition. This study highlights the importance of considering education level in interpreting dCDT scores and features.
Recent research has found that machine learning-based analysis of patient speech can be used to classify Alzheimer’s Disease. We know of no studies, however, that systematically explore the value of pausing events in speech for detecting cognitive limitations. Using retrospectively acquired voice data from paragraph memory tests, we created two types of pause features: a) the number and duration of pauses, and b) frequency components in speech immediately following pausing. Multiple machine learning models were used to assess how effectively these features could discriminate individuals classified into two groups: Cognitively Compromised versus Cognitively Well.
Participants and Methods:
Participants (age > 65 years, n= 67) completed the Newcomer paragraph memory test and a neuropsychological protocol as part of a federally funded, prospective, IRB-approved investigation at the University of Florida. Participant vocal recordings were acquired for the immediate and delay conditions of the test. Speaker diarization was performed on the immediate free recall condition to separate the voices of patients from examiners. Features extracted from both test conditions included a) 3 pause characteristics (total number of pauses, total pause duration, and length of the longest pause), and b) 20 Mel Frequency Cepstral Coefficients (MFCC) pertaining to speech immediately (2.7 seconds) following pauses. These were combined with demographics (age, sex, race, education, and handedness) to create a total of 105 features used as inputs for multiple machine learning models (random forest, logistic regression, naive Bayes, AdaBoost, Gradient Boost, and multi-layered perceptron). External neuropsychological metrics were used to classify participants as Cognitively Compromised (i.e., < -1.0 standard deviation on > 2 of 5 test metrics: Hopkins Verbal Learning Test-Revised [HVLT-R] total immediate recall, delayed recall, and discrimination; Controlled Oral Word Association [COWA] test; and category fluency ['animals']). Pearson product-moment correlations were used to assess the linear relationships between pause/speech-frequency features and neuropsychological metrics.
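To make the feature pipeline concrete, a minimal sketch of pause detection and post-pause MFCC extraction is shown below, using librosa and scikit-learn as stand-ins. The silence threshold, window handling, and classifier settings are assumptions; the study's actual toolchain is not specified in the abstract.

```python
import librosa
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

def pause_and_mfcc_features(wav_path, top_db=30, post_pause_sec=2.7):
    y, sr = librosa.load(wav_path, sr=None)
    speech = librosa.effects.split(y, top_db=top_db)      # (start, end) samples of speech
    gaps = [(a_end, b_start) for (_, a_end), (b_start, _) in zip(speech[:-1], speech[1:])]
    pause_lens = [(e - s) / sr for s, e in gaps]
    pause_feats = [len(pause_lens), sum(pause_lens), max(pause_lens, default=0.0)]
    # Mean MFCCs over the speech window immediately following each pause
    mfccs = []
    for _, pause_end in gaps:
        seg = y[pause_end: pause_end + int(post_pause_sec * sr)]
        mfccs.append(librosa.feature.mfcc(y=seg, sr=sr, n_mfcc=20).mean(axis=1))
    mfcc_feats = np.mean(mfccs, axis=0) if mfccs else np.zeros(20)
    return np.concatenate([pause_feats, mfcc_feats])

# X: stacked feature vectors (plus demographics); y: 1 = Cognitively Compromised
# X, y = ...
# print(cross_val_score(AdaBoostClassifier(), X, y, scoring="roc_auc").mean())
```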
Results:
Neuropsychological metric classification using the -1 SD cut-off identified 27% (18/67) of participants as Cognitively Compromised. The Cognitively Compromised and Cognitively Well groups did not differ in the distributions of individual pause/frequency features (Mann-Whitney U test, p > 0.11). A negative correlation was found between total duration of short pauses and HVLT-R total immediate free recall, while a positive correlation was found between MFCC-10 and HVLT-R total immediate free recall. The best classification model was the AdaBoost classifier, which predicted the Cognitively Compromised label with an area under the receiver operating characteristic curve of 0.91, accuracy of 0.81, sensitivity of 0.43, specificity of 1.0, precision of 1.0, and F1 score of 0.6.
Conclusions:
Pause characteristics and frequency profiles of speech immediately following pauses from a paragraph memory test accurately identified older adults with compromised cognition, as measured by verbal learning and verbal fluency metrics. Furthermore, individuals with reduced HVLT-R immediate free recall generated more pauses, while individuals who recalled more words had higher power in mid-frequency bands (10th MFCC). Future research should replicate these findings and examine whether paragraph-recall pause characteristics and the frequency profile of speech immediately following pauses can provide a low-resource alternative to automatic speech recognition models for detecting cognitive impairment.
Research shows that highly educated individuals have at least 20 graphomotor features associated with clock drawing with hands set for '10 after 11' (Davoudi et al., 2021). Clock drawing features in individuals with fewer years of education, however, remain poorly understood. In the current study, we compared older adults with < 8 years of education to those with > 9 years of education on the number and pattern of graphomotor feature relationships in the clock drawing copy condition.
Participants and Methods:
Participants age 65+ from the University of Florida (UF) and UF Health (N= 10,491) completed command and copy digital Clock Drawing Tests (dCDT) as a part of a federally-funded investigation. Participants were categorized into two groups: < 8 years of education (n= 304) and > 9 years of education (n= 10,187). Propensity score matching was used to match participants from each subgroup (n= 266 for each subgroup) on the following: age, sex, race, and ethnicity (n= 532, age= 74.99±6.21, education= 10.41±4.45, female= 42.7%, non-white= 32.0%). Network models were derived using Bayesian Structure Learning (BSL) with the hill-climbing algorithm to obtain optimal directed acyclic graphs (DAGs) from all possible solutions in each subgroup for the dCDT copy condition.
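Propensity score matching of this kind typically models group membership from the demographic covariates and then pairs participants on the estimated scores. The sketch below is a rough illustration with invented column names; the study's actual matching software and caliper are not stated.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("participants.csv")          # hypothetical file
covars = pd.get_dummies(df[["age", "sex", "race", "ethnicity"]], drop_first=True)
treated = df["low_education"] == 1            # 1 = lower-education group

ps = LogisticRegression(max_iter=1000).fit(covars, treated).predict_proba(covars)[:, 1]

# 1:1 nearest-neighbor matching on the propensity score (with replacement, for simplicity)
nn = NearestNeighbors(n_neighbors=1).fit(ps[~treated].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched_controls = df.loc[~treated].iloc[idx.ravel()]
```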
Results:
The < 8 years of education group (education= 6.65±1.74, ASA= 3.08±0.35) retained 12 of 91 possible edges (13.19%, BIC= -7775.50). This network retained 2 clock face (CF), 5 digit, and 5 hour hand (HH) and minute hand (MH) independent, or “parent,” features connected to the retained edges. In contrast, the > 9 years of education group (education= 14.17±2.88, ASA= 2.90±0.46) network retained 15 of 91 possible edges (16.48%, BIC= -8261.48) and included 2 CF, 6 digit, 4 HH and MH, and an additional 3 total stroke parent features. In both groups, greater distance from the HH to the clock center was associated with greater distance from the MH to the clock center (ßz= 0.73 for both). Groups were similar in digit width relative to digit height [ßz(< 8 years)= 0.72, ßz(> 9 years)= 0.74]. Digit height was related to CF area [ßz(< 8 years)= 0.44, ßz(> 9 years)= 0.62], and CF area was related to the distance of the digits from the CF in both groups [ßz(< 8 years)= 0.39, ßz(> 9 years)= 0.46]. Greater distance from the MH to the clock center was associated with smaller MH angle [ßz(< 8 years)= -0.35, ßz(> 9 years)= -0.31], whereas greater digit misplacement was associated with larger MH angle in both groups [ßz(< 8 years)= 0.14, ßz(> 9 years)= 0.29].
Conclusions:
Education groups differed in the ratio of dCDT parent feature types. Specifically, copy clock production in older adults with < 8 years of education relied more evenly on CF, digit, and HH and MH parent features. In contrast, those with > 9 years of education additionally relied on total stroke parent features. Individuals with < 8 years of education may rely more heavily on visual referencing when copying a clock. This study highlights the importance of considering education level in interpreting dCDT scores and features.
The U.S. Department of Agriculture–Agricultural Research Service (USDA-ARS) has been a leader in weed science research covering topics ranging from the development and use of integrated weed management (IWM) tactics to basic mechanistic studies, including biotic resistance of desirable plant communities and herbicide resistance. ARS weed scientists have worked in agricultural and natural ecosystems, including agronomic and horticultural crops, pastures, forests, wild lands, aquatic habitats, wetlands, and riparian areas. Through strong partnerships with academia, state agencies, private industry, and numerous federal programs, ARS weed scientists have made contributions to discoveries in the newest fields of robotics and genetics, as well as the traditional and fundamental subjects of weed–crop competition and physiology and integration of weed control tactics and practices. Weed science at ARS is often overshadowed by other research topics; thus, few are aware of the long history of ARS weed science and its important contributions. This review is the result of a symposium held at the Weed Science Society of America’s 62nd Annual Meeting in 2022 that included 10 separate presentations in a virtual Weed Science Webinar Series. The overarching themes of management tactics (IWM, biological control, and automation), basic mechanisms (competition, invasive plant genetics, and herbicide resistance), and ecosystem impacts (invasive plant spread, climate change, conservation, and restoration) represent core ARS weed science research that is dynamic and efficacious and has been a significant component of the agency’s national and international efforts. This review highlights current studies and future directions that exemplify the science and collaborative relationships both within and outside ARS. Given the constraints of weeds and invasive plants on all aspects of food, feed, and fiber systems, there is an acknowledged need to face new challenges, including agriculture and natural resources sustainability, economic resilience and reliability, and societal health and well-being.
Testing of asymptomatic patients for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) (ie, “asymptomatic screening”) to attempt to reduce the risk of nosocomial transmission has been extensive and resource intensive, and such testing is of unclear benefit when added to other layers of infection prevention mitigation controls. In addition, the logistic challenges and costs related to screening program implementation, data noting the lack of substantial aerosol generation with elective controlled intubation, extubation, and other procedures, and the adverse patient and facility consequences of asymptomatic screening call into question the utility of this infection prevention intervention. Consequently, the Society for Healthcare Epidemiology of America (SHEA) recommends against routine universal use of asymptomatic screening for SARS-CoV-2 in healthcare facilities. Specifically, preprocedure asymptomatic screening is unlikely to provide incremental benefit in preventing SARS-CoV-2 transmission in the procedural and perioperative environment when other infection prevention strategies are in place, and it should not be considered a requirement for all patients. Admission screening may be beneficial during times of increased virus transmission in some settings where other layers of controls are limited (eg, behavioral health, congregate care, or shared patient rooms), but widespread routine use of admission asymptomatic screening is not recommended over strengthening other infection prevention controls. In this commentary, we outline the challenges surrounding the use of asymptomatic screening, including logistics and costs of implementing a screening program, and adverse patient and facility consequences. We review data pertaining to the lack of substantial aerosol generation during elective controlled intubation, extubation, and other procedures, and we provide guidance for when asymptomatic screening for SARS-CoV-2 may be considered in a limited scope.
People with severe mental illness (SMI) have more physical health conditions than the general population, resulting in higher rates of hospitalisations and mortality. In this study, we aimed to determine the rate of emergency and planned physical health hospitalisations in those with SMI, compared to matched comparators, and to investigate how these rates differ by SMI diagnosis.
Methods
We used Clinical Practice Research DataLink Gold and Aurum databases to identify 20,668 patients in England diagnosed with SMI between January 2000 and March 2016, with linked hospital records in Hospital Episode Statistics. Patients were matched with up to four patients without SMI. Primary outcomes were emergency and planned physical health admissions. Avoidable (ambulatory care sensitive) admissions and emergency admissions for accidents, injuries and substance misuse were secondary outcomes. We performed negative binomial regression, adjusted for clinical and demographic variables, stratified by SMI diagnosis.
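The adjusted incidence rate ratios (aIRRs) reported below are exponentiated coefficients from a negative binomial model. A minimal statsmodels sketch with invented column names (the real covariate set is broader) is:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("admissions.csv")   # hypothetical: one row per patient

X = sm.add_constant(pd.get_dummies(df[["smi", "age_band", "sex"]], drop_first=True))
model = sm.GLM(
    df["emergency_admissions"],            # count outcome
    X,
    family=sm.families.NegativeBinomial(),
    offset=np.log(df["years_followed"]),   # person-time at risk
).fit()

print(np.exp(model.params))          # adjusted incidence rate ratios (aIRR)
```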
Results
Emergency physical health (aIRR: 2.33; 95% CI 2.22–2.46) and avoidable (aIRR: 2.88; 95% CI 2.60–3.19) admissions were higher in patients with SMI than comparators. Emergency admission rates did not differ by SMI diagnosis. Planned physical health admissions were lower in schizophrenia (aIRR: 0.80; 95% CI 0.72–0.90) and higher in bipolar disorder (aIRR: 1.33; 95% CI 1.24–1.43). Accident, injury and substance misuse emergency admissions were particularly high in the year after SMI diagnosis (aIRR: 6.18; 95% CI 5.46–6.98).
Conclusion
We found twice the incidence of emergency physical health admissions in patients with SMI compared to those without SMI. Avoidable admissions were particularly elevated, suggesting interventions in community settings could reduce hospitalisations. Importantly, we found underutilisation of planned inpatient care in patients with schizophrenia. Interventions are required to ensure appropriate healthcare use, and optimal diagnosis and treatment of physical health conditions in people with SMI, to reduce the mortality gap due to physical illness.
Campeche, one of the Spanish Empire's main Mexican ports, was a place where previously distinct cultures and populations intermingled during the colonial era (AD 1540–1680). Investigation of the town's central plaza revealed a Hispanic cemetery of multi-ethnic burials. The authors combine previous analyses with newly generated genome-wide data from 10 individuals to trace detailed life histories of the mostly young, local Indigenous Americans and first-generation European and African immigrants, none of whom show evidence of genetic admixture. These results provide insights into the individual lives and social divides of the town's founder communities and demonstrate how ancient DNA analyses can contribute to understanding early colonial encounters.
The alpine–subalpine Loch Vale watershed (LVW) of Colorado, USA, has relatively high natural lithogenic P5+ fluxes to surface waters. For 1992–2018, the largest number of stream samples with P5+ concentrations ([P5+]) above detection limits occurred in 2008, corresponding with the highest frost-cracking intensity (FCI). Therefore, relatively cold winters and warm summers with a comparatively low mean annual temperature partly influence stream [P5+]. Sediment cores were collected from The Loch, an outlet lake of the LVW. Iron-, Al-, and Mn-oxide-bound phosphorus (adsorbed and authigenic phosphates; NP) serves as a proxy measurement for paleolake [P5+]. The highest NP in the core occurred during the cold and dry Allerød interstade. The lowest NP concentrations in the core occurred during climatically very wet periods in the Late Pleistocene and Early Holocene. Therefore, [P5+] are highest with relatively cold winters followed by relatively warm summers, relatively low mean annual temperatures, and relatively little rainfall and/or cryospheric melting. Currently the LVW is experiencing warming and melting of the permanent cryosphere with a rapidly declining FCI since 2008. This has the potential to dramatically decrease [P5+] in surface water ecosystems of the LVW, reducing biological productivity, enhancing P-limitation, and increasing ecosystem reliance on aeolian P5+.
To determine whether the DCTclock can detect differences across groups of patients seen in the memory clinic for suspected dementia.
Method:
Patients (n = 123) were classified into the following groups: cognitively normal (CN), subtle cognitive impairment (SbCI), amnestic cognitive impairment (aMCI), and mixed/dysexecutive cognitive impairment (mx/dysMCI). Nine outcome variables included a combined command/copy total score and four command and four copy indices measuring drawing efficiency, simple/complex motor operations, information processing speed, and spatial reasoning.
Results:
Total combined command/copy score distinguished between groups in all comparisons with medium to large effects. The mx/dysMCI group had the lowest total combined command/copy scores out of all groups. The mx/dysMCI group scored lower than the CN group on all command indices (p < .050, all analyses); and lower than the SbCI group on drawing efficiency (p = .011). The aMCI group scored lower than the CN group on spatial reasoning (p = .019). Smaller effect sizes were obtained for the four copy indices.
Conclusions:
These results suggest that DCTclock command/copy parameters can dissociate CN, SbCI, and MCI subtypes. The larger effect sizes for command clock indices suggest these metrics are sensitive in detecting early cognitive decline. Additional research with a larger sample is warranted.
To describe the epidemiology of patients with nonintestinal carbapenem-resistant Enterobacterales (CRE) colonization and to compare clinical outcomes of these patients to those with CRE infection.
Design:
A secondary analysis of Consortium on Resistance Against Carbapenems in Klebsiella and other Enterobacteriaceae 2 (CRACKLE-2), a prospective observational cohort.
Setting:
A total of 49 US short-term acute-care hospitals.
Patients:
Patients hospitalized with CRE isolated from clinical cultures, April 30, 2016, through August 31, 2017.
Methods:
We described characteristics of patients in CRACKLE-2 with nonintestinal CRE colonization and assessed the impact of site of colonization on clinical outcomes. We then compared outcomes of patients defined as having nonintestinal CRE colonization to all those defined as having infection. The primary outcome was a desirability of outcome ranking (DOOR) at 30 days. Secondary outcomes were 30-day mortality and 90-day readmission.
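A DOOR "probability of a better outcome" is the chance that a randomly selected patient from one group has a more desirable ranked outcome than a randomly selected patient from the other, with ties counted as half. A small illustrative calculation with made-up ranks:

```python
import numpy as np

def prob_better_outcome(ranks_a, ranks_b):
    """P(random patient from A has a better (lower) DOOR rank than one from B); ties count half."""
    a = np.asarray(ranks_a)[:, None]
    b = np.asarray(ranks_b)[None, :]
    return (a < b).mean() + 0.5 * (a == b).mean()

# Hypothetical DOOR ranks (1 = best, 4 = worst) for two culture-site groups
urinary = [1, 1, 2, 2, 3]
respiratory = [2, 3, 3, 4, 4]
print(prob_better_outcome(urinary, respiratory))  # 0.88 for these made-up ranks
```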
Results:
Of 547 patients with nonintestinal CRE colonization, 275 (50%) were from the urinary tract, 201 (37%) were from the respiratory tract, and 71 (13%) were from a wound. Patients with urinary tract colonization were more likely to have a more desirable clinical outcome at 30 days than those with respiratory tract colonization, with a DOOR probability of better outcome of 61% (95% confidence interval [CI], 53%–71%). When compared to 255 patients with CRE infection, patients with CRE colonization had a similar overall clinical outcome, as well as 30-day mortality and 90-day readmission rates when analyzed in aggregate or by culture site. Sensitivity analyses demonstrated similar results using different definitions of infection.
Conclusions:
Patients with nonintestinal CRE colonization had outcomes similar to those with CRE infection. Clinical outcomes may be influenced more by culture site than classification as “colonized” or “infected.”
The coronavirus disease 2019 (COVID-19) pandemic has resulted in shortages of personal protective equipment (PPE), underscoring the urgent need for simple, efficient, and inexpensive methods to decontaminate masks and respirators exposed to severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). We hypothesized that methylene blue (MB) photochemical treatment, which has various clinical applications, could decontaminate PPE contaminated with coronavirus.
Design:
The 2 arms of the study included (1) PPE inoculation with coronaviruses followed by MB with light (MBL) decontamination treatment and (2) PPE treatment with MBL for 5 cycles of decontamination to determine maintenance of PPE performance.
Methods:
MBL treatment was used to inactivate coronaviruses on 3 N95 filtering facepiece respirator (FFR) and 2 medical mask models. We inoculated FFR and medical mask materials with 3 coronaviruses, including SARS-CoV-2, and we treated them with 10 µM MB and exposed them to 50,000 lux of white light or 12,500 lux of red light for 30 minutes. In parallel, integrity was assessed after 5 cycles of decontamination using multiple US and international test methods, and the process was compared with the FDA-authorized vaporized hydrogen peroxide plus ozone (VHP+O3) decontamination method.
Results:
Overall, MBL robustly and consistently inactivated all 3 coronaviruses with 99.8% to >99.9% virus inactivation across all FFRs and medical masks tested. FFR and medical mask integrity was maintained after 5 cycles of MBL treatment, whereas 1 FFR model failed after 5 cycles of VHP+O3.
Conclusions:
MBL treatment decontaminated respirators and masks by inactivating 3 tested coronaviruses without compromising integrity through 5 cycles of decontamination. MBL decontamination is effective, is low cost, and does not require specialized equipment, making it applicable in low- to high-resource settings.
ABSTRACT IMPACT: The integration of patient-reported outcome measures into clinical care is feasible and can facilitate patient-centered care for individuals with systemic lupus erythematosus. OBJECTIVES/GOALS: Patient-reported outcome measures (PROMs) are powerful tools that can facilitate patient-centered care by highlighting individuals’ experience of illness. The aim of this study was to assess the feasibility and impact of implementing web-based PROMs in the routine clinical care of outpatients with systemic lupus erythematosus (SLE). METHODS/STUDY POPULATION: Outpatients with SLE were enrolled in this longitudinal cohort study at two academic medical centers. Participants completed PROMIS computerized adaptive tests assessing multiple quality of life domains at enrollment and prior to two consecutive routinely scheduled rheumatology visits using the ArthritisPower research registry mobile or web-based application. Score reports were shared with patients and providers before the visits. Patients and rheumatologists completed post-visit surveys evaluating the utility of PROMs in the clinical encounters. Proportions with confidence intervals were calculated to evaluate survey completion rates and responses. RESULTS/ANTICIPATED RESULTS: A total of 105 SLE patients and 16 rheumatologists participated in the study. Subjects completed PROMs in 159 of 184 eligible encounters (86%, 95% CI 81 - 91), including 90% of first visits (95% CI 82 - 95) and 82% of second visits (95% CI 72 - 90). Patients and rheumatologists found that PROMs were useful (91% and 83% of encounters, respectively) and improved communication (86% and 72%). Rheumatologists reported that PROMs impacted patient management in 51% of visits, primarily by guiding conversations (84%), but also by influencing medication changes (15%) and prompting referrals (10%). There was no statistically significant difference in visit length before (mean=19.5 min) and after (mean=20.4 min) implementation of PROMs (p=0.52). DISCUSSION/SIGNIFICANCE OF FINDINGS: The remote capture and integration of web-based PROMs into clinical care was feasible in a diverse cohort of SLE outpatients. PROMs were useful to SLE patients and rheumatologists and promoted patient-centered care by facilitating communication.
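The completion-rate confidence intervals quoted above are standard binomial proportion intervals. As a check, a Wilson interval on the reported 159/184 completions reproduces the 81%-91% range (whether the study used this exact method is an assumption):

```python
from statsmodels.stats.proportion import proportion_confint

completed, eligible = 159, 184
low, high = proportion_confint(completed, eligible, alpha=0.05, method="wilson")
print(f"{completed / eligible:.0%} (95% CI {low:.0%} - {high:.0%})")  # 86% (95% CI 81% - 91%)
```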
Background: Healthcare-associated infections (HAIs) represent an ongoing problem for all clinics. Children’s clinics have waiting rooms that include toys and activities to entertain children, possibly representing reservoirs for HAIs. This study focuses on a newly constructed children’s outpatient clinic associated with a teaching hospital. We studied waiting room bacterial colonization of floors and play devices from the last phase of construction through 6 months of clinical use. Methods: Waiting room areas on the first 2 floors of the facility were studied due to high patient volume in those areas. In total, 16 locations were sampled: 11 on floors and 5 on play items. Using sterile double-transport swabs, all locations were sampled on 5 separate occasions over 2 months during the last phase of construction and 13 times over 6 months after the clinic was opened. After collection, swabs were placed on ice, transported to a microbiology lab, and used to inoculate Hardy Diagnostics Cdiff Banana Broth (for Clostridium difficile - Cdiff), CHROM MRSA agar (for methicillin-resistant Staphylococcus aureus - MRSA), Pseudomonas isolation agar (for Pseudomonas spp and P. aeruginosa), and tryptic soy agar to detect Bacillus spp. Media were incubated for 48 hours at 37°C and were scored for bacterial presence based on observation of colonies or change in the medium. Results: During the construction phase, waiting-room-floor bacterial colonies were dominated by Bacillus spp, and first-floor waiting rooms had nearly 7 times more colonies than those on the second floor (P < .05). A similar pattern was observed for C. difficile and MRSA. No Pseudomonas spp were observed during construction. Once patients were present, Bacillus spp contamination dropped for the first floor but increased for the second floor. All other bacterial types (C. difficile, MRSA, Pseudomonas spp, and P. aeruginosa) increased on the second floor after the clinic opened (eg, from 23% to 42% for C. difficile and from 7% to 46% for MRSA; P < .05). The play devices showed small increases in bacterial load after clinic opening, most notably Pseudomonas spp. Conclusions: This study provides evidence that a shift from bacterial species associated with soil (eg, Bacillus spp) toward species commonly associated with humans occurred in waiting rooms after construction in this children’s outpatient clinic. Increases for MRSA, Pseudomonas spp, and P. aeruginosa were linked to patient presence. These data suggest that patients, their families, and clinic staff transport bacteria into clinic waiting rooms. This outpatient clinic environmental contamination may increase potential for HAIs and may represent a target for intervention.
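The before/after contamination comparisons above (eg, C. difficile positivity rising from 23% to 42% of samples) are 2x2 comparisons of the kind a chi-square test handles. The counts below are hypothetical, since the abstract reports only percentages:

```python
from scipy.stats import chi2_contingency

# Hypothetical positive/negative sample counts before vs. after the clinic opened
#                positive  negative
construction   = [18,       62]      # ~23% C. difficile positive
post_opening   = [87,      120]      # ~42% positive
chi2, p, dof, expected = chi2_contingency([construction, post_opening])
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```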