Accurate diagnosis of bipolar disorder (BPD) is difficult in clinical practice, with an average delay between symptom onset and diagnosis of about 7 years. A depressive episode often precedes the first manic episode, making it difficult to distinguish BPD from unipolar major depressive disorder (MDD).
Aims
We use genome-wide association analyses (GWAS) to identify differential genetic factors and to develop predictors based on polygenic risk scores (PRS) that may aid early differential diagnosis.
Method
Based on individual genotypes from case–control cohorts of BPD and MDD shared through the Psychiatric Genomics Consortium, we compile case–case–control cohorts, applying a careful quality control procedure. In a resulting cohort of 51 149 individuals (15 532 BPD patients, 12 920 MDD patients and 22 697 controls), we perform a variety of GWAS and PRS analyses.
Results
Although our GWAS is not well powered to identify genome-wide significant loci, we find significant chip heritability and demonstrate the ability of the resulting PRS to distinguish BPD from MDD, including BPD cases with depressive onset (BPD-D). We replicate our PRS findings in an independent Danish cohort (iPSYCH 2015, N = 25 966). We observe strong genetic correlation between our case–case GWAS and that of case–control BPD.
Conclusions
We find that MDD and BPD, including BPD-D, are genetically distinct. Our findings support the view that controls, MDD patients and BPD patients primarily lie on a continuum of genetic risk. Future studies with larger and richer samples will likely yield a better understanding of these findings and enable the development of better genetic predictors distinguishing BPD and, importantly, BPD-D from MDD.
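As a rough, hedged illustration of the kind of PRS-based discrimination analysis summarized above (not the authors' pipeline), the sketch below fits a logistic regression on a simulated, precomputed polygenic risk score and reports the case–case AUC; the data, effect size, and variable names are all invented.

```python
# Minimal sketch (not the study's code): how well a precomputed polygenic
# risk score (PRS) separates BPD from MDD cases. All values are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
is_bpd = rng.integers(0, 2, size=n)               # 1 = BPD case, 0 = MDD case
prs = rng.normal(loc=0.3 * is_bpd, scale=1.0, size=n)  # modest mean shift in BPD

model = LogisticRegression().fit(prs.reshape(-1, 1), is_bpd)
auc = roc_auc_score(is_bpd, model.predict_proba(prs.reshape(-1, 1))[:, 1])
print(f"Case-case discrimination AUC: {auc:.2f}")
```

In practice the PRS itself would be derived from GWAS summary statistics with dedicated tools (e.g., PRSice or LDpred), and discrimination would be checked in an independent replication cohort, as the authors do with iPSYCH 2015.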
The association between cannabis and psychosis is established, but the role of underlying genetics is unclear. We used data from the EU-GEI case–control study and the UK Biobank to examine the independent and combined effects of heavy cannabis use and schizophrenia polygenic risk score (PRS) on risk for psychosis.
Methods
Genome-wide association study summary statistics from the Psychiatric Genomics Consortium and the Genomic Psychiatry Cohort were used to calculate schizophrenia and cannabis use disorder (CUD) PRS for 1,098 participants from the EU-GEI study and 143,600 from the UK Biobank. Both datasets had information on cannabis use.
Results
In both samples, schizophrenia PRS and cannabis use independently increased risk of psychosis. Schizophrenia PRS was not associated with patterns of cannabis use in the EU-GEI cases or controls or UK Biobank cases. It was associated with lifetime and daily cannabis use among UK Biobank participants without psychosis, but the effect was substantially reduced when CUD PRS was included in the model. In the EU-GEI sample, regular users of high-potency cannabis had the highest odds of being a case independently of schizophrenia PRS (OR for daily use of high-potency cannabis adjusted for PRS = 5.09, 95% CI 3.08–8.43, p = 3.21 × 10⁻¹⁰). We found no evidence of interaction between schizophrenia PRS and patterns of cannabis use.
Conclusions
Regular use of high-potency cannabis remains a strong predictor of psychotic disorder independently of schizophrenia PRS, which does not seem to be associated with heavy cannabis use. These are important findings at a time of increasing use and potency of cannabis worldwide.
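To make the "no evidence of interaction" analysis concrete, here is a minimal, illustrative sketch (simulated data, not EU-GEI or UK Biobank) of a logistic model with a PRS-by-cannabis-use interaction term; the variable names prs, daily_use, and case are assumptions for the example.

```python
# Illustrative sketch only: logistic regression with a PRS x cannabis-use
# interaction term. The simulated data have additive effects and no
# built-in interaction, so the interaction coefficient should be ~0.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5000
prs = rng.normal(size=n)
daily_use = rng.integers(0, 2, size=n)
logit_p = -2.0 + 0.4 * prs + 1.2 * daily_use      # independent effects only
case = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))
df = pd.DataFrame({"case": case, "prs": prs, "daily_use": daily_use})

fit = smf.logit("case ~ prs * daily_use", data=df).fit(disp=False)
print(fit.params)
print(f"interaction p-value: {fit.pvalues['prs:daily_use']:.3f}")
```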
Social camouflaging (SC) is a set of behaviors used by autistic people to assimilate with their social environment. Using SC behaviors may put autistic people at risk for poor mental health outcomes. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, the goal of this systematic review was to investigate the development of SC and inform theory in this area by outlining the predictors, phenotype, and consequences of SC. This review fills a gap in the existing literature by integrating quantitative and qualitative methodologies, including all gender identities and age groups of autistic individuals, incorporating a large scope of factors associated with SC, and expanding on theory and implications. Papers were sourced using Medline, PsycInfo, and ERIC. Results indicate that self-protection and a desire for social connection motivate SC. Camouflaging behaviors include compensation, masking, and assimilation. Female individuals were found to be more likely to engage in SC. Additionally, this review yielded novel insights, including contextual factors of SC; interpersonal, relational, and identity-related consequences of SC; and possible bidirectional associations between SC and mental health, cognition, and age of diagnosis. Autistic youth and adults have similar SC motivations and outward expressions of SC behavior, and they experience similar consequences post-camouflaging. Further empirical exploration is needed to investigate the directionality between predictors and consequences of SC, and possible mitigating factors such as social stigma and gender identity.
Diagnostic criteria for major depressive disorder allow for heterogeneous symptom profiles but genetic analysis of major depressive symptoms has the potential to identify clinical and etiological subtypes. There are several challenges to integrating symptom data from genetically informative cohorts, such as sample size differences between clinical and community cohorts and various patterns of missing data.
Methods
We conducted genome-wide association studies of major depressive symptoms in three cohorts that were enriched for participants with a diagnosis of depression (Psychiatric Genomics Consortium, Australian Genetics of Depression Study, Generation Scotland) and three community cohorts that were not recruited on the basis of diagnosis (Avon Longitudinal Study of Parents and Children, Estonian Biobank, and UK Biobank). We fit a series of confirmatory factor models with factors that accounted for how symptom data were sampled and then compared alternative models with different symptom factors.
Results
The best fitting model had a distinct factor for Appetite/Weight symptoms and an additional measurement factor that accounted for the skip-structure in community cohorts (use of Depression and Anhedonia as gating symptoms).
Conclusion
The results show the importance of assessing the directionality of symptoms (such as hypersomnia versus insomnia) and of accounting for study and measurement design when meta-analyzing genetic association data.
Background: Frequent use of delayed sternal closure and prolonged stays in critical care units contribute to surgical site infections (SSIs) among pediatric patients undergoing cardiothoracic (CT) procedures. Bundled interventions to prevent or reduce SSIs have shown prior success, but limited data exist on the sustainability of these efforts, especially during the Coronavirus Disease 2019 (COVID-19) pandemic. Here, we re-examine SSI rates for pediatric CT procedures after the onset of the pandemic. Methods: In a single academic center providing regional quaternary care, we created a multidisciplinary CT-surgery SSI prevention workgroup in response to rising CT SSI rates. Bundle elements focused on daily chlorhexidine bathing, environmental cleaning, monthly room changes, linen management, antimicrobial prophylaxis, and sterile techniques for bedside and operating room procedures. CDC surveillance definitions were used to identify superficial, deep, or organ-space SSIs. To assess the bundle’s sustainability, we compared SSI rates during years impacted by the COVID-19 pandemic (2021–2023, period 2) to pre-pandemic rates (2017–2019, period 1). Data from 2020 were excluded to account for bundle implementation, pandemic restrictions, and a minor decrease in surgical volumes. Rates were calculated as SSI cases per 100 procedures. Mean rates across both periods were compared using paired t-tests (Stata/SE version 14.2). Results: Excluding the year 2020, the average SSI rate per 100 CT procedures increased from 1.07 in period 1 to 1.56 in period 2 (p = 0.55). Concurrently, the average SSI rate per 100 CT procedures with delayed closures increased from 1.49 in period 1 to 1.97 in period 2 (p = 0.67). Figure 1 shows SSI rates and procedure counts for 2017–2023. Coagulase-negative staphylococci most frequently caused SSIs in period 1, while methicillin-susceptible Staphylococcus aureus (MSSA) was most frequently identified in period 2. During period 2, estimated compliance with the SSI prevention bundle remained stable and reached 95% for pre-operative chlorhexidine baths and use of appropriate antimicrobial prophylaxis. Monthly room changes with dedicated environmental cleaning reached 100% compliance. Conclusion: Despite staffing shortages and resource limitations (e.g., discontinuation of contact isolation for MRSA colonization) during the COVID-19 pandemic, SSI rates for pediatric CT surgeries showed a slight but not statistically significant increase in post-pandemic years as compared to pre-pandemic years. Implementation of bundled interventions and improved surveillance methods may have sustainably impacted these SSI rates. Reinforcing bundle adherence as well as identifying additional prevention interventions to incorporate in pre-, intra-, and post-operative periods may improve patient outcomes.
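As a hedged illustration of the period comparison described above, the snippet below runs a paired t-test on yearly SSI rates per 100 procedures; the yearly values are placeholders, since only period means were reported.

```python
# Illustrative sketch: paired t-test comparing yearly SSI rates per 100
# procedures between period 1 (2017-2019) and period 2 (2021-2023).
# The yearly rates below are hypothetical, not the study's data.
from scipy.stats import ttest_rel

period_1 = [1.0, 1.1, 1.1]   # hypothetical yearly rates, pre-pandemic
period_2 = [1.4, 1.6, 1.7]   # hypothetical yearly rates, pandemic-era

stat, p_value = ttest_rel(period_1, period_2)
print(f"paired t = {stat:.2f}, p = {p_value:.2f}")
```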
Development of gastrointestinal illness after animal contact at petting farms is well described, as are factors such as handwashing and facility design that may modify transmission risk. However, further field evidence on other behaviours and interventions in the context of Cryptosporidium outbreaks linked to animal contact events is needed. Here, we describe a large outbreak of Cryptosporidium parvum (C. parvum) associated with a multi-day lamb petting event in the south-west of England in 2023 and present findings from a cohort study undertaken to investigate factors associated with illness. Detailed exposure questionnaires were distributed to the email addresses associated with 647 single- or multiple-ticket bookings, and 157 complete responses were received. The outbreak investigation identified 23 laboratory-confirmed primary C. parvum cases. Separately, the cohort study identified 83 cases of cryptosporidiosis-like illness. Associations were observed between illness and entering a lamb petting pen (compared with observing from outside the pen; odds ratio (OR) = 2.28, 95 per cent confidence interval (95% CI) 1.17 to 4.53) and self-reported awareness of diarrhoeal and vomiting disease transmission risk on farm sites at the time of visit (OR = 0.40, 95% CI 0.19 to 0.84). In a multivariable model adjusted for household clustering, awareness of disease transmission risk remained a significant protective factor (adjusted OR (aOR) = 0.07, 95% CI 0.01 to 0.78). The study demonstrates the likely under-ascertainment of cryptosporidiosis through laboratory surveillance and provides evidence of the impact that public health messaging could have.
This study identified 26 late invasive primary surgical site infections (IP-SSIs) within 4–12 months of transplantation among 2073 solid organ transplant (SOT) recipients at Duke University Hospital over the period 2015–2019. Thoracic organ transplants accounted for 25 of the late IP-SSIs. Surveillance for late IP-SSI should be maintained for at least one year following transplant.
Objective:
To evaluate the comparative epidemiology of hospital-onset bloodstream infection (HOBSI) and central line-associated bloodstream infection (CLABSI).
Design and Setting:
Retrospective observational study of HOBSI and CLABSI across a three-hospital healthcare system from 01/01/2017 to 12/31/2021
Methods:
HOBSIs were identified as any non-commensal positive blood culture event on or after hospital day 3. CLABSIs were identified based on National Healthcare Safety Network (NHSN) criteria. We performed a time-series analysis to assess comparative temporal trends in HOBSI and CLABSI incidence. Using univariable and multivariable regression analyses, we compared demographics, risk factors, and outcomes between non-CLABSI HOBSI and CLABSI, as HOBSI and CLABSI are not mutually exclusive entities.
Results:
HOBSI incidence increased over the study period (IRR 1.006 HOBSI/1,000 patient days; 95% CI 1.001–1.012; P = .03), while no change in CLABSI incidence was observed (IRR .997 CLABSIs/1,000 central line days, 95% CI .992–1.002, P = .22). Differing demographic, microbiologic, and risk factor profiles were observed between CLABSIs and non-CLABSI HOBSIs. Multivariable analysis found lower odds of mortality among patients with CLABSIs when adjusted for covariates that approximate severity of illness (OR .27; 95% CI .11–.64; P < .01).
Conclusions:
HOBSI incidence increased over the study period without a concurrent increase in CLABSI in our study population. Furthermore, risk factor and outcome profiles varied between CLABSI and non-CLABSI HOBSI, which suggests that these metrics differ in important ways worth considering if HOBSI is adopted as a quality metric.
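The incidence-rate ratios reported above can be illustrated with a Poisson regression of monthly event counts on calendar time with a patient-day offset; this is a generic sketch on simulated counts, not the study's analysis code.

```python
# Rough sketch: estimate an incidence-rate trend (IRR per month) by modeling
# monthly HOBSI counts with Poisson regression and log(patient-days) offset.
# Counts and denominators are simulated assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
months = np.arange(60)                        # one row per month
patient_days = rng.integers(9000, 11000, 60)
true_rate = 0.0008 * np.exp(0.006 * months)   # slowly rising rate per patient-day
counts = rng.poisson(true_rate * patient_days)

X = sm.add_constant(months)
model = sm.GLM(counts, X, family=sm.families.Poisson(),
               offset=np.log(patient_days)).fit()
print(f"IRR per month: {np.exp(model.params[1]):.3f}")
```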
The origins and timing of inpatient room sink contamination with carbapenem-resistant organisms (CROs) are poorly understood.
Methods:
We performed a prospective observational study to describe the timing, rate, and frequency of CRO contamination of in-room handwashing sinks in 2 intensive care units (ICUs) in a newly constructed hospital bed tower. Study units A and B were opened to patient care in succession. The patients in unit A were moved to a new unit in the same bed tower, unit B. Each unit was similarly designed with 26 rooms and in-room sinks. Microbiological samples were taken every 4 weeks from 3 locations on each study sink: the top of the bowl, the drain cover, and the p-trap. The primary outcome was sink conversion events (SCEs), defined as CRO contamination of a sink in which CRO had not previously been detected.
Results:
Sink samples were obtained 22 times from September 2020 to June 2022, giving 1,638 total environmental cultures. In total, 2,814 patients were admitted to study units while sink sampling occurred. We observed 35 SCEs (73%) overall; 9 sinks (41%) in unit A became contaminated with CRO by month 10, and all 26 sinks became contaminated in unit B by month 7. Overall, 299 CRO isolates were recovered; the most common species were Enterobacter cloacae and Pseudomonas aeruginosa.
Conclusion:
CRO contamination of sinks in 2 newly constructed ICUs was rapid and cumulative. Our findings support in-room sinks as reservoirs of CRO and emphasize the need for prevention strategies to mitigate contamination of hands and surfaces from CRO-colonized sinks.
Various water-based heater-cooler devices (HCDs) have been implicated in outbreaks of nontuberculous mycobacteria (NTM). Ongoing rigorous surveillance for healthcare-associated M. abscessus (HA-Mab), put in place following a prior institutional outbreak of M. abscessus, alerted investigators to a cluster of 3 extrapulmonary M. abscessus infections among patients who had undergone cardiothoracic surgery.
Methods:
Investigators convened a multidisciplinary team and launched a comprehensive investigation to identify potential sources of M. abscessus in the healthcare setting. Adherence to tap water avoidance protocols during patient care and HCD cleaning, disinfection, and maintenance practices were reviewed. Relevant environmental samples were obtained. Patient and environmental M. abscessus isolates were compared using multilocus-sequence typing and pulsed-field gel electrophoresis. Smoke testing was performed to evaluate the potential for aerosol generation and dispersion during HCD use. The entire HCD fleet was replaced to mitigate continued transmission.
Results:
Clinical presentations of case patients and epidemiologic data supported intraoperative acquisition. M. abscessus was isolated from HCDs used on patients, and molecular comparison with patient isolates demonstrated clonality. Smoke testing simulated aerosolization of M. abscessus from HCDs during device operation. Since the HCD fleet was replaced, no additional extrapulmonary HA-Mab infections due to the unique clone identified in this cluster have been detected.
Conclusions:
Despite cleaning and disinfection practices that went beyond manufacturer instructions for use, HCDs became colonized with and ultimately transmitted M. abscessus to 3 patients. Design modifications to better contain aerosols or filter exhaust during device operation are needed to prevent NTM transmission events from water-based HCDs.
Children with neurodevelopmental disorders (NDDs) commonly experience attentional and executive function (EF) difficulties that are negatively associated with academic success, psychosocial functioning, and quality of life. Access to early and consistent interventions is a critical protective factor and there are recommendations to deliver cognitive interventions in schools; however, current cognitive interventions are expensive and/or inaccessible, particularly for those with limited resources and/or in remote communities. The current study evaluated the school-based implementation of two game-based interventions in children with NDDs: 1) a novel neurocognitive attention/EF intervention (Dino Island; DI), and 2) a commercial educational intervention (Adventure Academy; AA). DI is a game-based attention/EF intervention specifically developed for children for delivery in community-based settings.
Participants and Methods:
Thirty-five children with NDDs (ages 5-13 years) and 17 educational assistants (EAs) participated. EAs completed online training to deliver the interventions to assigned students at their respective schools (3x/week, 40-60 minutes/session, 8 weeks, 14 hours in total). We gathered baseline child and EA demographic data, completed pre-intervention EA interviews, and conducted regular fidelity checks throughout the interventions. Implementation data included paper-pencil tracking forms, computerized game analytic data, and online communications.
Results:
Using a mixed-methods approach, we evaluated the following implementation outcomes: fidelity, feasibility, acceptability, and barriers. Overall, no meaningful between-group differences were found in EA or child demographics, except for total number of years worked as an EA (M = 17.18 for AA and 9.15 for DI; t(22) = -4.34, p < .01) and EA gender (χ²(1) = 6.11, p < .05). For both groups, EA age was significantly associated with the number of sessions played [DI (r = .847, p < .01), AA (r = .986, p < .05)]. EAs who knew their student better completed longer sessions [DI (r = .646), AA (r = .973)], all ps < .05. The number of years worked as an EA was negatively associated with total intervention hours for both groups. Qualitative interview data indicated that most EAs found DI valuable and feasible to deliver in their classrooms, whereas more implementation challenges were identified with AA. Barriers common to both groups included technical difficulties (e.g., game access, internet firewalls), environmental barriers (e.g., distractions in surroundings, time of the year), child factors (e.g., lack of motivation, attentional difficulties, frustration), and game-specific factors (e.g., difficulty level progression). Barriers specific to DI included greater challenges in motivating children as a function of difficulty level progression. Furthermore, given the comprehensive nature of the training required for delivery, EAs needed a longer time to complete the training for DI. Nevertheless, many EAs in the DI group found the training helpful, with a potential to generalize to other children in the classroom.
Conclusions:
The availability of affordable, accessible, and effective cognitive intervention is important for children with NDDs. We found that delivery of a novel cognitive intervention by EAs was feasible and acceptable, with similarities and differences in implementation facilitators/barriers between the cognitive and commercialized academic intervention. Recommendations regarding strategies for successful school-based implementation of neurocognitive intervention will be elaborated on in the poster.
Executive functions (EFs) are considered to be both unitary and diverse functions with common conceptualizations consisting of inhibitory control, working memory, and cognitive flexibility. Current research indicates that these abilities develop along different timelines and that working memory and inhibitory control may be foundational for cognitive flexibility, or the ability to shift attention between tasks or operations. Very few interventions target cognitive flexibility despite its importance for academic or occupational tasks, social skills, problem-solving, and goal-directed behavior in general, and the ability is commonly impaired in individuals with neurodevelopmental disorders (NDDs) such as autism spectrum disorder, attention deficit hyperactivity disorder, and learning disorders. The current study investigated a tablet-based cognitive flexibility intervention, Dino Island (DI), that combines a game-based, process-specific intervention with compensatory metacognitive strategies as delivered by classroom aides within a school setting.
Participants and Methods:
Twenty children aged 6-12 years (x̄ = 10.83 years) with NDDs and identified executive function deficits, and their assigned classroom aides (i.e., “interventionists”), were randomly assigned to either DI or an educational game control condition. Interventionists completed a 2-4 hour online training course and a brief, remote Q&A session with the research team, which provided key information for delivering the intervention such as game-play and metacognitive/behavioral strategy instruction. Fidelity checks were conducted weekly. Interventionists were instructed to deliver 14-16 hours of intervention during the school day over 6-8 weeks, divided into 3-4 weekly sessions of 30-60 minutes each. Baseline and post-intervention assessments consisted of cognitive measures of cognitive flexibility (Minnesota Executive Function Scale) and working memory (Wechsler Intelligence Scales for Children, 4th Edn., Integrated Spatial Span) and a parent-completed EF rating scale (Behavior Rating Inventory of Executive Function).
Results:
Sample sizes were smaller than expected due to COVID-19-related disruptions within schools, so nonparametric analyses were conducted to explore trends in the data. Results of the Mann-Whitney U test indicated that participants in the DI condition made greater gains in cognitive flexibility, with a trend towards significance (p = 0.115). After dummy coding for positive change, results also indicated that gains in spatial working memory differed by condition (p = 0.127). Similarly, gains in task monitoring trended towards a significant difference by condition.
Conclusions:
DI, a novel EF intervention, may be beneficial for cognitive flexibility, working memory, and monitoring skills in youth with EF deficits. Though there were many absences and upheavals within the participating schools related to COVID-19, it is promising to see differences in outcomes with such a small sample. This poster will expand upon the current results as well as future directions for the DI intervention.
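For readers unfamiliar with the nonparametric comparison used here, the following minimal sketch applies a Mann-Whitney U test to invented change scores for the two conditions; it only illustrates the test, not the study data.

```python
# Minimal sketch: Mann-Whitney U test comparing change scores between the
# DI and control conditions. The gain scores below are invented.
from scipy.stats import mannwhitneyu

di_gains = [8, 5, 12, 3, 7, 10, 4, 6, 9, 2]       # hypothetical change scores
control_gains = [1, 4, -2, 3, 0, 5, 2, -1, 3, 1]

u_stat, p_value = mannwhitneyu(di_gains, control_gains, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")
```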
Objective:
We compared the number of blood-culture events before and after the introduction of a blood-culture algorithm and provider feedback. Secondary objectives were to compare blood-culture positivity and negative safety signals before and after the intervention.
Design:
Prospective cohort design.
Setting:
Two surgical intensive care units (ICUs): general and trauma surgery and cardiothoracic surgery
Patients:
Patients aged ≥18 years and admitted to the ICU at the time of the blood-culture event.
Methods:
We used an interrupted time series to compare rates of blood-culture events (ie, blood-culture events per 1,000 patient days) before and after the algorithm implementation with weekly provider feedback.
Results:
The blood-culture event rate decreased from 100 to 55 blood-culture events per 1,000 patient days in the general surgery and trauma ICU (72% reduction; incidence rate ratio [IRR], 0.38; 95% confidence interval [CI], 0.32–0.46; P < .01) and from 102 to 77 blood-culture events per 1,000 patient days in the cardiothoracic surgery ICU (55% reduction; IRR, 0.45; 95% CI, 0.39–0.52; P < .01). We did not observe any differences in average monthly antibiotic days of therapy, mortality, or readmissions between the pre- and postintervention periods.
Conclusions:
We implemented a blood-culture algorithm with data feedback in 2 surgical ICUs, and we observed significant decreases in the rates of blood-culture events without an increase in negative safety signals, including ICU length of stay, mortality, antibiotic use, or readmissions.
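A hedged sketch of an interrupted time series of this kind is shown below: weekly blood-culture events modeled with a Poisson regression that includes a post-implementation indicator and a patient-day offset; all counts are simulated, not the study's data.

```python
# Illustrative interrupted time series: Poisson regression of weekly
# blood-culture events with a time trend, a post-intervention level change,
# and log(patient-days) as an offset. All values are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
weeks = np.arange(104)
post = (weeks >= 52).astype(int)             # algorithm implemented at week 52
patient_days = rng.integers(600, 800, size=104)
rate = 0.10 * np.exp(-0.9 * post)            # built-in drop after implementation
counts = rng.poisson(rate * patient_days)

X = sm.add_constant(np.column_stack([weeks, post]))
fit = sm.GLM(counts, X, family=sm.families.Poisson(),
             offset=np.log(patient_days)).fit()
print(f"IRR for post-intervention level change: {np.exp(fit.params[2]):.2f}")
```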
Background: Blood cultures are commonly ordered for patients with low risk of bacteremia. Liberal blood-culture ordering increases the risk of false-positive results, which can lead to increased length of stay, excess antibiotics, and unnecessary diagnostic procedures. We implemented a blood-culture indication algorithm with data feedback and assessed the impact on ordering volume and percent positivity. Methods: We performed a prospective cohort study from February 2022 to November 2022 using historical controls from February 2020 to January 2022. We introduced the blood-culture algorithm (Fig. 1) in 2 adult surgical intensive care units (ICUs). Clinicians reviewed charts of eligible patients with blood cultures weekly to determine whether the blood-culture algorithm was followed, and they provided feedback to the unit medical directors weekly. We defined a blood-culture event as ≥1 blood culture within 24 hours. We excluded patients aged <18 years, patients with an absolute neutrophil count <500, and heart and lung transplant recipients at the time of blood-culture review. Results: In total, 7,315 blood-culture events in the preintervention group and 2,506 blood-culture events in the postintervention group met eligibility criteria. The average monthly blood-culture rate decreased from 190 blood cultures per 1,000 patient days to 142 blood cultures per 1,000 patient days (P < .01) after the algorithm was implemented (Fig. 2). The average monthly blood-culture positivity increased from 11.7% to 14.2% (P = .13). Average monthly days of antibiotic therapy (DOT) was lower in the postintervention period than in the preintervention period (2,200 vs 1,940; P < .01) (Fig. 3). The ICU length of stay did not change from before to after the intervention: 10 days (IQR, 5–18) versus 10 days (IQR, 5–17; P = .63). The in-hospital mortality rate was lower during the postintervention period, but the difference was not statistically significant: 9.24% versus 8.34% (P = .17). The all-cause 30-day mortality was significantly lower during the intervention period: 11.9% versus 9.7% (P < .01). The unplanned 30-day readmission percentage was significantly lower during the intervention period (10.6% vs 7.6%; P < .01). Over the 9-month intervention, we reviewed 916 blood-culture events in 452 unique patients. Overall, 74.6% of blood cultures followed the algorithm. The most common reasons for ordering blood cultures were severe sepsis or septic shock (37%), isolated fever and/or leukocytosis (19%), and documenting clearance of bacteremia (15%) (Table 1). The most common indication for inappropriate blood cultures was isolated fever and/or leukocytosis (53%). Conclusions: We introduced a blood-culture algorithm with data feedback in 2 surgical ICUs and observed decreases in blood-culture volume without a negative impact on ICU length of stay or mortality rate.
Neurodevelopmental challenges are the most prevalent comorbidity associated with a diagnosis of critical CHD, and there is a high incidence of gross and fine motor delays noted in early infancy. The frequency of motor delays in hospitalised infants with critical CHD requires close monitoring from developmental therapists (physical therapists, occupational therapists, and speech-language pathologists) to optimise motor development. Currently, minimal literature defines developmental therapists’ role in caring for infants with critical CHD in intensive or acute care hospital units.
Purpose:
This article describes typical infant motor skill development, how the hospital environment and events surrounding early cardiac surgical interventions impact those skills, and how developmental therapists support motor skill acquisition in infants with critical CHD. Recommendations are provided for healthcare professionals and others who provide medical or developmental support, with the aim of promoting optimal motor skill development in hospitalised infants with critical CHD.
Conclusions:
Infants with critical CHD requiring neonatal surgical intervention experience interrupted motor skill interactions and developmental trajectories. As part of the interdisciplinary team working in intensive and acute care settings, developmental therapists assess infants, guide motor intervention, promote optimal motor skill acquisition, and support the infant’s overall development.
We assessed Oxivir Tb wipe disinfectant residue in a controlled laboratory setting to evaluate whether it explains the low environmental contamination observed with SARS-CoV-2. The frequency of viral RNA detection was not statistically different between intervention and control arms on day 3 (P = .14). Viability of environmental contamination is low, and residual disinfectant did not significantly contribute to this low contamination.
Crinoids were major constituents of late Carboniferous (Pennsylvanian) marine ecosystems, but their rapid disarticulation rates after death result in few well-preserved specimens, limiting the study of their growth. This is amplified for cladids, which had among the highest disarticulation rates of all Paleozoic crinoids due to the relatively loose suturing of the calyx plates. However, Erisocrinus typus Meek and Worthen, 1865 has been found in unusually large numbers, most preserved as cups but some as nearly complete crowns, in the Barnsdall Formation in Oklahoma. The Barnsdall Formation, a Konzentrat-Lagerstätte, is composed predominantly of fine- to medium-grained sandstone overlain by mudstone and shale; severe compaction of the fossils in the mudstone and shale layer of this formation allowed for exceptional preservation of the plates. Herein, we summarize a growth study based on 10 crowns of E. typus, showcasing a well-defined growth series of this species from the Barnsdall Formation, including fossils from juvenile stages of development, which are rarely preserved. We used high-resolution photographs imported into ImageJ and recorded measurements of the cup and arms for all plates that were neither distorted nor disarticulated. Results show that the plates of the cup grew anisometrically, with both positive and negative allometry. The primibrachial plates of E. typus grew with positive allometry. The brachial plates started as uniserial (i.e., cuneiform) in juveniles but shifted to biserial. Erisocrinus typus broadly shares similar growth trajectories with other cladids. These growth patterns provide insight into feeding strategies and can aid in understanding crinoid evolutionary paleoecological trends.
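As an illustration of how allometry is commonly quantified in growth studies of this kind (not the authors' workflow), the sketch below estimates an allometric slope from a log-log regression of a plate dimension on cup size; slopes above 1 indicate positive allometry, below 1 negative allometry, and near 1 isometry. The measurements are invented, not the E. typus data.

```python
# Illustrative allometry estimate: slope of log(plate width) vs log(cup height).
# Measurements are hypothetical placeholders for ImageJ-derived values.
import numpy as np

cup_height = np.array([2.1, 3.4, 4.8, 6.0, 7.5, 9.1, 10.6, 12.3, 13.9, 15.2])
plate_width = 0.4 * cup_height ** 1.25      # built-in positive allometry

slope, intercept = np.polyfit(np.log(cup_height), np.log(plate_width), 1)
print(f"allometric slope: {slope:.2f}")     # ~1.25 -> positive allometry
```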
In sub-Saharan Africa, there are no validated screening tools for delirium in older adults, despite the known vulnerability of older people to delirium and the associated adverse outcomes. This study aimed to assess the effectiveness of a brief smartphone-based assessment of arousal and attention (DelApp) in the identification of delirium amongst older adults admitted to the medical department of a tertiary referral hospital in Northern Tanzania.
Method:
Consecutive admissions were screened using the DelApp during a larger study of delirium prevalence and risk factors. All participants subsequently underwent detailed clinical assessment for delirium by a research doctor. Delirium and dementia were identified against DSM-5 criteria by consensus.
Results:
Complete data were collected for 66 individuals, of whom 15 (22.7%) had delirium, 24.5% had dementia without delirium, and 10.6% had delirium superimposed on dementia. Sensitivity and specificity of the DelApp for delirium were 0.87 and 0.62, respectively (AUROC 0.77), and 0.88 and 0.73 (AUROC 0.85) for major cognitive impairment (dementia and delirium combined). Lower DelApp scores were associated with age, significant visual impairment (<6/60 acuity), illness severity, reduced arousal, and DSM-5 delirium on univariable analysis, but on multivariable logistic regression only arousal remained significant.
Conclusion:
In this setting, the DelApp performed well in identifying delirium and major cognitive impairment but did not differentiate delirium and dementia. Performance is likely to have been affected by confounders including uncorrected visual impairment and reduced level of arousal without delirium. Negative predictive value was nevertheless high, indicating excellent ‘rule out’ value in this setting.
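The screening metrics reported above (sensitivity, specificity, AUROC) can be illustrated with the short sketch below, which scores a simulated lower-is-worse tool against a reference diagnosis at a hypothetical cut-off; none of the numbers come from the DelApp study.

```python
# Minimal sketch: sensitivity, specificity, and AUROC for a score-based
# screening tool against a reference diagnosis. All data are simulated,
# and the cut-off of 6 is a hypothetical example.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
delirium = rng.integers(0, 2, size=200)               # reference diagnosis
score = rng.normal(loc=8 - 3 * delirium, scale=2.0)   # lower score = more impaired

predicted_positive = score < 6                        # hypothetical cut-off
sens = (predicted_positive & (delirium == 1)).sum() / (delirium == 1).sum()
spec = (~predicted_positive & (delirium == 0)).sum() / (delirium == 0).sum()
auroc = roc_auc_score(delirium, -score)               # negate: lower score = higher risk
print(f"sensitivity {sens:.2f}, specificity {spec:.2f}, AUROC {auroc:.2f}")
```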
In 2017, the Michigan Institute for Clinical and Health Research (MICHR) and community partners in Flint, Michigan, collaborated to launch a research funding program and evaluate the dynamics of the research partnerships receiving funding. While validated assessments for community-engaged research (CEnR) partnerships were available, the study team found none sufficiently relevant to conducting CEnR in the context of this work. MICHR faculty and staff, along with community partners living and working in Flint, used a community-based participatory research (CBPR) approach to develop and administer a locally relevant assessment of CEnR partnerships that were active in Flint in 2019 and 2021.
Methods:
Surveys were administered each year to over a dozen partnerships funded by MICHR to evaluate how community and academic partners assessed the dynamics and impact of their study teams over time.
Results:
The results suggest that partners believed that their partnerships were engaging and highly impactful. Although many substantive differences between community and academic partners’ perceptions over time were identified, the most notable regarded the financial management of the partnerships.
Conclusion:
This work contributes to the field of translational science by evaluating how the financial management of community-engaged health research partnerships, in the locally relevant context of Flint, can be associated with these teams’ scientific productivity and impact, with national implications for CEnR. It also presents evaluation methods that can be used by clinical and translational research centers striving to implement and measure their use of CBPR approaches.