Paediatric ventricular assist device patients, including those with single ventricle anatomy, are increasingly managed outside of the ICU. We conducted a retrospective chart review of our single-centre experience to quantify adverse event rates and ICU readmissions for 22 complex paediatric patients on ventricular assist device support (15 biventricular, 7 single ventricle) after floor transfer. The median age was 1.65 years. The majority utilised the Berlin EXCOR (17, 77.3%). There were 9 ICU readmissions, with a median length of stay of 2 days. Adverse events were noted in 9 patients (41%), with infection being most common (1.8 events per patient-year). There were no deaths. Single ventricle patients had a higher proportion of ICU readmissions and adverse events. ICU readmission rates were low, and adverse event rates were comparable to published rates, suggesting ventricular assist device patients can be safely managed on the floor.
Stroke clinical registries are critical for systems planning, quality improvement, advocacy and informing policy. We describe the methodology and evolution of the Registry of the Canadian Stroke Network/Ontario Stroke Registry in Canada.
Methods:
At the launch of the registry in 2001, trained coordinators prospectively identified patients with acute stroke or transient ischemic attack (TIA) at comprehensive stroke centers across Canada and obtained consent for registry participation and follow-up interviews. From 2003 onward, patients were identified from administrative databases, and consent was waived for data collection on a sample of eligible patients across all hospitals in Ontario and in one site in Nova Scotia. In the most recent data collection cycle, consecutive eligible patients were included across Ontario, but patients with TIA and those seen in the emergency department without admission were excluded.
Results:
Between 2001 and 2013, the registry included 110,088 patients. Only 1,237 patients had follow-up interviews, but administrative data linkages allowed for indefinite follow-up of deaths and other measures of health services utilization. After a hiatus, the registry resumed data collection in 2019, with 13,828 charts abstracted to date, focusing on intracranial vascular imaging, identification of intracranial occlusions, and treatment with thrombectomy.
Conclusion:
The Registry of the Canadian Stroke Network/Ontario Stroke Registry is a large population-based clinical database that has evolved throughout the last two decades to meet contemporary stroke needs. Registry data have been used to monitor stroke quality of care and conduct outcomes research to inform policy.
Previous studies have linked social behaviors to COVID-19 risk in the general population. The impact of these behaviors among healthcare personnel, who face higher workplace exposure risks and possess greater prevention awareness, remains less explored.
Design:
We conducted a prospective cohort study from December 2021 to May 2022, using monthly surveys. Exposures included (1) a composite of nine common social activities in the past month and (2) similarity of social behavior compared to pre-pandemic. Outcomes included self-reported SARS-CoV-2 infection (primary) and testing for SARS-CoV-2 (secondary). Mixed-effects logistic regression assessed the association between social behavior and outcomes, adjusting for baseline and time-dependent covariates. To account for missed surveys, we employed inverse probability-of-censoring weighting with a propensity score approach.
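The censoring-weight step described above can be sketched in a few lines. Everything below (the response propensities, outcomes, and marginal rate) is illustrative, not study data; the actual analysis fed such weights into a mixed-effects logistic model rather than the simple weighted proportion shown here.

```python
# Sketch of inverse probability-of-censoring weighting (IPCW) for monthly
# surveys. Assumes a propensity model has already produced, for each
# participant-month, the probability of completing the survey.

def stabilized_ipc_weights(response_probs, marginal_rate):
    """Stabilized weight = marginal response rate / individual propensity."""
    return [marginal_rate / p for p in response_probs]

def weighted_prevalence(outcomes, weights):
    """Weighted proportion of positive outcomes among responders."""
    return sum(y * w for y, w in zip(outcomes, weights)) / sum(weights)

# Illustrative responders: outcome = 1 if SARS-CoV-2 positive that month.
probs    = [0.9, 0.8, 0.5, 0.4]     # estimated response propensities
outcomes = [0,   0,   1,   1]       # infections cluster among low-propensity responders
marginal = sum(probs) / len(probs)  # 0.65

weights = stabilized_ipc_weights(probs, marginal)
print(round(weighted_prevalence(outcomes, weights), 3))  # 0.656, vs 0.5 unweighted
```

Upweighting participants who were unlikely to respond (here, those more likely to be infected) shifts the estimate above the naive 50%, which is the bias correction IPCW is designed to provide.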
Setting:
An academic healthcare system.
Participants:
Healthcare personnel.
Results:
Of 1,302 healthcare personnel who completed ≥2 surveys, 244 reported ≥1 positive test during the study, resulting in a cumulative incidence of 19%. More social activities in the past month and social behavior similar to pre-pandemic levels were associated with increased likelihood of SARS-CoV-2 infection (recent social activity composite: OR = 1.11, 95% CI 1.02–1.21; pre-pandemic social similarity: OR = 1.14, 95% CI 1.07–1.21). Neither was significantly associated with testing for SARS-CoV-2.
Conclusions:
Healthcare personnel social behavior outside work was associated with a higher risk for COVID-19. To protect the hospital workforce, risk mitigation strategies for healthcare personnel should focus on both the community and workplace.
This report describes the first long-term survival following a heart transplant for Williams syndrome-associated cardiac pathologies. An 11-year-old patient with severe global left ventricular dysfunction presented with heart failure and underwent heart transplantation. Her peri- and post-operative courses were complicated by hypertension related to underlying vascular pathology.
Neuropsychiatric symptoms are common after traumatic brain injury (TBI) and often resolve within 3 months post-injury. However, the degree to which individual patients follow this course is unknown. We characterized trajectories of neuropsychiatric symptoms over 12 months post-TBI. We hypothesized that a substantial proportion of individuals would display trajectories distinct from the group-average course, with some exhibiting less favorable courses.
Methods
Participants were level 1 trauma center patients with TBI (n = 1943), orthopedic trauma controls (n = 257), and non-injured friend controls (n = 300). Trajectories of six symptom dimensions (Depression, Anxiety, Fear, Sleep, Physical, and Pain) were identified using growth mixture modeling from 2 weeks to 12 months post-injury.
Results
Depression, Anxiety, Fear, and Physical symptoms displayed three trajectories: Stable-Low (86.2–88.6%), Worsening (5.6–10.9%), and Improving (2.6–6.4%). Among symptomatic trajectories (Worsening, Improving), lower-severity TBI was associated with higher prevalence of elevated symptoms at 2 weeks that steadily resolved over 12 months compared to all other groups, whereas higher-severity TBI was associated with higher prevalence of symptoms that gradually worsened from 3–12 months. Sleep and Pain displayed more variable recovery courses, and the most common trajectory entailed an average level of problems that remained stable over time (Stable-Average; 46.7–82.6%). Symptomatic Sleep and Pain trajectories (Stable-Average, Improving) were more common in traumatically injured groups.
Conclusions
Findings illustrate the nature and rates of distinct neuropsychiatric symptom trajectories and their relationship to traumatic injuries. Providers may use these results as a referent for gauging typical v. atypical recovery in the first 12 months post-injury.
An infection prevention bundle that consisted of the development of a response team, public–academic partnership, daily assessment, regular testing, isolation, and environmental controls was implemented in 26 skilled nursing facilities in Detroit, Michigan (March 2020–April 2021). This intervention was associated with sustained control of severe acute respiratory coronavirus virus 2 infection among residents and staff.
Empowering the Participant Voice (EPV) is an NCATS-funded six-CTSA collaboration to develop, demonstrate, and disseminate a low-cost infrastructure for collecting timely feedback from research participants, fostering trust, and providing data for improving clinical translational research. EPV leverages the validated Research Participant Perception Survey (RPPS) and the popular REDCap electronic data-capture platform. This report describes the development of infrastructure designed to overcome identified institutional barriers to routinely collecting participant feedback using RPPS and demonstration use cases. Sites engaged local stakeholders iteratively, incorporating feedback about anticipated value and potential concerns into project design. The team defined common standards and operations, developed software, and produced a detailed planning and implementation Guide. By May 2023, 2,575 participants diverse in age, race, ethnicity, and sex had responded to approximately 13,850 survey invitations (18.6%); 29% of responses included free-text comments. EPV infrastructure enabled sites to routinely access local and multi-site research participant experience data on an interactive analytics dashboard. The EPV learning collaborative continues to test initiatives to improve survey reach and optimize infrastructure and process. Broad uptake of EPV will expand the evidence base, enable hypothesis generation, and drive research-on-research locally and nationally to enhance the clinical research enterprise.
Blood-based biomarkers represent a scalable and accessible approach for the detection and monitoring of Alzheimer’s disease (AD). Plasma phosphorylated tau (p-tau) and neurofilament light (NfL) are validated biomarkers for the detection of tau and neurodegenerative brain changes in AD, respectively. There is now emphasis to expand beyond these markers to detect and provide insight into the pathophysiological processes of AD. To this end, a reactive astrocytic marker, namely plasma glial fibrillary acidic protein (GFAP), has been of interest. Yet, little is known about the relationship between plasma GFAP and AD. Here, we examined the association between plasma GFAP, diagnostic status, and neuropsychological test performance. Diagnostic accuracy of plasma GFAP was compared with plasma measures of p-tau181 and NfL.
Participants and Methods:
This sample included 567 participants from the Boston University (BU) Alzheimer’s Disease Research Center (ADRC) Longitudinal Clinical Core Registry, including individuals with normal cognition (n=234), mild cognitive impairment (MCI) (n=180), and AD dementia (n=153). The sample included all participants who had a blood draw. Participants completed a comprehensive neuropsychological battery (sample sizes across tests varied due to missingness). Diagnoses were adjudicated during multidisciplinary diagnostic consensus conferences. Plasma samples were analyzed using the Simoa platform. Binary logistic regression analyses tested the association between GFAP levels and diagnostic status (i.e., cognitively impaired due to AD versus unimpaired), controlling for age, sex, race, education, and APOE e4 status. Area under the curve (AUC) statistics from receiver operating characteristics (ROC) using predicted probabilities from binary logistic regression examined the ability of plasma GFAP to discriminate diagnostic groups compared with plasma p-tau181 and NfL. Linear regression models tested the association between plasma GFAP and neuropsychological test performance, accounting for the above covariates.
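The AUC statistic used in these analyses has a simple rank-based interpretation: it is the probability that a randomly chosen impaired case receives a higher predicted probability than a randomly chosen unimpaired control (ties counting one-half). A minimal sketch, with made-up labels and predicted probabilities:

```python
# Rank-based (Mann-Whitney) computation of the ROC AUC from predicted
# probabilities. Labels: 1 = cognitively impaired, 0 = unimpaired.

def auc(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.7, 0.4, 0.6, 0.3, 0.2]  # illustrative predicted probabilities
print(auc(labels, scores))
```

An AUC of 0.75, as reported for plasma GFAP, means an impaired case outranks an unimpaired control three times out of four.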
Results:
The mean (SD) age of the sample was 74.34 (7.54), 319 (56.3%) were female, 75 (13.2%) were Black, and 223 (39.3%) were APOE e4 carriers. Higher GFAP concentrations were associated with increased odds for having cognitive impairment (GFAP z-score transformed: OR=2.233, 95% CI [1.609, 3.099], p<0.001; non-z-transformed: OR=1.004, 95% CI [1.002, 1.006], p<0.001). ROC analyses, comprising GFAP and the above covariates, showed plasma GFAP discriminated the cognitively impaired from unimpaired (AUC=0.75) and was similar, but slightly superior, to plasma p-tau181 (AUC=0.74) and plasma NfL (AUC=0.74). A joint panel of the plasma markers had the greatest discrimination accuracy (AUC=0.76). Linear regression analyses showed that higher GFAP levels were associated with worse performance on neuropsychological tests assessing global cognition, attention, executive functioning, episodic memory, and language abilities (ps<0.001) as well as higher CDR Sum of Boxes (p<0.001).
Conclusions:
Higher plasma GFAP levels differentiated participants with cognitive impairment from those with normal cognition and were associated with worse performance on all neuropsychological tests assessed. GFAP had similar accuracy in detecting those with cognitive impairment compared with p-tau181 and NfL; however, a panel of all three biomarkers was optimal. These results support the utility of plasma GFAP in AD detection and suggest the pathological processes it represents might play an integral role in the pathogenesis of AD.
Blood-based biomarkers offer a more feasible alternative to Alzheimer’s disease (AD) detection, management, and study of disease mechanisms than current in vivo measures. Given their novelty, these plasma biomarkers must be assessed against postmortem neuropathological outcomes for validation. Research has shown utility in plasma markers of the proposed AT(N) framework; however, recent studies have stressed the importance of expanding this framework to include other pathways. There are promising data supporting the usefulness of plasma glial fibrillary acidic protein (GFAP) in AD, but GFAP-to-autopsy studies are limited. Here, we tested the association between plasma GFAP and AD-related neuropathological outcomes in participants from the Boston University (BU) Alzheimer’s Disease Research Center (ADRC).
Participants and Methods:
This sample included 45 participants from the BU ADRC who had a plasma sample within 5 years of death and donated their brain for neuropathological examination. The most recent plasma samples were analyzed using the Simoa platform. Neuropathological examinations followed the National Alzheimer’s Coordinating Center procedures and diagnostic criteria. The NIA-Reagan Institute criteria were used for the neuropathological diagnosis of AD. Measures of GFAP were log-transformed. Binary logistic regression analyses tested the association between GFAP and autopsy-confirmed AD status, as well as with semi-quantitative ratings of regional atrophy (none/mild versus moderate/severe). Ordinal logistic regression analyses tested the association between plasma GFAP and Braak stage and CERAD neuritic plaque score. Area under the curve (AUC) statistics from receiver operating characteristics (ROC) using predicted probabilities from binary logistic regression examined the ability of plasma GFAP to discriminate autopsy-confirmed AD status. All analyses controlled for sex, age at death, years between last blood draw and death, and APOE e4 status.
Results:
Of the 45 brain donors, 29 (64.4%) had autopsy-confirmed AD. The mean (SD) age of the sample at the time of blood draw was 80.76 (8.58), and there were 2.80 (1.16) years between the last blood draw and death. The sample included 20 (44.4%) females, 41 (91.1%) White participants, and 20 (44.4%) APOE e4 carriers. Higher GFAP concentrations were associated with increased odds for having autopsy-confirmed AD (OR=14.12, 95% CI [2.00, 99.88], p=0.008). ROC analysis showed plasma GFAP accurately discriminated those with and without autopsy-confirmed AD on its own (AUC=0.75) and strengthened as the above covariates were added to the model (AUC=0.81). Increases in GFAP levels corresponded to increases in Braak stage (OR=2.39, 95% CI [0.71, 4.07], p=0.005), but not CERAD ratings (OR=1.24, 95% CI [0.004, 2.49], p=0.051). Higher GFAP levels were associated with greater temporal lobe atrophy (OR=10.27, 95% CI [1.53, 69.15], p=0.017), but this was not observed for any other region.
Conclusions:
The current results show that antemortem plasma GFAP is associated with non-specific AD neuropathological changes at autopsy. Plasma GFAP could be a useful and practical biomarker for assisting in the detection of AD-related changes, as well as for study of disease mechanisms.
In the United States, Black individuals have endured 300 years of racism, bias, and segregation, and have been systematically and intentionally denied opportunities to accrue wealth. These disadvantages have resulted in disparities in health outcomes. Over the last decade there has been growing interest in examining social determinants of health (SDH) as upstream factors that lead to downstream health disparities. It is of vital importance to quantify the contribution of SDH factors to racial disparities in order to inform policy and social justice initiatives. This demonstration project uses years of education and white matter hyperintensities (WMH) to illustrate two methods of quantifying the role of an SDH in producing health disparities.
Participants and Methods:
The current study is a secondary data analysis of baseline data from a subset of the National Alzheimer's Coordinating Center database with neuroimaging data collected from 2002-2019. Participants were 997 cognitively diverse, Black and White (10.4% Black) individuals, aged 60-94 (mean=73.86, 56.5% female), with a mean education of 15.18 years (range=0-23, SD=3.55). First, a mediation analysis was conducted in the SEM framework using the R package lavaan. Black/White race was the independent variable, education was the mediator, WMH volume was the dependent variable, and age/sex were the covariates. Bootstrapped standard errors were calculated using 1,000 iterations. The indirect effect was then divided by the total effect to determine the proportion of the total effect attributable to education. Second, a population attributable fraction (PAF), the expected reduction in WMH if we eliminated low education and the structural racism for which Black race serves as a proxy, was calculated. Two logistic regressions were fit with dichotomous (median split) WMH as the dependent variable: the first with low (less than high school) versus high education as a predictor, and the second with Black/White race added. Age/sex were covariates. The PAF of education, and then of Black/White race controlling for education, were obtained. Subsequently, a combined PAF was calculated.
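Both quantification strategies reduce to short formulas. The sketch below computes the proportion mediated from an indirect and total effect, and a PAF via Levin's classical formula; the study itself derived PAFs from logistic models, so Levin's formula is a simplified stand-in here, and the prevalence and relative risk supplied are hypothetical.

```python
# (1) Proportion mediated = indirect effect / total effect.
# (2) Levin's population attributable fraction:
#     PAF = p(RR - 1) / (1 + p(RR - 1)),
#     where p is exposure prevalence and RR the relative risk.

def proportion_mediated(indirect, total):
    return indirect / total

def levin_paf(prevalence, relative_risk):
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# Effects on the scale of the reported lavaan coefficients (illustrative).
print(round(proportion_mediated(0.009, 0.040), 3))  # 0.225
# Hypothetical: 15% exposed to low education, RR = 1.8 for high WMH burden.
print(round(levin_paf(0.15, 1.8), 3))               # 0.107
```

The PAF is interpreted exactly as in the conclusions: the fraction of high-WMH cases expected to disappear if the exposure were eliminated.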
Results:
In the lavaan model, the total effect of Black/White race on WMH was not significant (B=.040, se=.113, p=.246); however, Black/White race significantly predicted education (B=-.108, se=.390, p=.001) and education significantly predicted WMH burden (B=-.084, se=.008, p=.002). This resulted in a significant indirect effect (effect=.009, se=.014, p=.032). 22.6% of the relationship between Black/White race and WMH was mediated by education. In the logistic models, the PAF of education was 5.3% and the additional PAF of Black/White race was 2.7%. The combined PAF of Black race and low education was 7.8%.
Conclusions:
From our mediation analysis we can conclude that 22.6% of the relationship between Black/White race and WMH volume is explained by education. Our PAF analysis suggests that we could reduce 7.8% of the cases with high WMH burden if we eliminated low education and the structural racism for which Black race serves as a proxy. This is an underestimation of the role that education and structural racism play in WMH burden due to our positively selected sample and crude measure of education. However, these methods can help researchers quantify the contribution of SDH to disparities in older adulthood and provide targets for policy change.
Drawing on the National Alzheimer’s Coordinating Center (NACC) Uniform Data Set (UDS), this study aimed to investigate the direct and indirect associations between vascular risk factors/cardiovascular disease (CVD), pharmacological treatment (of CVD), and white matter hyperintensity (WMH) burden on overall cognition and decline trajectories in a cognitively diverse sample of older adults.
Participants and Methods:
Participants were 1,049 cognitively diverse older adults drawn from a larger NACC data repository of 22,684 participants whose data were frozen as of December 2019. The subsample included only participants aged 60-97 (56.7% women) who completed at least one post-baseline neuropsychological evaluation, had medication data, and had both T1 and FLAIR neuroimaging scans. Cognitive composites (Memory, Attention, Executive Function, Language) were derived factor analytically using harmonized data. Baseline WMH volumes were quantified using UBO Detector. Baseline health screening and medication data were used to determine overall CVD burden and total medication use. Longitudinal latent growth curve models were estimated adjusting for demographics.
Results:
More CVD medication use was associated with greater CVD burden; however, no direct effects of medication were found on any of the cognitive composites or WMH volume. No direct effects of CVD burden on cognition (overall or rate of decline) were observed; instead, we found that greater CVD burden had small, but significant, negative indirect effects on Memory, Attention, Executive Functioning, and Language (all p's < .01) after controlling for CVD medication use. Whole brain WMH volume served as the mediator of this relationship, as it did for an indirect effect of baseline CVD burden on 6-year rate of decline in Memory and Executive Function.
Conclusions:
Findings from this study were generally consistent with previous literature and extend extant knowledge regarding the direct and indirect associations between CVD burden, pharmacological treatment, and neuropathology of presumed vascular origin on cognitive decline trajectories in an older adult sample. Results reveal the subtle importance of CVD risk factors for late-life cognition even after accounting for treatment and WMH volume and highlight the need for additional research to determine sensitive windows of opportunity for intervention.
We provide an updated estimate of adult stroke event rates by age group, sex, and stroke type using Canadian administrative data. In the 2017–2018 fiscal year, there were an estimated 81,781 hospital or emergency department visits for stroke events in Canada, excluding Quebec. Our findings show that overall, the event rate of stroke is similar between women and men. There were slight differences in stroke event rate at various ages by sex and stroke type and emerging patterns warrant attention in future studies. Our findings emphasize the importance of continuous surveillance to monitor the epidemiology of stroke in Canada.
To determine antibiotic prescribing appropriateness for respiratory tract diagnoses (RTD) by season.
Design:
Retrospective cohort study.
Setting:
Primary care practices in a university health system.
Patients:
Patients who were seen at an office visit with diagnostic code for RTD.
Methods:
Office visits for the entire cohort were categorized based on ICD-10 codes by the likelihood that an antibiotic was indicated (tier 1: always indicated; tier 2: sometimes indicated; tier 3: rarely indicated). Medical records were reviewed for 1,200 randomly selected office visits to determine appropriateness. Based on this reference standard, metrics and prescriber characteristics associated with inappropriate antibiotic prescribing were determined. Characteristics of antibiotic prescribing were compared between winter and summer months.
Results:
A significantly greater proportion of RTD visits had an antibiotic prescribed in winter [20,558/51,090 (40.2%)] compared to summer months [11,728/38,537 (30.4%)] [standardized difference (SD) = 0.21]. A significantly greater proportion of winter compared to summer visits was associated with tier 2 RTDs (29.4% vs 23.4%, SD = 0.14), but fewer with tier 3 RTDs (68.4% vs 74.4%, SD = 0.13). A greater proportion of visits in winter compared to summer months had an antibiotic prescribed for tier 2 RTDs (80.2% vs 74.2%, SD = 0.14) and tier 3 RTDs (22.9% vs 16.2%, SD = 0.17). The proportion of inappropriate antibiotic prescribing was higher in winter compared to summer months (72.4% vs 62.0%, P < .01).
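The standardized difference reported for two proportions is d = (p1 - p2) / sqrt((v1 + v2)/2), where v = p(1 - p); unlike a P value, it does not grow with sample size. A short sketch reproducing the first comparison above from the reported counts:

```python
from math import sqrt

def std_diff_proportions(p1, p2):
    """Standardized difference between two proportions."""
    v1, v2 = p1 * (1 - p1), p2 * (1 - p2)
    return (p1 - p2) / sqrt((v1 + v2) / 2)

winter = 20558 / 51090   # 40.2% of winter RTD visits had an antibiotic
summer = 11728 / 38537   # 30.4% of summer RTD visits
print(round(std_diff_proportions(winter, summer), 2))  # 0.21, as reported
```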
Conclusions:
Increases in antibiotic prescribing for RTD visits from summer to winter were likely driven by shifts in diagnoses as well as increases in prescribing for certain diagnoses. At least some of this increased prescribing was inappropriate.
A survey was conducted among Canadian tertiary neonatal intensive care units. Of the 27 sites that responded, 9 did not have any form of antimicrobial stewardship, and 11 used vancomycin for empirical coverage in late-onset sepsis evaluations. We detected significant variations in the diagnostic criteria for urinary tract infection and ventilator-associated pneumonia.
The COVID-19 pandemic raised the importance of adaptive capacity and preparedness when engaging historically marginalized populations in research and practice. The Rapid Acceleration of Diagnostics in Underserved Populations’ COVID-19 Equity Evidence Academy Series (RADx-UP EA) is a virtual, national, interactive conference model designed to support and engage community-academic partnerships in a collaborative effort to improve practices that overcome disparities in SARS-CoV-2 testing and testing technologies. The RADx-UP EA promotes information sharing, critical reflection and discussion, and creation of translatable strategies for health equity. Staff and faculty from the RADx-UP Coordination and Data Collection Center developed three EA events with diverse geographic, racial, and ethnic representation of attendees from RADx-UP community-academic project teams: February 2021 (n = 319); November 2021 (n = 242); and September 2022 (n = 254). Each EA event included a data profile; 2-day, virtual event; event summary report; community dissemination product; and an evaluation strategy. Operational and translational delivery processes were iteratively adapted for each EA across one or more of five adaptive capacity domains: assets, knowledge and learning, social organization, flexibility, and innovation. The RADx-UP EA model can be generalized beyond RADx-UP and tailored by community and academic input to respond to local or national health emergencies.
Although age-standardized stroke occurrence has been decreasing, the absolute number of stroke events globally, and in Canada, is increasing. Stroke surveillance is necessary for health services planning, informing research design, and public health messaging. We used administrative data to estimate the number of stroke events resulting in hospital or emergency department presentation across Canada in the 2017–18 fiscal year.
Methods:
Hospitalization data were obtained from the Canadian Institute for Health Information (CIHI) Discharge Abstract Database and the Ministry of Health and Social Services in Quebec. Emergency department data were obtained from the CIHI National Ambulatory Care Reporting System (Alberta and Ontario). Stroke events were identified using ICD-10 coding. Data were linked into episodes of care to account for readmissions and interfacility transfers. Projections for emergency department visits for provinces/territories outside of Alberta and Ontario were generated based upon age and sex-standardized estimates from Alberta and Ontario.
Results:
In the 2017–18 fiscal year, there were 108,707 stroke events resulting in hospital or emergency department presentation across the country. This was made up of 54,357 events resulting in hospital admission and 54,350 events resulting in only emergency department presentation. The events resulting in only emergency department presentation consisted of 25,941 events observed in Alberta and Ontario and a projection of 28,409 events across the rest of the country.
Conclusions:
We estimate a stroke event resulting in hospital or emergency department presentation occurs every 5 minutes in Canada.
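The headline figure follows directly from the annual count reported in the results:

```python
# Sanity check: 108,707 stroke events in the 2017-18 fiscal year works out
# to roughly one event every 5 minutes.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600
events = 108_707
print(round(MINUTES_PER_YEAR / events, 1))  # 4.8 minutes per event
```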
The 2022 update of the Canadian Stroke Best Practice Recommendations (CSBPR) for Acute Stroke Management, 7th edition, is a comprehensive summary of current evidence-based recommendations, appropriate for use by an interdisciplinary team of healthcare providers and system planners caring for persons with an acute stroke or transient ischemic attack. These recommendations are a timely opportunity to reassess current processes to ensure efficient access to acute stroke diagnostics, treatments, and management strategies, proven to reduce mortality and morbidity. The topics covered include prehospital care, emergency department care, intravenous thrombolysis and endovascular thrombectomy (EVT), prevention and management of inhospital complications, vascular risk factor reduction, early rehabilitation, and end-of-life care. These recommendations pertain primarily to an acute ischemic vascular event. Notable changes in the 7th edition include recommendations pertaining to the use of tenecteplase, thrombolysis as a bridging therapy prior to mechanical thrombectomy, dual antiplatelet therapy for stroke prevention, the management of symptomatic intracerebral hemorrhage following thrombolysis, acute stroke imaging, care of patients undergoing EVT, medical assistance in dying, and virtual stroke care. An explicit effort was made to address sex and gender differences wherever possible. The theme of the 7th edition of the CSBPR is building connections to optimize individual outcomes, recognizing that many people who present with acute stroke often also have multiple comorbid conditions, are medically more complex, and require a coordinated interdisciplinary approach for optimal recovery. Additional materials to support timely implementation and quality monitoring of these recommendations are available at www.strokebestpractices.ca.
Despite the public health burden of traumatic brain injury (TBI) across broader society, most TBI studies have been isolated to a distinct subpopulation. The TBI research literature is fragmented further because often studies of distinct populations have used different assessment procedures and instruments. Addressing calls to harmonize the literature will require tools to link data collected from different instruments that measure the same construct, such as civilian mild traumatic brain injury (mTBI) and sports concussion symptom inventories.
Method:
We used item response theory (IRT) to link scores from the Rivermead Post Concussion Symptoms Questionnaire (RPQ) and the Sport Concussion Assessment Tool (SCAT) symptom checklist, widely used instruments for assessing civilian and sport-related mTBI symptoms, respectively. The sample included data from n = 397 participants, comprising patients with sports-related concussion or civilian mTBI, orthopedic injury controls, and non-athlete controls, who completed the SCAT and/or RPQ.
Results:
The results of several analyses supported sufficient unidimensionality to treat the RPQ + SCAT combined item set as measuring a single construct. Fixed-parameter IRT was used to create a cross-walk table that maps RPQ total scores to SCAT symptom severity scores. Linked and observed scores were highly correlated (r = .92). Standard errors of the IRT scores were slightly higher for civilian mTBI patients and orthopedic controls, particularly for RPQ scores linked from the SCAT.
Conclusion:
By linking the RPQ to the SCAT we facilitated efforts to effectively combine samples and harmonize data relating to mTBI.
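The idea of a crosswalk table can be illustrated with a much simpler linking method: equipercentile mapping, which pairs each score on one instrument with the score on the other instrument that has the nearest percentile rank. This is a stand-in sketch, not the fixed-parameter IRT procedure the authors used, and the score samples below are illustrative, not the published RPQ-to-SCAT mapping.

```python
# Equipercentile score linking: map each source score to the target score
# whose percentile rank is closest.

def percentile_ranks(scores):
    n = len(scores)
    return {s: sum(x <= s for x in scores) / n for s in set(scores)}

def equipercentile_crosswalk(from_scores, to_scores):
    from_pr = percentile_ranks(from_scores)
    to_pr = percentile_ranks(to_scores)
    # For each source score, pick the target score with the nearest rank.
    return {s: min(to_pr, key=lambda t: abs(to_pr[t] - pr))
            for s, pr in from_pr.items()}

rpq  = [2, 5, 5, 9, 14, 20, 27]    # illustrative RPQ totals
scat = [4, 8, 8, 15, 22, 30, 41]   # illustrative SCAT severity scores
xwalk = equipercentile_crosswalk(rpq, scat)
print(xwalk[9])  # 15: the SCAT score at the same percentile rank
```

A published crosswalk is applied the same way at analysis time: a dictionary lookup from one instrument's total to the other's, letting the two samples be pooled on a common scale.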
OBJECTIVES/GOALS: The Informatics Program in the Wake Forest CTSI is experiencing rapid growth. To accommodate an influx of both staff and clinical investigators, this program (1) invests resources in self-service tools to increase researcher capabilities, (2) automates resource-intensive activities, and (3) creates transparency of operational processes for researchers. METHODS/STUDY POPULATION: Self-service tools (immediate/automated): the i2b2 tool queries clinical data for feasibility numbers and cohort identification, and provides demographic breakdowns of patient sets; the Data Puller tool pulls identified patient data (with IRB approval); the SKAN NLP tool pulls aggregate numbers from over 3 million clinical notes. Automation: a custom-built tracking system automates parts of tracking requests for data and checking IRB protocols. Operational transparency: the Data Request Dashboard shows requesters information about their request and where it is in the process of being fulfilled; the Data Quote tool was constructed leveraging the integrated CTSA informatics network and uses details of the request to estimate how long it will take to complete. RESULTS/ANTICIPATED RESULTS: i2b2 has had over 300 unique users each year; 80% are faculty or research staff, and 20% are clinicians or students. From 2017-2021 there have been an average of 300 i2b2 queries and 45 Data Puller pulls each month. SKAN has had 58 unique users since its implementation in late 2020, averaging 5 new users per month. The automated data request tracking system took approximately 30 staff hours to create and saves an average of 4 hours of staff time per week. It also decreases human error by pulling/pushing information directly between systems. The Informatics program has received positive feedback from researchers who use the Data Request Dashboard. The Data Quote tool is being used to give standardized quotes to researchers.
DISCUSSION/SIGNIFICANCE: Investing resources in developing and implementing self-service tools and operational transparency ultimately reduces overall resource consumption, saving staff and investigator time and effort. This enables the Informatics program to maintain a high standard of service while experiencing rapid growth.
To evaluate a relatively new half-face-piece powered air-purifying respirator (PAPR) device called the HALO (CleanSpace). We assessed its communication performance, its degree of respiratory protection, and its usability and comfort level.
Design and setting:
This simulation study was conducted at the simulation center of the Royal Melbourne Hospital.
Participants:
In total, 8 voluntary healthcare workers participated in the study: 4 women and 4 men comprising 3 nursing staff and 5 medical staff.
Methods:
We performed the modified rhyme test, outlined by the National Institute for Occupational Safety and Health (NIOSH), for the communication assessment. We conducted quantitative fit test and simulated workplace protection factor studies to assess the degree of respiratory protection for participants at rest, during, and immediately after performing chest compression. We also invited the participants to complete a usability and comfort survey.
Results:
The HALO PAPR met the NIOSH minimum standard for speech intelligibility, which was significantly improved with the addition of wireless communication headsets. The HALO provided a consistent and adequate level of respiratory protection at rest, during, and after chest compression, regardless of the device power mode. It was rated favorably for its usability and comfort. However, participants criticized doffing difficulty and perceived communication interference.
Conclusions:
The HALO device can be considered an alternative to a filtering face-piece respirator. Thorough doffing training and mitigation planning to improve the device's communication performance are recommended. Further research is required to examine its clinical outcomes and barriers that may potentially affect patient or healthcare worker safety.