Background: Nipocalimab is a human IgG1 monoclonal antibody targeting FcRn that selectively reduces IgG levels without impacting antigen presentation or T- and B-cell function. This study describes the effect of nipocalimab on vaccine response. Methods: This open-label, parallel, interventional study randomized participants 1:1 to receive intravenous nipocalimab, 30 mg/kg at Week 0 and 15 mg/kg at Weeks 2 and 4 (active), or no drug (control). On Day 3, participants received Tdap and PPSV®23 vaccinations and were followed through Week 16. Results: Twenty-nine participants completed the study and are included (active, n=15; control, n=14). The proportion of participants with a positive anti-tetanus IgG response was comparable between groups at Weeks 2 and 16, but lower in the nipocalimab group at Week 4 (3/15 [20%] vs control 7/14 [50%]; P=0.089). All participants maintained anti-tetanus IgG above the protective threshold (0.16 IU/mL) through Week 16. While anti-pneumococcal-capsular-polysaccharide (PCP) IgG levels were lower during nipocalimab treatment, the percent increase from baseline at Weeks 2 and 16 was comparable between groups. Post-vaccination, anti-PCP IgG remained above 50 mg/L and showed a 2-fold increase from baseline throughout the study in both groups. Nipocalimab co-administration with vaccines was safe and well tolerated. Conclusions: These findings suggest that nipocalimab does not impair the development of an adequate IgG response to T-cell-dependent/independent vaccines and that nipocalimab-treated patients can follow recommended vaccination schedules.
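For illustration only, a minimal sketch (hypothetical values, not trial data) of the arithmetic behind the response criteria above: the protective anti-tetanus threshold and the fold change from baseline used for anti-PCP IgG.

```python
# Illustration only (hypothetical values, not trial data) of the response
# criteria above: the protective anti-tetanus threshold and fold change
# from baseline for anti-PCP IgG.
TETANUS_PROTECTIVE_IU_ML = 0.16  # protective anti-tetanus IgG threshold

def percent_increase(baseline: float, post: float) -> float:
    """Percent increase from baseline, compared between groups."""
    return (post - baseline) / baseline * 100.0

def is_protected_tetanus(igg_iu_ml: float) -> bool:
    return igg_iu_ml > TETANUS_PROTECTIVE_IU_ML

def pcp_fold_change(baseline_mg_l: float, post_mg_l: float) -> float:
    return post_mg_l / baseline_mg_l

print(percent_increase(1.0, 2.3))        # 130.0 (% rise from baseline)
print(is_protected_tetanus(0.9))         # True: above 0.16 IU/mL
print(pcp_fold_change(40.0, 95.0) >= 2)  # True: at least a 2-fold rise
```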
Secondary stroke prevention can reduce subsequent vascular events, mortality and accumulation of disability. Current rates of adherence to secondary stroke prevention indicators are unknown. Our aim was to evaluate secondary stroke prevention care in Ontario, Canada.
Methods:
A retrospective cohort study using health administrative databases included all adults discharged alive following an ischemic stroke from April 2010 to March 2019. Indicators of secondary stroke prevention, including laboratory testing, physician visits and receipt of routine influenza vaccinations, were evaluated among survivors in the one year following a stroke event. The use of medication was also assessed among individuals over the age of 65 years and within subgroups of stroke survivors with diabetes and atrial fibrillation.
Results:
After exclusions, 54,712 individuals (mean age 68.4 years, 45.7% female) survived at least one year following their stroke event. In the 90 days following discharge from the hospital, most individuals (92.8%) were seen by a general practitioner, while 26.2% visited an emergency department. Within the year following discharge, 66.2% and 61.4% were tested for low-density lipoprotein and glycated hemoglobin, respectively, and 39.6% received an influenza vaccine. Among those over the age of 65 years, 85.5% were prescribed a lipid-lowering agent, and 88.7% were prescribed at least one antihypertensive medication. In those with diabetes, 70.3% were prescribed an antihyperglycemic medication, while 84.9% with atrial fibrillation were prescribed an anticoagulant.
Conclusion:
Secondary stroke prevention, particularly adherence to recommended laboratory testing, remains suboptimal despite comprehensive best practice guidelines. Future studies should explore barriers to better secondary stroke care.
This research aimed to assess the agronomic performance of the progeny (F3 and F4 generations) of 48 newly developed Aus rice lines, using a randomized complete block design under rainfed conditions. We found wide variation in yield and yield-contributing traits among the studied genotypes. High broad-sense heritability percentages were found for sterility percentage (99.50 and 97.20), thousand-grain weight (88.10 and 90.20), plant height (84.90 and 86.90) and days to maturity (84.50 and 97.60) in the F3 and F4 generations, respectively. The highest genetic advance as a percentage of the mean was observed for sterility (48.00 and 50.60), number of effective tillers per hill (ET) (44.70 and 47.10), total number of tillers per hill (TT) (43.00 and 45.40) and filled grains per panicle (41.00 and 43.20), respectively. Notably, the correlation study identified TT (r = 0.31 and 0.45), ET (r = 0.30 and 0.44), straw yield (r = 0.57 and 0.39) and harvest index (r = 0.63 and 0.67) as effective traits for improving grain yield in the F3 and F4 generations, respectively. Among the 48 genotypes, we identified higher grain yield per hill and shorter to moderate crop growth duration in several distinct accessions, including R1-49-7-1-1, R3-26-4-3-1, R1-6-2-3-1, R1-13-1-1-1, R1-50-1-1-1, R3-49-4-3-1, R1-47-7-3-1, R2-26-6-2-2, R3-30-1-2-1 and R1-44-1-2-1, in both the F3 and F4 generations. A further location-specific agronomic study is recommended to assess the drought tolerance of these promising genotypes and their suitability as breeding materials for developing rice varieties adapted to fluctuating rainfall conditions.
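For readers unfamiliar with the quantities reported above, these are the standard definitions of broad-sense heritability and genetic advance as a percentage of the mean (assumed here; the abstract itself does not state the formulas):

```latex
% Standard definitions (assumed here, not stated in the abstract) behind
% the reported statistics:
\[
  H^2 = \frac{\sigma^2_g}{\sigma^2_p}, \qquad
  GA = k \, H^2 \, \sigma_p, \qquad
  GAM = \frac{GA}{\bar{x}} \times 100
\]
% where \sigma^2_g and \sigma^2_p are the genotypic and phenotypic
% variances (H^2 is reported as a percentage), k is the selection
% differential (2.06 at 5% selection intensity), \bar{x} is the trait
% mean, and GAM is the genetic advance as a percentage of the mean.
```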
Gaming disorder has become a global concern and can have a variety of health and social consequences. The trauma model has been applied to the understanding of different types of addiction, as behavioral addictions can sometimes be conceptualized as self-soothing strategies to avoid trauma-related stressors or triggers. However, much less is known about the relationship between trauma exposure and gaming disorder.
Objectives
To inform prevention and intervention strategies and to facilitate further research, we conducted the first scoping review to explore and summarize the literature on the relationship between trauma and gaming disorder.
Methods
A systematic search was conducted on the Web of Science, Scopus and ProQuest. We looked for original studies published in English that included a measure of trauma exposure and a measure of gaming disorder symptoms, as well as quantitative data regarding the relationship between trauma exposure and gaming disorder.
Results
The initial search generated 412 articles, of which 15 met the inclusion criteria. All of them were cross-sectional studies, recruiting participants from both clinical and non-clinical populations. Twelve of them (80%) reported significant correlations between trauma exposure and the severity of gaming disorder symptoms (r = 0.18 to 0.46, p < 0.010). Several potential mediators, including depressive symptoms and dissociative experiences, have been identified. One study found that parental monitoring moderated the relationship between trauma and gaming disorder symptoms. No studies reported the prevalence of trauma or trauma-related symptoms among people with gaming disorder.
Conclusions
There is some evidence supporting the association between trauma and gaming disorder, at small to medium effect sizes. Future studies should investigate the mediators and moderators underlying the relationship between trauma and gaming disorder. The longitudinal relationship between trauma exposure and the development of gaming disorder should be clarified. A trauma-informed approach may be a helpful strategy to alleviate gaming disorder symptoms.
Depression is the largest global contributor to non-fatal disease burden(1). A growing body of evidence suggests that dietary behaviours, such as higher fruit and vegetable intake, may be protective against the risk of depression(2). However, this evidence comes primarily from high-income countries, despite over 80% of the burden of depression being experienced in low- and middle-income countries(1). There are also limited studies to date focusing on older adults. The aim of this study was to prospectively examine the associations between baseline fruit and vegetable intake and incidence of depression in adults aged 45 years and older from 10 cohorts across six continents, including four cohorts from low- and middle-income countries. The association between baseline fruit and vegetable intake and incident depression over a 3–6-year follow-up period was examined using Cox proportional hazards regression after controlling for a range of potential confounders. Participants were 7771 community-based adults aged 45+ years from 10 diverse cohorts. All cohorts were members of the Cohort Studies of Memory in an International Consortium collaboration(3). Fruit intake (excluding juice) and vegetable intake were collected using either a comprehensive food frequency questionnaire, a short food questionnaire or a diet history. Depressive symptoms were assessed using validated depression measures, and depression was defined as a score greater than or equal to a validated cut-off. Prior to analysis, all data were harmonised. Analysis was performed by cohort, and cohort results were then combined using meta-analysis. Subgroup analyses were performed by sex, age (45–64 versus 65+ years) and country income level (high-income versus low- and middle-income countries). There were 1537 incident cases of depression over 32,420 person-years of follow-up. Mean daily intakes were 1.7 ± 1.5 serves of fruit and 1.9 ± 1.4 serves of vegetables. We found no association between fruit and vegetable intakes and risk of incident depression in any of the analyses, and this was consistent across the subgroup analyses. The low fruit and vegetable intake of participants, the diverse measures used across the different cohorts, and the modest sample size of our study compared with prior studies in the literature may have prevented an association from being detected. Further investigation using standardised measures in larger cohorts of older adults from low- and middle-income countries is needed. Future research should consider the potential relationship between different types of fruits and vegetables and depression.
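As an illustration of the analysis pipeline described above (per-cohort Cox regression followed by meta-analytic pooling), here is a minimal sketch; the column names, covariates and the lifelines library are assumptions, not details from the study:

```python
# Sketch of the pipeline described above: per-cohort Cox regression, then
# meta-analytic pooling of log hazard ratios. Column names, covariates and
# the lifelines library are assumptions, not details from the study.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def cohort_log_hr(df: pd.DataFrame):
    """Return (log HR, SE) for daily fruit/vegetable serves."""
    cph = CoxPHFitter()
    cph.fit(df[["followup_years", "depression", "fv_serves", "age", "sex"]],
            duration_col="followup_years", event_col="depression")
    return cph.params_["fv_serves"], cph.standard_errors_["fv_serves"]

def pool_cohorts(log_hrs, ses):
    """Inverse-variance pooled hazard ratio across cohorts."""
    w = 1.0 / np.asarray(ses) ** 2
    pooled = np.sum(w * np.asarray(log_hrs)) / np.sum(w)
    return np.exp(pooled)
```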
Direct numerical simulations (DNSs) of three-dimensional cylindrical release gravity currents in a linearly stratified ambient are presented. The simulations cover a range of stratification strengths $0< S\leq 0.8$, where $S=(\rho_b^*-\rho_0^*)/(\rho_c^*-\rho_0^*)$ and $\rho_b^*$, $\rho_0^*$ and $\rho_c^*$ are the dimensional densities at the bottom of the domain, at the top of the domain and of the dense fluid, respectively, at two different Reynolds numbers. A comparison between the stratified and unstratified cases illustrates the influence of stratification strength on the dynamics of cylindrical gravity currents. Specifically, the front velocity in the slumping phase decreases with increasing stratification strength, whereas the duration of the slumping phase increases with increasing $S$. The Froude number calculated in this phase shows good agreement with the models proposed by Ungarish & Huppert (J. Fluid Mech., vol. 458, 2002, pp. 283–301) and Ungarish (J. Fluid Mech., vol. 548, 2006, pp. 49–68), originally developed for planar gravity currents in a stratified ambient. In the inertial phase, the front velocity across cases with different stratification strengths adheres to a power-law scaling with an exponent of $-1/2$. Higher Reynolds numbers lead to more frequent lobe splitting and merging, with lobe size diminishing as stratification strength increases. Strong interactions among inner vortex rings occur during the slumping phase, leading to the early formation of hairpin vortices in weakly stratified cases, whereas strongly stratified cases exhibit delayed vortex formation and less turbulence.
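Written out, the inertial-phase result above corresponds to the standard power-law form (the paper's exact nondimensionalisation is not reproduced here):

```latex
% The inertial-phase scaling reported above, in its standard form (the
% paper's exact nondimensionalisation is not reproduced here):
\[
  u_f \propto t^{-1/2} \quad \Longrightarrow \quad x_f \propto t^{1/2}
\]
% where u_f is the front velocity and x_f the front position. For the
% slumping phase, the Froude number is conventionally defined as
% Fr = u_f / \sqrt{g' h}, with g' the reduced gravity and h the current
% depth.
```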
This chapter describes laboratory verification and clinical validation of tests for the detection of SARS-CoV-2. As new SARS-CoV-2 tests were being developed early in the pandemic, extensive laboratory verification studies to “test the tests” were conducted at ACME POCT at Emory University. Initial testing was performed in a Biosafety Level 3 facility to determine whether the assays could detect propagated SARS-CoV-2 under ideal conditions and to evaluate the specificity of these tests. We then describe the establishment of a biorepository to bank SARS-CoV-2 variant samples and the use of these samples to determine whether tests could detect new variants with the same sensitivity as the original wild-type virus. This chapter also describes the clinical validation of tests using samples collected from individuals at testing centers. The clinical validation core requires careful planning for staffing and personnel training, semi-permanent and mobile clinical sites, defining inclusion and exclusion criteria, and data collection and reporting. Our experience demonstrated the importance of developing strong relationships with academic and private partners to facilitate clinical site setup, marketing, and purchasing.
Population-wide restrictions during the COVID-19 pandemic may create barriers to mental health diagnosis. This study aims to examine changes in the number of incident cases and the incidence rates of mental health diagnoses during the COVID-19 pandemic.
Methods
Using electronic health records from France, Germany, Italy, South Korea and the UK and claims data from the US, this study conducted interrupted time-series analyses to compare the monthly incident cases and the incidence of diagnoses of depressive disorders, anxiety disorders, alcohol misuse or dependence, substance misuse or dependence, bipolar disorders, personality disorders and psychoses before (January 2017 to February 2020) and after (April 2020 to the latest available date of each database [up to November 2021]) the introduction of COVID-related restrictions.
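A minimal sketch of the interrupted time-series model described above, assuming a Poisson regression with a level-change term; the variable names and the statsmodels implementation are illustrative assumptions:

```python
# Sketch of an interrupted time-series (ITS) model for monthly incident
# cases, assuming Poisson regression with a level-change term.
# Column names and data are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_its(df: pd.DataFrame):
    """df has one row per month:
    cases  - incident diagnoses that month
    time   - months elapsed since January 2017
    post   - 1 from April 2020 onward (restrictions in place), else 0
    logpop - log population at risk (offset converts counts to incidence)
    """
    model = smf.glm("cases ~ time + post + time:post", data=df,
                    offset=df["logpop"], family=sm.families.Poisson())
    res = model.fit()
    # exp(coefficient of `post`) is the immediate level change: the rate
    # ratio (RR) of post- versus pre-restriction incidence.
    return res
```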
Results
A total of 629,712,954 individuals were enrolled across nine databases. Following the introduction of restrictions, an immediate decline was observed in the number of incident cases of all mental health diagnoses in the US (rate ratios (RRs) ranged from 0.005 to 0.677) and in the incidence of all conditions in France, Germany, Italy and the US (RRs ranged from 0.002 to 0.422). In the UK, significant reductions were only observed in common mental illnesses. The number of incident cases and the incidence began to return to or exceed pre-pandemic levels in most countries from mid-2020 through 2021.
Conclusions
Healthcare providers should be prepared to deliver service adaptations to mitigate burdens directly or indirectly caused by delays in the diagnosis and treatment of mental health conditions.
Despite replicated cross-sectional evidence of aberrant levels of peripheral inflammatory markers in individuals with major depressive disorder (MDD), there is limited literature on associations between inflammatory tone and response to sequential pharmacotherapies.
Objectives
To assess associations between plasma levels of pro-inflammatory markers and treatment response to escitalopram and adjunctive aripiprazole in adults with MDD.
Methods
In a 16-week open-label clinical trial, 211 participants with MDD were treated with escitalopram 10–20 mg daily for 8 weeks. Responders continued on escitalopram while non-responders received adjunctive aripiprazole 2–10 mg daily for a further 8 weeks. Plasma levels of pro-inflammatory markers – C-reactive protein, interleukin (IL)-1β, IL-6, IL-17, interferon gamma (IFN-γ), tumour necrosis factor (TNF)-α and chemokine C-C motif ligand-2 (CCL-2) – measured at baseline and after 2, 8 and 16 weeks were included in logistic regression analyses to assess associations between inflammatory markers and treatment response.
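A minimal sketch of the logistic regression described above, with hypothetical column names and covariates (the study's exact model specification is not given in the abstract):

```python
# Sketch of the logistic regression described above: pre-treatment marker
# levels predicting week-8 response. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

def marker_response_model(df: pd.DataFrame):
    # response: 1 = responder at week 8, 0 = non-responder
    # ifn_g, ccl2: pre-treatment plasma levels (age/sex as covariates)
    res = smf.logit("response ~ ifn_g + ccl2 + age + sex", data=df).fit()
    # exp(coef) is the odds ratio per unit increase in a marker; an
    # OR < 1 corresponds to the lower odds of response reported above.
    return res
```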
Results
Pre-treatment levels of IFN-γ and CCL-2 were significantly higher in escitalopram non-responders than in responders. Pre-treatment IFN-γ and CCL-2 levels were significantly associated with lower odds of response to escitalopram at 8 weeks. Increases in CCL-2 levels from week 8 to week 16 in escitalopram non-responders were significantly associated with higher odds of non-response to adjunctive aripiprazole at week 16.
Conclusions
Pre-treatment levels of IFN-γ and CCL-2 were predictive of response to escitalopram. Increasing levels of these pro-inflammatory markers may predict non-response to adjunctive aripiprazole. These findings require validation in independent clinical populations.
Multiple organizations track neurosurgical surgical-site infection (SSI) rates, but significant variation exists among reporting criteria. We report our center’s experience with the variation in cases captured by 2 major definitions. Standardization could support improvement activities and SSI reduction.
Background: Many cognitive tests detect mild cognitive impairment (MCI) and dementia, including the Montreal Cognitive Assessment (MoCA) and the Rowland Universal Dementia Assessment Scale (RUDAS). The comparative performance of these screening tests for identifying MCI and dementia is unknown. Methods: The MoCA and RUDAS were administered during baseline visits for patients in the Calgary Neurosciences Program. For those enrolled in the Prospective Registry of Persons with Memory Symptoms (PROMPT), test scores were compared with the final clinical diagnosis. Cut-off scores of 26 for the MoCA and 22 for the RUDAS were used to indicate a positive result. The sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) and accuracy of both cognitive scores were compared. Results: The sensitivity, specificity, PPV, NPV and accuracy of the MoCA (n = 125) were 89.3%, 72.7%, 93.9%, 59.3% and 86.4%, respectively. The sensitivity, specificity, PPV, NPV and accuracy of the RUDAS (n = 125) were 47.6%, 100%, 100%, 29.0% and 56.8%, respectively. Conclusions: In patients with cognitive complaints presenting to a specialist clinic, the MoCA was more sensitive and accurate than the RUDAS for a final clinician diagnosis of mild cognitive impairment or dementia when using the standard cut-offs.
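The metrics above follow directly from a 2x2 confusion matrix. The sketch below shows the standard formulas; the counts in the example are a reconstruction consistent with the reported MoCA percentages (n = 125), not figures taken from the paper:

```python
# Standard screening-test metrics from a 2x2 confusion matrix. The counts
# in the example are reconstructed to be consistent with the reported
# MoCA percentages (n = 125); they are not taken from the paper.
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),            # true positive rate
        "specificity": tn / (tn + fp),            # true negative rate
        "ppv": tp / (tp + fp),                    # positive predictive value
        "npv": tn / (tn + fn),                    # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

print(diagnostic_metrics(tp=92, fp=6, fn=11, tn=16))
# sensitivity 0.893, specificity 0.727, ppv 0.939, npv 0.593, accuracy 0.864
```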
Background: The Montreal Cognitive Assessment (MoCA) and Rowland Universal Dementia Assessment Scale (RUDAS) are tests used to detect mild cognitive impairment (MCI) and dementia. However, it has been suggested that the MoCA may not be appropriate for diverse populations, and that the relatively newer RUDAS may be better suited as a universal cognitive test. Methods: The MoCA and RUDAS were administered at baseline visits for participants enrolled in the Prospective Registry of Persons with Memory Symptoms (PROMPT). Test scores were compared for patients with different levels of educational attainment, first language and race using the Kruskal-Wallis test. Results: The differences in MoCA (p = 0.029) and RUDAS (p = 0.0041) scores between patients with different levels of educational attainment (n = 141) were significant. The differences in MoCA (p = 0.62) and RUDAS (p = 0.78) scores between patients with a different first language (n = 141) were not significant. The differences in MoCA (p = 0.64) and RUDAS (p = 0.96) scores between patients of different race (n = 141) were not significant. Conclusions: The MoCA and RUDAS behaved similarly across levels of educational attainment, first language and race. The results suggest that the RUDAS may not be more appropriate than the MoCA for detecting MCI and dementia across diverse populations.
Fontan baffle puncture and the creation of a Fontan fenestration for cardiac catheterisation procedures remain challenging, particularly due to heavy calcification of prosthetic material and complex anatomy.
Objectives:
We sought to evaluate our experience using radiofrequency current via surgical electrocautery needle for Fontan baffle puncture to facilitate diagnostic, electrophysiology, and interventional procedures.
Methods:
A retrospective chart review of all Fontan patients (pts) who underwent Fontan baffle puncture using radiofrequency energy via surgical electrocautery at three centres was performed, covering January 2011 to July 2021.
Results:
A total of 19 pts underwent 22 successful Fontan baffle punctures. The median age and weight were 17 years (range 3–36) and 55 kg (range 14–88), respectively. The procedural indications for Fontan fenestration creation included: diagnostic study (n = 1), atrial septostomy and stenting (n = 1), electrophysiology study and ablation procedures (n = 8), Fontan baffle stenting for Fontan failure including protein-losing enteropathy (n = 7), and occlusion of veno-venous collaterals for cyanosis (n = 2). The types of Fontan baffle included: extra-cardiac conduit (n = 12), lateral tunnel (n = 5), classic atrio-pulmonary connection (n = 1), and intra-cardiac baffle (n = 1). Fontan baffle puncture was initially attempted unsuccessfully using the traditional method in 6 pts and the Baylis radiofrequency trans-septal system in 2 pts. In all pts, Fontan baffle puncture using radiofrequency energy via an electrocautery needle was successful. The radiofrequency energy utilised was 10–50 W, requiring 1–5 attempts of 2–5 seconds each. There were no vascular or neurological complications.
Conclusions:
Radiofrequency current delivery using surgical electrocautery facilitates Fontan baffle puncture in patients with complex and calcified Fontan baffles for diagnostic, interventional, and electrophysiology procedures.
This study quantifies the effect of fertilizer and irrigation management on water use efficiency (WUE), crop growth and crop yield under the sub-humid to semi-arid conditions of Limpopo Province, South Africa. An approach coupling a cropping system model (DSSAT) with an agro-hydrological model (SWAT) was developed and applied to simulate crop yield at the field and catchment scales. Simulation results indicated that fertilizer application has a greater positive effect on maize yield than irrigation. WUE ranged from 0.10–0.57 kg/m3 (rainfed) to 0.84–1.39 kg/m3 (irrigated) and was positively correlated with fertilizer application rate. The variant combining deficit irrigation with a fertilizer rate of 120:60 kg N:P/ha for maize turned out to be the best option, giving the highest WUE and increasing average yield by up to 5.7 t/ha compared with unfertilized rainfed cultivation (1.3 t/ha). The simulated results at the catchment scale showed considerable spatial variability of maize yield across agricultural fields with different soils, slopes and climate conditions. The average annual simulated maize yield across the catchment corresponding to the highest WUE ranged from 4.0 to 7.0 t/ha. The yield gaps ranged from 3.0 to 6.0 t/ha under deficit irrigation combined with 120N:60P kg/ha, and from 0.2 to 1.5 t/ha when applying only deficit irrigation without fertilizer. This information can support regional decision makers in identifying interventions to improve crop yield and WUE at the catchment/regional scale.
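As a unit-conversion aside, WUE in kg/m3 can be computed from yield in t/ha and seasonal water use in mm; the sketch below uses illustrative values, not study data:

```python
# Unit conversion behind the WUE values above: yield per unit of water
# consumed. Values in the example are illustrative, not study data.
def wue_kg_per_m3(yield_t_ha: float, water_mm: float) -> float:
    """WUE in kg/m3 from yield in t/ha and seasonal water use in mm.
    1 t/ha = 1000 kg/ha; 1 mm of water over 1 ha = 10 m3/ha.
    """
    return (yield_t_ha * 1000.0) / (water_mm * 10.0)

print(wue_kg_per_m3(7.0, 550.0))  # ~1.27 kg/m3, within the irrigated range
```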
Background: The phase 3 PREEMPT program established the safety and efficacy of 155-195U onabotulinumtoxinA in adults with chronic migraine (CM). This analysis of the PREDICT study (NCT02502123) evaluates the real-world effectiveness and safety of 155U, 156-195U and 195U onabotulinumtoxinA in CM. Methods: Patients received onabotulinumtoxinA approximately every 12 weeks (≤7 treatment cycles [Tx]) per the Canadian product monograph. The primary endpoint was mean change from baseline in Migraine-Specific Quality of Life (MSQ) at Tx4. Headache days and physician and patient satisfaction were also evaluated. The safety population (≥1 onabotulinumtoxinA dose) was stratified into 3 groups (155U, 156-195U, 195U) by the dose received on ≥3 of the first 4 Tx. Results: 184 patients received ≥1 onabotulinumtoxinA dose (155U, n=68; 156-195U, n=156; 195U, n=13 on ≥3 Tx). Headache days decreased over time compared with baseline (Tx4: -7.1[6.7] 155U; -6.5[6.7] 156-195U; -11.2[6.4] 195U). Physicians rated most patients as improved, and the majority of patients were satisfied at the final visit (80.8% 155U; 83.6% 156-195U; 90% 195U). Treatment-emergent adverse events (TEAEs) were reported in 18/68 (26.5%) patients in the 155U group, 41/65 (63.1%) in the 156-195U group and 10/13 (76.9%) in the 195U group; treatment-related TEAEs occurred in 9 (13.2%), 10 (15.4%) and 3 (23.1%), respectively; serious TEAEs occurred in 0, 3 (4.6%) and 1 (7.7%), none treatment-related. Conclusions: Long-term treatment with 155U, 156-195U and 195U onabotulinumtoxinA in PREDICT was a safe and effective CM treatment. No new safety signals were identified.
The body of evidence regarding self-management programs (SMPs) for adult chronic non-cancer pain (CNCP) is steadily growing, and regular updates are needed for effective decision-making.
Objectives:
To systematically identify, critically appraise, and summarize the findings from randomized controlled trials (RCTs) of SMPs for CNCP.
Methods:
We searched relevant databases from 2009 to August 2021 and included English-language RCT publications of SMPs compared with usual care for CNCP among adults (18+ years old). The primary outcome was health-related quality of life (HR-QoL). We conducted a meta-analysis using an inverse-variance, random-effects model, calculating the standardized mean difference (SMD) with its associated 95% confidence interval (CI), and assessed statistical heterogeneity using the I² statistic.
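A minimal sketch of the inverse-variance, random-effects pooling described above, using the DerSimonian-Laird estimator (one common choice; the review does not name its software):

```python
# Sketch of inverse-variance, random-effects pooling of SMDs using the
# DerSimonian-Laird estimator (a common choice; the review does not name
# its software). Inputs: per-trial SMDs and their standard errors.
import numpy as np

def random_effects_pool(smd: np.ndarray, se: np.ndarray):
    w = 1.0 / se**2                               # fixed-effect weights
    pooled_fe = np.sum(w * smd) / np.sum(w)
    q = np.sum(w * (smd - pooled_fe) ** 2)        # Cochran's Q
    dof = len(smd) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - dof) / c)                # between-trial variance
    w_re = 1.0 / (se**2 + tau2)                   # random-effects weights
    pooled = np.sum(w_re * smd) / np.sum(w_re)
    se_p = np.sqrt(1.0 / np.sum(w_re))
    ci = (pooled - 1.96 * se_p, pooled + 1.96 * se_p)
    i2 = 100.0 * max(0.0, (q - dof) / q) if q > 0 else 0.0  # I^2 (%)
    return pooled, ci, i2
```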
Results:
From 8538 citations, we included 28 RCTs with varying patient populations, standards for SMPs, and usual care. No RCTs were classified as having a low risk of bias. There was no evidence of a significant improvement in overall HR-QoL, irrespective of pain type, immediately post-intervention (SMD 0.01, 95% CI −0.21 to 0.24; I² 57%; 11 RCTs; 979 participants), 1–4 months post-intervention (SMD 0.02, 95% CI −0.16 to 0.20; I² 48.7%; 12 RCTs; 1160 participants), or 6–12 months post-intervention (SMD 0.07, 95% CI −0.06 to 0.21; I² 26.1%; 9 RCTs; 1404 participants). Findings were similar for physical and mental HR-QoL and for specific QoL assessment scales (e.g., SF-36).
Conclusions:
There is a lack of evidence that SMPs are efficacious for CNCP compared with usual care. Standardization of SMPs for CNCP and better planned/conducted RCTs are needed to confirm these conclusions.
Volunteering is a popular activity among middle-aged and older adults as a means to contribute to society and to maintain personal health and wellbeing. While the benefits of volunteering have been well documented, the current literature tends not to distinguish between types of volunteering activity. This cross-sectional study compares the effects of instrumental (e.g. food preparation, fundraising) and cognitively demanding (e.g. befriending, mentoring) volunteering activities in a sample of 487 middle-aged and older Hong Kong Chinese adults. Participation in instrumental and cognitively demanding volunteering, life satisfaction, depressive symptoms, cognitive functioning and hand-grip strength were measured. Two-way between-subject robust analyses of variance demonstrated significant main effects of volunteering type, and a significant interaction with age, on life satisfaction and depressive symptoms. Comparisons among four volunteering groups (no volunteering, instrumental volunteering, cognitively demanding volunteering and both types) revealed that individuals engaging in instrumental volunteering exhibited lower life satisfaction and more depressive symptoms than those who engaged in cognitively demanding volunteering and those who did not volunteer at all. This detrimental pattern of instrumental volunteering was seen only in middle-aged adults, not in older adults. The findings reveal distinctive effects of the two volunteering types and provide valuable directions for designing future volunteering programmes.
Brief measures of the subjective experience of stress with good predictive capability are important in a range of community mental health and research settings. The potential for large-scale implementation of such a measure for screening may facilitate early risk detection and intervention opportunities. Few such measures, however, have been developed and validated in epidemiological and longitudinal community samples. We designed a new single-item measure of the subjective level of stress (SLS-1) and tested its validity and ability to predict long-term mental health outcomes over up to 12 months in two separate studies.
Methods
We first examined the content and face validity of the SLS-1 with a panel consisting of mental health experts and laypersons. Two studies were conducted to examine its validity and predictive utility. In study 1, we tested the convergent and divergent validity as well as incremental validity of the SLS-1 in a large epidemiological sample of young people in Hong Kong (n = 1445). In study 2, in a consecutively recruited longitudinal community sample of young people (n = 258), we first performed the same procedures as in study 1 to ensure replicability of the findings. We then examined in this longitudinal sample the utility of the SLS-1 in predicting long-term depressive, anxiety and stress outcomes assessed at 3 months and 6 months (n = 182) and at 12 months (n = 84).
Results
The SLS-1 demonstrated good content and face validity. Findings from the two studies showed that the SLS-1 was moderately to strongly correlated with a range of mental health outcomes, including depressive, anxiety, stress and distress symptoms. We also demonstrated its ability to explain variance in symptoms beyond other known personal and psychological factors. Using the longitudinal sample in study 2, we further showed the significant predictive capability of the SLS-1 for long-term symptom outcomes for up to 12 months, even when accounting for demographic characteristics.
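A minimal sketch of the incremental-validity analysis described above (hierarchical regression comparing models with and without the SLS-1); variable names are hypothetical:

```python
# Sketch of the incremental-validity analysis described above: does the
# SLS-1 explain symptom variance beyond known factors? Names hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

def incremental_r2(df: pd.DataFrame) -> float:
    base = smf.ols("symptoms ~ age + sex + neuroticism", data=df).fit()
    full = smf.ols("symptoms ~ age + sex + neuroticism + sls1", data=df).fit()
    return full.rsquared - base.rsquared  # delta R^2 attributable to SLS-1
```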
Conclusions
The findings altogether support the validity and predictive utility of the SLS-1 as a brief measure of stress with strong indications of both concurrent and long-term mental health outcomes. Given the value of brief measures of mental health risks at a population level, the SLS-1 may have potential for use as an early screening tool to inform early preventative intervention work.
Damage to the corticospinal tract (CST) from stroke leads to motor deficits. The damage can be quantified as the amount of overlap between the stroke lesion and CST (CST Injury). Previous literature has shown that the degree of motor deficits post-stroke is related to the amount of CST Injury. These studies delineate the stroke lesion from structural T1-weighted magnetic resonance imaging (MRI) scans, often acquired for research. In Canada, computed tomography (CT) is the most common imaging modality used in routine acute stroke care. In this proof-of-principle study, we determine whether CST Injury, using lesions delineated from CT scans, significantly explains the variability in motor impairment in individuals with stroke.
Methods:
Thirty-seven participants with stroke were included in this study. These individuals had a CT scan within the acute stage (first 7 days) of their stroke and underwent motor assessments. Brain images from CT scans were registered to MRI space. We performed a stepwise regression analysis to determine the contribution of CST Injury and demographic variables in explaining motor impairment variability.
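A minimal sketch of how a CST Injury metric of this kind can be computed from co-registered binary masks; file names are hypothetical and nibabel is an assumed choice of reader:

```python
# Sketch of a CST Injury metric: overlap of a binary lesion mask with a
# binary CST mask, both in the same (MRI) space. File names are
# hypothetical; nibabel is an assumed choice of NIfTI reader.
import nibabel as nib
import numpy as np

lesion = nib.load("lesion_in_mri_space.nii.gz").get_fdata() > 0
cst = nib.load("cst_template.nii.gz").get_fdata() > 0

overlap_voxels = np.logical_and(lesion, cst).sum()
cst_injury = overlap_voxels / cst.sum()  # fraction of CST voxels lesioned
print(f"CST Injury: {cst_injury:.2%}")
```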
Results:
Using clinically available CT scans, we found modest evidence that CST Injury explains variability in motor impairment (R²adj = 0.12, p = 0.02). None of the participant demographic variables entered the model.
Conclusion:
We show for the first time a relationship between CST Injury and motor impairment using CT scans. Further work is required to evaluate the utility of data derived from clinical CT scans as a biomarker of stroke motor recovery.
Background: Unintentional opioid overdoses in and around acute care hospitals, including in the ED, are of increasing concern. In April 2018, the Addiction Recovery and Community Health (ARCH) Team at the Royal Alexandra Hospital opened the first acute care Supervised Consumption Service (SCS) in North America available to inpatients. In the SCS, patients can consume substances by injection or by oral or intranasal routes under nursing supervision; immediate assistance is provided if an overdose occurs. After a quality assurance review, work began to expand SCS access to ED patients as well. Aim Statement: By expanding SCS access to ED patients, we aim to reduce unintentional and unwitnessed opioid overdoses in registered ED patients to 0 per month by the end of 2020. Measures & Design: Between June 13-July 15, 2019, ARCH ED Registered Nurses were asked to identify ED patients with a history of active substance use who might require SCS access. Nurses identified 69 patients over 43 eight-hour shifts (range 0-4 patients per shift); thus, we anticipated an average of 5 ED patients per 24-hour period potentially requiring SCS access. Based on this evidence of need, ARCH leadership worked with a) the hospital legal team and Health Canada to expand SCS access to ED patients; and b) ED leadership to develop a procedure and flowchart for ED SCS access. ED patients were able to access the SCS effective October 1, 2019. Evaluation/Results: From October 1 to December 1, 2019, the SCS had 35 visits by 23 unique ED patients. The median time spent in the SCS was 42.5 minutes (range 14.0-140.0 minutes). Methamphetamine was the most commonly used substance (19, 45.2%), followed by fentanyl (10, 23.8%); substances were all injected (91.4% into a vein and 8.6% into an existing IV). In this time period, there were zero unintentional, unwitnessed opioid poisonings in registered ED patients. Data collection is ongoing and will expand to include chief complaint, ED length of stay and discharge status. Discussion/Impact: Reducing unintentional overdoses and unwitnessed injection drug use in the ED has the potential to improve both patient and staff safety. Next steps include a case series designed to examine the impact of SCS access on emergency care, retention in treatment and uptake into addiction treatment.