Prior reports of healthcare-associated respiratory syncytial virus (RSV) have been limited to cases diagnosed after the third day of hospitalization. Omitting other healthcare settings where RSV transmission may occur underestimates the true incidence of healthcare-associated RSV.
Design:
Retrospective cross-sectional study.
Setting:
United States RSV Hospitalization Surveillance Network (RSV-NET) during the 2016–2017 through 2018–2019 seasons.
Patients:
Laboratory-confirmed RSV-related hospitalizations in an eight-county catchment area in Tennessee.
Methods:
Surveillance data from RSV-NET were used to evaluate the population-level burden of healthcare-associated RSV. The incidence of healthcare-associated RSV was determined using the traditional definition (i.e., a positive RSV test after hospital day 3), supplemented with often under-recognized cases associated with recent post-acute care facility admission or a recent acute care hospitalization for a non-RSV illness in the preceding 7 days.
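To make the expanded case definition concrete, a minimal sketch of the classification logic follows; every field name is a hypothetical placeholder, not an actual RSV-NET variable.

```python
# Illustrative sketch of the expanded case definition described above.
def is_healthcare_associated(case: dict) -> bool:
    # Traditional definition: first positive RSV test after hospital day 3
    if case["first_positive_hospital_day"] > 3:
        return True
    # Expanded definition: positive test in the first 3 days plus a recent
    # healthcare exposure where transmission may have occurred
    days_since_discharge = case["days_since_non_rsv_acute_discharge"]
    return (
        case["transferred_from_post_acute_care"]
        or (days_since_discharge is not None and days_since_discharge <= 7)
    )
```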
Results:
Among the 900 laboratory-confirmed RSV-related hospitalizations, 41 (4.6%) met the traditional definition of healthcare-associated RSV. Including patients whose positive RSV test was obtained in the first 3 days of hospitalization and who were either transferred to the hospital directly from a post-acute care facility or discharged from an acute care hospitalization for a non-RSV illness in the preceding 7 days identified an additional 95 cases (10.6% of all RSV-related hospitalizations).
Conclusions:
RSV is an often under-recognized healthcare-associated infection. Capturing other healthcare exposures that may serve as the initial site of viral transmission may provide more comprehensive estimates of the burden of healthcare-associated RSV and inform improved infection prevention strategies and vaccination efforts.
Posttraumatic stress disorder (PTSD) has been associated with advanced epigenetic age cross-sectionally, but the association between these variables over time is unclear. This study conducted meta-analyses to test whether new-onset PTSD diagnosis and changes in PTSD symptom severity over time were associated with changes in two metrics of epigenetic aging over two time points.
Methods
We conducted meta-analyses of the association between change in PTSD diagnosis and symptom severity and change in epigenetic age acceleration/deceleration (age-adjusted DNA methylation age residuals as per the Horvath and GrimAge metrics) using data from 7 military and civilian cohorts participating in the Psychiatric Genomics Consortium PTSD Epigenetics Workgroup (total N = 1,367).
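The within-cohort model implied by the interaction terms reported below can be reconstructed, under the assumption of a cohort-specific covariate vector $\mathbf{c}$ (the exact covariate set is not stated in the abstract), roughly as

$$\text{Resid}_{T2} = \beta_0 + \beta_1\,\text{Resid}_{T1} + \beta_2\,\Delta\text{PTSD} + \beta_3\,(\text{Resid}_{T1}\times\Delta\text{PTSD}) + \boldsymbol{\gamma}^{\top}\mathbf{c} + \varepsilon,$$

where $\text{Resid}$ denotes the age-adjusted DNA methylation age residuals, $\Delta\text{PTSD}$ is new-onset diagnosis or change in symptom severity, and $\beta_3$ is the interaction coefficient pooled across the seven cohorts by meta-analysis.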
Results
Meta-analysis revealed that the interaction between Time 1 (T1) Horvath age residuals and new-onset PTSD over time was significantly associated with Horvath age residuals at T2 (meta β = 0.16, meta p = 0.02, p-adj = 0.03). The interaction between T1 Horvath age residuals and changes in PTSD symptom severity over time was significantly related to Horvath age residuals at T2 (meta β = 0.24, meta p = 0.05). No associations were observed for GrimAge residuals.
Conclusions
Results indicated that individuals who developed new-onset PTSD or showed increased PTSD symptom severity over time evidenced greater epigenetic age acceleration at follow-up than would be expected based on baseline age acceleration. This suggests that PTSD may accelerate biological aging over time and highlights the need for intervention studies to determine if PTSD treatment has a beneficial effect on the aging methylome.
Commercializing targeted sprayer systems allows producers to reduce herbicide inputs but risks leaving emerging weeds untreated. Targeted applications with the John Deere system currently allow five spray sensitivity settings, and no published literature discusses the impact of these settings on detecting and spraying weeds of varying species, sizes, and positions in crops. Research was conducted in Arkansas, Illinois, Indiana, Mississippi, and North Carolina in corn, cotton, and soybean to determine how various factors might influence the ability of targeted applications to treat weeds. The data included 21 weed species aggregated into six classes, with heights and widths each ranging from 0.25 to 25 cm and densities ranging from 0.04 to 14.3 plants m-2. Crop and weed density did not influence the likelihood of treating the weeds. As expected, the sensitivity setting altered the ability to treat weeds. Targeted applications (across sensitivity settings, at median weed height and width and a density of 2.4 plants m-2) resulted in treatment success of 99.6% to 84.4% for Convolvulaceae, 99.1% to 68.8% for decumbent broadleaf weeds, 98.9% to 62.9% for Malvaceae, 99.1% to 70.3% for Poaceae, 98.0% to 48.3% for Amaranthaceae, and 98.5% to 55.8% for yellow nutsedge. Reducing the sensitivity setting reduced the ability to treat weeds. Weed size aided targeted application success: larger weeds were more easily detected and thus more readily treated. Based on these findings, various conditions could impact the outcome of targeted multi-nozzle applications, and the analyses highlight some of the parameters to consider when using these technologies.
The clinical high risk for psychosis (CHR-p) syndrome enables early identification of individuals at risk of schizophrenia and related disorders. We differentiate between the stigma associated with the at-risk identification itself (‘labelling-related’ stigma) versus stigma attributed to experiencing mental health symptoms (‘symptom-related’ stigma) and examine their relationships with key psychosocial variables.
Aims
We compare labelling- and symptom-related stigma in rates of endorsement and associations with self-esteem, social support loss and quality of life.
Method
We assessed stigma domains of shame-related emotions, secrecy and experienced discrimination for both types of stigma. Individuals at CHR-p were recruited across three sites (N = 150); primary analyses included those who endorsed awareness of psychosis risk (n = 113). Paired-sample t-tests examined differences in labelling- versus symptom-related stigma; regressions examined associations with psychosocial variables, controlling for covariates, including CHR-p symptoms.
Results
Respondents reported greater symptom-related shame, but more labelling-related secrecy. Of the nine significant associations between stigma and psychosocial variables, eight were attributable to symptom-related stigma, even after adjusting for CHR-p symptoms.
Conclusions
Stigma attributed to symptoms had a stronger negative association with psychosocial variables than did labelling-related stigma among individuals recently identified as CHR-p. That secrecy related to the CHR-p designation was greater than its symptom-related counterpart suggests that labelling-related stigma may still be problematic for some CHR-p participants. To optimise this pivotal early intervention effort, interventions should address the holistic ‘stigmatising experience’ of having symptoms, namely any harmful reactions received as well as participants’ socially influenced concerns about what their experiences mean, in addition to the symptoms themselves.
New machine-vision technologies like the John Deere See & Spray™ could reduce herbicide use by detecting weeds and target-spraying herbicides simultaneously. Experiments were conducted for 2 yr in Keiser, AR, and Greenville, MS, to compare residual herbicide timings and targeted spray applications with traditional broadcast herbicide programs in glyphosate/glufosinate/dicamba-resistant soybean. Treatments utilized consistent herbicides and rates: a preemergence (PRE) application, followed by an early postemergence (EPOST) dicamba application, followed by a mid-postemergence (MPOST) glufosinate application. All treatments included a residual at PRE and either excluded or included a residual EPOST and MPOST. The herbicide application method was also considered: traditional broadcast applications, a broadcast residual plus targeted applications of postemergence herbicides (dual tank), or targeted applications of all herbicides (single tank). Targeted applications provided control comparable to broadcast applications, with a ≤1% decrease in efficacy and overall control ≥93% for Palmer amaranth, broadleaf signalgrass, morningglory species, and purslane species. Targeted sprays also slightly reduced soybean injury, by at most 5 percentage points across all evaluations, although these effects did not translate to a yield increase at harvest. The relationship between weed area and targeted sprayed area also indicates that nozzle angle can influence potential herbicide savings, with narrower nozzle angles spraying less area. On average, targeted sprays saved 28.4% to 62.4% of postemergence herbicides. On the basis of these results, with specific machine settings, targeted application programs could reduce the amount of herbicide applied while providing weed control comparable to that of traditional broadcast applications.
In response to the COVID-19 pandemic, we rapidly implemented a plasma coordination center within two months to support transfusion for two outpatient randomized controlled trials. The center design was based on an investigational drug services model and a Food and Drug Administration-compliant database to manage blood product inventory and trial safety.
Methods:
A core investigational team adapted a cloud-based platform to randomize patient assignments and track inventory distribution of control plasma and high-titer COVID-19 convalescent plasma of different blood groups from 29 donor collection centers directly to blood banks serving 26 transfusion sites.
Results:
We performed 1,351 transfusions in 16 months. The transparency of the digital inventory at each site was critical to facilitate qualification, randomization, and overnight shipments of blood group-compatible plasma for transfusions into trial participants. While inventory challenges were heightened with COVID-19 convalescent plasma, the cloud-based system and the flexible approach of the plasma coordination center staff across the blood bank network enabled decentralized procurement and distribution of investigational products to maintain inventory thresholds and overcome local supply chain constraints at the sites.
Conclusion:
The rapid creation of a plasma coordination center for outpatient transfusions is infrequent in the academic setting. Distributing more than 3,100 plasma units to blood banks charged with managing investigational inventory across the U.S. in a decentralized manner posed operational and regulatory challenges while providing opportunities for the plasma coordination center to contribute to research of global importance. This program can serve as a template in subsequent public health emergencies.
Passive oxygenation with a non-rebreather face mask (NRFM) has been used during cardiac arrest as an alternative to positive pressure ventilation (PPV) with a bag-valve-mask (BVM) to minimize chest compression disruptions. A dual-channel pharyngeal oxygen delivery device (PODD) was created to open obstructed upper airways and provide oxygen at the glottic opening. It was hypothesized that the PODD could deliver oxygen as efficiently as a BVM or an NRFM with an oropharyngeal airway (OPA) in a cardiopulmonary resuscitation (CPR) manikin model.
Methods:
Oxygen concentration was measured in test lungs within a resuscitation manikin. These lungs were modified to mimic physiologic volumes, expansion, collapse, and recoil. Automated compressions were administered. Five trials were performed for each of five arms: (1) CPR with 30:2 compression-to-ventilation ratio using BVM with 15 liters per minute (LPM) oxygen; continuous compressions with passive oxygenation using (2) NRFM and OPA with 15 LPM oxygen, (3) PODD with 10 LPM oxygen, (4) PODD with 15 LPM oxygen; and (5) control arm with compressions only.
Results:
Mean peak oxygen concentrations were: (1) 30:2 CPR with BVM 49.3% (SD = 2.6%); (2) NRFM 47.7% (SD = 0.2%); (3) PODD with 10 LPM oxygen 52.3% (SD = 0.4%); (4) PODD with 15 LPM oxygen 62.7% (SD = 0.3%); and (5) control 21% (SD = 0%). Oxygen concentrations rose rapidly and remained steady with passive oxygenation, unlike 30:2 CPR with BVM, which rose after each ventilation and decreased until the next ventilation cycle (sawtooth pattern, mean concentration 40% [SD = 3%]).
Conclusions:
Continuous compressions and passive oxygenation with the PODD resulted in higher lung oxygen concentrations than NRFM and BVM while minimizing CPR interruptions in a manikin model.
Major depressive disorder (MDD) is the leading cause of disability globally, with moderate heritability and well-established socio-environmental risk factors. Genetic studies have been mostly restricted to European settings, with polygenic scores (PGS) demonstrating low portability across diverse global populations.
Methods
This study examines genetic architecture, polygenic prediction, and socio-environmental correlates of MDD in a family-based sample of 10 032 individuals from Nepal with array genotyping data. We used genome-based restricted maximum likelihood to estimate heritability, applied S-LDXR to estimate the cross-ancestry genetic correlation between Nepalese and European samples, and modeled PGS trained on a GWAS meta-analysis of European and East Asian ancestry samples.
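The association model underlying the estimates reported below can be sketched as follows; the family-level random effect $u_j$ is an assumption based on the family-based design, and the link function is not specified in the abstract:

$$\text{MDD}_{ij} = \beta_0 + \beta_1\,\text{age}_{ij} + \beta_2\,\text{sex}_{ij} + \beta_3\,\text{CT}_{ij} + \beta_4\,\text{PGS}_{ij} + \beta_5\,(\text{PGS}_{ij}\times\text{CT}_{ij}) + u_j + \varepsilon_{ij},$$

where $\text{CT}$ denotes childhood exposure to potentially traumatic events and $i$ indexes individuals within family $j$.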
Results
We estimated the narrow-sense heritability of lifetime MDD in Nepal to be 0.26 (95% CI 0.18–0.34, p = 8.5 × 10−6). Our analysis was underpowered to estimate the cross-ancestry genetic correlation (rg = 0.26, 95% CI −0.29 to 0.81). MDD risk was associated with higher age (beta = 0.071, 95% CI 0.06–0.08), female sex (beta = 0.160, 95% CI 0.15–0.17), and childhood exposure to potentially traumatic events (beta = 0.050, 95% CI 0.03–0.07), while neither the depression PGS (beta = 0.004, 95% CI −0.004 to 0.01) nor its interaction with childhood trauma (beta = 0.007, 95% CI −0.01 to 0.03) was strongly associated with MDD.
Conclusions
Estimates of lifetime MDD heritability in this Nepalese sample were similar to previous European ancestry samples, but PGS trained on European data did not predict MDD in this sample. This may be due to differences in ancestry-linked causal variants, differences in depression phenotyping between the training and target data, or setting-specific environmental factors that modulate genetic effects. Additional research among under-represented global populations will ensure equitable translation of genomic findings.
Can the experience of being ostracized – ignored and excluded – lead to people being more open to extremism? In this chapter we review the theoretical basis and experimental evidence for such a connection. According to the temporal need-threat model (Williams, 2009), ostracism is a painful experience that threatens fundamental social needs. Extreme groups have the potential to be powerful sources of inclusion and could therefore address these needs, thereby making them especially attractive to recent targets of ostracism. We also identify a set of factors that is theoretically likely to affect this link and review evidence for the opposite causal path: People are especially likely to ostracize others who belong to extreme groups. Together, this suggests a possible negative cycle in which ostracism may push people toward extreme groups, on which they become more reliant as social contacts outside the group further ostracize them.
We sought to validate available tools for predicting recurrent C. difficile infection (CDI), including recurrence risk scores (those of Larrainzar-Coghen, Reveles, D’Agostino, Cobo, and Eyre and colleagues), alongside consensus guidelines risk criteria, the leading severity score (ATLAS), and PCR cycle threshold (as a marker of fecal organism burden), using electronic medical records.
Design:
Retrospective cohort study validating previously described tools.
Setting:
Tertiary care academic hospital.
Patients:
Hospitalized adult patients with CDI at University of Virginia Medical Center.
Methods:
Risk scores were calculated within ±48 hours of index CDI diagnosis using a large retrospective cohort of 1,519 inpatient infections spanning 7 years and compared using area under the receiver operating characteristic curve (AUROC) and the DeLong test. Recurrent CDI events (defined as a repeat positive test or symptom relapse within 60 days requiring retreatment) were confirmed by clinician chart review.
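As an illustration of the comparison step, the sketch below computes AUROC for two competing risk scores on a labeled cohort and contrasts them. The study used the DeLong test, for which scikit-learn has no built-in implementation, so a paired bootstrap stands in here as an explicitly substituted alternative.

```python
# Hedged sketch of comparing two recurrence risk scores by AUROC.
import numpy as np
from sklearn.metrics import roc_auc_score

def compare_risk_scores(y, score_a, score_b, n_boot=2000, seed=0):
    # y: binary 60-day recurrence labels; score_a/score_b: competing risk
    # scores. All three are NumPy arrays of equal length.
    rng = np.random.default_rng(seed)
    observed = roc_auc_score(y, score_a) - roc_auc_score(y, score_b)
    diffs = []
    n = len(y)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)  # resample patients with replacement
        if len(np.unique(y[idx])) < 2:
            continue  # AUROC is undefined without both classes
        diffs.append(roc_auc_score(y[idx], score_a[idx])
                     - roc_auc_score(y[idx], score_b[idx]))
    diffs = np.asarray(diffs)
    # Two-sided p-value: how often the resampled difference crosses zero
    p = min(1.0, 2 * min((diffs <= 0).mean(), (diffs >= 0).mean()))
    return observed, p
```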
Results:
The Reveles et al tool achieved the highest AUROC, 0.523 (0.537 among a subcohort of 1,230 patients with their first occurrence of CDI), which was not substantially better than the other tools, including the current IDSA/SHEA C. difficile guidelines criteria or PCR cycle threshold (AUROC: 0.564), regardless of prior infection history.
Conclusions:
All tools performed poorly for predicting recurrent C. difficile infection (AUROC range: 0.488–0.564), especially among patients with a prior history of infection (AUROC range: 0.436–0.591). Future studies may benefit from considering novel biomarkers and/or higher-dimensional models that could augment or replace existing tools that underperform.
The Minnesota Longitudinal Study of Risk and Adaptation (MLSRA) is a landmark prospective, longitudinal study of human development focused on a sample of mothers experiencing poverty and their firstborn children. Although the MLSRA pioneered a number of important topics in the area of social and emotional development, it began with the more specific goal of examining the antecedents of child maltreatment. From that foundation and for more than 40 years, the study has produced a significant body of research on the origins, sequelae, and measurement of childhood abuse and neglect. The principal objectives of this report are to document the early history of the MLSRA and its contributions to the study of child maltreatment and to review and summarize results from the recently updated childhood abuse and neglect coding of the cohort, with particular emphasis on findings related to adult adjustment. While doing so, we highlight key themes and contributions from Dr Dante Cicchetti’s body of research and developmental psychopathology perspective to the MLSRA, a project launched during his tenure as a graduate student at the University of Minnesota.
OBJECTIVES/GOALS: Contingency management (CM) procedures yield measurable reductions in cocaine use. This poster describes a trial aimed at using CM as a vehicle to show the biopsychosocial health benefits of reduced use, rather than total abstinence, the currently accepted metric for treatment efficacy. METHODS/STUDY POPULATION: In this 12-week, randomized controlled trial, CM was used to reduce cocaine use and evaluate associated improvements in cardiovascular, immune, and psychosocial well-being. Adults aged 18 and older who sought treatment for cocaine use (N = 127) were randomized in a 1:1:1 ratio into three groups: High Value ($55) or Low Value ($13) CM incentives for cocaine-negative urine samples, or a non-contingent control group. They completed outpatient sessions three days per week across the 12-week intervention period, totaling 36 clinic visits and four post-treatment follow-up visits. During each visit, participants provided observed urine samples and completed several assays of biopsychosocial health. RESULTS/ANTICIPATED RESULTS: Preliminary findings from generalized linear mixed-effects modeling demonstrate the feasibility of the CM platform. Abstinence rates from cocaine use were significantly greater in the High Value group (47% negative; OR = 2.80; p = 0.01) relative to the Low Value (23% negative) and Control groups (24% negative). In the planned primary analysis, the level of cocaine use reduction based on cocaine-negative urine samples will serve as the primary predictor of cardiovascular (e.g., endothelin-1 levels), immune (e.g., IL-10 levels), and psychosocial (e.g., Addiction Severity Index) outcomes using results from the fitted models. DISCUSSION/SIGNIFICANCE: This research will advance the field by prospectively and comprehensively demonstrating the beneficial effects of reduced cocaine use. These outcomes can, in turn, support the adoption of reduced cocaine use as a viable alternative endpoint in cocaine treatment trials.
The Reading the Mind in the Eyes Test (RMET) – which assesses the theory of mind component of social cognition – is often used to compare social cognition between patients with schizophrenia and healthy controls. There is, however, no systematic review integrating the results of these studies. We identified 198 studies published before July 2020 that administered RMET to patients with schizophrenia or healthy controls from three English-language and two Chinese-language databases. These studies included 41 separate samples of patients with schizophrenia (total n = 1836) and 197 separate samples of healthy controls (total n = 23 675). The pooled RMET score was 19.76 (95% CI 18.91–20.60) in patients and 25.53 (95% CI 25.19–25.87) in controls (z = 12.41, p < 0.001). After excluding small-sample outlier studies, this difference in RMET performance was greater in studies using non-English v. English versions of RMET (Q = 8.54, p < 0.001). Meta-regression analyses found a negative association of age with RMET score and a positive association of years of schooling with RMET score in both patients and controls. A secondary meta-analysis using a spline construction of 180 healthy control samples identified a non-monotonic relationship between age and RMET score – RMET scores increased with age before age 31 and decreased with age after age 31. These results indicate that patients with schizophrenia have substantial deficits in theory of mind compared with healthy controls, supporting the construct validity of RMET as a measure of social cognition. The different results for English versus non-English versions of RMET and the non-monotonic relationship between age and RMET score highlight the importance of the language of administration of RMET and the possibility that the relationship of aging with theory of mind is different from the relationship of aging with other types of cognitive functioning.
Profiling patients on a proposed ‘immunometabolic depression’ (IMD) dimension, described as a cluster of atypical depressive symptoms related to energy regulation and immunometabolic dysregulations, may optimise personalised treatment.
Aims
To test the hypothesis that baseline IMD features predict poorer treatment outcomes with antidepressants.
Method
Data on 2551 individuals with depression across the iSPOT-D (n = 967), CO-MED (n = 665), GENDEP (n = 773) and EMBARC (n = 146) clinical trials were used. Predictors included baseline severity of atypical energy-related symptoms (AES), body mass index (BMI) and C-reactive protein levels (CRP, three trials only) separately and aggregated into an IMD index. Mixed models on the primary outcome (change in depressive symptom severity) and logistic regressions on secondary outcomes (response and remission) were conducted for the individual trial data-sets and pooled using random-effects meta-analyses.
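The pooling step can be written out as a standard inverse-variance random-effects meta-analysis (the between-trial heterogeneity estimator, e.g. DerSimonian–Laird or REML, is not specified in the abstract):

$$\beta_{\text{pooled}} = \frac{\sum_k w_k \hat{\beta}_k}{\sum_k w_k}, \qquad w_k = \frac{1}{\mathrm{SE}_k^2 + \hat{\tau}^2},$$

where $\hat{\beta}_k$ and $\mathrm{SE}_k$ are the trial-level estimate and standard error, $\hat{\tau}^2$ is the between-trial variance, and $I^2$ summarizes the share of total variability attributable to heterogeneity.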
Results
Although AES severity and BMI did not predict changes in depressive symptom severity, higher baseline CRP predicted smaller reductions in depressive symptoms (n = 376, βpooled = 0.06, P = 0.049, 95% CI 0.0001–0.12, I2 = 3.61%); this was also found for an IMD index combining these features (n = 372, βpooled = 0.12, s.e. = 0.12, P = 0.031, 95% CI 0.01–0.22, I2 = 23.91%), with a higher – but still small – effect size compared with CRP. Confining analyses to selective serotonin reuptake inhibitor users indicated larger effects of CRP (βpooled = 0.16) and the IMD index (βpooled = 0.20). Baseline IMD features, both separately and combined, did not predict response or remission.
Conclusions
Depressive symptoms of people with more IMD features improved less when treated with antidepressants. However, clinical relevance is limited owing to small effect sizes in inconsistent associations. Whether these patients would benefit more from treatments targeting immunometabolic pathways remains to be investigated.
Definitive diagnosis of Alzheimer’s disease (AD) is often unavailable, so clinical diagnoses with some degree of inaccuracy are often used in research instead. When researchers test methods that may improve clinical accuracy, the error in initial diagnosis can penalize predictions that are more accurate to true diagnoses but differ from clinical diagnoses. To address this challenge, the current study investigated the use of a simple bias adjustment for use in logistic regression that accounts for known inaccuracy in initial diagnoses.
Participants and Methods:
A Bayesian logistic regression model was developed to predict unobserved/true diagnostic status given the sensitivity and specificity of an imperfect reference. This model treats cases as a mixture of true positives (rate = sensitivity) and false positives (rate = 1 − specificity), while controls are a mixture of true negatives (rate = specificity) and false negatives (rate = 1 − sensitivity). This bias adjustment was tested using Monte Carlo simulations over four conditions that varied the accuracy of clinical diagnoses. Each condition used 1,000 iterations, each generating a random dataset of n = 1,000 based on a true logistic model with an intercept and three arbitrary predictors. Coefficients were randomly selected in each iteration and used to produce two sets of diagnoses: true diagnoses and observed diagnoses with imperfect accuracy. The sensitivity and specificity of the simulated clinical diagnosis varied across the four conditions (C): C1 = (0.77, 0.60), C2 = (0.87, 0.44), C3 = (0.71, 0.71), and C4 = (0.83, 0.55), values derived from published accuracy of clinical AD diagnoses against autopsy-confirmed pathology. Unadjusted and bias-adjusted logistic regressions were then fit to the simulated data to determine the models’ accuracy in estimating regression parameters and predicting true diagnosis.
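A minimal sketch of this kind of misclassification-adjusted model, assuming a PyMC implementation with weakly informative priors (the authors’ actual software and priors are not stated in the abstract):

```python
# Hedged sketch: Bayesian logistic regression adjusted for an imperfect
# reference standard. Illustrative only, not the authors' implementation.
import pymc as pm

def fit_bias_adjusted(X, y_observed, sensitivity=0.77, specificity=0.60):
    # X: (n, k) predictor matrix; y_observed: imperfect clinical labels.
    # Default Se/Sp correspond to condition C1 above.
    n, k = X.shape
    with pm.Model():
        beta0 = pm.Normal("beta0", 0.0, 2.5)
        beta = pm.Normal("beta", 0.0, 2.5, shape=k)
        # Probability of *true* diagnostic status given the predictors
        p_true = pm.math.invlogit(beta0 + pm.math.dot(X, beta))
        # Observed labels mix true and false positives:
        # P(label = 1 | x) = Se * p_true + (1 - Sp) * (1 - p_true)
        p_obs = sensitivity * p_true + (1.0 - specificity) * (1.0 - p_true)
        pm.Bernoulli("y", p=p_obs, observed=y_observed)
        idata = pm.sample(1000, tune=1000, chains=2, random_seed=0)
    return idata
```

Setting sensitivity = specificity = 1 recovers an ordinary Bayesian logistic regression, which is one way to sanity-check the adjustment.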
Results:
Under all conditions, the bias-adjusted logistic regression model outperformed its unadjusted counterpart. Root mean square error (the variability of estimated coefficients around their true parameter values) ranged from 0.23 to 0.79 for the unadjusted model versus 0.24 to 0.29 for the bias-adjusted model. The empirical coverage rate (the proportion of 95% credible intervals that include their true parameter) ranged from 0.00 to 0.47 for the unadjusted model versus 0.95 to 0.96 for the bias-adjusted model. Finally, the bias-adjusted model produced the best overall diagnostic accuracy, with correct classification of true diagnostic values about 78% of the time versus 62–72% without adjustment.
Conclusions:
Results of this simulation study, which used published AD sensitivity and specificity statistics, provide evidence that bias adjustments to logistic regression models are needed when research involves diagnoses from an imperfect standard. Unadjusted methods rarely identified true effects: credible intervals for coefficients included the true value anywhere from never to less than half of the time. Additional simulations are needed to examine the bias-adjusted model’s performance under additional conditions. Future research is needed to extend the bias adjustment to multinomial logistic regressions and to scenarios where the rate of misdiagnosis is unknown. Such methods may be valuable for improving detection of other neurological disorders with greater diagnostic error as well.
Learning curve patterns on list-learning tasks can help clinicians determine the nature of memory difficulties, as an “impaired” score may actually reflect attention and/or executive difficulties rather than a true memory impairment. Though such pattern analysis is often qualitative, there are quantitative methods to assess these concepts that have been generally underutilized. This study aimed to develop a model that decomposes learning over repeated trials into separate cognitive processes and then include other testing data to predict performance at each trial as a function of general cognitive functioning.
Participants and Methods:
Data for CVLT-II learning trials were obtained from patients referred for clinical reasons to an outpatient neuropsychology service within an academic medical center. Participants with a cognitive diagnosis of non-demented (ND) or probable Alzheimer’s disease (AD) were included. The final sample consisted of 323 ND [Mage = 58.6 (14.8); Medu = 15.4 (2.7); 55.7% female] and 915 AD [Mage = 72.6 (9.0); Medu = 14.2 (3.1); 60.1% female] cases. A Bayesian non-linear beta-binomial multilevel model was used, which uses three parameters to predict CVLT-II recall-by-trial: verbal attention span (VAS), maximal learning potential (MLP), and learning rate (LR). Briefly, VAS predicts expected first-trial performance, MLP conversely predicts the expected best performance as trials are repeated, and LR weights the influence of VAS versus MLP over repeated trials. Predictors of these parameters included age, education, sex, race, and clinical diagnosis, in addition to raw scores on Trail Making Test Parts A and B, phonemic (FAS) fluency, animal fluency, Boston Naming Test, and Wisconsin Card Sorting Test (WCST) Categories Completed, as well as age-adjusted scaled scores from WAIS-IV Digit Span, Block Design, Vocabulary, and Coding. Random intercepts were included for each parameter and extracted for comparison of residual differences by diagnosis.
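One plausible parameterization consistent with this description (the authors’ exact link and weighting functions are not given in the abstract) lets the expected proportion recalled rise from the first-trial level VAS toward the asymptote MLP at rate LR:

$$\pi_t = \text{MLP} - (\text{MLP} - \text{VAS})\,e^{-\text{LR}\,(t-1)}, \qquad y_t \sim \text{Beta-Binomial}(16,\ \pi_t,\ \phi),$$

where $y_t$ is the number of the 16 CVLT-II list words recalled on trial $t$ and $\phi$ is a dispersion parameter; at $t = 1$, $\pi_1 = \text{VAS}$, and $\pi_t \to \text{MLP}$ as trials accumulate.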
Results:
The model explained 84% of the variance in CVLT-II raw scores. VAS decreased with age and with time to complete Trails B but improved with both verbal fluencies and confrontation naming. MLP increased as a function of WAIS-IV Digit Span, animal fluency, confrontation naming, and WCST Categories Completed. Finally, LR was greater for females, increased with WAIS-IV Coding and Vocabulary performance, and decreased with age. Participants with AD had lower estimates on all three parameters (Cohen’s d = 2.49 [VAS] to 3.48 [LR]), though including demographic and neuropsychological tests attenuated these differences (Cohen’s d = 0.34 [LR] to 0.95 [MLP]).
Conclusions:
The resulting model highlights how non-memory neuropsychological deficits affect list-learning test performance. At the same time, the model demonstrated that memory patterns on the CVLT-II can still be identified beyond other confounding deficits, since having AD affected all parameters independent of other cognitive impairments. The modeling approach can generate conditional learning curves for individual patient data, and when multiple diagnoses are included in the model, a person-fit statistic can be computed to return the most likely diagnosis for an individual. The model can also be used in research to quantify or adjust for the effect of other patient data (e.g., neuroimaging, biomarkers, medications).
Children and young people with congenital heart disease (CHD) benefit from regular physical activity. Parents are reported to be both facilitators of and barriers to their children’s physical activity. The aim of this study was to explore how parental factors, child factors, and clinical experience influence physical activity participation in young people with CHD.
Methods:
An online questionnaire was co-developed with parents (n = 3) who have children with CHD. The survey was then distributed in the United Kingdom by social media and CHD networks, between October 2021 and February 2022. Data were analysed using mixed methods.
Results:
Eighty-three parents/guardians responded (94% mothers). Young people with CHD were 7.3 ± 5.0 years old (range 0–20 years; 53% female), and 84% performed activity. Parental participation in activity (χ2(1) = 6.9, P < 0.05) and perceiving activity as important for their child (Fisher’s exact test, P < 0.05) were positively associated with the child’s activity. Some parents (∼15%) were unsure of the safety of activity, and most (∼70%) were unsure where to access further information about activity. Fifty-two parents (72%) had never received activity advice in clinic, and of the 20 who received advice, 10 said it was inconsistent. Qualitative analysis produced the theme “Knowledge is power and comfort.” Parents described not knowing what activity was appropriate or what impact it would have on their child.
Conclusion:
Parental participation and attitudes towards activity potentially influence their child’s activity. A large proportion of young people performed activity despite the lack, or inconsistency, of activity advice offered by CHD clinics. Young people with CHD and their families would benefit from activity advice in clinics.
The U.S. Department of Agriculture–Agricultural Research Service (USDA-ARS) has been a leader in weed science research covering topics ranging from the development and use of integrated weed management (IWM) tactics to basic mechanistic studies, including biotic resistance of desirable plant communities and herbicide resistance. ARS weed scientists have worked in agricultural and natural ecosystems, including agronomic and horticultural crops, pastures, forests, wild lands, aquatic habitats, wetlands, and riparian areas. Through strong partnerships with academia, state agencies, private industry, and numerous federal programs, ARS weed scientists have made contributions to discoveries in the newest fields of robotics and genetics, as well as the traditional and fundamental subjects of weed–crop competition and physiology and integration of weed control tactics and practices. Weed science at ARS is often overshadowed by other research topics; thus, few are aware of the long history of ARS weed science and its important contributions. This review is the result of a symposium held at the Weed Science Society of America’s 62nd Annual Meeting in 2022 that included 10 separate presentations in a virtual Weed Science Webinar Series. The overarching themes of management tactics (IWM, biological control, and automation), basic mechanisms (competition, invasive plant genetics, and herbicide resistance), and ecosystem impacts (invasive plant spread, climate change, conservation, and restoration) represent core ARS weed science research that is dynamic and efficacious and has been a significant component of the agency’s national and international efforts. This review highlights current studies and future directions that exemplify the science and collaborative relationships both within and outside ARS. Given the constraints of weeds and invasive plants on all aspects of food, feed, and fiber systems, there is an acknowledged need to face new challenges, including agriculture and natural resources sustainability, economic resilience and reliability, and societal health and well-being.
Background:
Sex differences in treatment response to intravenous thrombolysis (IVT) are poorly characterized. We compared sex-disaggregated outcomes in patients receiving IVT for acute ischemic stroke in the Alteplase Compared to Tenecteplase (AcT) trial, a Canadian multicentre, randomised trial.
Methods:
In this post-hoc analysis, the primary outcome was excellent functional outcome (modified Rankin Score [mRS] 0–1) at 90 days. Secondary and safety outcomes included return to baseline function, successful reperfusion (eTICI ≥ 2b), death, and symptomatic intracerebral hemorrhage.
Results:
Of 1,577 patients, 755 were women and 822 were men (median age 77 [68–86] for women; 70 [59–79] for men). There were no differences in rates of mRS 0–1 (aRR 0.95 [0.86–1.06]), return to baseline function (aRR 0.94 [0.84–1.06]), reperfusion (aRR 0.98 [0.80–1.19]), or death (aRR 0.91 [0.79–1.18]). There was no effect modification by treatment type on the association between sex and outcomes. The probability of excellent functional outcome decreased with increasing onset-to-needle time, and this relation did not vary by sex (p-interaction = 0.42).
Conclusions:
The AcT trial demonstrated comparable functional, safety, and angiographic outcomes by sex, and this did not differ between alteplase and tenecteplase. The pragmatic enrolment and broad national participation in AcT provide reassurance that there do not appear to be sex differences in outcomes amongst Canadians receiving IVT.