It remains unclear which individuals with subthreshold depression benefit most from psychological intervention, and what long-term effects this has on symptom deterioration, response and remission.
Aims
To synthesise the benefits of psychological intervention in adults with subthreshold depression at follow-ups of up to 2 years, and to explore participant-level effect modifiers.
Method
Randomised trials comparing psychological intervention with inactive control were identified via systematic search. Authors were contacted to obtain individual participant data (IPD), analysed using Bayesian one-stage meta-analysis. Treatment–covariate interactions were added to examine moderators. Hierarchical-additive models were used to explore treatment benefits conditional on baseline Patient Health Questionnaire 9 (PHQ-9) values.
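For readers wanting a concrete picture of the one-stage approach, the sketch below fits a simplified Bayesian IPD model in Python with PyMC. It is illustrative only, not the authors' actual model or software (analyses of this kind are often run in Stan or brms): the toy data, priors, and variable names (study, treat, x, y) are all assumptions, and the model includes just a pooled treatment effect, study-specific intercepts and effects, and one treatment-by-baseline-severity interaction of the kind used to examine moderators.

```python
import numpy as np
import pymc as pm

# Toy stand-in data; in the real analysis each row is one participant (names hypothetical).
rng = np.random.default_rng(0)
J, n = 5, 400                                 # number of studies, participants
study = rng.integers(0, J, n)                 # study membership
treat = rng.integers(0, 2, n)                 # 1 = psychological intervention
x = rng.normal(0, 1, n)                       # centred baseline severity (e.g. PHQ-9)
y = rng.normal(-0.4 * treat, 1.0, n)          # standardised outcome

with pm.Model() as ipd_model:
    mu_theta = pm.Normal("mu_theta", 0.0, 1.0)           # pooled treatment effect
    tau = pm.HalfNormal("tau", 0.5)                      # between-study heterogeneity
    theta = pm.Normal("theta", mu_theta, tau, shape=J)   # study-specific effects
    alpha = pm.Normal("alpha", 0.0, 2.0, shape=J)        # study-specific intercepts
    beta = pm.Normal("beta", 0.0, 1.0)                   # baseline-severity slope
    gamma = pm.Normal("gamma", 0.0, 1.0)                 # treatment x baseline interaction
    sigma = pm.HalfNormal("sigma", 1.0)                  # residual SD
    mu = alpha[study] + theta[study] * treat + beta * x + gamma * treat * x
    pm.Normal("y_obs", mu, sigma, observed=y)
    idata = pm.sample()                                  # posterior via NUTS
```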
Results
IPD from 10 671 individuals (50 studies) were included. We found significant effects on depressive symptom severity up to 12 months (standardised mean difference [s.m.d.] = −0.48 to −0.27). Effects at 24 months could not be ascertained (s.m.d. = −0.18). Similar findings emerged for 50% symptom reduction (relative risk = 1.27–2.79), reliable improvement (relative risk = 1.38–3.17), deterioration (relative risk = 0.67–0.54) and close-to-symptom-free status (relative risk = 1.41–2.80). Among participant-level moderators, only initial depression and anxiety severity were highly credible (P > 0.99). Predicted treatment benefits decreased with lower symptom severity but remained minimally important even for very mild symptoms (s.m.d. = −0.33 for PHQ-9 = 5).
Conclusions
Psychological intervention reduces the symptom burden in individuals with subthreshold depression up to 1 year, and protects against symptom deterioration. Benefits up to 2 years are less certain. We find strong support for intervention in subthreshold depression, particularly with PHQ-9 scores ≥ 10. For very mild symptoms, scalable treatments could be an attractive option.
The delivery of paediatric cardiac care across the world occurs in settings with significant variability in available resources. Irrespective of the resources locally available, we must always strive to improve the quality of care we provide to our patients and simultaneously deliver such care in the most efficient and cost-effective manner. The development of cardiac networks is a widely used strategy to achieve these aims.
Methods:
This paper reports three talks presented during the 56th meeting of the Association for European Paediatric and Congenital Cardiology held in Dublin in April 2023.
Results:
The three talks describe how centres of congenital cardiac excellence can be developed in low-income countries, middle-income countries, and well-resourced environments, and also report how centres across different countries can come together to collaborate and deliver high-quality care. Barriers to creating effective networks may arise from competition among programmes in unregulated, and especially privatised, healthcare environments. Nevertheless, reflecting on the creation of networks has important implications, because collaboration between different centres can facilitate the maintenance of sustainable programmes of paediatric and congenital cardiac care.
Conclusion:
This article examines the delivery of paediatric and congenital cardiac care in resource-limited environments, well-resourced environments, and within collaborative networks, with the hope that the lessons learned from these examples can be helpful to other institutions across the world. It is important to emphasise that, irrespective of the differences in resources across different continents, the critical principles underlying the provision of excellent care in different environments remain the same.
Hierarchical Bayes procedures for the two-parameter logistic item response model were compared for estimating item and ability parameters. Simulated data sets were analyzed via two joint and two marginal Bayesian estimation procedures. The marginal Bayesian estimation procedures yielded consistently smaller root mean square differences than the joint Bayesian estimation procedures for item and ability estimates. As the sample size and test length increased, the four Bayes procedures yielded essentially the same result.
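As background for readers unfamiliar with the model being estimated, the two-parameter logistic (2PL) item response function gives the probability of a correct response as a function of examinee ability and two item parameters. A minimal Python rendering (function and parameter names are ours, for illustration):

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL item response function: probability of a correct response given
    ability theta, item discrimination a, and item difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# An average-ability examinee on an average-difficulty item answers correctly
# half the time, regardless of the item's discrimination:
print(p_correct(theta=0.0, a=1.5, b=0.0))  # 0.5
```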
From early on, infants show a preference for infant-directed speech (IDS) over adult-directed speech (ADS), and exposure to IDS has been correlated with language outcome measures such as vocabulary. The present multi-laboratory study explores this link by investigating whether early preference for IDS predicts later vocabulary size. Infants' preference for IDS was tested as part of the ManyBabies 1 project, and follow-up vocabulary (CDI) data were collected from a subsample of this dataset at 18 and 24 months. A total of 341 (18 months) and 327 (24 months) infants were tested across 21 laboratories. Neither the preregistered analyses of North American and UK English samples nor exploratory analyses with a larger sample found evidence for a relation between IDS preference and later vocabulary. We discuss implications of this finding in light of recent work suggesting that IDS preference measured in the laboratory has low test-retest reliability.
The Accelerating COVID-19 Therapeutic Interventions and Vaccines (ACTIV) Cross-Trial Statistics Group gathered lessons learned from statisticians responsible for the design and analysis of the 11 ACTIV therapeutic master protocols to inform contemporary trial design as well as preparation for a future pandemic. The ACTIV master protocols were designed to rapidly assess what treatments might save lives, keep people out of the hospital, and help them feel better faster. Study teams initially worked without knowledge of the natural history of disease and thus without key information for design decisions. Moreover, the science of platform trial design was in its infancy. Here, we discuss the statistical design choices made and the adaptations forced by the changing pandemic context. Lessons around critical aspects of trial design are summarized, and recommendations are made for the organization of master protocols in the future.
Recent studies suggest that meta-learning may provide an original solution to an enduring puzzle about whether neural networks can explain compositionality – in particular, by raising the prospect that compositionality can be understood as an emergent property of an inner-loop learning algorithm. We elaborate on this hypothesis and consider its empirical predictions regarding the neural mechanisms and development of human compositionality.
Blood-based biomarkers represent a scalable and accessible approach for the detection and monitoring of Alzheimer's disease (AD). Plasma phosphorylated tau (p-tau) and neurofilament light (NfL) are validated biomarkers for the detection of tau and neurodegenerative brain changes in AD, respectively. There is now an emphasis on expanding beyond these markers to detect and provide insight into the pathophysiological processes of AD. To this end, a reactive astrocytic marker, plasma glial fibrillary acidic protein (GFAP), has been of interest. Yet, little is known about the relationship between plasma GFAP and AD. Here, we examined the association between plasma GFAP, diagnostic status, and neuropsychological test performance. The diagnostic accuracy of plasma GFAP was compared with plasma measures of p-tau181 and NfL.
Participants and Methods:
This sample included 567 participants from the Boston University (BU) Alzheimer's Disease Research Center (ADRC) Longitudinal Clinical Core Registry, including individuals with normal cognition (n=234), mild cognitive impairment (MCI) (n=180), and AD dementia (n=153). The sample included all participants who had a blood draw. Participants completed a comprehensive neuropsychological battery (sample sizes across tests varied due to missingness). Diagnoses were adjudicated during multidisciplinary diagnostic consensus conferences. Plasma samples were analyzed using the Simoa platform. Binary logistic regression analyses tested the association between GFAP levels and diagnostic status (i.e., cognitively impaired due to AD versus unimpaired), controlling for age, sex, race, education, and APOE e4 status. Area under the curve (AUC) statistics from receiver operating characteristic (ROC) curves, using predicted probabilities from binary logistic regression, examined the ability of plasma GFAP to discriminate diagnostic groups compared with plasma p-tau181 and NfL. Linear regression models tested the association between plasma GFAP and neuropsychological test performance, accounting for the above covariates.
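The core pipeline here (binary logistic regression, then AUC computed from the model's predicted probabilities) can be sketched in a few lines of Python with scikit-learn. This is a simplified illustration on simulated data, not the study's code; the column names mirror the covariates listed above but are hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Toy stand-in data with hypothetical column names.
rng = np.random.default_rng(0)
n = 567
df = pd.DataFrame({
    "gfap_z": rng.normal(size=n),            # z-transformed plasma GFAP
    "age": rng.normal(74, 8, n),
    "sex": rng.integers(0, 2, n),
    "education": rng.normal(16, 2, n),
    "apoe_e4": rng.integers(0, 2, n),
})
df["impaired"] = (df["gfap_z"] + rng.normal(size=n) > 0).astype(int)

X, y = df.drop(columns="impaired"), df["impaired"]
model = LogisticRegression(max_iter=1000).fit(X, y)    # binary logistic regression
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])   # AUC from predicted probabilities
```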
Results:
The mean (SD) age of the sample was 74.34 (7.54) years; 319 (56.3%) were female, 75 (13.2%) were Black, and 223 (39.3%) were APOE e4 carriers. Higher GFAP concentrations were associated with increased odds of cognitive impairment (GFAP z-score transformed: OR=2.233, 95% CI [1.609, 3.099], p<0.001; non-z-transformed: OR=1.004, 95% CI [1.002, 1.006], p<0.001). ROC analyses comprising GFAP and the above covariates showed that plasma GFAP discriminated the cognitively impaired from the unimpaired (AUC=0.75) and was similar, but slightly superior, to plasma p-tau181 (AUC=0.74) and plasma NfL (AUC=0.74). A joint panel of the plasma markers had the greatest discrimination accuracy (AUC=0.76). Linear regression analyses showed that higher GFAP levels were associated with worse performance on neuropsychological tests assessing global cognition, attention, executive functioning, episodic memory, and language abilities (ps<0.001), as well as higher CDR Sum of Boxes (p<0.001).
Conclusions:
Higher plasma GFAP levels differentiated participants with cognitive impairment from those with normal cognition and were associated with worse performance on all neuropsychological tests assessed. GFAP had similar accuracy in detecting those with cognitive impairment compared with p-tau181 and NfL; however, a panel of all three biomarkers was optimal. These results support the utility of plasma GFAP in AD detection and suggest that the pathological processes it represents might play an integral role in the pathogenesis of AD.
Blood-based biomarkers offer a more feasible alternative to current in vivo measures for the detection and management of Alzheimer's disease (AD) and the study of its mechanisms. Given their novelty, these plasma biomarkers must be validated against postmortem neuropathological outcomes. Research has shown the utility of plasma markers of the proposed AT(N) framework; however, recent studies have stressed the importance of expanding this framework to include other pathways. There are promising data supporting the usefulness of plasma glial fibrillary acidic protein (GFAP) in AD, but GFAP-to-autopsy studies are limited. Here, we tested the association between plasma GFAP and AD-related neuropathological outcomes in participants from the Boston University (BU) Alzheimer's Disease Research Center (ADRC).
Participants and Methods:
This sample included 45 participants from the BU ADRC who had a plasma sample within 5 years of death and donated their brain for neuropathological examination. The most recent plasma samples were analyzed using the Simoa platform. Neuropathological examinations followed the National Alzheimer's Coordinating Center procedures and diagnostic criteria; the NIA-Reagan Institute criteria were used for the neuropathological diagnosis of AD. Measures of GFAP were log-transformed. Binary logistic regression analyses tested the association between GFAP and autopsy-confirmed AD status, as well as with semi-quantitative ratings of regional atrophy (none/mild versus moderate/severe). Ordinal logistic regression analyses tested the association between plasma GFAP and Braak stage and CERAD neuritic plaque score. Area under the curve (AUC) statistics from receiver operating characteristic (ROC) curves, using predicted probabilities from binary logistic regression, examined the ability of plasma GFAP to discriminate autopsy-confirmed AD status. All analyses controlled for sex, age at death, years between last blood draw and death, and APOE e4 status.
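The ordinal piece of the analysis can be sketched with statsmodels' OrderedModel, a proportional-odds (ordinal logistic) regression. Again a toy illustration under assumed variable names, not the study's code, with covariates omitted for brevity:

```python
import numpy as np
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Toy stand-in data: log-transformed plasma GFAP and an ordinal Braak stage (0-6).
rng = np.random.default_rng(0)
n = 45
log_gfap = rng.normal(size=n)
braak = np.clip(np.round(2.0 * log_gfap + 3.0 + rng.normal(size=n)), 0, 6).astype(int)

# Proportional-odds regression of Braak stage on log GFAP.
model = OrderedModel(braak, log_gfap[:, None], distr="logit")
res = model.fit(method="bfgs", disp=False)
print(np.exp(res.params[0]))   # odds ratio per unit increase in log GFAP
```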
Results:
Of the 45 brain donors, 29 (64.4%) had autopsy-confirmed AD. The mean (SD) age of the sample at the time of blood draw was 80.76 (8.58) years, and there were 2.80 (1.16) years between the last blood draw and death. The sample included 20 (44.4%) females; 41 (91.1%) were White, and 20 (44.4%) were APOE e4 carriers. Higher GFAP concentrations were associated with increased odds of having autopsy-confirmed AD (OR=14.12, 95% CI [2.00, 99.88], p=0.008). ROC analysis showed that plasma GFAP discriminated those with and without autopsy-confirmed AD on its own (AUC=0.75), and discrimination strengthened when the above covariates were added to the model (AUC=0.81). Increases in GFAP levels corresponded to increases in Braak stage (OR=2.39, 95% CI [0.71, 4.07], p=0.005), but not CERAD ratings (OR=1.24, 95% CI [0.004, 2.49], p=0.051). Higher GFAP levels were associated with greater temporal lobe atrophy (OR=10.27, 95% CI [1.53, 69.15], p=0.017), but this was not observed for any other region.
Conclusions:
The current results show that antemortem plasma GFAP is associated with non-specific AD neuropathological changes at autopsy. Plasma GFAP could be a useful and practical biomarker for assisting in the detection of AD-related changes, as well as for the study of disease mechanisms.
The Pediatric Epilepsy Research Consortium (PERC) Epilepsy Surgery Database Project is a multisite collaborative that includes neuropsychological evaluations of children presenting for epilepsy surgery. There is some evidence for specific neuropsychological phenotypes within epilepsy (Hermann et al., 2016); however, this is less clear in pediatric patients. As a first step, we applied an empirically based subtyping approach to determine whether there were specific profiles using indices from the Wechsler scales [Verbal IQ (VIQ), Nonverbal IQ (NVIQ), Processing Speed Index (PSI), Working Memory Index (WMI)]. We hypothesized that there would be at least four profiles, distinguished by slow processing speed and poor working memory, as well as profiles with significant differences between verbal and nonverbal reasoning abilities.
Participants and Methods:
Our study included 372 children (M=12.1 years, SD=4.1; 77.4% White; 48% male) who completed an age-appropriate Wechsler measure with enough subtests to render at least two index scores. Epilepsy characteristics included 84.4% with focal epilepsy (evenly distributed between left and right focus) and 13.5% with generalized or mixed seizure types; mean age of onset = 6.7 years, SD = 4.5; seizure frequency ranged from daily to less than monthly; 53% had structural etiology; 71% had an abnormal MRI; and the mean number of antiseizure medications was two. Latent profile analysis was used to identify discrete underlying cognitive profiles based on intellectual functioning. Demographic and epilepsy characteristics were compared among profiles.
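Latent profile analysis is typically run in Mplus or R; as a rough Python analogue, a Gaussian mixture model with BIC-based class enumeration captures the same idea. The sketch below is an illustrative stand-in on simulated index scores, not the study's actual procedure:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy stand-in matrix: rows = children, columns = Wechsler indices (VIQ, NVIQ, WMI, PSI).
rng = np.random.default_rng(0)
X = rng.normal(85, 15, size=(372, 4))

# Class enumeration: fit 1-6 profile solutions and keep the one with the lowest BIC.
fits = {k: GaussianMixture(n_components=k, covariance_type="diag",
                           n_init=10, random_state=0).fit(X)
        for k in range(1, 7)}
best_k = min(fits, key=lambda k: fits[k].bic(X))
labels = fits[best_k].predict(X)    # profile membership for each child
means = fits[best_k].means_         # mean index scores per profile
```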
Results:
Based on class enumeration procedures, a 3-cluster solution provided the best fit for the data, with profiles characterized by generally Average, Low Average, or Below Average functioning. 32.8% of children were in the Average profile, with mean index scores ranging from 91.7 to 103.2; 47.6% were in the Low Average profile, with mean index scores ranging from 80.7 to 84.5; and 19.6% were in the Below Average profile, with mean index scores ranging from 55.0 to 63.1. Across all profiles, the lowest mean score was the PSI, followed by the WMI; VIQ and NVIQ represented relatively higher scores for all three profiles. The mean discrepancy between indices within a profile was as large as 11.5 IQ points. No demographic or epilepsy characteristics differed significantly across cognitive phenotypes.
Conclusions:
Latent cognitive phenotypes in a pediatric presurgical cohort were differentiated by general level of functioning; however, across profiles, processing speed was consistently the lowest index, followed by working memory. These findings suggest a common relative weakness across phenotypes, which may result from a global effect of antiseizure medications and/or the widespread impact of seizures on neural networks even in a largely focal epilepsy cohort, similar to findings in adult studies of temporal lobe epilepsy (Hermann et al., 2007). Future work will use latent profile analysis to examine phenotypes across other domains relevant to pediatric epilepsy, including attention, naming, motor, and memory functioning. These findings are in line with collaborative efforts towards cognitive phenotyping, which is the aim of our PERC Epilepsy Surgery Database Project, which has already established one of the largest pediatric epilepsy surgery cohorts.
The focus of this chapter is on neurobiologically informed and constrained models of working memory as defined by Miller, Galanter, and Pribram (1960): the holding of goals and subgoals in mind in service of planning and executing complex behaviors. In particular, the chapter focuses on models specifically addressing the critical challenges and mechanisms that follow from the need for rapid and selective gating of working memory contents. To start, the important computational challenges posed by the tradeoff between maintaining and updating are discussed, providing motivation for the rest of the chapter. After that, several seminal models that have contributed to current thinking are reviewed, including the authors' own PBWM framework, which has proven influential. Finally, several recent developments from the deep learning and neurophysiology literatures are addressed, and critical questions and some directions for future progress are discussed.
The aim of this study was to quantify the time delay between screening and initiation of contact isolation for carriers of extended-spectrum beta-lactamase (ESBL)–producing Enterobacterales (ESBL-E).
Methods:
This study was a secondary analysis of contact isolation periods in a cluster-randomized controlled trial that compared 2 strategies to control ESBL-E (trial no. ISRCTN57648070). Patients admitted to 20 non-ICU wards in Germany, the Netherlands, Spain, and Switzerland were screened for ESBL-E carriage on admission, weekly thereafter, and on discharge. Data collection included the day of sampling, the day the wards were notified of the result, and subsequent ESBL-E isolation days.
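Given day-level offsets like those collected here, the delay metrics reported below reduce to simple differences and quantiles. A hypothetical pandas sketch (column names assumed, not from the trial's data dictionary):

```python
import numpy as np
import pandas as pd

# Toy stand-in frame with day offsets relative to admission (day 0);
# isolation_start_day is NaN for carriers never placed in contact isolation.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "first_sample_day": rng.integers(1, 4, n).astype(float),
    "isolation_start_day": np.where(rng.random(n) < 0.4, np.nan,
                                    rng.integers(2, 8, n)),
})

sample_delay = df["first_sample_day"]                    # admission -> first screen
isolation_delay = df["isolation_start_day"] - df["first_sample_day"]
print(sample_delay.quantile([0.25, 0.5, 0.75]))          # median and IQR
print(isolation_delay.quantile([0.25, 0.5, 0.75]))       # delay to isolation
print(df["isolation_start_day"].isna().mean())           # share never isolated
```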
Results:
Between January 2014 and August 2016, 19,122 patients with a length of stay of ≥2 days were included. At least 1 culture was collected for 16,091 patients (84%), with a median of 2 days (interquartile range [IQR], 1–3) between the admission day and the day of first sample collection. Moreover, 854 (41%) of all 2,078 ESBL-E carriers remained without contact isolation during their hospital stay. In total, 6,040 ESBL-E days (32% of all ESBL-E days) accrued for patients who were not isolated. Of the 2,078 ESBL-E carriers, 1,478 (71%) had no previous history of ESBL-E carriage. Also, 697 (34%) were placed in contact isolation with a median delay of 4 days (IQR, 2–5), accounting for 2,723 nonisolation days (15% of ESBL-E days).
Conclusions:
Even with extensive surveillance screening, almost one-third of all ESBL-E days were nonisolation days. Limitations in routine culture-based ESBL-E detection impeded timely and exhaustive implementation of targeted contact isolation.
Only 6–8 % of UK adults meet the daily recommendation for dietary fibre. Fava bean processing leads to vast amounts of high-fibre by-products such as hulls. Bean hull fortified bread was formulated to increase and diversify dietary fibre intake while reducing waste. This study assessed the suitability of bean hulls as a source of dietary fibre, the systemic and microbial metabolism of their components, and postprandial events following consumption of bean hull bread rolls. Nine healthy participants (53·9 ± 16·7 years) were recruited to a randomised controlled crossover study, attending two 3-day intervention sessions that each involved the consumption of two bread rolls per day (control or bean hull rolls). Blood and faecal samples were collected before and after each session and analysed for systemic and microbial metabolites of bread roll components using targeted LC-MS/MS and GC analysis. Satiety, gut hormones, glucose, insulin and gastric emptying biomarkers were also measured. Two bean hull rolls provided over 85 % of the daily recommendation for dietary fibre; but despite being a rich source of plant metabolites (P = 0·04 v. control bread), these had poor systemic bioavailability. Consumption of bean hull rolls for 3 days significantly increased the plasma concentration of indole-3-propionic acid (P = 0·009) and decreased faecal concentrations of putrescine (P = 0·035) and deoxycholic acid (P = 0·046). However, it had no effect on postprandial plasma gut hormones, faecal bacterial composition or faecal short-chain fatty acid concentrations. Therefore, bean hulls require further processing to improve the systemic availability of their bioactives and their fibre fermentation.
In this 2019 cross-sectional study, we analyzed hospital records for Medicaid beneficiaries who acquired nonventilator hospital-acquired pneumonia (NVHAP). The results suggest that preventive dental treatment in the 12 months prior, or periodontal therapy in the 6 months prior, to a hospitalization is associated with a reduced risk of NVHAP.
OBJECTIVES/GOALS: Osteoarthritis (OA) is a cartilage-destroying disease. We are investigating abaloparatide (ABL) activation of parathyroid hormone receptor type 1 (PTH1R), which is expressed by articular chondrocytes in OA. We propose that ABL treatment is chondroprotective in murine post-traumatic OA (PTOA) via stimulation of matrix production and inhibition of chondrocyte maturation. METHODS/STUDY POPULATION: 16-week-old C57BL/6 male mice received destabilization of the medial meniscus (DMM) surgery to induce knee PTOA. Beginning 2 weeks post-DMM, 40 μg/kg of ABL (or saline) was administered daily via subcutaneous injection, and tissues were harvested after 6 weeks of daily injections, 8 weeks after DMM surgery. Harvested joint tissues were used for histological and molecular assessment of OA using three 5 μm thick sagittal sections from each joint, 50 μm apart, cut from the medial compartment of injured knees. Safranin O/Fast Green tissue staining and immunohistochemistry-based detection of type 10 collagen (Col10) and lubricin (Prg4) were performed using standard methods. Histomorphometric quantification of tibial cartilage area and larger hypertrophic-like cells was performed using the Osteomeasure system. RESULTS/ANTICIPATED RESULTS: Safranin O/Fast Green stained sections showed decreased cartilage loss in DMM joints from ABL-treated versus saline-treated mice. Histomorphometric analysis of total tibial cartilage area revealed preservation of cartilage tissue on the tibial surface. Immunohistochemical analyses showed that the upregulation of Col10 in DMM joints was mitigated in the cartilage of ABL-treated mice, and chondrocyte expression of Prg4 was increased in uncalcified cartilage areas in the ABL-treated group. The Prg4 finding suggests a matrix anabolic effect that may counter OA cartilage loss. Quantification of chondrocytes in uncalcified and calcified tibial cartilage areas revealed a reduction in the number of larger hypertrophic-like cells in ABL-treated mice, suggesting deceleration of hypertrophic differentiation. DISCUSSION/SIGNIFICANCE: Cartilage preservation/regeneration therapies would fill a critical unmet need. We demonstrate that an osteoporosis drug targeting PTH1R decelerates PTOA in mice. ABL treatment was associated with preservation of cartilage, decreased Col10, increased Prg4, and a decreased number of large hypertrophic-like chondrocytes in the tibial cartilage.
To examine the association between adherence to plant-based diets and mortality.
Design:
Prospective study. We calculated a plant-based diet index (PDI) by assigning positive scores to plant foods and reverse scores to animal foods. We also created a healthful PDI (hPDI) and an unhealthful PDI (uPDI) by further separating the healthy plant foods from less-healthy plant foods.
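A common way to operationalise such indices is to rank each food group's intake into quintiles, score plant groups positively and animal groups in reverse, and sum. The sketch below illustrates that logic on toy data; the function, food groups, and quintile-scoring details are assumptions for illustration, not the study's exact scoring:

```python
import numpy as np
import pandas as pd

def plant_based_diet_index(intake, plant_groups, animal_groups):
    """Illustrative PDI scoring: quintile-rank (1-5) each food group's intake;
    plant groups score positively, animal groups are reverse-scored; sum."""
    scores = pd.DataFrame(index=intake.index)
    for g in plant_groups:
        scores[g] = pd.qcut(intake[g], 5, labels=False, duplicates="drop") + 1
    for g in animal_groups:
        scores[g] = 5 - pd.qcut(intake[g], 5, labels=False, duplicates="drop")
    return scores.sum(axis=1)

# Toy stand-in intake data (servings/day) for two plant groups and one animal group.
rng = np.random.default_rng(0)
intake = pd.DataFrame({"whole_grains": rng.gamma(2, 1, 1000),
                       "vegetables": rng.gamma(2, 1, 1000),
                       "red_meat": rng.gamma(2, 1, 1000)})
pdi = plant_based_diet_index(intake, ["whole_grains", "vegetables"], ["red_meat"])
```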
Setting:
The VA Million Veteran Program.
Participants:
315 919 men and women aged 19–104 years who completed a food frequency questionnaire (FFQ) at baseline.
Results:
We documented 31 136 deaths during the follow-up. A higher PDI was significantly associated with lower total mortality (hazard ratio (HR) comparing extreme deciles = 0·75, 95 % CI: 0·71, 0·79, Ptrend < 0·001). We observed an inverse association between hPDI and total mortality (HR comparing extreme deciles = 0·64, 95 % CI: 0·61, 0·68, Ptrend < 0·001), whereas uPDI was positively associated with total mortality (HR comparing extreme deciles = 1·41, 95 % CI: 1·33, 1·49, Ptrend < 0·001). Similar significant associations of PDI, hPDI and uPDI were also observed for CVD and cancer mortality. The associations between PDI and total mortality were consistent among African American and European American participants, participants free from CVD and cancer, and those diagnosed with major chronic disease at baseline.
Conclusions:
A greater adherence to a plant-based diet was associated with substantially lower total mortality in this large population of veterans. These findings support recommending plant-rich dietary patterns for the prevention of major chronic diseases.
The syndromes subsumed under the general umbrella term of impulse control disorders (ICDs), including punding, compulsive disorders, and the dopamine dysregulation syndrome (DDS), all share the common theme of an overwhelming need to perform some activity. The actions are generally closer in nature to addictive disorders, being ego-syntonic, and less like true impulsive disorders, which patients may try to resist [1]. Punding represents a need to perform senseless activities repeatedly, such as folding and refolding clothes in a drawer for hours at a time, polishing pennies, or pulling weeds from a lawn or threads from a rug. The more common ICDs include gambling disorder, compulsive sexual disorder, consumerism, and hobbyism, but they may include strikingly unusual activities that are extraordinarily narrow in their focus. The DDS appears to be a form of drug-addictive behavior, similar to that seen with the usual addictive drugs.
Response to lithium in patients with bipolar disorder is associated with clinical and transdiagnostic genetic factors. Combining these variables might help clinicians better predict which patients will respond to lithium treatment.
Aims
To use a combination of transdiagnostic genetic and clinical factors to predict lithium response in patients with bipolar disorder.
Method
This study utilised genetic and clinical data (n = 1034) collected as part of the International Consortium on Lithium Genetics (ConLi+Gen) project. Polygenic risk scores (PRS) were computed for schizophrenia and major depressive disorder, and then combined with clinical variables using a cross-validated machine-learning regression approach. Unimodal, multimodal and genetically stratified models were trained and validated using ridge, elastic net and random forest regression on 692 patients with bipolar disorder from ten study sites using leave-site-out cross-validation. All models were then tested on an independent test set of 342 patients. The best performing models were then tested in a classification framework.
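The leave-site-out scheme for the linear models can be sketched with scikit-learn, where each study site serves in turn as the held-out fold. This is a toy illustration with simulated data and an assumed ridge penalty, not the consortium's pipeline:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneGroupOut, cross_val_predict

# Toy stand-in data: clinical + PRS features, a continuous lithium-response
# score, and a study-site label per patient (all simulated).
rng = np.random.default_rng(0)
n, p = 692, 20
X = rng.normal(size=(n, p))
y = 0.3 * X[:, 0] + rng.normal(size=n)
site = rng.integers(0, 10, n)                    # ten study sites

# Leave-site-out cross-validation: each site is held out exactly once.
pred = cross_val_predict(Ridge(alpha=1.0), X, y, groups=site, cv=LeaveOneGroupOut())
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()   # variance explained
```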
Results
The best performing linear model explained 5.1% (P = 0.0001) of variance in lithium response and was composed of clinical variables, PRS variables and interaction terms between them. The best performing non-linear model used only clinical variables and explained 8.1% (P = 0.0001) of variance in lithium response. A priori genomic stratification improved non-linear model performance to 13.7% (P = 0.0001) and improved the binary classification of lithium response. This model stratified patients based on their meta-polygenic loadings for major depressive disorder and schizophrenia and was then trained using clinical data.
Conclusions
Using PRS to first stratify patients genetically and then train machine-learning models with clinical predictors led to large improvements in lithium response prediction. When used with other PRS and biological markers in the future, this approach may help inform which patients are most likely to respond to lithium treatment.