Preclinical evidence suggests that diazepam enhances hippocampal γ-aminobutyric acid (GABA) signalling and normalises a psychosis-relevant cortico-limbic-striatal circuit. Hippocampal network dysconnectivity, particularly from the CA1 subfield, is evident in people at clinical high-risk for psychosis (CHR-P), representing a potential treatment target. This study aimed to forward-translate this preclinical evidence.
Methods
In this randomised, double-blind, placebo-controlled study, 18 CHR-P individuals underwent resting-state functional magnetic resonance imaging twice, once following a 5 mg dose of diazepam and once following a placebo. They were compared to 20 healthy controls (HC) who did not receive diazepam/placebo. Functional connectivity (FC) between the hippocampal CA1 subfield and the nucleus accumbens (NAc), amygdala, and ventromedial prefrontal cortex (vmPFC) was calculated. Mixed-effects models investigated the effect of group (CHR-P placebo/diazepam vs. HC) and condition (CHR-P diazepam vs. placebo) on CA1-to-region FC.
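To make the modelling concrete, a minimal sketch of such a mixed-effects analysis in Python (statsmodels) is shown below; the file and column names are hypothetical, and the published analysis may have used different software and parameterisation.

```python
# Sketch of a mixed-effects model for CA1-to-region functional connectivity.
# Assumes a long-format table with one row per scan; names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ca1_fc_long.csv")  # columns: subject, group, condition, fc

# A random intercept per subject absorbs the repeated scans in the CHR-P
# group; contrasts on the fitted model would test group (CHR-P vs. HC) and
# condition (diazepam vs. placebo) effects on CA1-to-region FC.
model = smf.mixedlm("fc ~ group + condition", data=df, groups=df["subject"])
print(model.fit().summary())
```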
Results
In the placebo condition, CHR-P individuals showed significantly lower CA1-vmPFC (Z = 3.17, P_FWE = 0.002) and CA1-NAc (Z = 2.94, P_FWE = 0.005) FC compared to HC. In the diazepam condition, CA1-vmPFC FC was significantly increased (Z = 4.13, P_FWE = 0.008) compared to placebo in CHR-P individuals, and both CA1-vmPFC and CA1-NAc FC were normalised to HC levels. In contrast, compared to HC, CA1-amygdala FC was significantly lower contralaterally and higher ipsilaterally in CHR-P individuals in both the placebo and diazepam conditions (lower: placebo Z = 3.46, P_FWE = 0.002, diazepam Z = 3.33, P_FWE = 0.003; higher: placebo Z = 4.48, P_FWE < 0.001, diazepam Z = 4.22, P_FWE < 0.001).
Conclusions
This study demonstrates that diazepam can partially restore hippocampal CA1 dysconnectivity in CHR-P individuals, suggesting that modulation of GABAergic function might be useful in the treatment of this clinical group.
Patients with posttraumatic stress disorder (PTSD) exhibit smaller volumes in commonly reported brain regions, including the amygdala and hippocampus, which are associated with fear and memory processing. In the current study, we conducted a voxel-based morphometry (VBM) meta-analysis of whole-brain statistical maps using neuroimaging data from the ENIGMA-PGC PTSD working group.
Methods
T1-weighted structural neuroimaging scans from 36 cohorts (PTSD n = 1309; controls n = 2198) were processed using a standardized VBM pipeline (ENIGMA-VBM tool). We meta-analyzed the resulting statistical maps for voxel-wise differences in gray matter (GM) and white matter (WM) volumes between PTSD patients and controls, performed subgroup analyses considering the trauma exposure of the controls, and examined associations between regional brain volumes and clinical variables including PTSD (CAPS-4/5, PCL-5) and depression severity (BDI-II, PHQ-9).
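For orientation, the effect-size computation underlying such a meta-analysis can be sketched as follows; this is a single-voxel illustration with hypothetical inputs, not the ENIGMA-VBM tool itself.

```python
# Hedges' g (bias-corrected standardized mean difference) and
# DerSimonian-Laird random-effects pooling across cohorts, at one voxel.
import numpy as np

def hedges_g(m1, m2, sd1, sd2, n1, n2):
    """g for PTSD (group 1) minus controls (group 2)."""
    sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    j = 1.0 - 3.0 / (4.0 * (n1 + n2) - 9.0)  # small-sample correction
    return j * (m1 - m2) / sp

def pool_random_effects(g, v):
    """Pooled g from per-cohort estimates g and sampling variances v."""
    w = 1.0 / v
    g_fixed = np.sum(w * g) / np.sum(w)
    q = np.sum(w * (g - g_fixed) ** 2)
    tau2 = max(0.0, (q - (len(g) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1.0 / (v + tau2)
    return np.sum(w_star * g) / np.sum(w_star)
```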
Results
PTSD patients exhibited smaller GM volumes across the frontal and temporal lobes and the cerebellum, with the most significant effect in the left cerebellum (Hedges’ g = 0.22, p_corrected = .001), and smaller cerebellar WM volume (peak Hedges’ g = 0.14, p_corrected = .008). We observed similar regional differences when comparing patients to trauma-exposed controls, suggesting these structural abnormalities may be specific to PTSD. Regression analyses revealed that PTSD severity was negatively associated with GM volumes within the cerebellum (p_corrected = .003), while depression severity was negatively associated with GM volumes within the cerebellum and superior frontal gyrus in patients (p_corrected = .001).
Conclusions
PTSD patients exhibited widespread, regional differences in brain volumes where greater regional deficits appeared to reflect more severe symptoms. Our findings add to the growing literature implicating the cerebellum in PTSD psychopathology.
The role of the gut microbiome in infant development has gained increasing interest in recent years. Most research on this topic has focused on the first three to four years of life because this is a critical period for developing gut-brain connections. Prior studies have identified associations between the composition and diversity of the gut microbiome in infancy and markers of temperament, including negative affect. However, the specific microbes implicated and the directionality of these associations have differed between studies, likely due to differences in the developmental period of focus and in assessment approaches. In the current preregistered study, we examined connections between the gut microbiome, assessed at two time points in infancy (2 weeks and 18 months), and negative affect measured at 30 months of age in a longitudinal study of infants and their caregivers. We found that infants with higher gut microbiome diversity at 2 weeks showed more observed negative affect during a study visit at 30 months. We also found evidence for associations between specific genera of bacteria in infancy and negative affect. These results suggest that associations between specific features of the gut microbiome and child behavior may differ based on the timing of gut microbiome measurement.
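As a purely illustrative sketch of the kind of analysis involved (not the study's preregistered pipeline), alpha diversity can be computed from genus-level counts and related to a later behavioral outcome; file and column names are hypothetical.

```python
# Shannon diversity at 2 weeks vs. negative affect at 30 months.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

counts = pd.read_csv("genus_counts_2wk.csv", index_col=0)  # samples x genera
p = counts.div(counts.sum(axis=1), axis=0)                 # relative abundances
shannon = -(p * np.log(p.where(p > 0))).sum(axis=1)        # zero taxa contribute 0

df = pd.read_csv("negative_affect_30mo.csv", index_col=0).join(
    shannon.rename("diversity"))
print(smf.ols("negative_affect ~ diversity", data=df).fit().summary())
```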
Guideline-based tobacco treatment is infrequently offered. Electronic health record-enabled patient-generated health data (PGHD) has the potential to increase patient treatment engagement and satisfaction.
Methods:
We evaluated outcomes of a strategy to enable PGHD in a medical oncology clinic from July 1, 2021 to December 31, 2022. Among 12,777 patients, 82.1% received a screener about tobacco use and interest in treatment as part of eCheck-in via the patient portal.
Results:
We attained broad reach (82.1%) and a moderate response rate (30.9%) for this low-burden PGHD strategy. Patients reporting current smoking (n = 240) expressed interest in smoking cessation medication (47.9%) and counseling (35.8%). Most tobacco treatment requests made by patients via PGHD were addressed by their providers (40.6%–80.3%). Among patients with active smoking, those who received and answered the screener (n = 309) were more likely to receive tobacco treatment than usual-care patients who did not have the patient portal (n = 323) (OR = 2.72, 95% CI = 1.93–3.82, P < 0.0001), using propensity scores to adjust for the effects of age, sex, race, insurance, and comorbidity. Patients who received but did not answer the screener (n = 1024), compared with usual care, were also more likely to receive tobacco treatment, but to a lesser extent (OR = 2.20, 95% CI = 1.68–2.86, P < 0.0001). We mapped observed and potential benefits to the Translational Science Benefits Model (TSBM).
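A hedged sketch of the propensity-score adjustment described above follows; covariate, outcome, and file names are hypothetical, and the study's exact propensity-score method (matching, weighting, or covariate adjustment) is not specified here.

```python
# Propensity-score-adjusted comparison of tobacco treatment receipt.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("smokers.csv")
# columns: answered (0/1), treated (0/1), age, sex, race, insurance, comorbidity

# 1) Probability of answering the screener given baseline covariates.
ps = smf.logit("answered ~ age + sex + race + insurance + comorbidity",
               data=df).fit(disp=False)
df["pscore"] = ps.predict(df)

# 2) Outcome model adjusting for the propensity score.
out = smf.logit("treated ~ answered + pscore", data=df).fit(disp=False)
print("OR:", np.exp(out.params["answered"]))
```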
Discussion:
PGHD via the patient portal appears to be a feasible, acceptable, scalable, and cost-effective approach to promoting patient-centered care and tobacco treatment in cancer patients. Importantly, the PGHD approach serves as a real-world example of cancer prevention leveraging the TSBM.
The fossil record of dinosaurs in Scotland mostly comprises isolated, highly fragmentary bones from the Great Estuarine Group (Bajocian–Bathonian) of the Inner Hebrides. Here we report the first definite dinosaur body fossil historically found in Scotland, discovered in 1973 but not collected until 45 years later. It is also the most complete partial dinosaur skeleton currently known from Scotland. NMS G.2023.19.1 was recovered from a challenging foreshore location on the Isle of Skye and transported to harbour in a semi-rigid inflatable boat towed by a motor boat. After manual preparation, micro-CT scanning was carried out, but this did not aid identification. Among many unidentifiable elements, a neural arch, two ribs and part of the ilium are described herein; their features indicate that this was a cerapodan or ornithopod dinosaur. Histological thin sections of one of the ribs support this identification, indicating an individual at least eight years of age that was growing slowly at the time of death. If ornithopodan, as our data suggest, it could represent the world's oldest body fossil of this clade.
Trifludimoxazin is a new protoporphyrinogen oxidase-inhibiting herbicide being evaluated for the control of small-seeded annual broadleaf weeds and grasses in several crops. Currently, no information is available regarding peanut cultivar response to trifludimoxazin or its utility in peanut weed control systems. Three unique field experiments were conducted and replicated in time from 2019 through 2022 to determine the response of seven peanut cultivars (‘AU-NPL 17’, ‘FloRun 331’, ‘GA-06G’, ‘GA-16HO’, ‘GA-18RU’, ‘GA-20VHO’, and ‘TifNV High O/L’) to preemergence applications of trifludimoxazin, and to determine the efficacy of trifludimoxazin at multiple rates and in tank mixtures with acetochlor, diclosulam, dimethenamid-P, pendimethalin, and S-metolachlor for weed management. Cultivar sensitivity to trifludimoxazin was not observed, and peanut density was not reduced by any trifludimoxazin rate. Compared with nontreated controls, when trifludimoxazin was applied at 75 g ai ha⁻¹ in 2019, leaf necrosis increased by 18%, peanut stunting increased by 10%, and yield was reduced by 6%. However, in 2020–2021 this rate increased leaf necrosis by only 4% and stunting by 3% to 5%, and had no negative effect on yield. Generally, peanut injury from preemergence-applied trifludimoxazin was similar to or less than that observed from flumioxazin at 2 wk after application (WAA). Peanut yield in the weed control study was reduced by 11% to 12% when treated with trifludimoxazin at 150 g ha⁻¹ (4× the standard rate) compared with the 75 g ha⁻¹ rate; however, yield did not differ from the flumioxazin treatment. Palmer amaranth control with trifludimoxazin combinations was ≥91% at 13 WAA, wild radish control was ≥96% at 5 WAA, and annual grass control was ≥97% at 13 WAA. Peanut is sufficiently tolerant of trifludimoxazin at 38 g ha⁻¹, which, when tank-mixed with other residual herbicides, provides weed control similar to that of flumioxazin-based systems.
Accurate diagnosis of bipolar disorder (BPD) is difficult in clinical practice, with an average delay between symptom onset and diagnosis of about 7 years. A depressive episode often precedes the first manic episode, making it difficult to distinguish BPD from unipolar major depressive disorder (MDD).
Aims
We use genome-wide association analyses (GWAS) to identify differential genetic factors and to develop predictors based on polygenic risk scores (PRS) that may aid early differential diagnosis.
Method
Based on individual genotypes from case–control cohorts of BPD and MDD shared through the Psychiatric Genomics Consortium, we compile case–case–control cohorts, applying a careful quality control procedure. In a resulting cohort of 51 149 individuals (15 532 BPD patients, 12 920 MDD patients and 22 697 controls), we perform a variety of GWAS and PRS analyses.
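Conceptually, the PRS step reduces to a weighted allele count; the sketch below shows the core computation with simulated data, leaving out the LD clumping/thresholding or Bayesian shrinkage that real pipelines require.

```python
# Polygenic risk score as a genotype-weighted sum of GWAS effect sizes.
import numpy as np

def polygenic_score(genotypes, betas):
    """genotypes: (n_individuals, n_snps) minor-allele counts in {0, 1, 2};
    betas: per-SNP effect sizes from the discovery (case-case) GWAS."""
    return genotypes @ betas

rng = np.random.default_rng(0)
geno = rng.integers(0, 3, size=(5, 1000)).astype(float)  # simulated genotypes
betas = rng.normal(0.0, 0.01, size=1000)                 # simulated weights
print(polygenic_score(geno, betas))  # one score per individual
```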
Results
Although our GWAS is not well powered to identify genome-wide significant loci, we find significant chip heritability and demonstrate the ability of the resulting PRS to distinguish BPD from MDD, including BPD cases with depressive onset (BPD-D). We replicate our PRS findings in an independent Danish cohort (iPSYCH 2015, N = 25 966). We observe strong genetic correlation between our case–case GWAS and that of case–control BPD.
Conclusions
We find that MDD and BPD, including BPD-D, are genetically distinct. Our findings support the view that controls, MDD patients and BPD patients lie primarily on a continuum of genetic risk. Future studies with larger and richer samples will likely yield a better understanding of these findings and enable the development of better genetic predictors distinguishing BPD and, importantly, BPD-D from MDD.
Novel management strategies for controlling smutgrass have the potential to influence sward dynamics in bahiagrass forage systems. This experiment evaluated population shifts in bahiagrass forage following implementation of integrated herbicide and fertilizer management plans for controlling smutgrass. Herbicide treatments included indaziflam applied PRE, hexazinone applied POST, a combination of PRE + POST herbicides, and a nonsprayed control. Fertilizer treatments included nitrogen, nitrogen + potassium, and an unfertilized control. The POST treatment reduced smutgrass coverage regardless of PRE or fertilizer application by the end of the first season, and coverage remained low for the 3-yr duration of the experiment (P < 0.01). All treatments, including nontreated controls, reduced smutgrass coverage during year 3 (P < 0.05), indicating that routine harvesting to remove biomass reduced smutgrass coverage. Bahiagrass cover increased at the end of year 1 with the POST treatment (P < 0.01), but only the POST + fertilizer treatment maintained greater bahiagrass coverage than the nontreated control by the end of year 3 (P < 0.05). Expenses associated with the POST + fertilizer treatment totaled US$348 ha⁻¹ across the 3-yr experiment. Other smutgrass control options include complete removal of biomass (hay production) and pasture renovation, which can cost threefold or more compared with the POST + fertilizer treatment. Complete removal of biomass may reduce smutgrass coverage by removing mature seedheads, but at a much greater expense of US$2,835 to US$5,825 ha⁻¹, depending on herbicide and fertilizer inputs. Bahiagrass renovation costs US$826 ha⁻¹ in establishment expenses alone; when pasture production expenses are included for two seasons postrenovation, the total increases to US$1,120 ha⁻¹ across three seasons. The importance of hexazinone and fertilizer as components of smutgrass control in bahiagrass forage was confirmed in this study. Future research should focus on the biology of smutgrass and the role of a PRE treatment in a long-term, larger-scale forage system.
Carbapenem-resistant Enterobacterales (CRE) are an urgent threat to healthcare, but the epidemiology of these antimicrobial-resistant organisms may be evolving in some settings since the COVID-19 pandemic. An updated analysis of hospital-acquired CRE (HA-CRE) incidence in community hospitals is needed.
Methods:
We retrospectively analyzed data on HA-CRE cases and antimicrobial utilization (AU) from two community hospital networks, the Duke Infection Control Outreach Network (DICON) and the Duke Antimicrobial Stewardship Outreach Network (DASON), from January 2013 to June 2023. A zero-inflated negative binomial regression model was used owing to excess zeros.
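A minimal sketch of such a model in Python (statsmodels) is shown below, assuming monthly case counts with patient-days as exposure; the column names and simplified interrupted-time-series terms are hypothetical.

```python
# Zero-inflated negative binomial model for monthly HA-CRE counts.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

df = pd.read_csv("hacre_monthly.csv")
# columns: cases, patient_days, time, post_covid (0/1)

X = sm.add_constant(df[["time", "post_covid"]])
model = ZeroInflatedNegativeBinomialP(
    df["cases"], X,
    exog_infl=np.ones((len(df), 1)),   # constant-only inflation component
    exposure=df["patient_days"],
    inflation="logit",
)
res = model.fit(maxiter=500, disp=False)
print(np.exp(res.params))  # exp(coef) gives rate ratios for count-model terms
```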
Results:
A total of 126 HA-CRE cases from 36 hospitals were included in the longitudinal analysis. The pooled incidence of HA-CRE was 0.69 per 100,000 patient-days (95% confidence interval [CI], 0.57–0.82). The HA-CRE rate decreased significantly over time before COVID-19 (rate ratio [RR], 0.94 [95% CI, 0.89–0.99]; p = 0.02), but there was a significant slope change indicating an increasing trend in HA-CRE after COVID-19 (RR, 1.32 [95% CI, 1.06–1.66]; p = 0.01). In 21 hospitals participating in both DICON and DASON from January 2018 to June 2023, there was a correlation between HA-CRE rates and AU for CRE treatment (Spearman’s coefficient = 0.176; p < 0.01). Anti-CRE AU did not change over time, and there was no level or slope change after COVID-19.
Conclusions:
The incidence of HA-CRE decreased before COVID-19 in a network of community hospitals in the southeastern United States, but this trend was disrupted by the COVID-19 pandemic.
Sperlingite, (H2O)K(Mn2+Fe3+)(Al2Ti)(PO4)4[O(OH)][(H2O)9(OH)]⋅4H2O, is a new monoclinic member of the paulkerrite group, from the Hagendorf-Süd pegmatite, Oberpfalz, Bavaria, Germany. It was found in corrosion pits of altered zwieselite, in association with columbite, hopeite, leucophosphite, mitridatite, scholzite, orange–brown zincoberaunite sprays and tiny green crystals of zincolibethenite. Sperlingite forms colourless prisms with pyramidal terminations, predominantly only 5 to 20 μm in size, rarely up to 60 μm, which are frequently multiply intergrown and overgrown with smaller crystals. The crystals are flattened on {010} and slightly elongated along [100] with forms {010}, {001} and {111}. Twinning occurs by rotation about c. The calculated density is 2.40 g⋅cm⁻³. Optically, sperlingite crystals are biaxial (+), with α = 1.600 (est.), β = 1.615(5), γ = 1.635(5) (white light) and 2V (calc.) = 82.7°. The optical orientation is X = b, Y = c and Z = a. Neither dispersion nor pleochroism was observed. The empirical formula from electron microprobe analyses and structure refinement is A1[(H2O)0.96K0.04]Σ1.00A2(K0.52□0.48)Σ1.00M1(Mn2+0.60Mg0.33Zn0.29Fe3+0.77)Σ1.99M2+M3(Al1.05Ti4+1.33Fe3+0.62)Σ3.00(PO4)4X[F0.19(OH)0.94O0.87]Σ2.00[(H2O)9.23(OH)0.77]Σ10.00⋅3.96H2O. Sperlingite has monoclinic symmetry with space group P21/c and unit-cell parameters a = 10.428(2) Å, b = 20.281(4) Å, c = 12.223(2) Å, β = 90.10(3)°, V = 2585.0(8) Å³ and Z = 4. The crystal structure was refined using synchrotron single-crystal data to wR(obs) = 0.058 for 5608 reflections with I > 3σ(I). Sperlingite is the first paulkerrite-group mineral to have co-dominant divalent and trivalent cations at the M1 sites; all other reported members have Mn2+ or Mg dominant at M1. Local charge balance for Fe3+ at M1 is achieved by H2O → OH⁻ substitution at the H2O coordinated to M1.
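For reference, the reported calculated density follows from the standard crystallographic relation between unit-cell contents and volume, with M the formula mass (approximately 934 g mol⁻¹ for the ideal formula) and N_A Avogadro's number:

\[
\rho_{\mathrm{calc}} = \frac{Z\,M}{N_A\,V},
\qquad Z = 4,\; V = 2585.0\ \text{\AA}^3 \;\Rightarrow\; \rho_{\mathrm{calc}} \approx 2.40\ \text{g}\cdot\text{cm}^{-3}.
\]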
This study aimed to parse between-person heterogeneity in the growth of impulsivity across childhood and adolescence among participants enrolled in five childhood preventive intervention trials targeting conduct problems. In addition, we aimed to test profile membership in relation to adult psychopathologies. Measurement items representing impulsive behavior in grades 2, 4, 5, 7, 8, and 10, and adult aggression, substance use, suicidal ideation/attempts, and anxiety/depression, were integrated across the five trials (N = 4,975). We applied latent class growth analysis to this sample, as well as to samples separated into nonintervention (n = 2,492) and intervention (n = 2,483) participants. Across all samples, profiles were characterized by high, moderate, low, and low-increasing impulsivity levels. Regarding adult outcomes, in all samples, the high, moderate, and low profiles endorsed greater levels of aggression compared to the low-increasing profile. There were nuanced differences across samples and profiles on suicidal ideation/attempts and anxiety/depression. Across samples, there were no significant differences between profiles on substance use. Overall, our study helps to inform understanding of the developmental course and prognosis of impulsivity, and adds to collaborative efforts linking data across multiple studies to better characterize developmental processes.
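The study used latent class growth analysis; as a rough approximation of the idea (not the actual estimation method), one can fit a per-child growth curve and cluster the growth parameters, as in the hypothetical sketch below.

```python
# Crude stand-in for LCGA: per-child linear growth in impulsivity across
# grades, clustered into four profiles. Data and names are hypothetical.
import numpy as np
from sklearn.mixture import GaussianMixture

grades = np.array([2, 4, 5, 7, 8, 10], dtype=float)
scores = np.load("impulsivity.npy")  # shape: (n_children, 6)

# np.polyfit returns (slope, intercept) for each child's trajectory.
coefs = np.array([np.polyfit(grades, y, deg=1) for y in scores])
labels = GaussianMixture(n_components=4, random_state=0).fit_predict(coefs)
# labels index profiles analogous to high / moderate / low / low-increasing
```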
Clinical outcomes of repetitive transcranial magnetic stimulation (rTMS) for treatment-resistant depression (TRD) vary widely, and there is no mood rating scale that is standard for assessing rTMS outcome. It remains unclear whether rTMS is as efficacious in older adults with late-life depression (LLD) as in younger adults with major depressive disorder (MDD). This study examined the effect of age on outcomes of rTMS treatment of adults with TRD. Self-report and observer mood ratings were measured weekly in 687 subjects aged 16–100 years undergoing rTMS treatment, using the Inventory of Depressive Symptomatology 30-item Self-Report (IDS-SR), Patient Health Questionnaire 9-item (PHQ), Profile of Mood States 30-item, and Hamilton Depression Rating Scale 17-item (HDRS). All rating scales detected significant improvement with treatment; response and remission rates varied by scale but not by age (ages ≥ 60: response 38%–57%, remission 25%–33%; ages < 60: response 32%–49%, remission 18%–25%). Proportional hazards models showed that early improvement predicted later improvement across ages, though early improvements in PHQ and HDRS were more predictive of remission in those < 60 years (relative to those ≥ 60), and greater baseline IDS burden was more predictive of non-remission in those ≥ 60 years (relative to those < 60). These results indicate no significant effect of age on treatment outcomes in rTMS for TRD, though rating instruments may differ in their assessment of symptom burden between younger and older adults during treatment.
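The proportional hazards framing can be sketched as follows (using the lifelines package); the file and column names are hypothetical, and the published models likely included additional covariates.

```python
# Time-to-remission as a function of early improvement and age group.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("rtms_course.csv")
# columns: weeks_observed, remitted (0/1), early_improvement, age_ge_60 (0/1)

cph = CoxPHFitter()
cph.fit(df, duration_col="weeks_observed", event_col="remitted",
        formula="early_improvement * age_ge_60")  # interaction tests age moderation
cph.print_summary()
```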
Assessment of performance validity during neuropsychological evaluation is essential to reliably interpret cognitive test scores. Studies (Webber et al., 2018; Wisdom et al., 2012) have validated the use of abbreviated measures, such as Trial 1 (T1) of the Test of Memory Malingering (TOMM), to detect invalid performance. Only one study (Bauer et al., 2007) known to these authors has examined the utility of Green’s Word Memory Test (WMT) immediate recall (IR) as a screening tool for invalid performance. This study explores WMT IR as an independent indicator of performance validity in a veteran population with mild TBI (mTBI).
Participants and Methods:
Participants included 211 OEF/OIF/OND veterans with a history of mTBI (M_age = 32.1, SD = 7.4; M_edu = 13.1, SD = 1.64; 94.8% male; 67.8% White) who participated in a comprehensive neuropsychological evaluation at one of five participating VA Medical Centers. Performance validity was assessed using validated cut scores from the following measures: WMT IR and delayed recall (DR); TOMM T1; WAIS-IV reliable digit span; CVLT-II forced choice raw score; Wisconsin Card Sorting Test failure to maintain set; and the Rey Memory for Fifteen Items test combination score. Sensitivity and specificity were calculated for each IR score compared with failure on DR. In addition, sensitivity and specificity were calculated for each WMT IR score compared with failure of at least one additional performance validity measure (excluding DR), two or more measures, and three or more measures, respectively.
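The sensitivity/specificity computation against the DR criterion reduces to the following; the toy arrays are hypothetical.

```python
# Sensitivity and specificity of a WMT IR cut score against DR failure.
import numpy as np

def sens_spec(ir_scores, dr_fail, cut):
    """Flag IR <= cut as invalid; dr_fail is a boolean criterion vector."""
    flagged = ir_scores <= cut
    sensitivity = np.mean(flagged[dr_fail])    # flagged among criterion failures
    specificity = np.mean(~flagged[~dr_fail])  # unflagged among criterion passes
    return sensitivity, specificity

ir = np.array([95.0, 82.5, 60.0, 77.5, 100.0])    # toy IR percentages
dr = np.array([False, True, True, False, False])  # toy DR failures
print(sens_spec(ir, dr, cut=82.5))
```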
Results:
Results indicated that 46.8% of participants failed to meet cutoffs for adequate performance validity based on the standard WMT IR cut score (i.e., 82.5%; M = 81.8%, SD = 17.7%), whereas 50.2% of participants failed to meet criteria based on the standard WMT DR cut score (M = 79.8%, SD = 18.6%). A cut score of 82.5% or below on WMT IR correctly identified 82.4% (i.e., sensitivity) of subjects who performed below the cut score on DR, with a specificity of 94.2%. Examination of IR cutoffs compared with failure of one or more other PVTs revealed that the standard cut score of 82.5% or below had a sensitivity of 78.2% and a specificity of 72.4%, whereas a cut score of 65% or below had a sensitivity of 41% and a specificity of 91.3%. Similarly, a cut score of 60% or below had a sensitivity of 45.7% and a specificity of 93.1% against failure of two or more additional PVTs, and a cut score of 57.5% or below had a sensitivity of 57.9% and a specificity of 90.9% when using failure of three or more PVTs as the criterion.
Conclusions:
Results indicated that a cut score of 82.5% or below on WMT IR may be sufficient to detect invalid performance when considering WMT DR as the criterion. Furthermore, WMT IR alone, with adjustments to cut scores, appears to be a reasonable way to assess performance validity relative to other PVTs. Sensitivity and specificity of WMT IR scores may have been adversely impacted by the lower sensitivity of other PVTs to independently identify invalid performance.
Blood-based biomarkers represent a scalable and accessible approach for the detection and monitoring of Alzheimer’s disease (AD). Plasma phosphorylated tau (p-tau) and neurofilament light (NfL) are validated biomarkers for the detection of tau and neurodegenerative brain changes in AD, respectively. There is now an emphasis on expanding beyond these markers to detect and provide insight into the pathophysiological processes of AD. To this end, a marker of reactive astrocytes, plasma glial fibrillary acidic protein (GFAP), has been of interest. Yet little is known about the relationship between plasma GFAP and AD. Here, we examined the association between plasma GFAP, diagnostic status, and neuropsychological test performance. The diagnostic accuracy of plasma GFAP was compared with that of plasma p-tau181 and NfL.
Participants and Methods:
This sample included 567 participants from the Boston University (BU) Alzheimer’s Disease Research Center (ADRC) Longitudinal Clinical Core Registry, including individuals with normal cognition (n=234), mild cognitive impairment (MCI) (n=180), and AD dementia (n=153). The sample included all participants who had a blood draw. Participants completed a comprehensive neuropsychological battery (sample sizes across tests varied due to missingness). Diagnoses were adjudicated during multidisciplinary diagnostic consensus conferences. Plasma samples were analyzed using the Simoa platform. Binary logistic regression analyses tested the association between GFAP levels and diagnostic status (i.e., cognitively impaired due to AD versus unimpaired), controlling for age, sex, race, education, and APOE e4 status. Area under the curve (AUC) statistics from receiver operating characteristics (ROC) using predicted probabilities from binary logistic regression examined the ability of plasma GFAP to discriminate diagnostic groups compared with plasma p-tau181 and NfL. Linear regression models tested the association between plasma GFAP and neuropsychological test performance, accounting for the above covariates.
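A condensed sketch of the logistic regression and ROC steps is given below; the file and column names are hypothetical.

```python
# GFAP vs. diagnostic status, with covariate-adjusted AUC.
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score

df = pd.read_csv("adrc_plasma.csv")
# columns: impaired (0/1), gfap_z, age, sex, race, education, apoe_e4

fit = smf.logit("impaired ~ gfap_z + age + sex + race + education + apoe_e4",
                data=df).fit(disp=False)
print(fit.summary())  # exp(coef) on gfap_z gives the OR per 1-SD GFAP
print("AUC:", roc_auc_score(df["impaired"], fit.predict(df)))
```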
Results:
The mean (SD) age of the sample was 74.34 (7.54) years; 319 (56.3%) were female, 75 (13.2%) were Black, and 223 (39.3%) were APOE e4 carriers. Higher GFAP concentrations were associated with increased odds of cognitive impairment (z-score-transformed GFAP: OR=2.233, 95% CI [1.609, 3.099], p<0.001; non-transformed: OR=1.004, 95% CI [1.002, 1.006], p<0.001). ROC analyses comprising GFAP and the above covariates showed that plasma GFAP discriminated the cognitively impaired from the unimpaired (AUC=0.75) and was similar, but slightly superior, to plasma p-tau181 (AUC=0.74) and plasma NfL (AUC=0.74). A joint panel of the plasma markers had the greatest discrimination accuracy (AUC=0.76). Linear regression analyses showed that higher GFAP levels were associated with worse performance on neuropsychological tests assessing global cognition, attention, executive functioning, episodic memory, and language abilities (ps<0.001), as well as higher CDR Sum of Boxes (p<0.001).
Conclusions:
Higher plasma GFAP levels differentiated participants with cognitive impairment from those with normal cognition and were associated with worse performance on all neuropsychological tests assessed. GFAP had accuracy similar to p-tau181 and NfL in detecting cognitive impairment; however, a panel of all three biomarkers was optimal. These results support the utility of plasma GFAP in AD detection and suggest the pathological processes it represents might play an integral role in the pathogenesis of AD.
Population studies have shown that Black individuals are at higher risk for mild cognitive impairment (MCI) and dementia than White individuals but are more likely to be underdiagnosed or misdiagnosed. Although multiple contributory factors have been identified in relation to neurocognitive diagnostic disparities among persons of color, few studies have investigated race-associated differences in MCI and dementia classification across diagnostic methods. The current study examined the agreement of cognitive classifications made via semi-structured interview and via neuropsychological assessment.
Participants and Methods:
Only participants assigned normal cognitive status or cognitive impairment with presumed Alzheimer’s etiology were included in the study. Baseline visit data from the National Alzheimer’s Coordinating Center (NACC) dataset were used to compare the correspondence of cognitive classifications (normal cognition, MCI, dementia) made via semi-structured interview (Clinical Dementia Rating; CDR) with formal NACC diagnostic determination. NACC diagnostic determination was further separated into single-clinician and consensus diagnostic methods. Inter-rater agreement was evaluated using chi-squared tests, and the respective analyses were stratified by race (Black vs. White), ethnicity (Hispanic vs. non-Hispanic), and education (<12 years vs. >12 years).
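The agreement analysis can be sketched as below; the file and column names are hypothetical, and the study's agreement statistic is reported only as "pc" in the results.

```python
# Cross-tabulated agreement between CDR-based and NACC diagnostic labels.
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.read_csv("nacc_baseline.csv")  # columns: cdr_dx, nacc_dx, race
table = pd.crosstab(df["cdr_dx"], df["nacc_dx"])  # normal / MCI / dementia

chi2, p, dof, _ = chi2_contingency(table)
agreement = (df["cdr_dx"] == df["nacc_dx"]).mean() * 100
print(table)
print(f"chi2 = {chi2:.1f}, p = {p:.3g}, agreement = {agreement:.1f}%")
```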
Results:
The sample included 4,739 Black and 26,393 White participants across 43 Alzheimer’s Disease Research Centers (ADRCs). Inter-rater analyses between the CDR (semi-structured interview) and both single-clinician and formal consensus NACC diagnostic methods showed strong consistency in cognitive diagnoses overall (all pc > .70), irrespective of race, ethnicity, and education. The percentage of agreement between diagnostic methods was nearly 100% for those categorized as cognitively normal or with dementia. However, agreement for MCI was considerably lower (ranging from 28% to 74%) and revealed a disparity in diagnostic method between Black and White individuals. White individuals diagnosed with MCI via the CDR (CDR total = 0.5) were more likely to be labeled as having dementia regardless of NACC diagnostic method, whereas Black individuals diagnosed with MCI via the CDR were equally likely to be diagnosed as cognitively normal or with dementia via the formal consensus method.
Conclusions:
Irrespective of race and other demographic variables, diagnostic methods had high agreement for groups labeled with normal cognition and dementia. Agreement was consistently lower for the group labeled with MCI, with Black individuals having greater variability in diagnostic differentials when diagnosed via formal consensus method. The results of the study suggest that neuropsychological assessment continues to be an integral component of diagnosing individuals with MCI, reducing possible sources of bias.
Blood-based biomarkers offer a more feasible approach than current in vivo measures for Alzheimer’s disease (AD) detection, management, and study of disease mechanisms. Given their novelty, these plasma biomarkers must be assessed against postmortem neuropathological outcomes for validation. Research has shown the utility of plasma markers within the proposed AT(N) framework; however, recent studies have stressed the importance of expanding this framework to include other pathways. There are promising data supporting the usefulness of plasma glial fibrillary acidic protein (GFAP) in AD, but GFAP-to-autopsy studies are limited. Here, we tested the association between plasma GFAP and AD-related neuropathological outcomes in participants from the Boston University (BU) Alzheimer’s Disease Research Center (ADRC).
Participants and Methods:
This sample included 45 participants from the BU ADRC who had a plasma sample within 5 years of death and donated their brain for neuropathological examination. The most recent plasma samples were analyzed using the Simoa platform. Neuropathological examinations followed National Alzheimer’s Coordinating Center procedures and diagnostic criteria, and the NIA-Reagan Institute criteria were used for the neuropathological diagnosis of AD. Measures of GFAP were log-transformed. Binary logistic regression analyses tested the association of GFAP with autopsy-confirmed AD status and with semi-quantitative ratings of regional atrophy (none/mild versus moderate/severe). Ordinal logistic regression analyses tested the association between plasma GFAP and Braak stage and CERAD neuritic plaque score. Area under the curve (AUC) statistics from receiver operating characteristic (ROC) analyses using predicted probabilities from binary logistic regression examined the ability of plasma GFAP to discriminate autopsy-confirmed AD status. All analyses controlled for sex, age at death, years between last blood draw and death, and APOE e4 status.
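The ordinal models can be sketched with statsmodels' OrderedModel; the file and column names are hypothetical.

```python
# Plasma GFAP vs. Braak stage (ordered outcome), with covariates.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("gfap_autopsy.csv")
# columns: braak_stage (ordered 0-6), log_gfap, sex, age_death, years_gap, apoe_e4

mod = OrderedModel(
    df["braak_stage"],
    df[["log_gfap", "sex", "age_death", "years_gap", "apoe_e4"]],
    distr="logit",
)
res = mod.fit(method="bfgs", disp=False)
print(res.summary())  # exp(coef) on log_gfap approximates the ordinal OR
```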
Results:
Of the 45 brain donors, 29 (64.4%) had autopsy-confirmed AD. The mean (SD) age of the sample at the time of blood draw was 80.76 (8.58) and there were 2.80 (1.16) years between the last blood draw and death. The sample included 20 (44.4%) females, 41 (91.1%) were White, and 20 (44.4%) were APOE e4 carriers. Higher GFAP concentrations were associated with increased odds for having autopsy-confirmed AD (OR=14.12, 95% CI [2.00, 99.88], p=0.008). ROC analysis showed plasma GFAP accurately discriminated those with and without autopsy-confirmed AD on its own (AUC=0.75) and strengthened as the above covariates were added to the model (AUC=0.81). Increases in GFAP levels corresponded to increases in Braak stage (OR=2.39, 95% CI [0.71-4.07], p=0.005), but not CERAD ratings (OR=1.24, 95% CI [0.004, 2.49], p=0.051). Higher GFAP levels were associated with greater temporal lobe atrophy (OR=10.27, 95% CI [1.53,69.15], p=0.017), but this was not observed with any other regions.
Conclusions:
The current results show that antemortem plasma GFAP is associated with non-specific AD neuropathological changes at autopsy. Plasma GFAP could be a useful and practical biomarker for assisting in the detection of AD-related changes, as well as for study of disease mechanisms.
Female fertility is a complex trait with age-specific changes in spontaneous dizygotic (DZ) twinning and fertility. To elucidate factors regulating female fertility and infertility, we conducted a genome-wide association study (GWAS) of mothers of spontaneous DZ twins (MoDZT) versus controls (3273 cases, 24,009 controls). This is a follow-up to the Australia/New Zealand (ANZ) component of the study previously reported (Mbarek et al., 2016), with a sample size almost twice that of the entire discovery sample meta-analysed in the previous article (and five times the ANZ contribution to it), resulting from newly available additional genotyping and representing a significant increase in power. We compare analyses with and without male controls and show unequivocally that it is better to include male controls who have been screened for recent family history than to use only female controls. The SNP-based GWAS identified four genome-wide significant signals, including one novel region, ZFPM1 (Zinc Finger Protein, FOG Family Member 1), on chromosome 16. Previous signals near FSHB (Follicle Stimulating Hormone beta subunit) and SMAD3 (SMAD Family Member 3) were also replicated (Mbarek et al., 2016). We also ran the GWAS with a dominance model, which identified a further locus, ADRB2, on chromosome 5. These results have been contributed to the International Twinning Genetics Consortium for inclusion in the next GWAS meta-analysis (Mbarek et al., in press).
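For readers unfamiliar with dominance models, the recoding can be illustrated with a single simulated SNP; everything below is hypothetical toy data, not the study's pipeline.

```python
# Additive vs. dominance coding at one SNP: the dominance term is a
# heterozygote indicator fitted alongside the allele count.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
g = rng.integers(0, 3, size=1000).astype(float)  # minor-allele counts 0/1/2
y = 0.2 * (g == 1) + rng.normal(size=1000)       # simulated dominance-only effect

X = sm.add_constant(np.column_stack([g, (g == 1).astype(float)]))
print(sm.OLS(y, X).fit().params)  # the dominance column captures the signal
```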
Diets deficient in fibre are reported globally. The health risks associated with insufficient dietary fibre are grave enough to necessitate large-scale interventions to increase population intake levels. The Danish Whole Grain Partnership (DWP) is a public–private enterprise model that successfully increased whole-grain intake in the Danish population. The potential transferability of the DWP model to Slovenia, Romania and Bosnia-Herzegovina has recently been explored. Here, we outline the feasibility of adopting the approach in the UK. Drawing on the collaborative experience of DWP partners, academics from the Healthy Soil, Healthy Food, Healthy People (H3) project and food industry representatives (Food and Drink Federation), this article examines the transferability of the DWP approach to increasing whole grain and/or fibre intake in the UK. Specific consideration is given to the UK’s political, regulatory and socio-economic context. We note key political, regulatory, social and cultural challenges to transferring the success of the DWP to the UK, highlighting the particular challenge of increasing fibre consumption among low socio-economic status groups – which were also the most resistant to interventions in Denmark. Wholesale transfer of the DWP model to the UK is considered unlikely given the absence of the key ‘success factors’ present in Denmark. However, the DWP provides a template against which a UK-centric approach can be developed. In the absence of a clear regulatory context for whole grain in the UK, fibre should be prioritised, and public–private partnerships should be supported to increase the availability and acceptability of fibre-rich foods.
High-resolution and multiplexed imaging techniques are giving us increasingly detailed views of biological systems. However, sharing, exploring, and customizing the visualization of large multidimensional images can be a challenge. Here, we introduce Samui, a performant and interactive image visualization tool that runs entirely in the web browser. Samui is specifically designed for fast image visualization and annotation, enabling users to browse large images and their selected features within seconds of receiving a link. We demonstrate the broad utility of Samui with images generated on two platforms: Vizgen MERFISH and 10x Genomics Visium Spatial Gene Expression. Samui, along with example datasets, is available at https://samuibrowser.com.
Objective:
To estimate the incidence, duration, and risk factors for diagnostic delays associated with pertussis.
Design:
We used longitudinal retrospective insurance claims from the Marketscan Commercial Claims and Encounters, Medicare Supplemental (2001–2020), and Multi-State Medicaid (2014–2018) databases.
Setting:
Inpatient, emergency department, and outpatient visits.
Patients:
The study included patients diagnosed with pertussis (identified via International Classification of Diseases [ICD] codes) who received macrolide antibiotic treatment.
Methods:
We estimated the number of pre-diagnosis visits with pertussis-related symptoms beyond the number expected in the absence of diagnostic delays. Using a bootstrapping approach, we estimated the number of visits representing a delay, the number of missed diagnostic opportunities per patient, and the duration of delays. Results were stratified by age group. We also used a logistic regression model to evaluate potential factors associated with delay.
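The bootstrap idea can be sketched as follows; the background rate, data file, and simplifications are hypothetical stand-ins for the study's model-based expectations.

```python
# Bootstrap estimate of excess (delay-attributable) visits before diagnosis.
import numpy as np

rng = np.random.default_rng(42)
observed = np.load("pre_dx_visit_counts.npy")  # symptom visits per patient
expected = 0.4  # assumed background symptom-visit rate absent a delay

boots = []
for _ in range(1000):
    sample = rng.choice(observed, size=len(observed), replace=True)
    boots.append(max(sample.mean() - expected, 0.0))

lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"excess visits/patient: {np.mean(boots):.2f} (95% CI {lo:.2f}-{hi:.2f})")
```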
Results:
We identified 20,828 patients meeting inclusion criteria. On average, patients had almost two missed diagnostic opportunities prior to diagnosis, and the mean delay duration was 12 days. Across age groups, the percentage of patients experiencing a delay ranged from 29.7% to 37.6%. The duration of delays increased considerably with age, from an average of 5.6 days for patients aged <2 years to 13.8 days for patients aged ≥18 years. Factors associated with increased risk of delay included emergency department visits, telehealth visits, and recent prescriptions for antibiotics not effective against pertussis.
Conclusions:
Diagnostic delays for pertussis are frequent. More work is needed to decrease diagnostic delays, especially among adults. Earlier case identification may play an important role in the response to outbreaks by facilitating treatment, isolation, and improved contact tracing.