Patients with posttraumatic stress disorder (PTSD) exhibit smaller regional brain volumes in commonly reported regions, including the amygdala and hippocampus, which are associated with fear and memory processing. In the current study, we conducted a voxel-based morphometry (VBM) meta-analysis using whole-brain statistical maps with neuroimaging data from the ENIGMA-PGC PTSD working group.
Methods
T1-weighted structural neuroimaging scans from 36 cohorts (PTSD n = 1309; controls n = 2198) were processed using a standardized VBM pipeline (ENIGMA-VBM tool). We meta-analyzed the resulting statistical maps for voxel-wise differences in gray matter (GM) and white matter (WM) volumes between PTSD patients and controls, performed subgroup analyses considering the trauma exposure of the controls, and examined associations between regional brain volumes and clinical variables including PTSD (CAPS-4/5, PCL-5) and depression severity (BDI-II, PHQ-9).
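As a concrete illustration of the effect sizes reported below, here is a minimal Python sketch of the bias-corrected standardized mean difference (Hedges’ g) and a DerSimonian-Laird random-effects pooling step; the cohort summary values and function names are illustrative assumptions, not the ENIGMA-VBM pipeline.

```python
# Minimal sketch: Hedges' g for a PTSD-vs-control volume contrast, plus an
# inverse-variance random-effects pooling step (DerSimonian-Laird).
# Cohort summary values here are illustrative, not ENIGMA data.
import numpy as np

def hedges_g(m1, m2, sd1, sd2, n1, n2):
    """Bias-corrected standardized mean difference between two groups."""
    sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * (n1 + n2) - 9)           # small-sample correction
    return j * d

def g_variance(g, n1, n2):
    """Approximate sampling variance of Hedges' g."""
    return (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))

def random_effects_pool(gs, vs):
    """DerSimonian-Laird random-effects pooled effect."""
    w = 1 / vs
    g_fixed = np.sum(w * gs) / np.sum(w)
    q = np.sum(w * (gs - g_fixed)**2)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(gs) - 1)) / c)  # between-cohort variance
    w_re = 1 / (vs + tau2)
    return np.sum(w_re * gs) / np.sum(w_re)

# Illustrative per-cohort summaries:
# (mean_ctrl, mean_ptsd, sd_ctrl, sd_ptsd, n_ctrl, n_ptsd)
cohorts = [(51.2, 50.1, 4.8, 5.0, 80, 55), (49.8, 48.9, 5.1, 4.9, 60, 40)]
gs = np.array([hedges_g(*c) for c in cohorts])
vs = np.array([g_variance(g, c[4], c[5]) for g, c in zip(gs, cohorts)])
print(f"pooled Hedges' g = {random_effects_pool(gs, vs):.3f}")
```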
Results
PTSD patients exhibited smaller GM volumes across the frontal and temporal lobes and cerebellum, with the most significant effect in the left cerebellum (Hedges’ g = 0.22, p_corrected = .001), and smaller cerebellar WM volume (peak Hedges’ g = 0.14, p_corrected = .008). We observed similar regional differences when comparing patients to trauma-exposed controls, suggesting these structural abnormalities may be specific to PTSD. Regression analyses revealed PTSD severity was negatively associated with GM volumes within the cerebellum (p_corrected = .003), while depression severity was negatively associated with GM volumes within the cerebellum and superior frontal gyrus in patients (p_corrected = .001).
Conclusions
PTSD patients exhibited widespread regional differences in brain volume, with greater regional deficits appearing to reflect more severe symptoms. Our findings add to the growing literature implicating the cerebellum in PTSD psychopathology.
Recent changes to US research funding are having far-reaching consequences that imperil the integrity of science and the provision of care to vulnerable populations. Resisting these changes, the BJPsych Portfolio reaffirms its commitment to publishing mental science and advancing psychiatric knowledge that improves the mental health of one and all.
The First Large Absorption Survey in H i (FLASH) is a large-area radio survey for neutral hydrogen in and around galaxies in the intermediate redshift range $0.4\lt z\lt1.0$, using the 21-cm H i absorption line as a probe of cold neutral gas. The survey uses the ASKAP radio telescope and will cover 24,000 deg$^2$ of sky over the next five years. FLASH breaks new ground in two ways – it is the first large H i absorption survey to be carried out without any optical preselection of targets, and we use an automated Bayesian line-finding tool to search through large datasets and assign a statistical significance to potential line detections. Two Pilot Surveys, covering around 3000 deg$^2$ of sky, were carried out in 2019-22 to test and verify the strategy for the full FLASH survey. The processed data products from these Pilot Surveys (spectral-line cubes, continuum images, and catalogues) are public and available online. In this paper, we describe the FLASH spectral-line and continuum data products and discuss the quality of the H i spectra and the completeness of our automated line search. Finally, we present a set of 30 new H i absorption lines that were robustly detected in the Pilot Surveys, almost doubling the number of known H i absorption systems at $0.4\lt z\lt1$. The detected lines span a wide range in H i optical depth, including three lines with a peak optical depth $\tau\gt1$, and appear to be a mixture of intervening and associated systems. Interestingly, around two-thirds of the lines found in this untargeted sample are detected against sources with a peaked-spectrum radio continuum, which are only a minor (5–20%) fraction of the overall radio-source population. The detection rate for H i absorption lines in the Pilot Surveys (0.3 to 0.5 lines per 40 deg$^2$ ASKAP field) is a factor of two below the expected value. One possible reason for this is the presence of a range of spectral-line artefacts in the Pilot Survey data that have now been mitigated and are not expected to recur in the full FLASH survey. A future paper in this series will discuss the host galaxies of the H i absorption systems identified here.
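For context on the optical depths quoted above, the sketch below applies the standard radiative-transfer relation between fractional line depth, covering factor, and peak optical depth; the spectrum, continuum level, and covering factor are illustrative assumptions, not FLASH data products.

```python
# Minimal sketch of the standard H I absorption relation assumed here:
# the fractional line depth is dS/S_c = -f (1 - exp(-tau)), so for an
# assumed covering factor f the peak optical depth follows from the
# deepest channel. Values are illustrative, not FLASH measurements.
import numpy as np

def peak_optical_depth(spectrum, continuum, covering_factor=1.0):
    """Peak tau from an absorption spectrum (same flux units as continuum)."""
    depth = (continuum - spectrum.min()) / continuum  # deepest fractional dip
    depth = min(depth / covering_factor, 0.999999)    # guard the log
    return -np.log(1.0 - depth)

continuum = 120.0  # mJy, illustrative
spectrum = continuum * (1 - 0.7 * np.exp(-0.5 * ((np.arange(200) - 100) / 6.0)**2))
print(f"peak tau = {peak_optical_depth(spectrum, continuum):.2f}")  # ~1.2
```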
We present the first results from a new backend on the Australian Square Kilometre Array Pathfinder, the Commensal Realtime ASKAP Fast Transient COherent (CRACO) upgrade. CRACO records millisecond time resolution visibility data and searches for dispersed fast transient signals, including fast radio bursts (FRBs), pulsars, and ultra-long period objects (ULPOs). With the visibility data, CRACO can localise transient events to arcsecond-level precision after detection. Here, we describe the CRACO system and report results from a sky survey carried out by CRACO at 110-ms resolution during its commissioning phase. During the survey, CRACO detected two FRBs (including one discovered solely with CRACO, FRB 20231027A), reported more precise localisations for four pulsars, discovered two new RRATs, and detected one known ULPO, GPM J1839$-$10, through its sub-pulse structure. We present a sensitivity calibration of CRACO, finding that it achieves the expected sensitivity of 11.6 Jy ms to bursts of 110 ms duration or less. CRACO is currently running at a 13.8 ms time resolution and aims to reach a 1.7 ms time resolution before the end of 2024. At that resolution, CRACO has an expected sensitivity of 1.5 Jy ms to bursts of 1.7 ms duration or less and can detect $10\times$ more FRBs than the current CRAFT incoherent sum system (i.e. 0.5$-$2 localised FRBs per day), enabling us to better constrain the models for FRBs and use them as cosmological probes.
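A minimal sketch of the cold-plasma dispersion relation that underlies searches like CRACO’s, assuming the standard delay formula; the band edges and DM below are illustrative, not CRACO configuration values.

```python
# Minimal sketch of the cold-plasma dispersion delay used in fast-transient
# searches: a burst at dispersion measure DM arrives later at lower
# frequencies by dt = 4.15 ms * DM * (nu_lo^-2 - nu_hi^-2), with nu in GHz.
# Parameters below are illustrative, not CRACO configuration values.

K_DM_MS = 4.15  # dispersion constant in ms GHz^2 pc^-1 cm^3

def dispersion_delay_ms(dm, nu_lo_ghz, nu_hi_ghz):
    """Arrival-time delay (ms) of nu_lo relative to nu_hi for a given DM."""
    return K_DM_MS * dm * (nu_lo_ghz**-2 - nu_hi_ghz**-2)

# Sweep across a 288 MHz band centred near 920 MHz (a typical ASKAP setup):
dm = 500.0  # pc cm^-3, illustrative
print(f"delay across band: {dispersion_delay_ms(dm, 0.776, 1.064):.1f} ms")
```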
The validity of a univocal multiple-choice test is determined for varying distributions of item difficulty and varying degrees of item precision. Validity is a function of σ_d² + σ_y², where σ_d measures item unreliability and σ_y measures the spread of item difficulties. When this variance is very small, validity is high for one optimum cutting score, but the test gives relatively little valid information for other cutting scores. As this variance increases, eta increases up to a certain point, and then begins to decrease. Screening validity at the optimum cutting score declines as this variance increases, but the test becomes much more flexible, maintaining the same validity for a wide range of cutting scores. For items of the type ordinarily used in psychological tests, the test with uniform item difficulty gives greater over-all validity, and superior validity for most cutting scores, compared to a test with a range of item difficulties. When a multiple-choice test is intended to reject the poorest F per cent of the men tested, items should on the average be located at or above the threshold for men whose true ability is at the Fth percentile.
Non-spurious methods are needed for estimating the coefficient of equivalence for speeded tests from single-trial data. Spuriousness in a split-half estimate depends on three conditions; the split-half method may be used if any of these is demonstrated to be absent. A lower-bounds formula, r_c, is developed. An empirical trial of this coefficient and other bounds proposed by Gulliksen demonstrates that, for moderately speeded tests, the coefficient of equivalence can be determined approximately from single-trial data. It is proposed that the degree to which tests are speeded be investigated explicitly, and an index τ is advanced to define this concept.
In response to the COVID-19 pandemic, we rapidly implemented a plasma coordination center within two months to support transfusion for two outpatient randomized controlled trials. The center design was based on an investigational drug services model and a Food and Drug Administration-compliant database to manage blood product inventory and trial safety.
Methods:
A core investigational team adapted a cloud-based platform to randomize patient assignments and track inventory distribution of control plasma and high-titer COVID-19 convalescent plasma of different blood groups from 29 donor collection centers directly to blood banks serving 26 transfusion sites.
Results:
We performed 1,351 transfusions in 16 months. The transparency of the digital inventory at each site was critical to facilitate qualification, randomization, and overnight shipments of blood group-compatible plasma for transfusions into trial participants. While inventory challenges were heightened with COVID-19 convalescent plasma, the cloud-based system and the flexible approach of the plasma coordination center staff across the blood bank network enabled decentralized procurement and distribution of investigational products to maintain inventory thresholds and overcome local supply chain restraints at the sites.
Conclusion:
The rapid creation of a plasma coordination center for outpatient transfusions is infrequent in the academic setting. Distributing more than 3,100 plasma units to blood banks charged with managing investigational inventory across the U.S. in a decentralized manner posed operational and regulatory challenges while providing opportunities for the plasma coordination center to contribute to research of global importance. This program can serve as a template in subsequent public health emergencies.
Around the world, people living in objectively difficult circumstances who experience symptoms of generalized anxiety disorder (GAD) do not qualify for a diagnosis because their worry is not ‘excessive’ relative to the context. We carried out the first large-scale, cross-national study to explore the implications of removing this excessiveness requirement.
Methods
Data come from the World Health Organization World Mental Health Survey Initiative. A total of 133 614 adults from 12 surveys in Low- or Middle-Income Countries (LMICs) and 16 surveys in High-Income Countries (HICs) were assessed with the Composite International Diagnostic Interview. Non-excessive worriers meeting all other DSM-5 criteria for GAD were compared to respondents meeting all criteria for GAD, and to respondents without GAD, on clinically-relevant correlates.
Results
Removing the excessiveness requirement increases the global lifetime prevalence of GAD from 2.6% to 4.0%, with larger increases in LMICs than HICs. Non-excessive and excessive GAD cases worry about many of the same things, although non-excessive cases worry more about health/welfare of loved ones, and less about personal or non-specific concerns, than excessive cases. Non-excessive cases closely resemble excessive cases in socio-demographic characteristics, family history of GAD, and risk of temporally secondary comorbidity and suicidality. Although non-excessive cases are less severe on average, they report impairment comparable to excessive cases and often seek treatment for GAD symptoms.
Conclusions
Individuals with non-excessive worry who meet all other DSM-5 criteria for GAD are clinically significant cases. Eliminating the excessiveness requirement would lead to a more defensible GAD diagnosis.
Major depressive disorder (MDD) is the leading cause of disability globally, with moderate heritability and well-established socio-environmental risk factors. Genetic studies have been mostly restricted to European settings, with polygenic scores (PGS) demonstrating low portability across diverse global populations.
Methods
This study examines genetic architecture, polygenic prediction, and socio-environmental correlates of MDD in a family-based sample of 10 032 individuals from Nepal with array genotyping data. We used genome-based restricted maximum likelihood to estimate heritability, applied S-LDXR to estimate the cross-ancestry genetic correlation between Nepalese and European samples, and modeled PGS trained on a GWAS meta-analysis of European and East Asian ancestry samples.
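As a rough illustration of how a polygenic score is constructed, the sketch below forms a weighted allele count from GWAS effect sizes; the simulated genotypes and effect sizes are hypothetical, and this is not the study’s GREML or S-LDXR code.

```python
# Minimal sketch of a polygenic score (PGS): a weighted allele count per
# individual, using GWAS summary-statistic effect sizes as weights.
# All data here are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_snps = 5, 1000
genotypes = rng.integers(0, 3, size=(n_people, n_snps))  # 0/1/2 allele counts
betas = rng.normal(0, 0.01, size=n_snps)                 # per-SNP effect sizes

# PGS_i = sum_j beta_j * g_ij, then standardize across the target sample.
pgs = genotypes @ betas
pgs_z = (pgs - pgs.mean()) / pgs.std()
print(np.round(pgs_z, 2))
```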
Results
We estimated the narrow-sense heritability of lifetime MDD in Nepal to be 0.26 (95% CI 0.18–0.34, p = 8.5 × 10⁻⁶). Our analysis was underpowered to estimate the cross-ancestry genetic correlation (rg = 0.26, 95% CI −0.29 to 0.81). MDD risk was associated with higher age (beta = 0.071, 95% CI 0.06–0.08), female sex (beta = 0.160, 95% CI 0.15–0.17), and childhood exposure to potentially traumatic events (beta = 0.050, 95% CI 0.03–0.07), while neither the depression PGS (beta = 0.004, 95% CI −0.004 to 0.01) nor its interaction with childhood trauma (beta = 0.007, 95% CI −0.01 to 0.03) was strongly associated with MDD.
Conclusions
Estimates of lifetime MDD heritability in this Nepalese sample were similar to previous European ancestry samples, but PGS trained on European data did not predict MDD in this sample. This may be due to differences in ancestry-linked causal variants, differences in depression phenotyping between the training and target data, or setting-specific environmental factors that modulate genetic effects. Additional research among under-represented global populations will ensure equitable translation of genomic findings.
An “escape room” is a game requiring teamwork and problem-solving in which a series of puzzles must be solved to escape a locked room. Various escape room activities have been designed for healthcare professionals, including internal medicine residents and nursing students (Anderson et al. Simulation & Gaming 2021; 52(1) 7-17; Rodríguez-Ferrer et al. BMC Med Educ 2022; 22:901; Khanna et al. Cureus 2021; 13(9) e18314). Escape rooms provide an opportunity for social activity, an important component of resident wellness (Mari et al. BMC Med Educ 2019; 19(1):437). This abstract describes an escape room challenge designed and implemented at our psychiatry residency program’s quarterly wellness afternoon, a session dedicated to resident wellness.
Objectives
The objective of this project was to design and implement an escape room challenge containing multiple game mechanics, including hidden roles, information asymmetry, acting, logical deduction, and spying. This activity was conducted to enhance bonding among residents while reinforcing knowledge in psychiatry.
Methods
We designed and implemented an escape room for 22 residents. Residents were divided into four teams, each tasked with completing a sequence of puzzles to open the final lockbox. Two novel mechanics were added to the activity. Each team had a “clue holder” with clues to help solve all the puzzles. This team member had to conceal their identity because, if any other team identified them, the winning team would have to surrender the prize to the team that guessed the clue holder’s identity. One member of each team was assigned a “spy” role, whose mission was to make it hard for the clue holder to reveal all the clues. An anonymous post-activity survey was completed using Google Forms.
Results
The script was set in a fictional, abandoned psychiatric emergency room. The first task was a visual puzzle of a historic figure in psychiatry. The second activity involved residents guessing the psychotropic medication being acted out by another resident in the style of charades. The third activity required residents to apply developmental milestones to decode a combination lock. The fourth puzzle involved residents solving riddles by using information gathered from resident profiles on the residency program website.
Eleven (50%) residents completed the post-game survey. All residents answered true or very true that they enjoyed the game and that participation helped them better connect with their peers. Eight (73%) residents answered true or very true that they learned something from the activity.
Conclusions
An adapted escape room challenge is a novel wellness activity that enhances resident collegiality, teamwork, and bonding. All residents who completed the post-activity survey indicated that they enjoyed the activity and felt more connected to their peers afterwards.
The majority of international guidelines for bipolar disorders are based on evidence from clinical trials. In contrast, the Korean Medication Algorithm Project for Bipolar Disorder (KMAP-BP) was developed using an expert-consensus paradigm, which is more practical and specific to the clinical environment in Korea.
Objectives
In this study, we investigated preferred medication strategies for acute mania across the six consecutively published KMAP-BP editions (2002, 2006, 2010, 2014, 2018, and 2022).
Methods
A written survey using a nine-point scale asked Korean experts to rate the appropriateness of various treatment strategies and treatment agents commonly used by clinicians as first-line treatment.
Results
The most preferred option for the initial treatment of mania was a combination of a mood stabilizer (MS) and an atypical antipsychotic (AAP) in every edition. Preference for combined treatment of euphoric mania increased, peaked in KMAP-BP 2010, and then declined slightly. Either MS or AAP monotherapy was also considered a first-line strategy for mania, but not for all episode types, including mixed/psychotic mania. Among MSs, lithium and valproate were almost equally preferred, except in the mixed subtype, where valproate was the most recommended MS. The preference for valproate followed a reverse U-shaped curve, which may reflect concern about teratogenicity in women. Quetiapine, aripiprazole, and olanzapine have been the preferred AAPs for acute mania since 2014, a change that likely reflects recent evidence and safety profiles. In cases of unsatisfactory response to initial medications, switching or adding another first-line agent was recommended. The most notable change over time was the increasing preference for AAPs.
Conclusions
The Korean experts have become increasingly convinced of the effectiveness of combination therapy for acute mania. There have been evident preference changes: preference for AAPs has increased, while preference for carbamazepine has decreased.
With the efflorescence of palaeoscientific approaches to the past, historians have been confronted with a wealth of new evidence on both human and natural phenomena, from human disease and migration to landscape change and climate. These new data require a rewriting of our narratives of the past, questioning what constitutes an authoritative historical source and who is entitled to recount history to contemporary societies. Humanities-based historical inquiry must embrace this new evidence, but to do so historians need to engage with it critically, just as they do with textual and material sources. This article highlights the most vital methodological issues, ranging from the spatiotemporal scales and heterogeneity of the new evidence to the new roles attributed to quantitative methods and the place of scientific data in narrative construction. It considers areas of study where the palaeosciences have “intruded” into fields and subjects previously reserved for historians, especially socioeconomic, climate, and environmental history. The authors argue that active engagement with new approaches is urgently needed if historians want to contribute to our evolving understanding of the challenges of the Anthropocene.
Clinical outcomes of repetitive transcranial magnetic stimulation (rTMS) for treatment of treatment-resistant depression (TRD) vary widely and there is no mood rating scale that is standard for assessing rTMS outcome. It remains unclear whether TMS is as efficacious in older adults with late-life depression (LLD) compared to younger adults with major depressive disorder (MDD). This study examined the effect of age on outcomes of rTMS treatment of adults with TRD. Self-report and observer mood ratings were measured weekly in 687 subjects ages 16–100 years undergoing rTMS treatment using the Inventory of Depressive Symptomatology 30-item Self-Report (IDS-SR), Patient Health Questionnaire 9-item (PHQ), Profile of Mood States 30-item, and Hamilton Depression Rating Scale 17-item (HDRS). All rating scales detected significant improvement with treatment; response and remission rates varied by scale but not by age (response/remission ≥ 60: 38%–57%/25%–33%; <60: 32%–49%/18%–25%). Proportional hazards models showed early improvement predicted later improvement across ages, though early improvements in PHQ and HDRS were more predictive of remission in those < 60 years (relative to those ≥ 60) and greater baseline IDS burden was more predictive of non-remission in those ≥ 60 years (relative to those < 60). These results indicate there is no significant effect of age on treatment outcomes in rTMS for TRD, though rating instruments may differ in assessment of symptom burden between younger and older adults during treatment.
Identifying the risks of completed suicide in suicide survivors is essential for policies supporting family members of suicide victims. We aimed to determine the suicide risk of suicide survivors and identify the number of suicides per 100,000 population of suicide survivors, bereaved families of traffic accident victims, and bereaved families with non-suicide deaths.
Methods:
This was a nationwide population-based cohort study in South Korea. The data were taken from the Korean National Health Insurance and Korea National Statistical Office between January 2008 and December 2017. The relationship between the decedent and the bereaved family was identified using the family database of the National Health Insurance data. Suicide deaths (n = 133,386) were randomly matched 1:1 to non-suicide deaths on age and gender. A proportional hazards regression analysis was conducted after confirming the cumulative hazard using Kaplan-Meier curves to obtain the hazard ratio (HR) of completed suicide in suicide survivors.
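A minimal sketch of the survival workflow described above, using the lifelines package (a software assumption; the study does not name its tooling): Kaplan-Meier curves by bereavement group followed by a Cox proportional hazards model for the HR. The DataFrame and column names are hypothetical, not the Korean NHIS data.

```python
# Minimal sketch: Kaplan-Meier curves per exposure group, then a Cox
# proportional hazards model whose coefficient gives the hazard ratio.
# Data and column names are hypothetical illustrations.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.DataFrame({
    "months": [12, 25, 40, 60, 9, 33, 48, 58, 20, 15, 52, 44],  # follow-up
    "suicide": [1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0],             # event flag
    "bereaved_by_suicide": [1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0],
})

# Cumulative hazard / survival by group (Kaplan-Meier).
kmf = KaplanMeierFitter()
for flag, grp in df.groupby("bereaved_by_suicide"):
    kmf.fit(grp["months"], grp["suicide"], label=f"suicide-bereaved={flag}")

# Cox model: HR for bereavement-by-suicide.
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="suicide")
print(cph.hazard_ratios_)
```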
Results:
Comparing 423,331 bereaved family members of suicide victims with 420,978 bereaved family members of non-suicide deaths as the control group, the HR of completed suicide in suicide survivors was 2.755 [95% confidence limit (CL): 2.550-2.977]. The HR for wives committing suicide after their husbands' suicide was 5.096 (95% CL: 3.982-6.522), the highest among all relationships with suicide decedents. The average duration from a suicide death to the suicide of a family member was 25.4 months. Among suicide survivors, the number of suicides per 100,000 people was 586, three times that of bereaved families of traffic accident victims and of bereaved families with non-suicide deaths.
Conclusion:
The risk of completed suicide was three times higher in suicide survivors than in bereaved families with non-suicide deaths, and it was highest in wives of suicide decedents. Thus, socio-environmental interventions for suicide survivors must be expanded.
We use contingent valuation to estimate hunter and trapper willingness to pay (WTP) for a hypothetical bobcat harvest permit being considered in Indiana. Harvest permits would be rationed, with limits on aggregate and individual harvests. A model of permit demand shows that WTP may be subject to “congestion effects” which attenuate welfare gains from relaxing harvest limits. Intuitively, relaxing limits may directly change an individual’s expected harvest and, hence, WTP. Participation may subsequently change, with congestion offsetting welfare increases. These effects may lead to apparent scope insensitivity that may be endemic in the context of rationed goods.
Blood-based biomarkers represent a scalable and accessible approach for the detection and monitoring of Alzheimer’s disease (AD). Plasma phosphorylated tau (p-tau) and neurofilament light (NfL) are validated biomarkers for the detection of tau and neurodegenerative brain changes in AD, respectively. There is now an emphasis on expanding beyond these markers to detect and provide insight into the pathophysiological processes of AD. To this end, a reactive astrocytic marker, plasma glial fibrillary acidic protein (GFAP), has been of interest, yet little is known about the relationship between plasma GFAP and AD. Here, we examined the association between plasma GFAP, diagnostic status, and neuropsychological test performance. The diagnostic accuracy of plasma GFAP was compared with plasma measures of p-tau181 and NfL.
Participants and Methods:
This sample included 567 participants from the Boston University (BU) Alzheimer’s Disease Research Center (ADRC) Longitudinal Clinical Core Registry, including individuals with normal cognition (n=234), mild cognitive impairment (MCI) (n=180), and AD dementia (n=153). The sample included all participants who had a blood draw. Participants completed a comprehensive neuropsychological battery (sample sizes across tests varied due to missingness). Diagnoses were adjudicated during multidisciplinary diagnostic consensus conferences. Plasma samples were analyzed using the Simoa platform. Binary logistic regression analyses tested the association between GFAP levels and diagnostic status (i.e., cognitively impaired due to AD versus unimpaired), controlling for age, sex, race, education, and APOE e4 status. Area under the curve (AUC) statistics from receiver operating characteristics (ROC) using predicted probabilities from binary logistic regression examined the ability of plasma GFAP to discriminate diagnostic groups compared with plasma p-tau181 and NfL. Linear regression models tested the association between plasma GFAP and neuropsychological test performance, accounting for the above covariates.
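The sketch below illustrates the discrimination analysis described above on simulated data: logistic regression of impairment status on a plasma marker plus a covariate, with AUC computed from predicted probabilities. The variable names and scikit-learn workflow are assumptions, not the study’s code.

```python
# Minimal sketch: logistic regression of diagnostic status on a plasma
# marker (plus covariates), then AUC from predicted probabilities.
# All data are simulated; this is not the BU ADRC registry.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 400
gfap_z = rng.normal(size=n)                    # z-scored plasma GFAP
age = rng.normal(74, 7, size=n)
p = 1 / (1 + np.exp(-(0.8 * gfap_z + 0.03 * (age - 74))))
impaired = rng.binomial(1, p)                  # simulated impairment status

X = np.column_stack([gfap_z, age])             # add sex/educ/APOE e4 similarly
model = LogisticRegression().fit(X, impaired)
auc = roc_auc_score(impaired, model.predict_proba(X)[:, 1])
print(f"AUC = {auc:.2f}")
```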
Results:
The mean (SD) age of the sample was 74.34 (7.54), 319 (56.3%) were female, 75 (13.2%) were Black, and 223 (39.3%) were APOE e4 carriers. Higher GFAP concentrations were associated with increased odds for having cognitive impairment (GFAP z-score transformed: OR=2.233, 95% CI [1.609, 3.099], p<0.001; non-z-transformed: OR=1.004, 95% CI [1.002, 1.006], p<0.001). ROC analyses, comprising GFAP and the above covariates, showed plasma GFAP discriminated the cognitively impaired from the unimpaired (AUC=0.75) and was similar, but slightly superior, to plasma p-tau181 (AUC=0.74) and plasma NfL (AUC=0.74). A joint panel of the plasma markers had the greatest discrimination accuracy (AUC=0.76). Linear regression analyses showed that higher GFAP levels were associated with worse performance on neuropsychological tests assessing global cognition, attention, executive functioning, episodic memory, and language abilities (ps<0.001) as well as higher CDR Sum of Boxes (p<0.001).
Conclusions:
Higher plasma GFAP levels differentiated participants with cognitive impairment from those with normal cognition and were associated with worse performance on all neuropsychological tests assessed. GFAP had similar accuracy in detecting those with cognitive impairment compared with p-tau181 and NfL, however, a panel of all three biomarkers was optimal. These results support the utility of plasma GFAP in AD detection and suggest the pathological processes it represents might play an integral role in the pathogenesis of AD.
Blood-based biomarkers offer a more feasible approach to Alzheimer’s disease (AD) detection, management, and the study of disease mechanisms than current in vivo measures. Given their novelty, these plasma biomarkers must be assessed against postmortem neuropathological outcomes for validation. Research has shown utility in plasma markers of the proposed AT(N) framework; however, recent studies have stressed the importance of expanding this framework to include other pathways. There are promising data supporting the usefulness of plasma glial fibrillary acidic protein (GFAP) in AD, but GFAP-to-autopsy studies are limited. Here, we tested the association between plasma GFAP and AD-related neuropathological outcomes in participants from the Boston University (BU) Alzheimer’s Disease Research Center (ADRC).
Participants and Methods:
This sample included 45 participants from the BU ADRC who had a plasma sample within 5 years of death and donated their brain for neuropathological examination. The most recent plasma samples were analyzed using the Simoa platform. Neuropathological examinations followed the National Alzheimer’s Coordinating Center procedures and diagnostic criteria. The NIA-Reagan Institute criteria were used for the neuropathological diagnosis of AD. Measures of GFAP were log-transformed. Binary logistic regression analyses tested the association between GFAP and autopsy-confirmed AD status, as well as semi-quantitative ratings of regional atrophy (none/mild versus moderate/severe). Ordinal logistic regression analyses tested the association between plasma GFAP and Braak stage and CERAD neuritic plaque score. Area under the curve (AUC) statistics from receiver operating characteristic (ROC) analyses using predicted probabilities from binary logistic regression examined the ability of plasma GFAP to discriminate autopsy-confirmed AD status. All analyses controlled for sex, age at death, years between last blood draw and death, and APOE e4 status.
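A minimal sketch of the ordinal outcome model described above, using a proportional-odds (ordered logit) model from statsmodels on simulated data; the collapsed Braak grouping and all values are hypothetical, not BU ADRC autopsy data.

```python
# Minimal sketch: ordered logit of an ordinal neuropathology stage on a
# log-transformed plasma marker. Data are simulated for illustration.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(2)
n = 45
log_gfap = rng.normal(size=n)
latent = 1.0 * log_gfap + rng.logistic(size=n)
braak = pd.cut(latent, bins=[-np.inf, -1, 0, 1, np.inf],
               labels=[0, 1, 2, 3]).astype(int)  # collapsed Braak groups

model = OrderedModel(braak, log_gfap.reshape(-1, 1), distr="logit")
res = model.fit(method="bfgs", disp=False)
print(res.summary())
```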
Results:
Of the 45 brain donors, 29 (64.4%) had autopsy-confirmed AD. The mean (SD) age of the sample at the time of blood draw was 80.76 (8.58) and there were 2.80 (1.16) years between the last blood draw and death. The sample included 20 (44.4%) females, 41 (91.1%) were White, and 20 (44.4%) were APOE e4 carriers. Higher GFAP concentrations were associated with increased odds for having autopsy-confirmed AD (OR=14.12, 95% CI [2.00, 99.88], p=0.008). ROC analysis showed plasma GFAP accurately discriminated those with and without autopsy-confirmed AD on its own (AUC=0.75) and strengthened as the above covariates were added to the model (AUC=0.81). Increases in GFAP levels corresponded to increases in Braak stage (OR=2.39, 95% CI [0.71, 4.07], p=0.005), but not CERAD ratings (OR=1.24, 95% CI [0.004, 2.49], p=0.051). Higher GFAP levels were associated with greater temporal lobe atrophy (OR=10.27, 95% CI [1.53, 69.15], p=0.017), but this was not observed with any other regions.
Conclusions:
The current results show that antemortem plasma GFAP is associated with non-specific AD neuropathological changes at autopsy. Plasma GFAP could be a useful and practical biomarker for assisting in the detection of AD-related changes, as well as for study of disease mechanisms.
Anterior temporal lobectomy is a common surgical approach for medication-resistant temporal lobe epilepsy (TLE). Prior studies have shown inconsistent findings regarding the utility of presurgical intracarotid sodium amobarbital testing (IAT; also known as Wada test) and neuroimaging in predicting postoperative seizure control. In the present study, we evaluated the predictive utility of IAT, as well as structural magnetic resonance imaging (MRI) and positron emission tomography (PET), on long-term (3-years) seizure outcome following surgery for TLE.
Participants and Methods:
Patients consisted of 107 adults (mean age=38.6, SD=12.2; mean education=13.3 years, SD=2.0; female=47.7%; White=100%) with TLE (mean epilepsy duration =23.0 years, SD=15.7; left TLE surgery=50.5%). We examined whether demographic, clinical (side of resection, resection type [selective vs. non-selective], hemisphere of language dominance, epilepsy duration), and presurgical studies (normal vs. abnormal MRI, normal vs. abnormal PET, correctly lateralizing vs. incorrectly lateralizing IAT) were associated with absolute (cross-sectional) seizure outcome (i.e., freedom vs. recurrence) with a series of chi-squared and t-tests. Additionally, we determined whether presurgical evaluations predicted time to seizure recurrence (longitudinal outcome) over a three-year period with univariate Cox regression models, and we compared survival curves with Mantel-Cox (log rank) tests.
Results:
Demographic and clinical variables (including type [selective vs. whole lobectomy] and side of resection) were not associated with seizure outcome. No associations were found among the presurgical variables. Presurgical MRI was not associated with cross-sectional (OR=1.5, p=.557, 95% CI=0.4-5.7) or longitudinal (HR=1.2, p=.641, 95% CI=0.4-3.9) seizure outcome. Normal PET scan (OR=4.8, p=.045, 95% CI=1.0-24.3) and IAT incorrectly lateralizing to the seizure focus (OR=3.9, p=.018, 95% CI=1.2-12.9) were associated with higher odds of seizure recurrence. Furthermore, normal PET scan (HR=3.6, p=.028, 95% CI=1.0-13.5) and incorrectly lateralized IAT (HR=2.8, p=.012, 95% CI=1.2-7.0) were presurgical predictors of earlier seizure recurrence within three years of TLE surgery. Log rank tests indicated that survival functions differed significantly between patients with normal vs. abnormal PET and incorrectly vs. correctly lateralizing IAT, such that patients with normal PET and incorrectly lateralizing IAT relapsed five and seven months earlier on average, respectively.
Conclusions:
Presurgical normal PET scan and incorrectly lateralizing IAT were associated with increased risk of post-surgical seizure recurrence and shorter time-to-seizure relapse.
Long-term exposure to the psychoactive ingredient in cannabis, delta-9-tetrahydrocannabinol (THC), has been consistently raised as a notable risk factor for schizophrenia. Additionally, cannabis is frequently used as a coping mechanism for individuals diagnosed with schizophrenia. Cannabis use in schizophrenia has been associated with greater severity of psychotic symptoms, non-compliance with medication, and increased relapse rates. Neuropsychological changes have also been implicated in long-term cannabis use and the course of illness of schizophrenia. However, the impact of co-occurring cannabis use on cognitive functioning in individuals with schizophrenia is less thoroughly explored. The purpose of this meta-analysis was to examine whether neuropsychological test performance and symptoms in schizophrenia differ as a function of THC use status. A second aim of this study was to examine whether symptom severity moderates the relationship between THC use and cognitive test performance among people with schizophrenia.
Participants and Methods:
Peer-reviewed articles comparing schizophrenia with and without cannabis use disorder (SZ SUD+; SZ SUD-) were selected from three scholarly databases: Ovid, Google Scholar, and PubMed. The following search terms were applied to yield studies for inclusion: neuropsychology, cognition, cognitive, THC, cannabis, marijuana, and schizophrenia. Eleven articles containing data on psychotic symptoms and neurocognition, with SZ SUD+ and SZ SUD- groups, were included in the final analyses. Six domains of neurocognition were identified across the included articles (Processing Speed, Attention, Working Memory, Verbal Learning Memory, and Reasoning and Problem Solving). Positive and negative symptom data were derived from eligible studies using the Positive and Negative Syndrome Scale (PANSS), the Scale for the Assessment of Positive Symptoms (SAPS), the Scale for the Assessment of Negative Symptoms (SANS), the Self-Evaluation of Negative Symptoms (SNS), the Brief Psychiatric Rating Scale (BPRS), and Structured Clinical Interview for DSM Disorders (SCID) scores. Meta-analysis and meta-regression were conducted using R.
Results:
No statistically significant differences were observed between SZ SUD+ and SZ SUD- across the cognitive domains of Processing Speed, Attention, Working Memory, Verbal Learning Memory, and Reasoning and Problem Solving. Positive symptom severity moderated the relationship between THC use and processing speed; negative symptom severity did not. Neither positive nor negative symptom severity significantly moderated the relationship between THC use and the other cognitive domains.
Conclusions:
Positive symptoms moderated the relationship between cannabis use and processing speed among people with schizophrenia. The reasons for this are unclear, and require further exploration. Additional investigation is warranted to better understand the impact of THC use on other tests of neuropsychological performance and symptoms in schizophrenia.
The coronavirus disease 2019 (COVID-19) pandemic has demonstrated the importance of stewardship of viral diagnostic tests to aid infection prevention efforts in healthcare facilities. We highlight diagnostic stewardship lessons learned during the COVID-19 pandemic and discuss how diagnostic stewardship principles can inform management and mitigation of future emerging pathogens in acute-care settings. Diagnostic stewardship during the COVID-19 pandemic evolved as information regarding transmission (eg, routes, timing, and efficiency of transmission) became available. Diagnostic testing approaches varied depending on the availability of tests and when supplies and resources became available. Diagnostic stewardship lessons learned from the COVID-19 pandemic include the importance of prioritizing robust infection prevention mitigation controls above universal admission testing and considering preprocedure testing, contact tracing, and surveillance in the healthcare facility in certain scenarios. In the future, optimal diagnostic stewardship approaches should be tailored to specific pathogen virulence, transmissibility, and transmission routes, as well as disease severity, availability of effective treatments and vaccines, and timing of infectiousness relative to symptoms. This document is part of a series of papers developed by the Society for Healthcare Epidemiology of America on diagnostic stewardship in infection prevention and antibiotic stewardship.