Functional impairment in daily activities, such as work and socializing, is part of the diagnostic criteria for major depressive disorder and most anxiety disorders. Despite evidence that symptom severity and functional impairment are partially distinct, functional impairment is often overlooked. To assess whether functional impairment captures diagnostically relevant genetic liability beyond that of symptoms, we aimed to estimate the heritability of, and genetic correlations between, key measures of current depression symptoms, anxiety symptoms, and functional impairment.
Methods
In 17,130 individuals with lifetime depression or anxiety from the Genetic Links to Anxiety and Depression (GLAD) Study, we analyzed total scores from the Patient Health Questionnaire-9 (depression symptoms), Generalized Anxiety Disorder-7 (anxiety symptoms), and Work and Social Adjustment Scale (functional impairment). Genome-wide association analyses were performed with REGENIE. Heritability was estimated using GCTA-GREML and genetic correlations with bivariate-GREML.
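For context, the estimands here are the standard GREML quantities (textbook definitions, not equations from this paper): under the variance-components model $y = g + e$ with $g \sim N(0, \sigma^2_g \mathbf{G})$ and $e \sim N(0, \sigma^2_e \mathbf{I})$, where $\mathbf{G}$ is the genomic relationship matrix, the SNP-based heritability is $h^2_{\mathrm{SNP}} = \sigma^2_g / (\sigma^2_g + \sigma^2_e)$. Bivariate GREML additionally estimates the genetic covariance $\sigma_{g_{12}}$ between two traits, giving the genetic correlation $r_g = \sigma_{g_{12}} / \sqrt{\sigma^2_{g_1} \sigma^2_{g_2}}$.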
Results
Phenotypic correlations across the three measures were moderate (Pearson’s r = 0.50–0.69). All three scales were under low but significant genetic influence (single-nucleotide polymorphism-based heritability, h²SNP = 0.11–0.19), with high genetic correlations between them (rg = 0.79–0.87).
Conclusions
Among individuals with lifetime depression or anxiety from the GLAD Study, the genetic variants that underlie symptom severity largely overlap with those influencing functional impairment. This suggests that self-reported functional impairment, while clinically relevant for diagnosis and treatment outcomes, does not reflect substantial additional genetic liability beyond that captured by symptom-based measures of depression or anxiety.
Background: While efgartigimod usage is expected to reduce immunoglobulin (IG) utilization, evidence from clinical practice is limited. Methods: In this retrospective cohort study, patients with generalized myasthenia gravis (gMG) treated with efgartigimod for ≥1 year were identified from US medical/pharmacy claims data (April 2016–January 2024) and data from the My VYVGART Path patient support program (PSP). The number of IG courses during the 1 year before and after efgartigimod initiation (index date) was evaluated. Patients with ≥6 annual IG courses were considered chronic IG users. Myasthenia Gravis Activities of Daily Living (MG-ADL) scores before and after index were obtained from the PSP where available. Descriptive statistics were used without adjustment for covariates. Results: 167 patients with ≥1 IG claim before index were included. Prior to efgartigimod initiation, the majority of patients (62%) received IG chronically. During the 1 year after index, the number of IG courses fell by 95% (pre: 1531; post: 75), and 89% (n=149/167) of patients fully discontinued IG usage. Mean (SD) best follow-up MG-ADL scores were significantly reduced after index (8.0 [4.1] to 2.8 [2.1], P<0.05; n=73/167, 44%). Conclusions: Based on US claims, IG utilization was substantially reduced among patients who continued efgartigimod for ≥1 year, with patients demonstrating a favorable MG-ADL response.
Background: Our prior six-year review (n=2165) revealed that 24% of patients undergoing posterior decompression surgeries (laminectomy or discectomy) sought emergency department (ED) care within three months post-surgery. We established an integrated Spine Assessment Clinic (SAC) to enhance patient outcomes and minimize unnecessary ED visits through pre-operative education, targeted QI interventions, and early post-operative follow-up. Methods: We reviewed 13 months of posterior decompression data (n=205) following SAC implementation. These patients received individualized, comprehensive pre-operative education and follow-up phone calls within 7 days post-surgery. ED visits within 90 days post-surgery were tracked using provincial databases and compared with our pre-SAC implementation data. Results: Of 205 patients, 24 (11.6%) accounted for 34 ED visits within 90 days post-op, a significant reduction in the proportion of patients visiting the ED (from 24% to 11.6%); overall ED utilization decreased from 42.1% to 16.6% (when accounting for multiple visits by the same patient). Early interventions, including wound monitoring, outpatient bloodwork, and prescription adjustments for pain management, helped mitigate ED visits. Patient satisfaction surveys (n=62) indicated that 92% were “highly satisfied” and 100% would recommend the SAC. Conclusions: The SAC reduced ED visits after posterior decompression surgery by over 50% through pre-operative education, focused QI initiatives, and an individualized, proactive approach.
DSM-5 specifies bulimia nervosa (BN) severity based on specific thresholds of compensatory behavior frequency. There is limited empirical support for such severity groupings. Limited support could be because the DSM-5’s compensatory behavior frequency cutpoints are inaccurate or because compensatory behavior frequency does not capture true underlying differences in severity. In support of the latter possibility, some work has suggested shape/weight overvaluation or use of single versus multiple purging methods may be better severity indicators. We used structural equation modeling (SEM) Trees to empirically determine the ideal variables and cutpoints for differentiating BN severity, and compared the SEM Tree groupings to alternate severity classifiers: the DSM-5 indicators, single versus multiple purging methods, and a binary indicator of shape/weight overvaluation.
Methods
Treatment-seeking adolescents and adults with BN (N = 1017) completed self-report measures assessing BN and comorbid symptoms. SEM Trees specified an outcome model of BN severity and recursively partitioned this model into subgroups based on shape/weight overvaluation and compensatory behaviors. We then compared groups on clinical characteristics (eating disorder symptoms, depression, anxiety, and binge eating frequency).
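To make the method concrete, below is a deliberately simplified sketch of model-based recursive partitioning in the spirit of SEM Trees (illustrative only: the study used the full SEM Trees framework, whereas here the per-node "model" is reduced to a single Gaussian severity score, and all names are hypothetical):

```python
# Toy model-based recursive partitioning: fit a Gaussian "outcome model"
# in each node and split on the covariate/cutpoint that most improves fit.
import numpy as np
from scipy.stats import norm

def gauss_loglik(y):
    # Log-likelihood of a Gaussian fitted to the node's outcome scores.
    mu, sd = y.mean(), y.std(ddof=0) + 1e-6
    return norm.logpdf(y, mu, sd).sum()

def best_split(y, x, min_n=50):
    # Scan cutpoints of one covariate for the largest log-likelihood gain.
    base, best_c, best_gain = gauss_loglik(y), None, 0.0
    for c in np.unique(x)[1:]:
        left, right = y[x < c], y[x >= c]
        if len(left) < min_n or len(right) < min_n:
            continue
        gain = gauss_loglik(left) + gauss_loglik(right) - base
        if gain > best_gain:
            best_c, best_gain = c, gain
    return best_c, best_gain

def grow(y, X, names, min_gain=10.0, min_n=50, depth=0):
    # Recurse: choose the covariate (e.g. overvaluation, purging frequency)
    # and cutpoint with the largest gain; stop when gains are small.
    splits = [(j, *best_split(y, X[:, j], min_n)) for j in range(X.shape[1])]
    j, c, gain = max(splits, key=lambda s: s[2])
    if c is None or gain < min_gain:
        print("  " * depth + f"leaf: n={len(y)}, mean severity={y.mean():.2f}")
        return
    print("  " * depth + f"split: {names[j]} < {c:.2f} (gain={gain:.1f})")
    mask = X[:, j] < c
    grow(y[mask], X[mask], names, min_gain, min_n, depth + 1)
    grow(y[~mask], X[~mask], names, min_gain, min_n, depth + 1)
```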
Results
SEM Tree analyses resulted in five severity subgroups, all based on shape/weight overvaluation: overvaluation <1.25; overvaluation 1.25–3.74; overvaluation 3.75–4.74; overvaluation 4.75–5.74; and overvaluation ≥5.75. SEM Tree groups explained 1.63–6.41 times the variance explained by other severity schemes.
Conclusions
Shape/weight overvaluation outperformed the DSM-5 severity scheme and single versus multiple purging methods, suggesting the DSM-5 severity scheme should be reevaluated. Future research should examine the predictive utility of this severity scheme.
Posttraumatic stress disorder (PTSD) has been associated with advanced epigenetic age cross-sectionally, but the association between these variables over time is unclear. This study conducted meta-analyses to test whether new-onset PTSD diagnosis and changes in PTSD symptom severity over time were associated with changes in two metrics of epigenetic aging over two time points.
Methods
We conducted meta-analyses of the association between change in PTSD diagnosis and symptom severity and change in epigenetic age acceleration/deceleration (age-adjusted DNA methylation age residuals as per the Horvath and GrimAge metrics) using data from 7 military and civilian cohorts participating in the Psychiatric Genomics Consortium PTSD Epigenetics Workgroup (total N = 1,367).
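For illustration, the pooled estimates reported below can be read as inverse-variance weighted combinations of per-cohort regression coefficients; a minimal fixed-effect sketch follows (the workgroup's pipeline may use a different estimator and additional adjustments):

```python
# Fixed-effect inverse-variance meta-analysis of per-cohort betas.
import numpy as np
from scipy.stats import norm

def ivw_meta(betas, ses):
    betas, ses = np.asarray(betas, float), np.asarray(ses, float)
    w = 1.0 / ses**2                       # inverse-variance weights
    beta = np.sum(w * betas) / np.sum(w)   # pooled "meta beta"
    se = np.sqrt(1.0 / np.sum(w))          # pooled standard error
    p = 2 * norm.sf(abs(beta / se))        # two-sided "meta p"
    return beta, se, p
```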
Results
Meta-analysis revealed that the interaction between Time 1 (T1) Horvath age residuals and new-onset PTSD over time was significantly associated with Horvath age residuals at T2 (meta β = 0.16, meta p = 0.02, p-adj = 0.03). The interaction between T1 Horvath age residuals and changes in PTSD symptom severity over time was significantly related to Horvath age residuals at T2 (meta β = 0.24, meta p = 0.05). No associations were observed for GrimAge residuals.
Conclusions
Results indicated that individuals who developed new-onset PTSD or showed increased PTSD symptom severity over time evidenced greater epigenetic age acceleration at follow-up than would be expected based on baseline age acceleration. This suggests that PTSD may accelerate biological aging over time and highlights the need for intervention studies to determine if PTSD treatment has a beneficial effect on the aging methylome.
The prevalence of youth anxiety and depression has increased globally, with limited causal explanations. Long-term physical health conditions (LTCs) affect 20–40% of youth, with rates also rising. LTCs are associated with higher rates of youth depression and anxiety; however, it is uncertain whether observed associations are causal or explained by unmeasured confounding or reverse causation.
Methods
Using data from the Norwegian Mother, Father, and Child Cohort Study (MoBa) and the Norwegian National Patient Registry, we investigated phenotypic associations between childhood LTCs and depression and anxiety diagnoses in youth (<19 years), defined using ICD-10 diagnoses and self-rated measures. We then conducted two-sample Mendelian randomization (MR) analyses using SNPs associated with childhood LTCs from existing genome-wide association studies (GWAS) as instrumental variables. Outcomes were: (i) diagnoses of major depressive disorder (MDD) and anxiety disorders or elevated symptoms in MoBa, and (ii) youth-onset MDD using summary statistics from a GWAS in the iPSYCH2015 cohort.
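As a sketch of the core MR estimator (the inverse-variance weighted method cited in the Results; illustrative only, omitting the sensitivity analyses such studies typically include):

```python
# Two-sample MR, fixed-effect IVW estimate from per-SNP summary statistics.
import numpy as np
from scipy.stats import norm

def mr_ivw(beta_exp, beta_out, se_out):
    # beta_exp: SNP effects on the exposure (childhood LTC);
    # beta_out / se_out: SNP effects (and SEs) on the outcome (youth MDD).
    beta_exp, beta_out, se_out = map(np.asarray, (beta_exp, beta_out, se_out))
    ratios = beta_out / beta_exp            # per-SNP Wald ratios
    w = beta_exp**2 / se_out**2             # IVW weights
    est = np.sum(w * ratios) / np.sum(w)    # pooled causal log-OR
    se = np.sqrt(1.0 / np.sum(w))
    p = 2 * norm.sf(abs(est / se))
    return np.exp(est), p                   # as an odds ratio, with p-value
```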
Results
Having any childhood LTC phenotype was associated with elevated risk of youth MDD (OR = 1.48 [95% CI 1.19, 1.85], p = 4.2×10⁻⁴) and anxiety disorders (OR = 1.44 [1.20, 1.73], p = 7.9×10⁻⁵). Observational and MR analyses in MoBa were consistent with a causal relationship between migraine and depression (IVW OR = 1.38 [1.19, 1.60], pFDR = 1.8×10⁻⁴). MR analyses using iPSYCH2015 did not support a causal effect of LTC genetic liabilities on youth-onset depression, or in the reverse direction.
Conclusions
Childhood LTCs are associated with depression and anxiety in youth; however, MR analyses provided little evidence of causation between LTC genetic liability and youth depression/anxiety, with the exception of migraine.
Accurate diagnosis of bipolar disorder (BPD) is difficult in clinical practice, with an average delay between symptom onset and diagnosis of about 7 years. A depressive episode often precedes the first manic episode, making it difficult to distinguish BPD from unipolar major depressive disorder (MDD).
Aims
We use genome-wide association analyses (GWAS) to identify differential genetic factors and to develop predictors based on polygenic risk scores (PRS) that may aid early differential diagnosis.
Method
Based on individual genotypes from case–control cohorts of BPD and MDD shared through the Psychiatric Genomics Consortium, we compile case–case–control cohorts, applying a careful quality control procedure. In a resulting cohort of 51 149 individuals (15 532 BPD patients, 12 920 MDD patients and 22 697 controls), we perform a variety of GWAS and PRS analyses.
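To illustrate the PRS step (a generic sketch, not the consortium pipeline; input names are hypothetical, and upstream QC, clumping, and covariate adjustment are omitted):

```python
# Score individuals with GWAS weights, then test discrimination of
# BPD vs MDD cases by AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

def polygenic_scores(dosages, weights):
    # dosages: (n_individuals, n_snps) allele dosages in 0..2;
    # weights: (n_snps,) per-allele effect sizes from the training GWAS.
    return np.asarray(dosages) @ np.asarray(weights)

# Hypothetical usage:
# prs = polygenic_scores(dosages, gwas_betas)
# auc = roc_auc_score(is_bpd, prs)   # 0.5 = no discrimination
```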
Results
Although our GWAS is not well powered to identify genome-wide significant loci, we find significant chip heritability and demonstrate the ability of the resulting PRS to distinguish BPD from MDD, including BPD cases with depressive onset (BPD-D). We replicate our PRS findings in an independent Danish cohort (iPSYCH 2015, N = 25 966). We observe strong genetic correlation between our case–case GWAS and that of case–control BPD.
Conclusions
We find that MDD and BPD, including BPD-D, are genetically distinct. Our findings suggest that controls, MDD patients, and BPD patients lie primarily on a continuum of genetic risk. Future studies with larger and richer samples will likely yield a better understanding of these findings and enable the development of better genetic predictors distinguishing BPD and, importantly, BPD-D from MDD.
The global population and status of Snowy Owls Bubo scandiacus are particularly challenging to assess because individuals are irruptive and nomadic, and the breeding range is restricted to the remote circumpolar Arctic tundra. The International Union for Conservation of Nature (IUCN) uplisted the Snowy Owl to “Vulnerable” in 2017 because the suggested population estimates appeared considerably lower than historical estimates, and it recommended actions to clarify the population size, structure, and trends. Here we present a broad review and status assessment, an effort led by the International Snowy Owl Working Group (ISOWG) and researchers from around the world, to estimate population trends and the current global status of the Snowy Owl. We use long-term breeding data, genetic studies, satellite-GPS tracking, and survival estimates to assess current population trends at several monitoring sites in the Arctic and we review the ecology and threats throughout the Snowy Owl range. An assessment of the available data suggests that current estimates of a worldwide population of 14,000–28,000 breeding adults are plausible. Our assessment of population trends at five long-term monitoring sites suggests that breeding populations of Snowy Owls in the Arctic have decreased by more than 30% over the past three generations and the species should continue to be categorised as Vulnerable under the IUCN Red List Criterion A2. We offer research recommendations to improve our understanding of Snowy Owl biology and future population assessments in a changing world.
Excluding all animal-sourced foods may be associated with increased risks of nutrient deficiencies. As indispensable amino acids (IAAs) cannot be stored or endogenously produced, consistent protein consumption throughout the day is important to improve protein quality for optimal metabolic function(1). Assessment of protein adequacy needs to be undertaken at the meal rather than daily intake level because food combinations within each meal can be complementary and influence the overall amino acid profile of the meal(2).
Our previous review found that, among plant-sourced foods, soy, legumes, nuts, and seeds provide greater protein content and quality(3). We hypothesise that variation in protein intake will exist both between vegan individuals and between observation days for the same individual. Previous investigations of the relationship between meals and nutrient intake based on specified time windows for eating may have been subject to researcher bias in the definition of those windows. The main aim of this study is to use time series clustering to determine the impact of dietary patterns on protein distribution across the day.
Intake data were obtained using a four-day food diary from a cross-sectional survey of 193 New Zealand vegans (ethical approval: HDEC 2022 EXP 12312). The inclusion criteria required participants to have followed an exclusive vegan diet for at least two years. Kernel density contour estimation was used to visualise protein distribution across eating occasions for all participants over four days. Dynamic Time Warping (DTW) was then used to align pairs of temporal sequences (time series) and compute a distance between them(4), which was used for hierarchical clustering with the Ward.D2 method. An optimal number of three clusters was identified using the silhouette coefficient and domain knowledge.
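A minimal sketch of this clustering pipeline (illustrative, not the study code, which used R's Ward.D2; here scipy's Ward linkage plays the analogous role, and `series_list` is a hypothetical list of per-participant intake time series):

```python
# DTW distance matrix -> Ward hierarchical clustering -> silhouette-based k.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.metrics import silhouette_score
from tslearn.metrics import dtw

def cluster_series(series_list, k_candidates=range(2, 7)):
    n = len(series_list)
    dist = np.zeros((n, n))
    for i in range(n):                      # pairwise DTW distances
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = dtw(series_list[i], series_list[j])
    Z = linkage(squareform(dist), method="ward")
    best_k = max(
        k_candidates,
        key=lambda k: silhouette_score(
            dist, fcluster(Z, k, criterion="maxclust"), metric="precomputed"),
    )                                       # e.g. k = 3 in this study
    return fcluster(Z, best_k, criterion="maxclust")
```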
Participants had a mean age of 39.4 years (SD=12.3), and 90.1% had attained a tertiary-level education or higher. Overall, mean protein intake was 1.11 g/kg/d (SD=0.39), with 8.29% of participants below the Estimated Average Requirement (EAR) and 24.3% below the Recommended Dietary Allowance (RDA) for adults. The mean contribution of protein to energy intake was 15.5% (SD=4.16), with 96.9% of participants within the recommended Acceptable Macronutrient Distribution Range (AMDR) for protein (10–35%). Peak protein consumption was observed at 12:30 and 19:00. A sequential colour scale representing density showed a higher concentration of data points at protein intakes below 10 g per eating occasion. Time series similar in shape and amplitude were assigned to the same cluster. Preliminary findings identified three distinct protein intake profiles across the day.
A small percentage of participants had protein intakes below the daily requirements for adults, and eating occasions providing less than 10 g of protein were common. This approach objectively classifies dietary patterns for the analysis of daily protein and IAA intake.
We present the Pilot Survey Phase 2 data release for the Wide-field ASKAP L-band Legacy All-sky Blind surveY (WALLABY), carried out using the Australian SKA Pathfinder (ASKAP). We present 1760 H i detections (with a default spatial resolution of 30′′) from three pilot fields, including the NGC 5044 and NGC 4808 groups as well as the Vela field, covering a total of $\sim 180$ deg$^2$ of the sky and spanning redshifts up to $z \simeq 0.09$. This release also includes kinematic models for over 126 spatially resolved galaxies. The observed median rms noise in the image cubes is 1.7 mJy per 30′′ beam and 18.5 kHz channel. This corresponds to a 5$\sigma$ H i column density sensitivity of $\sim 9.1\times10^{19}(1 + z)^4$ cm$^{-2}$ per 30′′ beam and $\sim 20$ km s$^{-1}$ channel, and a 5$\sigma$ H i mass sensitivity of $\sim 5.5\times10^8 (D/100$ Mpc)$^{2}$ M$_{\odot}$ for point sources. Furthermore, we present for the first time 12′′ high-resolution images (“cut-outs”) and catalogues for a sub-sample of 80 sources from the Pilot Survey Phase 2 fields. While we are able to recover sources with lower signal-to-noise ratio compared with sources in the Public Data Release 1, we note that some data quality issues still persist, notably flux discrepancies linked to the impact of sidelobes of the dirty beams due to inadequate deconvolution. In spite of these limitations, the WALLABY Pilot Survey Phase 2 has already produced roughly a third of the number of HIPASS sources, making this the largest spatially resolved H i sample from a single survey to date.
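As a back-of-envelope consistency check using standard optically thin 21-cm relations (not equations quoted from this release): the 5$\sigma$ noise integrated over a 20 km s$^{-1}$ interval built from 18.5 kHz ($\simeq$3.9 km s$^{-1}$) channels is $5 \times 1.7\,{\rm mJy} \times \sqrt{20/3.9} \times 3.9\,{\rm km\,s^{-1}} \simeq 75$ mJy km s$^{-1}$ per beam; with $N_{\rm HI} \simeq 1.823\times10^{18} \int T_B\,{\rm d}v$ and $T_B \simeq 606\,S_{\rm mJy/beam}/(\theta_a\theta_b)$ K ($\theta$ in arcsec at 21 cm), this gives $N_{\rm HI} \simeq 1.823\times10^{18} \times (606/900) \times 75 \simeq 9\times10^{19}\,(1+z)^4$ cm$^{-2}$, matching the quoted value. Likewise, the optically thin mass relation $M_{\rm HI} = 2.356\times10^5\,(D/{\rm Mpc})^2 \int S\,{\rm d}v\ {\rm M_\odot}$ reproduces the quoted point-source limit if an integration width of $\sim$200 km s$^{-1}$ is assumed: $5\sigma \int S\,{\rm d}v \simeq 0.24$ Jy km s$^{-1}$, hence $M_{\rm HI} \simeq 5.6\times10^8\,(D/100\,{\rm Mpc})^2\ {\rm M_\odot}$.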
Herbicide drift to sensitive crops can result in significant injury, yield loss, and even crop destruction. When pesticide drift is reported to the Georgia Department of Agriculture (GDA), tissue samples are collected and analyzed for residues. Seven field studies were conducted in 2020 and 2021 in cooperation with the GDA to evaluate the effect of (1) the time interval between a simulated drift event and sampling, (2) low-dose herbicide rates, and (3) the sample collection method on detecting herbicide residues in cotton (Gossypium hirsutum L.) foliage. Simulated drift rates of 2,4-D, dicamba, and imazapyr were applied to non-tolerant cotton in the 8- to 9-leaf stage, with plant samples collected at 7 or 21 d after treatment (DAT). During collection, plant sampling consisted of removing entire plants or removing new growth occurring after the 7-leaf stage. Visual cotton injury from 2,4-D reached 43% to 75% at 0.001 and 0.004 kg ae ha⁻¹, respectively; for dicamba, it was 9% to 41% at 0.003 and 0.014 kg ae ha⁻¹, respectively; and for imazapyr, it was 1% to 74% at 0.004 and 0.03 kg ae ha⁻¹, respectively. Yield loss was observed with both rates of 2,4-D (11% to 51%) and with the high rate of imazapyr (52%); dicamba did not influence yield. Herbicide residues were detected in 88%, 88%, and 69% of samples collected from plants treated with 2,4-D, dicamba, and imazapyr, respectively, at 7 DAT, compared with 25%, 16%, and 22% when samples were collected at 21 DAT, highlighting the importance of sampling quickly after a drift event. Although the interval between drift event and sampling, drift rate, and sampling method can all influence residue detection for 2,4-D, dicamba, and imazapyr, the factor with the greatest influence is the amount of time between drift and sample collection.
The field of healthcare epidemiology is increasingly focused on identifying, characterizing, and addressing social determinants of health (SDOH) to reduce inequities in healthcare quality. To identify evidence gaps, we assessed recent systematic reviews examining the association of race, ethnicity, and SDOH with inpatient quality measures.
Methods:
We searched Medline via OVID for English language systematic reviews from 2010 to 2022 addressing race, ethnicity, or SDOH domains and inpatient quality measures in adults using specific topic questions. We imported all citations to Covidence (www.covidence.org, Veritas Health Innovation) and removed duplicates. Two blinded reviewers assessed all articles for inclusion in 2 phases: title/abstract, then full-text review. Discrepancies were resolved by a third reviewer.
Results:
Of 472 systematic reviews identified, 39 were included. Of these, 23 examined all-cause mortality; 6 examined 30-day readmission rates; 4 examined length of stay; 4 examined falls; 2 examined surgical site infections (SSIs); and 1 examined risk of venous thromboembolism. The most frequently evaluated SDOH measures were sex (n = 9), income and/or employment status (n = 9), age (n = 6), race and ethnicity (n = 6), and education (n = 5). No systematic reviews assessed medication use errors or healthcare-associated infections. We found very limited assessment of other SDOH measures such as economic stability, neighborhood, and health system access.
Conclusion:
A limited number of systematic reviews have examined the association of race, ethnicity and SDOH measures with inpatient quality measures, and existing reviews highlight wide variability in reporting. Future systematic evaluations of SDOH measures are needed to better understand the relationships with inpatient quality measures.
To identify and quantify general practitioner (GP) preferences related to service attributes of clinical consultations, including telehealth consultations, in Australia.
Background:
GPs have increasingly used telehealth to deliver patient care since the onset of the coronavirus disease 2019 (COVID-19) pandemic. GP preferences for telehealth service models will play an important role in the uptake and sustainability of telehealth services post-pandemic.
Methods:
An online survey was used to ask GPs general telehealth questions and to have them complete a discrete choice experiment (DCE). The DCE elicited GP preferences for various service attributes of telehealth (telephone and videoconference) consultations, investigating five attributes: consultation mode, consultation purpose, consultation length, quality of care and rapport, and patient co-payment. Participants were presented with eight choice sets, each containing three options to choose from. Descriptive statistics were reported, and mixed logit models were used to estimate and analyse the DCE data.
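For intuition, the mixed logit reported in the Findings generalizes the conditional logit sketched below by letting the preference weights vary randomly across GPs; this fixed-coefficient sketch (illustrative only, with hypothetical array shapes) shows the core likelihood:

```python
# Conditional logit for a DCE: each choice set has several alternatives
# described by attribute columns; the chosen alternative is observed.
import numpy as np
from scipy.optimize import minimize

def neg_loglik(beta, X, chosen):
    # X: (n_sets, n_alts, n_attrs); chosen: (n_sets,) index of picked option.
    v = X @ beta                                  # utilities per alternative
    v -= v.max(axis=1, keepdims=True)             # numerical stability
    logp = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(chosen)), chosen].sum()

def fit_conditional_logit(X, chosen):
    beta0 = np.zeros(X.shape[2])
    res = minimize(neg_loglik, beta0, args=(X, chosen), method="BFGS")
    return res.x   # estimated attribute preference weights
```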
Findings:
A total of 60 GPs fully completed the survey. Previous telehealth experience influenced preferences for telehealth consultations across clinical presentations, although in-person modes were generally favoured (in approximately 70% of all scenarios). Some DCE results lacked statistical significance, indicating no discernible differences between GP preferences for some service attributes. However, GPs preferred to provide a consultation with good quality of care and rapport (P < 0.002), and would rather provide care to their patients than decline a consultation because of its mode, length, or purpose (P < 0.0001). Based on these findings, GPs value the ability to provide high-quality care and develop rapport during a clinical consultation. This highlights the importance of recognising value-based care in future policy reforms, to ensure continued adoption and sustainability of GP telehealth services in Australia.
The brain can be represented as a network, with nodes as brain regions and edges as region-to-region connections. Nodes with the most connections (hubs) are central to efficient brain function. Current findings on structural differences in Major Depressive Disorder (MDD) identified using network approaches remain inconsistent, potentially due to small sample sizes. It is still uncertain at what level of the connectome hierarchy differences may exist, and whether they are concentrated in hubs, disrupting fundamental brain connectivity.
Methods
We utilized two large cohorts, UK Biobank (UKB, N = 5104) and Generation Scotland (GS, N = 725), to investigate MDD case–control differences in brain network properties. Network analysis was done across four hierarchical levels: (1) global, (2) tier (nodes grouped into four tiers based on degree) and rich club (between-hub connections), (3) nodal, and (4) connection.
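As an illustrative sketch of these graph measures (not the study pipeline; `A` is a hypothetical binarized connectome adjacency matrix):

```python
# Global efficiency, degree-based hub tier, and rich-club coefficients.
import networkx as nx
import numpy as np

def network_summary(A, hub_fraction=0.25):
    G = nx.from_numpy_array(np.asarray(A))
    global_eff = nx.global_efficiency(G)              # level 1: global
    degrees = dict(G.degree())
    cutoff = np.quantile(list(degrees.values()), 1 - hub_fraction)
    hubs = {n for n, d in degrees.items() if d >= cutoff}  # top-degree tier
    hub_eff = nx.global_efficiency(G.subgraph(hubs))  # efficiency among hubs
    rich_club = nx.rich_club_coefficient(G, normalized=False)  # per degree k
    return global_eff, hub_eff, rich_club
```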
Results
In UKB, reductions in network efficiency were observed in MDD cases globally (d = −0.076, pFDR = 0.033), across all tiers (d = −0.069 to −0.079, pFDR = 0.020), and in hubs (d = −0.080 to −0.113, pFDR = 0.013–0.035). No differences in rich club organization and region-to-region connections were identified. The effect sizes and direction for these associations were generally consistent in GS, albeit not significant in our lower-N replication sample.
Conclusion
Our results suggest that the brain's fundamental rich-club structure is similar in MDD cases and controls, but that subtle topological differences exist across the brain. Consistent with recent large-scale neuroimaging findings, our results offer a connectomic perspective on a similar scale and support the idea that minimal differences exist between MDD cases and controls.
To investigate the symptoms of SARS-CoV-2 infection, their dynamics, and their discriminatory power for the disease using longitudinal, prospectively collected information reported at the time of occurrence. We analysed data from a large UK phase 3 COVID-19 vaccine trial; the alpha variant was the predominant strain. Participants were assessed for SARS-CoV-2 infection via nasal/throat PCR at recruitment, at vaccination appointments, and when symptomatic. Statistical techniques were implemented to infer estimates representative of the UK population, accounting for multiple symptomatic episodes per individual. An optimal diagnostic model for SARS-CoV-2 infection was derived. The 4-month prevalence of SARS-CoV-2 was 2.1%, increasing to 19.4% (16.0%–22.7%) in participants reporting loss of appetite and 31.9% (27.1%–36.8%) in those with anosmia/ageusia. The model identified anosmia and/or ageusia, fever, congestion, and cough as significantly associated with SARS-CoV-2 infection. Symptom dynamics differed markedly between the two groups: in PCR-positive participants, symptoms started slowly, peaked later, and lasted longer, whereas in PCR-negative participants they declined consistently, with, on average, fewer than 3 days of symptoms reported. Anosmia/ageusia peaked late in confirmed SARS-CoV-2 infection (day 12), indicating low discriminatory power for early disease diagnosis.
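A minimal sketch of the kind of symptom-based diagnostic model described (illustrative only: the column names are hypothetical, and the study's modelling additionally handled population weighting and repeated episodes):

```python
# Logistic regression of PCR result on reported symptom indicators.
import pandas as pd
from sklearn.linear_model import LogisticRegression

SYMPTOMS = ["anosmia_or_ageusia", "fever", "congestion", "cough"]

def fit_diagnostic_model(df: pd.DataFrame) -> LogisticRegression:
    X, y = df[SYMPTOMS], df["pcr_positive"]
    model = LogisticRegression().fit(X, y)
    # model.coef_: each symptom's log-odds contribution;
    # model.predict_proba(X): per-episode infection probabilities.
    return model
```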
Helium or neopentane can be used as a surrogate gas fill for deuterium (D2) or deuterium-tritium (DT) in laser-plasma interaction studies. Surrogates are convenient for avoiding flammability hazards or the integration of cryogenics into an experiment. To test the degree of equivalency between deuterium and helium, experiments were conducted in the Pecos target chamber at Sandia National Laboratories. Observables such as laser propagation and signatures of laser-plasma instabilities (LPI) were recorded for multiple laser and target configurations. It was found that some observables can differ significantly despite the apparent similarity of the gases with respect to molecular charge and weight. While the qualitative behaviour of the interaction may well be studied by finding a suitable compromise of laser absorption, electron density, and LPI cross sections, a quantitative investigation of expected values for deuterium fills at high laser intensities is not likely to succeed with surrogate gases.