Identifying persons with HIV (PWH) at increased risk for Alzheimer’s disease (AD) is complicated because memory deficits are common in HIV-associated neurocognitive disorders (HAND) and a defining feature of amnestic mild cognitive impairment (aMCI; a precursor to AD). Recognition memory deficits may be useful in differentiating these etiologies. Therefore, neuroimaging correlates of different memory deficits (i.e., recall, recognition) and their longitudinal trajectories in PWH were examined.
Design:
We examined 92 PWH from the CHARTER Program, ages 45–68, without severe comorbid conditions, who received baseline structural MRI and baseline and longitudinal neuropsychological testing. Linear and logistic regression examined neuroanatomical correlates (i.e., cortical thickness and volumes of regions associated with HAND and/or AD) of memory performance at baseline, and multilevel modeling examined neuroanatomical correlates of memory decline (average follow-up = 6.5 years).
Results:
At baseline, thinner pars opercularis cortex was associated with impaired recognition (p = 0.012; p = 0.060 after correcting for multiple comparisons). Worse delayed recall was associated with thinner pars opercularis (p = 0.001) and thinner rostral middle frontal cortex (p = 0.006) cross-sectionally, even after correcting for multiple comparisons. Delayed recall and recognition were not associated with medial temporal lobe (MTL), basal ganglia, or other prefrontal structures. Recognition impairment was variable over time, and there was little decline in delayed recall. Baseline MTL and prefrontal structures were not associated with change in delayed recall.
Conclusions:
Episodic memory was associated with prefrontal structures, and neither MTL nor prefrontal structures predicted memory decline. Memory was relatively stable over time. Findings suggest that episodic memory in middle-aged PWH is more related to frontal structures than to encroaching AD pathology. Additional research should clarify whether recognition memory is clinically useful for differentiating aMCI and HAND.
Affective responses to the menstrual cycle vary widely. Some individuals experience severe symptoms, as in premenstrual dysphoric disorder, while others experience minimal changes. The reasons for these differences are unclear, but prior studies suggest that stressor exposure may play a role. However, research in at-risk psychiatric samples is lacking.
Methods
In a large clinical sample, we conducted a prospective study of how lifetime stressors relate to the degree of affective change across the cycle. A total of 114 outpatients with past-month suicidal ideation (SI) provided daily ratings (n = 6187) of negative affect and SI across 1–3 menstrual cycles. Participants completed the Stress and Adversity Inventory (STRAIN), which measures different stressor exposures (e.g. interpersonal loss, physical danger) throughout the life course, including before and after menarche. Multilevel polynomial growth models tested the relationship between menstrual cycle time and symptoms, moderated by stressor exposure.
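The moderated polynomial growth structure described above can be sketched as a design matrix containing cycle-time polynomials and their interactions with a person-level stressor score. This is a simplified, fixed-effects-only illustration on simulated data (the study fit multilevel models with person-specific random effects; all variable names, sizes, and coefficient values here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily data: `day` is cycle time centered on menses onset
# (-14..14); `stress` is a person-level lifetime stressor score (z-scored).
n_persons, n_days = 50, 29
day = np.tile(np.linspace(-14, 14, n_days), n_persons)
stress = np.repeat(rng.normal(size=n_persons), n_days)

# Quadratic growth curve: symptoms peak perimenstrually (near day 0), and
# the peak is sharper for high-stress individuals (the moderation terms).
X = np.column_stack([np.ones_like(day), day, day**2,
                     stress, stress * day, stress * day**2])
beta_true = np.array([3.0, 0.0, -0.01, 0.5, 0.0, -0.005])
y = X @ beta_true + rng.normal(scale=0.5, size=X.shape[0])

# Fit by ordinary least squares (the study's multilevel models add
# person-specific random effects; this sketch keeps only the fixed part).
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

A negative estimate on the `stress * day**2` term is what a "greater stressor exposure, more pronounced perimenstrual increase" pattern looks like in this parameterisation.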
Results
Greater lifetime stressor exposure predicted a more pronounced perimenstrual increase in active SI, with marginally significant similar patterns for negative affect and passive SI. Additionally, pre-menarche stressors increased the cyclicity of active SI significantly more than post-menarche stressors. Exposure to more interpersonal loss stressors predicted greater perimenstrual symptom change in negative affect, passive SI and active SI. Exploratory item-level analyses showed that lifetime stressors were associated with a more severe perimenstrual symptom trajectory for mood swings, anger/irritability, rejection sensitivity, and interpersonal conflict.
Conclusion
These findings suggest that greater lifetime stressor exposure may lead to heightened emotional reactivity to ovarian hormone fluctuations, elevating the risk of psychopathology.
Both impulsivity and compulsivity have been identified as risk factors for problematic use of the internet (PUI). Yet little is known about the relationships between impulsivity, compulsivity and individual PUI symptoms, precluding a more precise understanding of the mechanisms underlying PUI.
Aims
The current study is the first to use network analysis to (a) examine the unique associations among impulsivity, compulsivity and PUI symptoms, and (b) identify the most influential drivers in relation to the PUI symptom community.
Method
We estimated a Gaussian graphical model consisting of five facets of impulsivity, compulsivity and individual PUI symptoms among 370 Australian adults (51.1% female, mean age = 29.8, s.d. = 11.1). Network structure and bridge expected influence were examined to elucidate differential associations among impulsivity, compulsivity and PUI symptoms, as well as identify influential nodes bridging impulsivity, compulsivity and PUI symptoms.
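As a rough sketch of the analytic idea, the following simulated example estimates a Gaussian graphical model as partial correlations derived from the precision matrix, and computes bridge expected influence across two pre-assigned communities. The study used regularised network estimation on real questionnaire data; everything below (number of nodes, effect sizes, community labels) is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 2 "trait" variables (impulsivity facet, compulsivity)
# plus 3 "PUI symptom" variables; the study's network was larger.
n = 370
Z = rng.normal(size=(n, 5))
Z[:, 2] += 0.6 * Z[:, 0]          # impulsivity facet -> PUI symptom 1
Z[:, 3] += 0.4 * Z[:, 1]          # compulsivity      -> PUI symptom 2

# Gaussian graphical model: partial correlations from the precision matrix.
# (The study used regularised estimation; this sketch inverts the sample
# covariance directly, which is only reasonable when n >> p.)
K = np.linalg.inv(np.cov(Z, rowvar=False))
d = np.sqrt(np.diag(K))
pcor = -K / np.outer(d, d)
np.fill_diagonal(pcor, 0.0)

# Bridge expected influence: for each node, the sum of its edge weights to
# nodes in the *other* community (community membership assumed known here).
community = np.array([0, 0, 1, 1, 1])   # 0 = traits, 1 = PUI symptoms
bridge_ei = np.array([pcor[i, community != community[i]].sum()
                      for i in range(5)])
```

Nodes whose edges concentrate on the other community get the largest bridge expected influence, which is the sense in which compulsivity and negative urgency were "most influential" in the abstract.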
Results
Results revealed that four facets of impulsivity (i.e. negative urgency, positive urgency, lack of premeditation and lack of perseverance) and compulsivity were related to different PUI symptoms. Further, compulsivity and negative urgency were the most influential nodes in relation to the PUI symptom community, showing the highest bridge expected influence.
Conclusions
The current findings delineate distinct relationships across impulsivity, compulsivity and PUI, which offer insights into potential mechanistic pathways and targets for future interventions in this space. To realise this potential, future studies are needed to replicate the identified network structure in different populations and determine the directionality of the relationships among impulsivity, compulsivity and PUI symptoms.
Anterior temporal lobectomy is a common surgical approach for medication-resistant temporal lobe epilepsy (TLE). Prior studies have shown inconsistent findings regarding the utility of presurgical intracarotid sodium amobarbital testing (IAT; also known as the Wada test) and neuroimaging in predicting postoperative seizure control. In the present study, we evaluated the predictive utility of IAT, as well as structural magnetic resonance imaging (MRI) and positron emission tomography (PET), for long-term (3-year) seizure outcome following surgery for TLE.
Participants and Methods:
Patients consisted of 107 adults (mean age=38.6, SD=12.2; mean education=13.3 years, SD=2.0; female=47.7%; White=100%) with TLE (mean epilepsy duration=23.0 years, SD=15.7; left TLE surgery=50.5%). We examined whether demographic variables, clinical variables (side of resection, resection type [selective vs. non-selective], hemisphere of language dominance, epilepsy duration), and presurgical studies (normal vs. abnormal MRI, normal vs. abnormal PET, correctly lateralizing vs. incorrectly lateralizing IAT) were associated with absolute (cross-sectional) seizure outcome (i.e., seizure freedom vs. recurrence) using a series of chi-squared and t-tests. Additionally, we determined whether presurgical evaluations predicted time to seizure recurrence (longitudinal outcome) over a three-year period with univariate Cox regression models, and we compared survival curves with Mantel-Cox (log-rank) tests.
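The longitudinal comparison described above (univariate Cox models plus Mantel-Cox/log-rank tests of survival curves) can be illustrated with a self-contained two-sample log-rank test. This is a didactic sketch on made-up numbers, not the study's data or code:

```python
import numpy as np
from scipy import stats

def logrank(time_a, event_a, time_b, event_b):
    """Two-sample log-rank (Mantel-Cox) test.

    time_*  : follow-up times (e.g. months to recurrence or censoring)
    event_* : 1 = seizure recurrence observed, 0 = censored
    Returns (chi-square statistic, p-value), 1 degree of freedom.
    """
    times = np.concatenate([time_a, time_b])
    events = np.concatenate([event_a, event_b])
    group = np.concatenate([np.zeros(len(time_a)), np.ones(len(time_b))])
    o_minus_e = var = 0.0
    for t in np.unique(times[events == 1]):       # each distinct event time
        at_risk = times >= t
        n = at_risk.sum()                         # total still at risk at t
        n1 = (at_risk & (group == 1)).sum()       # at risk in group B
        d = ((times == t) & (events == 1)).sum()  # events at t, both groups
        d1 = ((times == t) & (events == 1) & (group == 1)).sum()
        o_minus_e += d1 - d * n1 / n              # observed minus expected
        if n > 1:                                 # hypergeometric variance
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    chi2 = o_minus_e ** 2 / var
    return chi2, stats.chi2.sf(chi2, df=1)
```

For instance, with hypothetical recurrence times (in months) of [1, 2, 3] in one group and [4, 5, 6] in the other, all events observed, the test yields chi-square of roughly 5.05 (p of roughly 0.025), reflecting the earlier relapse in the first group.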
Results:
Demographic and clinical variables (including type [selective vs. whole lobectomy] and side of resection) were not associated with seizure outcome. No associations were found among the presurgical variables. Presurgical MRI was not associated with cross-sectional (OR=1.5, p=.557, 95% CI=0.4-5.7) or longitudinal (HR=1.2, p=.641, 95% CI=0.4-3.9) seizure outcome. Normal PET scan (OR=4.8, p=.045, 95% CI=1.0-24.3) and IAT incorrectly lateralizing to seizure focus (OR=3.9, p=.018, 95% CI=1.2-12.9) were associated with higher odds of seizure recurrence. Furthermore, normal PET scan (HR=3.6, p=.028, 95% CI=1.0-13.5) and incorrectly lateralized IAT (HR=2.8, p=.012, 95% CI=1.2-7.0) were presurgical predictors of earlier seizure recurrence within three years of TLE surgery. Log-rank tests indicated that survival functions differed significantly between patients with normal vs. abnormal PET and between those with incorrectly vs. correctly lateralizing IAT, with seizure relapse occurring on average five and seven months earlier, respectively.
Conclusions:
Presurgical normal PET scan and incorrectly lateralizing IAT were associated with increased risk of post-surgical seizure recurrence and shorter time-to-seizure relapse.
Many people with HIV (PWH) are at risk for age-related neurodegenerative disorders such as Alzheimer’s disease (AD). Studies on the association between cognition, neuroimaging outcomes, and the Apolipoprotein E4 (APOE4) genotype, which is associated with greater risk of AD, have yielded mixed results in PWH; however, many of these studies have examined a wide age range of PWH and have not examined APOE by race interactions that are observed in HIV-negative older adults. Thus, we examined how APOE status relates to cognition and medial temporal lobe (MTL) structures (implicated in AD pathogenesis) in mid- to older-aged PWH. In exploratory analyses, we also examined race (African American (AA)/Black and non-Hispanic (NH) White) by APOE status interactions on cognition and MTL structures.
Participants and Methods:
The analysis included 88 PWH between the ages of 45 and 68 (mean age=51±5.9 years; 86% male; 51% AA/Black, 38% NH-White, 9% Hispanic/Latinx, 2% other) from the CNS HIV Antiretroviral Therapy Effects Research multi-site study. Participants underwent APOE genotyping, neuropsychological testing, and structural MRI; APOE groups were defined as APOE4+ (at least one APOE4 allele) and APOE4- (no APOE4 alleles). Eighty-nine percent of participants were on antiretroviral therapy, 74% had undetectable plasma HIV RNA (<50 copies/ml), and 25% were APOE4+ (32% of AA/Black and 15% of NH-White participants). Neuropsychological testing assessed seven domains, and demographically corrected T-scores were calculated. FreeSurfer 7.1.1 was used to measure MTL structures (hippocampal volume, entorhinal cortex thickness, and parahippocampal thickness), and the effect of scanner was regressed out prior to analyses. Multivariable linear regressions tested the association between APOE status and cognitive and imaging outcomes. Models examining cognition covaried for comorbid conditions and HIV disease characteristics related to global cognition (i.e., AIDS status, lifetime methamphetamine use disorder). Models examining the MTL covaried for age, sex, and relevant imaging covariates (i.e., intracranial volume or mean cortical thickness).
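The "effect of scanner was regressed out" step can be sketched as residualising each imaging measure on scanner indicator variables before the group analyses. This is a generic illustration on simulated values (the scanner count, effect sizes and variable names are hypothetical, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical MTL thickness values from 3 scanners; scanner identity shifts
# the mean (a site effect to remove before the APOE analyses). The sample
# size matches the abstract; everything else is invented for illustration.
n = 88
scanner = np.arange(n) % 3
thickness = 3.0 + 0.1 * scanner + rng.normal(scale=0.2, size=n)

# Regress out scanner: residualise on scanner dummies (plus intercept), then
# add back the grand mean so adjusted values stay on the original scale.
D = np.column_stack([np.ones(n)] +
                    [(scanner == s).astype(float) for s in (1, 2)])
beta, *_ = np.linalg.lstsq(D, thickness, rcond=None)
adjusted = thickness - D @ beta + thickness.mean()
```

After this adjustment the per-scanner means are equal by construction, so downstream group comparisons cannot be driven by site differences in the mean.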
Results:
APOE4+ carriers had worse learning (β=-0.27, p=.01) and delayed recall (β=-0.25, p=.02) compared to the APOE4- group, but APOE status was not significantly associated with any other domain (ps>.24). APOE4+ status was also associated with thinner entorhinal cortex (β=-0.24, p=.02). APOE status was not significantly associated with hippocampal volume (β=-0.08, p=.32) or parahippocampal thickness (β=-0.18, p=.08). Lastly, race interacted with APOE status such that the negative association between APOE4+ status and cognition was stronger in NH-White PWH as compared to AA/Black PWH in learning, delayed recall, and verbal fluency (ps<.05). There were no APOE by race interactions for any MTL structures (ps>.10).
Conclusions:
Findings suggest that APOE4 carrier status is associated with worse episodic memory and thinner entorhinal cortex in mid- to older-aged PWH. While APOE4+ groups were small, we found that APOE4 carrier status had a larger association with cognition in NH-White PWH as compared to AA/Black PWH, consistent with studies demonstrating an attenuated effect of APOE4 in AA/Black HIV-negative older adults. These findings further highlight the importance of recruiting diverse samples and suggest exploring other genetic markers (e.g., ABCA7) that may be more predictive of AD in some races to better understand AD risk in diverse groups of PWH.
Patients with Parkinson’s disease (PD) commonly show deficits on tests of visuospatial functioning. The Identi-Fi is a new measure of visual organization and recognition composed of two components. The Visual Recognition (VR) subtest asks persons to identify an object that has been broken into pieces and rearranged, akin to the Hooper Visual Organization Test, but using updated and colorful pictures. The Visual Matching (VM) subtest involves showing the same stimuli, but the examinee must select the correct response from among five choices (1 correct and 4 foils), placing greater demand on visuospatial discrimination. Together, the two subtests comprise the Visual Organization Index (VOI), reflecting overall visual processing and organization ability. The present study examined performance on the Identi-Fi in patients with PD and its association with other aspects of cognition.
Participants and Methods:
Participants were 23 patients with PD (95% male; mean age = 69.7 years [SD = 7.8], range = 47-79) and 12 patients with cognitive concerns (CC) who were intact on neuropsychological testing (excluding consideration of Identi-Fi scores; 50% male, mean age = 71.08 [SD = 6.27], range = 60-78) seen for a neuropsychological evaluation at a large Northeastern medical center. As part of a larger battery, patients completed the Identi-Fi, Trail Making Test (TMT), Category Fluency, Test of Premorbid Functioning (TOPF), and Brief Visuospatial Memory Test, Revised (BVMT-R).
Results:
The PD group performed significantly worse than the CC group on the VR and VM subtests, as well as the VOI, of the Identi-Fi (p < .001). Within the PD group, poorer VR, VM, and VOI performance was associated with lower scores on the TOPF (p < .05), BVMT-R learning (p < .05) and delayed recall (p < .05), as well as TMT Parts A and B (p < .05). VR was significantly correlated with Category Fluency (p < .05), while a trend was seen for the association between VOI and Category Fluency (p = .094).
Conclusions:
Identi-Fi performance was worse in the PD group than the CC group, which is consistent with prior research indicating that visuospatial processing is often abnormal in patients with PD. Furthermore, findings indicate that poorer performance on the Identi-Fi in the PD group is associated with poorer cognitive functioning in other domains (i.e., visuospatial learning and memory, processing speed, cognitive flexibility, and semantic fluency), as well as lower premorbid intellectual functioning. While these findings suggest that the Identi-Fi is useful in identifying visuospatial dysfunction in PD, findings should be interpreted with caution given the small sample sizes and uneven gender distribution.
Patients with Post-Acute COVID Syndrome (PACS) are reported to commonly experience a variety of cognitive, physical, and neuropsychiatric symptoms well beyond the acute phase of the illness. Notably, concerns involving mood, fatigue, and physical symptoms (e.g., pain, headaches) following COVID-19 appear to be especially prevalent. It is unclear, however, to what extent such symptoms are associated with cognitive problems in patients with PACS. In the present study, we examined the prevalence of cognitive impairment in a sample of patients with PACS, as well as the relationship between cognitive functioning and several non-cognitive symptoms.
Participants and Methods:
Participants were 38 patients with PACS [71.1% female; mean age = 48.03 years (SD = 11.60); mean education = 15.26 years (SD = 2.60)] seen for a neuropsychological evaluation at a large Northeastern medical center at least three months from the time of COVID-19 diagnosis (per PCR test). As part of a larger battery, patients completed the Hopkins Verbal Learning Test-Revised (HVLT-R; learning and delayed recall), Trail Making Test (TMT; time to complete Parts A and B), Controlled Oral Word Association Test (COWAT; total correct), and Animals (total correct). They were also administered the Chalder Fatigue Scale-11 (CFS-11), Beck Depression Inventory-II (BDI-II), Beck Anxiety Inventory (BAI), and Patient Health Questionnaire-15 (PHQ-15). The percentage of patients with scores in the impaired range (z < -1.5) on cognitive tests was determined. Correlations between cognitive and non-cognitive measures were also examined.
Results:
The most frequent impairment was seen for the COWAT (21.2%), followed by TMT-A and TMT-B (both 13.9%), then category fluency (9.1%). No patients were impaired on HVLT-R Learning, and only one (4%) was impaired on HVLT-R Delayed Recall. Overall, the sample endorsed considerable depression, anxiety, fatigue, and physical symptoms. Greater fatigue was associated with worse verbal learning, processing speed, cognitive flexibility, and verbal fluency (letter and category). Worse physical symptom severity was related to poorer verbal delayed recall and cognitive flexibility. Greater anxiety was also associated with worse cognitive flexibility, while more severe depression was related to poorer category fluency.
Conclusions:
In our sample of patients with PACS, seen for evaluation several months after contracting COVID-19, phonemic fluency was the most common cognitive impairment, though fewer than a quarter of patients were impaired on any given cognitive test. Importantly, several associations were observed between cognitive test performance and non-cognitive symptoms commonly endorsed by patients with PACS. These findings highlight the importance of assessing multiple factors potentially contributing to cognitive impairment in these patients. Interventions designed to address such symptoms may help improve cognitive functioning in those with PACS.
Mild cognitive impairment (MCI) is characterized by subjective and objective memory concerns, though additional cognitive concerns are commonly reported, including changes in executive functions (EF). Rabin et al. (2006) showed that a sample of research participants with MCI endorsed problems with their EFs, especially working memory. Similarly, those with subjective cognitive dysfunction (SCD) also reported greater difficulty with aspects of their EF than a healthy comparison sample of older adults (HC). In the present study, we investigated subjective EF in clinical samples of older adults with MCI or SCD, which represents a more naturalistic sample relative to a research sample. Furthermore, we evaluated whether subjective EF varied in these groups depending on whether patients were "young-old" versus "old-old" given prior research indicating objective cognitive differences between these age groups.
Participants and Methods:
Participants were 135 older adults (53 MCI, 52 SCD, and 30 HC) matched for age (p = .116) and education (p = .863). Dichotomous categorization of age used the sample median (72 years) as the cutoff, with 72 participants in the young-old group (mean age = 65.8 ± 4.7 years) and 63 in the old-old group (mean age = 78.1 ± 3.7 years). Participants completed the Behavior Rating Inventory of Executive Function-Adult (BRIEF-A), assessing executive functions in everyday life over the past month. The BRIEF-A yields an overall score (Global Executive Composite [GEC]) composed of two index scores (Behavioral Regulation Index [BRI] and Metacognition Index [MI]) and nine clinical scales (Inhibit, Shift, Emotional Control, Self-Monitor, Initiate, Working Memory, Plan/Organize, Task Monitor, and Organization of Materials). A diagnosis by age-group multivariate analysis of variance (MANOVA) with post-hoc comparisons for diagnosis using a Tukey HSD correction was conducted using SPSS Version 24.
Results:
MCI and SCD groups endorsed worse EF on all three index scores (ps < .005) and all nine clinical scales (ps < .05) relative to the HC group, and the MCI group reported worse initiation relative to the SCD group. Additionally, worse executive functions on all three index scores (ps < .05) and four clinical scales (ps < .05; emotional control, self-monitoring, planning/organization, and task monitoring) were reported by the young-old group relative to the old-old group. No diagnosis by age-group interactions were observed.
Conclusions:
Problems with aspects of EF were endorsed by older adults with MCI and SCD compared to HCs across all indices and clinical scales; however, only initiation was reported to be worse in MCI than those with SCD. Additionally, the young-old group endorsed having worse EF than the old-old group across BRIEF-A indices and several more specific aspects of EF, without a moderating effect of diagnosis. These findings highlight the importance of assessing subjective EF in older adults, as they may be early indicators of cognitive change, prior to objective evidence of cognitive decline. Furthermore, results also point to differences in how the young-old and old-old perceive their EF in everyday life.
Female fertility is a complex trait with age-specific changes in spontaneous dizygotic (DZ) twinning and fertility. To elucidate factors regulating female fertility and infertility, we conducted a genome-wide association study (GWAS) of mothers of spontaneous DZ twins (MoDZT) versus controls (3273 cases, 24,009 controls). This is a follow-up to the Australia/New Zealand (ANZ) component of the study previously reported by Mbarek et al. (2016), with a sample size almost twice that of the entire discovery sample meta-analysed in the previous article (and five times the ANZ contribution to that), resulting from newly available additional genotyping and representing a significant increase in power. We compare analyses with and without male controls and show unequivocally that it is better to include male controls who have been screened for recent family history than to use only female controls. The SNP-based GWAS identified four genome-wide significant signals, including one novel region, ZFPM1 (Zinc Finger Protein, FOG Family Member 1), on chromosome 16. Previous signals near FSHB (Follicle Stimulating Hormone beta subunit) and SMAD3 (SMAD Family Member 3) were also replicated (Mbarek et al., 2016). We also ran the GWAS with a dominance model, which identified a further locus, ADRB2, on chromosome 5. These results have been contributed to the International Twinning Genetics Consortium for inclusion in the next GWAS meta-analysis (Mbarek et al., in press).
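The additive versus dominance models mentioned above differ only in how genotypes are coded before association testing. A minimal illustration on simulated genotypes (the trait, effect size, and sample size are invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated counts of the effect allele (0, 1 or 2) at one SNP.
g = rng.integers(0, 3, size=1000)

# Additive model: the allele count itself. Dominance model: carriers of at
# least one copy are coded identically (0/1/1), so one copy is assumed to
# be as effective as two.
additive = g.astype(float)
dominance = (g > 0).astype(float)

# If the true effect is dominant, the dominance coding tracks the phenotype
# more closely than the additive coding (illustrative quantitative trait).
y = dominance + rng.normal(scale=0.5, size=g.size)
r_add = np.corrcoef(additive, y)[0, 1]
r_dom = np.corrcoef(dominance, y)[0, 1]
```

Running both codings genome-wide is why a dominance-model scan can surface loci (like the ADRB2 signal reported above) that an additive scan underweights.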
Background: ALS is a progressive neurodegenerative disease with no cure and limited treatment options. Edaravone, a free radical scavenger, was shown to slow disease progression over 6 months in a select group of patients with ALS; however, its effect on survival was not investigated in randomized trials. The objective of this study is to describe real-world survival effectiveness over a longer timeframe. Methods: This retrospective cohort study included patients with ALS across Canada with symptom onset within the previous three years. Those with a minimum 6-month edaravone exposure between 2017 and 2022 were enrolled in the interventional arm, and those without formed the control arm. The primary outcome of tracheostomy-free survival was compared between the two groups, accounting for age, sex, ALS disease progression rate, disease duration, pulmonary vital capacity, bulbar ALS onset, and presence of frontotemporal dementia or C9ORF72 mutation using inverse propensity treatment weights. Results: 182 patients with mean ± SD age 60±11 years were enrolled in the edaravone arm and 860 in the control arm (mean ± SD age 63±12 years). Mean ± SD time from onset to edaravone initiation was 18±10 months. Tracheostomy-free survival will be calculated. Conclusions: This study will provide evidence on the effectiveness of edaravone for tracheostomy-free survival in patients with ALS.
Racially and ethnically minoritized (REM) patients are disproportionately affected by infectious diseases, including candidemia. REM patients with candidemia were significantly younger than non-REM patients, with trends toward more risk factors for candidemia and longer lengths of stay. Although Candida parapsilosis was more common in REM patients, there were no differences in mortality rates.
Emotional functioning is linked to HIV-associated neurocognitive impairment, yet research on this association among diverse people with HIV (PWH) is scant. We examined emotional health and its association with neurocognition in Hispanic and White PWH.
Methods:
Participants included 107 Hispanic (41% primarily Spanish-speakers; 80% Mexican heritage/origin) and 216 White PWH (overall age: M = 53.62, SD = 12.19; 86% male; 63% AIDS; 92% on antiretroviral therapy). Emotional health was assessed via the National Institutes of Health Toolbox (NIHTB) Emotion Battery, which yields T-scores for three factor-based summary scores (negative affect, social satisfaction, and psychological well-being) and 13 individual component scales. Neurocognition was measured via demographically adjusted fluid cognition T-scores from the NIHTB Cognition Battery.
Results:
27%–39% of the sample had problematic socioemotional summary scores. Hispanic PWH showed less loneliness, better social satisfaction, higher meaning and purpose, and better psychological well-being than White PWH (ps < .05). Within Hispanics, Spanish-speakers showed better meaning and purpose, a higher psychological well-being summary score, less anger hostility, but greater fear affect than English-speakers. In Whites only, worse negative affect (fear affect, perceived stress, and sadness) was associated with worse neurocognition (p < .05); in both groups, worse social satisfaction (emotional support, friendship, and perceived rejection) was linked with worse neurocognition (p < .05).
Conclusion:
Adverse emotional health is common among PWH, with subgroups of Hispanics showing relative strengths in some domains. Aspects of emotional health differentially relate to neurocognition among PWH and cross-culturally. Understanding these varying associations is an important step towards the development of culturally relevant interventions that promote neurocognitive health among Hispanic PWH.
The frequency, intensity and location of fence line pacing were observed daily, in four groups of six farmed red deer hinds, over a 3-week period at calving. The groups were confined in neighbouring paddocks (5000 m² in area; two containing a wooden shelter) adjacent to deer yards containing an observation hide. At 1100 h, a person entered each paddock to weigh, sex and tag newborn calves.
Pacing (moving parallel to and within 0.5 m of a fence line) was mainly at walking speed, and its frequency differed according to the time relative to parturition. It was recorded in 13.6 (± 1.09) per cent of observations during the period 2–4 days before calving, increased to 27.6 (± 1.9) per cent on the day before birth and then declined to 4.6 (± 0.39) per cent for the period 0–3 days after calving. Pacing relative to total movement was greater before (65.7%) than after (43.5%) parturition (SED 3.7%; P < 0.001), indicating that it was not just a consequence of greater activity before birth. The hinds were observed to be grouped together rather than distributed randomly, but when some of the hinds were pacing, groups were spread out over more quarters of the paddock than when none were pacing (P < 0.001). However, there was no definite suggestion of avoidance of other deer. Within each group, most pacing occurred along certain fence lines, but no general pattern was observed. Regardless of whether hinds had given birth or not, there were graded increases in pacing depending on the degree of human presence (not present < within deer yards < visible < in paddock; P < 0.05), and deer favoured areas distant from human presence (P < 0.01). The findings in relation to fence line pacing and location support suggestions that human interference at calving should be minimized, but did not indicate which environmental features were responsible for this motivational drive.
This systematic literature review aimed to provide an overview of the characteristics and methods used in studies applying the disability-adjusted life years (DALY) concept for infectious diseases within European Union (EU)/European Economic Area (EEA)/European Free Trade Association (EFTA) countries and the United Kingdom. Electronic databases and grey literature were searched for articles reporting the assessment of DALY and its components. We considered studies in which researchers performed DALY calculations using primary epidemiological data input sources. We screened 3053 studies, of which 2948 were excluded and 105 met our inclusion criteria. Of these studies, 22 were multi-country and 83 were single-country studies, of which 46 were from the Netherlands. Food- and water-borne diseases were the most frequently studied infectious diseases. Between 2015 and 2022, the number of burden of infectious disease studies was 1.6 times higher than that published between 2000 and 2014. Almost all studies (97%) estimated DALYs based on the incidence- and pathogen-based approach and without social weighting functions; however, there was less methodological consensus with regard to the disability weights and life tables that were applied. The number of burden of infectious disease studies undertaken across Europe has increased over time. Development and use of guidelines will promote the conduct of burden of infectious disease studies and facilitate comparability of the results.
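For reference, the incidence- and pathogen-based DALY used by almost all of the reviewed studies decomposes into years lived with disability plus years of life lost. The sketch below uses invented parameter values purely to show the arithmetic; no numbers are taken from any reviewed study:

```python
def daly(incident_cases, disability_weight, duration_years,
         deaths, life_expectancy_at_death):
    """DALY = YLD + YLL (no social weighting, incidence-based).

    YLD (years lived with disability) = cases x disability weight x duration
    YLL (years of life lost)          = deaths x residual life expectancy
    """
    yld = incident_cases * disability_weight * duration_years
    yll = deaths * life_expectancy_at_death
    return yld + yll

# A hypothetical food-borne outbreak:
# YLD = 10000 x 0.09 x 0.02 = 18; YLL = 5 x 40 = 200; total = 218 DALYs.
total = daly(incident_cases=10_000, disability_weight=0.09,
             duration_years=0.02, deaths=5, life_expectancy_at_death=40)
```

The methodological disagreements noted above enter through the choice of disability weights and of the life table that supplies the residual life expectancy term.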
Adoption of cover crops in arid agroecosystems has been slow due to concerns regarding limited water resources and possible soil moisture depletion. In irrigated organic systems, potential ecosystem services from cover crops also must be considered in light of the concerns for water conservation. A constructive balance could be achieved with fall-sown small grain cover crops; however, their impacts on irrigated organic systems are poorly understood. Our first objective was to determine the ability of fall-sown small grains [cereal rye (Secale cereale L.), winter wheat (Triticum aestivum L.), barley (Hordeum vulgare L.) and oat (Avena sativa L.)] to suppress winter weeds in an irrigated, organic transition field in the southwestern USA. Small grains were planted following the legume sesbania (Sesbania exaltata (Raf.) Rydb. ex A.W. Hill) during Fall 2012 and Fall 2013. In Spring 2013 and 2014, weed densities and biomass were determined within each cover crop treatment and compared against unplanted controls. Results indicated that both barley and oat were effective in suppressing winter weeds. Our second objective was to compare weed suppression and soil moisture levels among seven barley varieties developed in the western United States. Barley varieties (‘Arivat’, ‘Hayes Beardless’, ‘P919’, ‘Robust’, ‘UC603’, ‘UC937’, ‘Washford Beardless’) were fall-sown in replicated strip plots in Fall 2016. Weed densities were measured in Spring 2017 and volumetric soil moisture near the soil surface (5.1 cm depth) was measured at time intervals beginning in December 2016 and ending in March 2017. With the exception of ‘UC937’, barley varieties caused marked reductions in weed density in comparison with the unplanted control. Soil moisture content for the unplanted control was consistently lower than soil moisture contents for barley plots. Barley variety did not influence volumetric soil moisture.
During the 2017–2018 growing season, we re-examined three barley varieties considered most amenable to the cropping system requirements (‘Robust’, ‘UC603’, ‘P919’), and these varieties were again found to support few weeds (≤5.0 weeds m⁻²). We conclude that several organically certified barley varieties could fill the need for a ‘non-thirsty’ cover crop that suppresses winter weeds in irrigated organic systems in the southwestern United States.
Renal cancer is responsible for over 100,000 yearly deaths and is principally discovered in computed tomography (CT) scans of the abdomen. CT screening would likely increase the rate of early renal cancer detection, and improve general survival rates, but it is expected to have a prohibitively high financial cost. Given recent advances in artificial intelligence (AI), it may be possible to reduce the cost of CT analysis and enable CT screening by automating the radiological tasks that constitute the early renal cancer detection pipeline. This review seeks to facilitate further interdisciplinary research in early renal cancer detection by summarising our current knowledge across AI, radiology, and oncology and suggesting useful directions for future novel work. Initially, this review discusses existing approaches in automated renal cancer diagnosis, and methods across broader AI research, to summarise the existing state of AI cancer analysis. Then, this review matches these methods to the unique constraints of early renal cancer detection and proposes promising directions for future research that may enable AI-based early renal cancer detection via CT screening. The primary targets of this review are clinicians with an interest in AI and data scientists with an interest in the early detection of cancer.
Implementation assessment plans are crucial for clinical trials to achieve their full potential. Without a proactive plan to implement trial results, it can take decades for even a fraction of effective interventions to be adopted into routine care settings. The Veterans Health Administration Office of Research and Development is undergoing a systematic transformation to embed implementation planning in research protocols through the Cooperative Studies Program, its flagship clinical research program. This manuscript has two objectives: 1) to introduce an Implementation Planning Assessment (IPA) Tool that any clinical trialist may use to facilitate post-trial implementation of interventions found to be effective, and 2) to provide a case study demonstrating the IPA Tool’s use. The IPA Tool encourages study designers to consider, from the outset, rigorous data collection to maximize acceptability of the intervention by end-users. It also helps identify and prepare potential interested parties at local and national leadership levels to ensure that, upon trial completion, interventions can be integrated into programs, technologies, and policies in a sustainable way. The IPA Tool can alleviate some of the overwhelming nature of implementation science by providing a practical guide, based on implementation science principles, for researchers desiring to scale up and spread effective, clinical trial-tested interventions to benefit patients.
In porcine in vitro production (IVP) systems, the use of oocytes derived from prepubertal gilts, whilst commercially attractive, remains challenging due to their poor developmental competence following in vitro maturation (IVM). Follicular fluid contains important growth factors and plays a key role during oocyte maturation; therefore, it is a common supplement for porcine IVM medium. However, follicular fluid contains many poorly characterized components, is batch-variable, and its use raises biosecurity concerns. In an effort to design a defined IVM system, growth factors such as cytokines have been previously tested. These include leukaemia inhibitory factor (LIF), fibroblast growth factor 2 (FGF2), and insulin-like growth factor 1 (IGF1), the combination of which is termed ‘FLI’. Here, using abattoir-derived oocytes in a well-established porcine IVP system, we compared follicular fluid and FLI supplementation during both IVM and embryo culture to test the hypothesis that FLI can substitute for follicular fluid without compromising oocyte nuclear and cytoplasmic maturation. We demonstrate that in oocytes derived from prepubertal gilts, FLI supplementation enhances oocyte meiotic maturation and has a positive effect on the quality and developmental competence of embryos. Moreover, for the first time, we studied the effects of follicular fluid and FLI combined, finding no synergistic effects.
Life stress and blunted reward processing each have been associated with the onset and maintenance of major depressive disorder. However, much of this work has been cross-sectional, conducted in separate lines of inquiry, and focused on recent life stressor exposure, despite the fact that theories of depression posit that stressors can have cumulative effects over the lifespan. To address these limitations, we investigated whether acute and chronic stressors occurring over the lifespan interacted with blunted reward processing to predict increases in depression over time in healthy youth.
Method
Participants were 245 adolescent girls aged 8–14 years (mean age = 12.4, s.d. = 1.8) who were evaluated at baseline and two years later. The reward positivity (RewP), an event-related potential measure of reward responsiveness, was assessed at baseline using the doors task. Cumulative lifetime exposure to acute and chronic stressors was assessed two years later using the Stress and Adversity Inventory for Adolescents (Adolescent STRAIN). Finally, depressive symptoms were assessed at both baseline and follow-up using the Children's Depression Inventory.
Results
As hypothesized, greater lifetime acute stressor exposure predicted increases in depressive symptoms over two years, but only for youth exhibiting a blunted RewP. This interaction, however, was not found for chronic stressors.
Conclusions
Lifetime acute stressor exposure may be particularly depressogenic for youth exhibiting a blunted RewP. Conversely, a robust RewP may be protective in the presence of greater acute lifetime stressor exposure.
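The stress × RewP interaction reported above is the kind of effect a moderated regression detects. The sketch below simulates data showing the reported pattern (stressor exposure predicts symptoms mainly when RewP is blunted) and recovers the interaction term by least squares; all values are simulated for illustration and are not the study's data or analysis code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 245  # matches the study's sample size; everything else is simulated
stress = rng.normal(size=n)   # lifetime acute stressor exposure (z-scored)
rewp = rng.normal(size=n)     # RewP amplitude (z-scored; lower = more blunted)

# Simulate the reported pattern: the stress-depression slope is
# steeper when RewP is low, i.e. a negative stress x RewP interaction.
dep = 0.3 * stress - 0.2 * rewp - 0.25 * stress * rewp \
      + rng.normal(scale=0.5, size=n)

# Moderated regression: depression ~ intercept + stress + rewp + stress:rewp
X = np.column_stack([np.ones(n), stress, rewp, stress * rewp])
beta, *_ = np.linalg.lstsq(X, dep, rcond=None)
# beta[3] is the interaction coefficient; a negative estimate indicates
# stressor exposure predicts symptoms more strongly at blunted RewP levels.
```

In practice such a model would also adjust for baseline depressive symptoms so that the interaction predicts *change* in symptoms over the follow-up period, as in the study's design.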