Patients with posttraumatic stress disorder (PTSD) exhibit smaller volumes in commonly reported brain regions, including the amygdala and hippocampus, which are associated with fear and memory processing. In the current study, we conducted a voxel-based morphometry (VBM) meta-analysis of whole-brain statistical maps using neuroimaging data from the ENIGMA-PGC PTSD working group.
Methods
T1-weighted structural neuroimaging scans from 36 cohorts (PTSD n = 1309; controls n = 2198) were processed using a standardized VBM pipeline (ENIGMA-VBM tool). We meta-analyzed the resulting statistical maps for voxel-wise differences in gray matter (GM) and white matter (WM) volumes between PTSD patients and controls, performed subgroup analyses considering the trauma exposure of the controls, and examined associations between regional brain volumes and clinical variables, including PTSD severity (CAPS-4/5, PCL-5) and depression severity (BDI-II, PHQ-9).
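To make the voxel-wise meta-analysis concrete, the sketch below pools per-cohort Hedges' g maps with a DerSimonian-Laird random-effects estimator in Python. The data, array shapes, and function names are hypothetical illustrations; the ENIGMA-VBM tool's actual aggregation procedure may differ.

```python
import numpy as np

def dersimonian_laird(g, v):
    """Random-effects meta-analysis (DerSimonian-Laird) for a single voxel.

    g : per-cohort Hedges' g values, shape (k,)
    v : per-cohort sampling variances, shape (k,)
    Returns the pooled effect size, its standard error, and tau^2.
    """
    w = 1.0 / v                                 # fixed-effect weights
    g_fixed = np.sum(w * g) / np.sum(w)
    q = np.sum(w * (g - g_fixed) ** 2)          # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(g) - 1)) / c)     # between-cohort variance
    w_star = 1.0 / (v + tau2)                   # random-effects weights
    g_pooled = np.sum(w_star * g) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return g_pooled, se, tau2

# Hypothetical effect-size and variance maps: 36 cohorts x 1000 voxels
rng = np.random.default_rng(0)
g_maps = rng.normal(0.15, 0.05, size=(36, 1000))
v_maps = rng.uniform(0.01, 0.03, size=(36, 1000))
pooled = np.array([dersimonian_laird(g_maps[:, j], v_maps[:, j])[0]
                   for j in range(g_maps.shape[1])])
```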
Results
PTSD patients exhibited smaller GM volumes across the frontal and temporal lobes and the cerebellum, with the most significant effect in the left cerebellum (Hedges’ g = 0.22, corrected p = .001), and smaller cerebellar WM volume (peak Hedges’ g = 0.14, corrected p = .008). We observed similar regional differences when comparing patients to trauma-exposed controls, suggesting these structural abnormalities may be specific to PTSD. Regression analyses revealed PTSD severity was negatively associated with GM volumes within the cerebellum (corrected p = .003), while depression severity was negatively associated with GM volumes within the cerebellum and superior frontal gyrus in patients (corrected p = .001).
Conclusions
PTSD patients exhibited widespread, regional differences in brain volumes where greater regional deficits appeared to reflect more severe symptoms. Our findings add to the growing literature implicating the cerebellum in PTSD psychopathology.
The aim of this study was to determine whether there was a significant change in cardiac [¹²³I]-metaiodobenzylguanidine uptake between baseline and follow-up in individuals with mild cognitive impairment with Lewy bodies (MCI-LB) who had normal baseline scans. Eight participants with a diagnosis of probable MCI-LB and a normal baseline scan consented to a follow-up scan between 2 and 4 years after baseline. All eight repeat scans remained normal; however, in three cases uptake decreased by more than 10%. The mean change in uptake between baseline and repeat was −5.2% (range: −23.8% to +7.0%). The interpolated mean annual change in uptake was −1.6%.
Knowledge of sex differences in risk factors for posttraumatic stress disorder (PTSD) can contribute to the development of refined preventive interventions. Therefore, the aim of this study was to examine if women and men differ in their vulnerability to risk factors for PTSD.
Methods
As part of the longitudinal AURORA study, 2924 patients seeking emergency department (ED) treatment in the acute aftermath of trauma provided self-report assessments of pre-, peri-, and post-traumatic risk factors, as well as 3-month PTSD severity. We systematically examined sex-dependent effects of 16 risk factors that have previously been hypothesized to show different associations with PTSD severity in women and men.
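One common way to test whether a risk factor's association with PTSD severity differs between women and men is a Fisher r-to-z comparison of independent correlations. The snippet below is a minimal sketch with made-up correlation values and group sizes; it is not the AURORA analysis code, and the study's z-score comparisons may have been computed on other standardized estimates.

```python
import numpy as np
from scipy import stats

def compare_correlations(r_women, n_women, r_men, n_men):
    """Fisher r-to-z test for a difference between two independent correlations."""
    z_w, z_m = np.arctanh(r_women), np.arctanh(r_men)     # Fisher z-transform
    se = np.sqrt(1.0 / (n_women - 3) + 1.0 / (n_men - 3))
    z = (z_w - z_m) / se
    p = 2 * stats.norm.sf(abs(z))                          # two-tailed p-value
    return z, p

# Hypothetical example: pre-traumatic anxiety vs. 3-month PTSD severity
z, p = compare_correlations(r_women=0.30, n_women=1800, r_men=0.45, n_men=1100)
print(f"z = {z:.2f}, p = {p:.4f}")
```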
Results
Women reported higher PTSD severity at 3 months post-trauma. Z-score comparisons indicated that for five of the 16 examined risk factors, the association with 3-month PTSD severity was stronger in men than in women. In multivariable models, interaction effects with sex were observed for pre-traumatic anxiety symptoms and acute dissociative symptoms; both showed stronger associations with PTSD in men than in women. Subgroup analyses suggested trauma type-conditional effects.
Conclusions
Our findings indicate mechanisms to which men might be particularly vulnerable, demonstrating that known PTSD risk factors might behave differently in women and men. Analyses did not identify any risk factors to which women were more vulnerable than men, pointing toward further mechanisms to explain women's higher PTSD risk. Our study illustrates the need for a more systematic examination of sex differences in contributors to PTSD severity after trauma, which may inform refined preventive interventions.
The presence, percentage, origins, and rate of formation of clay minerals have been important components in studies involving the geochemical and structural composition of waste-rock piles. The objective of the present study was to investigate the use of tritium as an indicator of the origin of clay minerals within such piles. Tritium values in pore water, interlayer water, and structural hydroxyl sites of clay minerals were examined to evaluate the origins of clay minerals within waste-rock piles located near Questa, New Mexico. Five clay minerals were identified: kaolinite, chlorite, illite, smectite, and mixed-layer illite-smectite, along with the hydrous sulfate minerals gypsum and jarosite. Analysis of waters derived from clay minerals was achieved by thermal reaction of dry-sieved bulk material obtained from the Questa site. In all Questa samples, the low-temperature water derived from pore-water and interlayer sites, as well as the intermediate-temperature water derived from interlayer cation sites occupied by hydronium and structural hydroxyl ions, show tritium values at or near modern levels for precipitation. Pore water and interlayer water ranged from 5.31 to 12.19 tritium units (TU) and interlayer hydronium and structurally derived water ranged from 3.92 to 7.93 TU. Tritium levels for local precipitation ranged from ~4 to 8 TU. One tritium unit (TU) represents one molecule of ³H¹HO in 10¹⁸ molecules of ¹H¹HO. The elevated levels of tritium in structural sites can be accounted for by thermal incorporation of significant amounts of hydronium ions in interlayer cation sites for illite and mixed-layer clays, both common at the Questa site. In low-pH environments, such as those found within Questa waste-rock piles (typically pH ~3), the hydronium ion is an abundant species in the rock-pile pore-water system.
As part of the Research Domain Criteria (RDoC) initiative, the NIMH seeks to improve experimental measures of cognitive and positive valence systems for use in intervention research. However, many RDoC tasks have not been psychometrically evaluated as a battery of measures. Our aim was to examine the factor structure of 7 such tasks chosen for their relevance to schizophrenia and other forms of serious mental illness. These include the n-back, Sternberg, and self-ordered pointing tasks (measures of the RDoC cognitive systems working memory construct); flanker and continuous performance tasks (measures of the RDoC cognitive systems cognitive control construct); and probabilistic learning and effort expenditure for reward tasks (measures of reward learning and reward valuation constructs).
Participants and Methods:
The sample comprised 286 cognitively healthy participants who completed novel versions of all 7 tasks via an online recruitment platform, Prolific, in the summer of 2022. The mean age of participants was 38.6 years (SD = 14.5, range 18-74), 52% identified as female, and stratified recruitment ensured an ethnoracially diverse sample. Excluding time for instructions and practice, each task lasted approximately 6 minutes. Task order was randomized. We estimated optimal scores from each task including signal detection d-prime measures for the n-back, Sternberg, and continuous performance task, mean accuracy for the flanker task, win-stay to win-shift ratio for the probabilistic learning task, and trials completed for the effort expenditure for reward task. We used parallel analysis and a scree plot to determine the number of latent factors measured by the 7 task scores. Exploratory factor analysis with oblimin (oblique) rotation was used to examine the factor loading matrix.
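A minimal Python analogue of the parallel analysis and oblimin-rotated exploratory factor analysis described above is sketched below using the factor_analyzer package; the simulated scores, column names, and iteration count are illustrative assumptions rather than the study's actual data or code.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

def parallel_analysis(data, n_iter=1000, seed=0):
    """Count factors whose observed eigenvalues exceed the 95th percentile of random-data eigenvalues."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs_eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand_eig = np.empty((n_iter, p))
    for i in range(n_iter):
        rand = rng.standard_normal((n, p))
        rand_eig[i] = np.linalg.eigvalsh(np.corrcoef(rand, rowvar=False))[::-1]
    return int(np.sum(obs_eig > np.percentile(rand_eig, 95, axis=0)))

# Hypothetical optimal scores: 286 participants x 7 tasks
scores = pd.DataFrame(np.random.default_rng(1).standard_normal((286, 7)),
                      columns=["nback", "sternberg", "sopt", "flanker",
                               "cpt", "prob_learn", "eefrt"])
n_factors = max(1, parallel_analysis(scores.values))
efa = FactorAnalyzer(n_factors=n_factors, rotation="oblimin")   # oblique rotation
efa.fit(scores)
print(efa.loadings_)                                            # factor loading matrix
```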
Results:
The scree plot and parallel analyses of the 7 task scores suggested three primary factors. The flanker and continuous performance task both strongly loaded onto the first factor, suggesting that these measures are strong indicators of cognitive control. The n-back, Sternberg, and self-ordered pointing tasks strongly loaded onto the second factor, suggesting that these measures are strong indicators of working memory. The probabilistic learning task solely loaded onto the third factor, suggesting that it is an independent indicator of reinforcement learning. Finally, the effort expenditure for reward task modestly loaded onto the second but not the first and third factors, suggesting that effort is most strongly related to working memory.
Conclusions:
Our aim was to examine the factor structure of 7 RDoC tasks. Results support the RDoC view of cognitive control, working memory, and reinforcement learning as independent constructs. However, effort is a factorially complex construct that is not uniquely or even most strongly related to positive valence. Thus, there is reason to believe that at least 6 of these tasks are appropriate measures of constructs such as working memory, reinforcement learning, and cognitive control.
Agricultural workers are immersed in environments associated with increased risk for adverse psychiatric and neurological outcomes. Agricultural work-related risks to brain health include exposure to pesticides, heavy metals, and organic dust. Despite this, there is a gap in our understanding of the underlying brain systems impacted by these risks. This study explores clinical and cognitive domains, and functional brain activity in agricultural workers. We hypothesized that a history of agricultural work-related risks would be associated with poorer clinical and cognitive outcomes as well as changes in functional brain activity within cortico-striatal regions.
Participants and Methods:
The sample comprised 17 agricultural workers and a comparison group of 45 non-agricultural workers recruited in the Northern Colorado area. All participants identified as White and non-Hispanic. The mean age of participants was 51.7 years (SD = 21.4, range 18-77); 60% identified as female and 37% as male. Participants completed the National Institutes of Health Toolbox (NIH Toolbox) and Montreal Cognitive Assessment (MoCA) on their first visit. During the second visit, they completed NIH Patient-Reported Outcomes Measurement Information System (PROMIS) measures and underwent functional magnetic resonance imaging (fMRI; N = 15 agriculture and N = 35 non-agriculture) while completing a working memory task (Sternberg). Blood-oxygen-level-dependent (BOLD) response was compared between groups. Given the small sample size, the whole-brain voxel-wise group comparison threshold was set at alpha = .05 and not otherwise corrected for multiple comparisons. Cohen’s d effect sizes were estimated for all voxels.
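The uncorrected voxel-wise group comparison with per-voxel Cohen's d can be sketched as below. The contrast maps here are simulated arrays, the pooled standard deviation is a simple unweighted average of group variances, and nothing in this snippet reproduces the study's actual fMRI preprocessing or modeling.

```python
import numpy as np
from scipy import stats

def voxelwise_group_comparison(group_a, group_b):
    """Per-voxel Welch t-test and Cohen's d for two groups of contrast maps."""
    t, p = stats.ttest_ind(group_a, group_b, axis=0, equal_var=False)
    pooled_sd = np.sqrt((group_a.var(axis=0, ddof=1) + group_b.var(axis=0, ddof=1)) / 2)
    d = (group_a.mean(axis=0) - group_b.mean(axis=0)) / pooled_sd
    return t, p, d

# Hypothetical flattened contrast maps (participants x voxels)
rng = np.random.default_rng(0)
ag_maps = rng.standard_normal((15, 5000))       # agricultural work group
non_ag_maps = rng.standard_normal((35, 5000))   # comparison group
t, p, d = voxelwise_group_comparison(ag_maps, non_ag_maps)
print((p < .05).sum(), "voxels below the uncorrected p < .05 threshold")
```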
Results:
Analyses of cognitive scores showed significant deficits in episodic memory for the agricultural work group. Additionally, the agricultural work group scored higher on measures of self-reported anger, cognitive concerns, and social participation. Analyses of fMRI data showed increased BOLD activity around the orbitofrontal cortex (medium to large effects) and bilaterally in the entorhinal cortex (large effects) for the agricultural work group. The agricultural work group also showed decreased BOLD activity in the cerebellum and basal ganglia (medium to large effects).
Conclusions:
To our knowledge, this study provides the first-ever evidence showing differences in brain activity associated with a history of working in agriculture. These findings of poorer memory, concerns about cognitive functioning, and increased anger suggest clinical relevance. Social participation associated with agricultural work should be explored as a potential protective factor for cognition and brain health. Brain imaging data analyses showed increased activation in areas associated with motor functioning, cognitive control, and emotion. These findings are limited by small sample size, lack of diversity in our sample, and coarsely defined risk. Despite these limitations, the results are consistent with an overall concern that risks associated with agricultural work can lead to cognitive and psychiatric harm via changes in brain health. Replications and future studies with larger sample sizes, more diverse participants, and more accurately defined risks (e.g., pesticide exposure) are needed.
Deficits in cognitive ability are common among patients with schizophrenia. The MATRICS Consensus Cognitive Battery (MCCB) was designed to assess cognitive ability in studies of patients diagnosed with schizophrenia and has demonstrated high test-retest reliability with minimal practice effects, even in multi-site trials. However, given the motivational challenges associated with schizophrenia, it is unknown whether performance on MCCB tasks affects performance at later stages of testing. The goal of this study was to determine whether there are differences between people with and without schizophrenia in how their performance on individual MCCB tasks influences their performance throughout the battery.
Participants and Methods:
The sample comprised 92 total participants: 49 cognitively healthy comparison participants and 43 outpatients diagnosed with schizophrenia. The mean age of participants was 44.2 years (SD = 12.0, range 21–69), and 61% identified as male. The Trail Making Test, Brief Assessment of Cognition in Schizophrenia, Hopkins Verbal Learning Test – Revised, Letter-Number Span, and Category Fluency from the MCCB were administered in the same order at 2 different sites and studies from 2016–2022. The autocorrelation between t-scores for tasks within each participant was computed and then compared between healthy comparison and outpatient participants to determine whether the groups differed. Group mean t-scores for each task were also compared between groups.
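As a rough illustration of the within-participant autocorrelation analysis, the sketch below computes a lag-1 autocorrelation over each participant's task t-scores in administration order and compares the two groups with a Welch t-test; the scores are simulated, and the study's exact autocorrelation definition may differ.

```python
import numpy as np
from scipy import stats

def lag1_autocorr(tscores):
    """Lag-1 autocorrelation of one participant's t-scores in administration order."""
    x = np.asarray(tscores, dtype=float)
    return np.corrcoef(x[:-1], x[1:])[0, 1]

# Hypothetical t-scores for the 5 MCCB tasks, in a fixed administration order
rng = np.random.default_rng(0)
controls = rng.normal(50, 10, size=(49, 5))    # healthy comparison participants
patients = rng.normal(43, 10, size=(43, 5))    # outpatients with schizophrenia

ac_controls = np.array([lag1_autocorr(row) for row in controls])
ac_patients = np.array([lag1_autocorr(row) for row in patients])
t, p = stats.ttest_ind(ac_controls, ac_patients, equal_var=False)
print(f"group difference in autocorrelation: t = {t:.2f}, p = {p:.3f}")
```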
Results:
We found no significant difference in autocorrelations across MCCB tasks between healthy comparison participants and outpatients. However, mean performance in all tasks was lower for the outpatient group than for the healthy comparison group. None of the tasks used stood out as having significantly lower mean scores than other tasks for either group.
Conclusions:
Our findings suggest that performance on individual MCCB tasks does not differentially affect subsequent performance in the battery for the healthy comparison group versus outpatients. This suggests that participants with schizophrenia are not particularly reactive to past performance on MCCB tasks. Additionally, this finding further supports use of the MCCB in this population. Further research is needed to determine whether subgroups of patients and/or different batteries of measures show different patterns of reactivity.
Attentional impairments are common in dementia with Lewy bodies and its prodromal stage of mild cognitive impairment (MCI) with Lewy bodies (MCI-LB). People with MCI may be capable of compensating for subtle attentional deficits in most circumstances, and so these may present as occasional lapses of attention. We aimed to assess the utility of a continuous performance task (CPT), which requires sustained attention for several minutes, for measuring attentional performance in MCI-LB compared with MCI due to Alzheimer’s disease (MCI-AD), and to identify any performance deficits that emerged with sustained effort.
Method:
We included longitudinal data on a CPT sustained attention task for 89 participants with MCI-LB or MCI-AD and 31 healthy controls, estimating ex-Gaussian response time parameters and omission and commission error rates. Performance trajectories were estimated both cross-sectionally (intra-task progress from start to end) and longitudinally (change in performance over years).
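For readers unfamiliar with ex-Gaussian response time modeling, the snippet below fits the three parameters (mu, sigma, tau) to one participant's response times using scipy's exponnorm distribution, whose shape parameter K relates to tau via tau = K * scale. This is a per-participant maximum-likelihood sketch on simulated data, not the hierarchical longitudinal model the study's trajectories would require.

```python
import numpy as np
from scipy import stats

def fit_ex_gaussian(rts):
    """Fit ex-Gaussian parameters (mu, sigma, tau) to response times in seconds."""
    k, loc, scale = stats.exponnorm.fit(rts)   # exponnorm: Normal(loc, scale) + Exponential(mean K*scale)
    return loc, scale, k * scale               # mu, sigma, tau

# Simulated RTs: a Gaussian component plus an exponential tail
rng = np.random.default_rng(0)
rts = rng.normal(0.45, 0.05, size=400) + rng.exponential(0.15, size=400)
mu, sigma, tau = fit_ex_gaussian(rts)
print(f"mu = {mu:.3f} s, sigma = {sigma:.3f} s, tau = {tau:.3f} s")
```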
Results:
While response times in successful trials were broadly similar, with slight slowing associated with clinical parkinsonism, those with MCI-LB made considerably more errors. Omission errors were more common throughout the task in MCI-LB than MCI-AD (OR 2.3, 95% CI: 1.1–4.7), while commission errors became more common after several minutes of sustained attention. Within MCI-LB, omission errors were more common in those with clinical parkinsonism (OR 1.9, 95% CI: 1.3–2.9) or cognitive fluctuations (OR 4.3, 95% CI: 2.2–8.8).
Conclusions:
Sustained attention deficits in MCI-LB may emerge in the form of attentional lapses leading to omissions, and a breakdown in inhibitory control leading to commission errors.
The COVID-19 pandemic accelerated the development of decentralized clinical trials (DCTs). DCTs are an important and pragmatic method for assessing health outcomes, yet they comprise only a minority of clinical trials, and few published methodologies exist. In this report, we detail the operational components of COVID-OUT, a decentralized, multicenter, quadruple-blinded, randomized trial that rapidly delivered study drugs nationwide. The trial examined three medications (metformin, ivermectin, and fluvoxamine) as outpatient treatment of SARS-CoV-2 for their effectiveness in preventing severe or long COVID-19. Decentralized strategies included HIPAA-compliant electronic screening and consenting, prepacking investigational product to accelerate delivery after randomization, and remotely confirming participant-reported outcomes. Of the 1417 individuals in the intention-to-treat sample, an additional 94 participants did not take any doses of study drug because of the remote nature of the study. Therefore, 1323 participants were in the modified intention-to-treat sample, which was the a priori primary study sample. Only 1.4% of participants were lost to follow-up. Decentralized strategies facilitated the successful completion of the COVID-OUT trial without any in-person contact by expediting intervention delivery, expanding trial access geographically, limiting contagion exposure, and making it easy for participants to complete follow-up visits. Remotely completed consent and follow-up facilitated enrollment.
In the heart of the boreal forest in 1949, trappers gathered at a spring meeting in Wabowden, Manitoba, to discuss many items of business, including wolf predation on beavers. Recent debate and disagreement had broken out among the trappers regarding whether wolves actually killed beavers. One trapper stated wolves ‘harassed’ a beaver colony so extensively that he had to fell trees into the water to ensure the colony's survival. Some trappers remained sceptical and unconvinced. The debate was put to a lively and emphatic end when a trapper walked into the spring meeting and presented a bushel sack stuffed with wolf scats containing beaver fur (Nash 1951). The proof was in the poop!
Surprisingly, our understanding of wolf predation on beavers has progressed relatively little since 1949. Most attempts to study wolf predation on beavers followed an approach akin to the Manitoba trappers: collecting and examining wolf scats. By doing this, researchers in many areas across North America and Eurasia concluded, like the trappers, that beavers were important prey for wolves during the ice-free season. However, wolf–beaver dynamics received little attention beyond this, largely because (1) most wolf predation research was focused on wolf–ungulate interactions and predation on smaller alternate prey was not a priority (Gable et al 2018c), and (2) rigorously studying wolf predation during spring to autumn in forested ecosystems with dense vegetation was a monumental, and often impossible, task prior to GPS collar technology. Of course, many researchers and biologists had interesting ideas or hypotheses about wolf–beaver interactions, but most were based on anecdotal observations, indirect evidence or conjecture (Gable et al 2018c). None the less, these ideas were compelling and relevant. Some suggested dense beaver populations increased wolf pup survival (Benson et al 2013) and, in turn, wolf pack and population size (Andersone 1999; Barber-Meyer et al 2016). Others posited that dense beaver populations reduced wolf predation on ungulate prey (Forbes and Theberge 1996) while some claimed it increased predation (Andersone and Ozoliņš 2004; Latham et al 2013). Still others suspected wolves changed ecosystems by altering the ecosystem engineering behaviour of beavers (Peterson et al 2014). Clearly, wolf–beaver dynamics needed to be studied in more detail.
Several hypotheses may explain the association between substance use, posttraumatic stress disorder (PTSD), and depression. However, few studies have utilized a large multisite dataset to understand this complex relationship. Our study assessed the relationship between alcohol and cannabis use trajectories and PTSD and depression symptoms across 3 months in recently trauma-exposed civilians.
Methods
In total, 1618 (1037 female) participants provided self-report data on past 30-day alcohol and cannabis use and PTSD and depression symptoms during their emergency department (baseline) visit. We reassessed participants' substance use and clinical symptoms 2, 8, and 12 weeks posttrauma. Latent class mixture modeling determined alcohol and cannabis use trajectories in the sample. Changes in PTSD and depression symptoms were assessed across alcohol and cannabis use trajectories via a mixed-model repeated-measures analysis of variance.
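Latent class mixture modeling of use trajectories is typically run in dedicated growth-mixture software; as a simplified stand-in, the sketch below clusters simulated per-participant use counts at the four assessment points with a Gaussian mixture model and selects the number of classes by BIC. All data and class structure here are invented for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Simulated per-participant substance-use counts at baseline and weeks 2, 8, 12
rng = np.random.default_rng(0)
low = rng.poisson(2, size=(900, 4))
high = rng.poisson(15, size=(500, 4))
increasing = np.cumsum(rng.poisson(3, size=(218, 4)), axis=1)
use = np.vstack([low, high, increasing]).astype(float)

# Pick the number of trajectory classes by BIC, then assign class membership
bic = {k: GaussianMixture(n_components=k, n_init=5, random_state=0).fit(use).bic(use)
       for k in range(1, 6)}
best_k = min(bic, key=bic.get)
model = GaussianMixture(n_components=best_k, n_init=5, random_state=0).fit(use)
print(best_k, np.bincount(model.predict(use)))
```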
Results
Three trajectory classes (low, high, and increasing use) provided the best model fit for alcohol and cannabis use. The low alcohol use class exhibited lower PTSD symptoms at baseline than the high use class; the low cannabis use class exhibited lower PTSD and depression symptoms at baseline than the high and increasing use classes; these symptoms increased greatly at week 8 and declined at week 12. Participants already using alcohol and cannabis exhibited greater PTSD and depression symptoms at baseline, which increased at week 8 and decreased at week 12.
Conclusions
Our findings suggest that alcohol and cannabis use trajectories are associated with the intensity of posttrauma psychopathology. These findings could potentially inform the timing of therapeutic strategies.
Despite an elevated risk of psychopathology stemming from COVID-19-related stress, many essential workers stigmatise and avoid psychiatric care. This randomised controlled trial was designed to compare five versions of a social-contact-based brief video intervention for essential workers, differing by protagonist gender and race/ethnicity.
Aims
We examined intervention efficacy on treatment-related stigma (‘stigma’) and openness to seeking treatment (‘openness’), especially among workers who had not received prior mental healthcare. We assessed effectiveness and whether viewer/protagonist demographic concordance heightened effectiveness.
Method
Essential workers (N = 2734) were randomly assigned to view a control video or a brief video of an actor portraying an essential worker describing hardships, COVID-related anxiety and depression, and psychotherapy benefits. Five video versions (Black/Latinx/White and male/female protagonists) followed an identical 3-minute script. Half of the intervention-group participants rewatched their video 14 days later. Stigma and openness were assessed at baseline, post-intervention, and at 14- and 30-day follow-ups. Trial registration: NCT04964570.
Results
All video intervention groups reported immediately decreased stigma (P < 0.0001; Cohen's d = 0.10) and increased openness (P < 0.0001; d = 0.23). The initial increase in openness was largely maintained in the repeated-video group at day 14 (P < 0.0001; d = 0.18), particularly among viewers without a history of psychiatric treatment (P < 0.0001; d = 0.32). Increases were not sustained at follow-up. Female participants viewing a female protagonist and Black participants viewing a Black protagonist demonstrated greater openness than other demographic pairings.
Conclusions
Brief video-based interventions reduced stigma and increased openness to treatment immediately after viewing. Greater effects among female and Black individuals viewing demographically matched protagonists emphasise the value of tailored interventions, especially for socially oppressed groups. This easily disseminated intervention may proactively increase care-seeking, encouraging treatment among workers in need. Future studies should examine intervention mechanisms and whether linking referrals to psychiatric services generates treatment-seeking.
Objectives:
To examine the costs and cost-effectiveness of mirtazapine compared to placebo over a 12-week follow-up.
Design:
Economic evaluation in a double-blind randomized controlled trial of mirtazapine vs. placebo.
Setting:
Community settings and care homes in 26 UK centers.
Participants:
People with probable or possible Alzheimer’s disease and agitation.
Measurements:
The primary outcome was the incremental cost of participants’ health and social care per 6-point difference in Cohen-Mansfield Agitation Inventory (CMAI) score at 12 weeks. Secondary cost-utility analyses examined participants’ and unpaid carers’ gains in quality-adjusted life years (derived from EQ-5D-5L, DEMQOL-Proxy-U, and DEMQOL-U) from the health and social care and societal perspectives.
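The core quantity in this kind of economic evaluation is the incremental cost-effectiveness ratio (ICER): the additional cost per additional unit of effect (here, per 6-point CMAI difference or per QALY). The figures below are hypothetical and serve only to show the arithmetic.

```python
def icer(cost_treatment, cost_control, effect_treatment, effect_control):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of effect."""
    return (cost_treatment - cost_control) / (effect_treatment - effect_control)

# Hypothetical 12-week costs (GBP) and QALY gains per participant
print(f"cost per QALY: {icer(5200.0, 4900.0, 0.16, 0.15):,.0f} GBP")
```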
Results:
One hundred and two participants were allocated to each group; 81 mirtazapine and 90 placebo participants completed a 12-week assessment (87 and 95, respectively, completed a 6-week assessment). Mirtazapine and placebo groups did not differ on mean CMAI scores or health and social care costs over the study period, before or after adjustment for center and living arrangement (independent living/care home). On the primary outcome, neither mirtazapine nor placebo could be considered a cost-effective strategy with a high level of confidence. Groups did not differ in terms of participant self- or proxy-rated or carer self-rated quality of life scores, health and social care or societal costs, before or after adjustment.
Conclusions:
On cost-effectiveness grounds, the use of mirtazapine cannot be recommended for agitated behaviors in people living with dementia. Effective and cost-effective medications for agitation in dementia remain to be identified in cases where non-pharmacological strategies for managing agitation have been unsuccessful.
Nitrogen fixation from pasture legumes is a fundamental process that contributes to the profitability and sustainability of dryland agricultural systems. The aim of this research was to determine whether well-managed pastures, based on aerial-seeding pasture legumes, could partially or wholly meet the nitrogen (N) requirements of subsequent grain crops in an annual rotation. Fifteen experiments were conducted in Western Australia with wheat, barley or canola crops grown in a rotation that included the pasture legume species French serradella (Ornithopus sativus), biserrula (Biserrula pelecinus), bladder clover (Trifolium spumosum), annual medics (Medicago spp.) and the non-aerial-seeding subterranean clover (Trifolium subterraneum). After the pasture phase, five rates of inorganic N fertilizer (urea, applied at 0, 23, 46, 69 and 92 kg/ha) were applied to subsequent cereal and oilseed crops. The yields of wheat grown after serradella, biserrula and bladder clover, without the use of applied N fertilizer, were consistent with the target yields for the growing conditions of the trials (2.3 to 5.4 t/ha). Crop yields after phases of these pasture legume species were similar to or higher than those following subterranean clover or annual medics. The results of this study suggest a single season of a legume-dominant pasture may provide sufficient organic N in the soil to grow at least one crop without the need for inorganic N fertilizer application. This has implications for reducing inorganic N requirements and the carbon footprint of cropping in dryland agricultural systems.
Response to lithium in patients with bipolar disorder is associated with clinical and transdiagnostic genetic factors. Combining these variables might help clinicians better predict which patients will respond to lithium treatment.
Aims
To use a combination of transdiagnostic genetic and clinical factors to predict lithium response in patients with bipolar disorder.
Method
This study utilised genetic and clinical data (n = 1034) collected as part of the International Consortium on Lithium Genetics (ConLi+Gen) project. Polygenic risk scores (PRS) were computed for schizophrenia and major depressive disorder, and then combined with clinical variables using a cross-validated machine-learning regression approach. Unimodal, multimodal and genetically stratified models were trained and validated with ridge, elastic net and random forest regression on 692 patients with bipolar disorder from ten study sites, using leave-site-out cross-validation. All models were then tested on an independent test set of 342 patients. The best performing models were then tested in a classification framework.
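A minimal scikit-learn sketch of the cross-validated regression approach, with leave-site-out cross-validation implemented as leave-one-group-out over study sites, is shown below. The feature matrix, outcome, and site labels are simulated; the ConLi+Gen analysis pipeline itself is more involved (e.g., genomic stratification and an independent test set).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge, ElasticNet
from sklearn.metrics import r2_score
from sklearn.model_selection import LeaveOneGroupOut, cross_val_predict

# Simulated training data: clinical + PRS predictors and a continuous lithium-response score
rng = np.random.default_rng(0)
X = rng.standard_normal((692, 20))
y = 0.3 * X[:, 0] + rng.standard_normal(692)
sites = rng.integers(0, 10, size=692)          # ten study sites

logo = LeaveOneGroupOut()                      # leave-site-out cross-validation
models = [("ridge", Ridge(alpha=1.0)),
          ("elastic net", ElasticNet(alpha=0.1, max_iter=5000)),
          ("random forest", RandomForestRegressor(n_estimators=200, random_state=0))]
for name, model in models:
    pred = cross_val_predict(model, X, y, groups=sites, cv=logo)
    print(f"{name}: leave-site-out R^2 = {r2_score(y, pred):.3f}")
```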
Results
The best performing linear model explained 5.1% (P = 0.0001) of variance in lithium response and was composed of clinical variables, PRS variables and interaction terms between them. The best performing non-linear model used only clinical variables and explained 8.1% (P = 0.0001) of variance in lithium response. A priori genomic stratification improved non-linear model performance to 13.7% (P = 0.0001) and improved the binary classification of lithium response. This model stratified patients based on their meta-polygenic loadings for major depressive disorder and schizophrenia and was then trained using clinical data.
Conclusions
Using PRS to first stratify patients genetically and then train machine-learning models with clinical predictors led to large improvements in lithium response prediction. When used with other PRS and biological markers in the future this approach may help inform which patients are most likely to respond to lithium treatment.
Two introduced carnivores, the European red fox Vulpes vulpes and the domestic cat Felis catus, have had extensive impacts on Australian biodiversity. In this study, we collate information on consumption of Australian birds by the fox, paralleling a recent study reporting on birds consumed by cats. We found records of consumption by foxes of 128 native bird species (18% of the non-vagrant bird fauna and 25% of those species within the fox’s range), a smaller tally than for cats (343 species, including 297 within the fox’s Australian range, a subset of that of the cat). Most (81%) bird species eaten by foxes are also eaten by cats, suggesting that predation impacts are compounded. As with consumption by cats, birds that nest or forage on the ground are most likely to be consumed by foxes. However, there is also some partitioning, with records of consumption by foxes but not cats for 25 bird species, indicating that the impacts of the two predators may also be complementary. Bird species ≥3.4 kg were more likely to be eaten by foxes, and those <3.4 kg by cats. Our compilation provides an inventory and describes characteristics of Australian bird species known to be consumed by foxes, but we acknowledge that records of predation do not imply population-level impacts. Nonetheless, there is sufficient information from other studies to demonstrate that fox predation has significant impacts on the population viability of some Australian birds, especially larger birds and those that nest or forage on the ground.
The present study aimed to clarify the neuropsychological profile of the emergent diagnostic category of Mild Cognitive Impairment with Lewy bodies (MCI-LB) and determine whether domain-specific impairments such as in memory were related to deficits in domain-general cognitive processes (executive function or processing speed).
Method:
Patients (n = 83) and healthy age- and sex-matched controls (n = 34) underwent clinical and imaging assessments. Probable MCI-LB (n = 44) and MCI-Alzheimer’s disease (AD) (n = 39) were diagnosed following National Institute on Aging-Alzheimer’s Association (NIA-AA) and dementia with Lewy bodies (DLB) consortium criteria. Neuropsychological measures included cognitive and psychomotor speed, executive function, working memory, and verbal and visuospatial recall.
Results:
MCI-LB scored significantly lower than MCI-AD on processing speed [Trail Making Test B: p = .03, g = .45; Digit Symbol Substitution Test (DSST): p = .04, g = .47; DSST Error Check: p < .001, g = .68] and executive function [Trail Making Test Ratio (A/B): p = .04, g = .52] tasks. MCI-AD performed worse than MCI-LB on memory tasks, specifically visuospatial (Modified Taylor Complex Figure: p = .01, g = .46) and verbal (Rey Auditory Verbal Learning Test: p = .04, g = .42) delayed recall measures. Stepwise discriminant analysis correctly classified the subtype in 65.1% of MCI patients (72.7% specificity, 56.4% sensitivity). Processing speed accounted for more group-associated variance in visuospatial and verbal memory in both MCI subtypes than executive function, while no significant relationships between measures were observed in controls (all ps > .05).
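Scikit-learn has no exact stepwise discriminant analysis, but forward sequential feature selection wrapped around linear discriminant analysis gives a rough analogue of the classification reported above. The neuropsychological scores, feature count, and cross-validation scheme below are all illustrative assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_predict

# Simulated neuropsychological scores; MCI-LB coded 1 (n = 44), MCI-AD coded 0 (n = 39)
rng = np.random.default_rng(0)
X = rng.standard_normal((83, 6))
y = np.concatenate([np.ones(44), np.zeros(39)]).astype(int)

lda = LinearDiscriminantAnalysis()
selector = SequentialFeatureSelector(lda, n_features_to_select=3, direction="forward", cv=5)
X_sel = selector.fit_transform(X, y)           # forward selection of predictors

pred = cross_val_predict(lda, X_sel, y, cv=5)  # cross-validated classification
tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
print(f"accuracy = {(tp + tn) / len(y):.3f}, "
      f"sensitivity = {tp / (tp + fn):.3f}, specificity = {tn / (tn + fp):.3f}")
```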
Conclusions:
MCI-LB was characterized by executive dysfunction and slowed processing speed but did not show the visuospatial dysfunction expected, while MCI-AD displayed an amnestic profile. However, there was considerable neuropsychological profile overlap and processing speed mediated performance in both MCI subtypes.